
Ethical Frameworks for AI-Driven Decision Making in High-Stakes Domains

Master's Thesis · ~98 pages · English · IEEE citation style

49 verified citations · ~25k words · Generated in 22.7 minutes

Abstract

This thesis develops a comparative analysis of ethical frameworks applicable to AI-driven decision-making systems deployed in high-stakes domains including healthcare, criminal justice, and autonomous vehicles. The research evaluates four principal frameworks—utilitarian, deontological, virtue ethics, and care ethics—against real-world AI deployment scenarios. Special attention is given to accountability gaps, algorithmic bias, and the challenge of encoding moral reasoning in automated systems. The thesis proposes a layered governance architecture combining technical fairness constraints, institutional oversight mechanisms, and dynamic regulatory adaptation to address the evolving ethical challenges of AI deployment.

1. Introduction

Artificial intelligence systems increasingly make or inform consequential decisions affecting human lives—from medical diagnosis to parole recommendations to autonomous vehicle path planning. These high-stakes applications demand rigorous ethical analysis beyond conventional software engineering paradigms.

This thesis examines how established moral philosophy frameworks translate to AI system design and governance, identifying both theoretical insights and practical limitations. The proliferation of AI ethics guidelines from governments, corporations, and civil society organizations reflects the urgency of this challenge.

2. Framework Comparison

The thesis evaluates four ethical frameworks:

Utilitarianism - Maximizing aggregate welfare through outcome optimization. Challenges: distributional effects, minority harm, quantification difficulties.

Deontological Ethics - Rule-based constraints prohibiting certain actions regardless of consequences. Challenges: rule conflicts, edge case handling, rigidity.

Virtue Ethics - Cultivating trustworthy AI character traits including honesty, fairness, and prudence. Challenges: operationalization, context dependence.

Care Ethics - Prioritizing relationships and contextual responsiveness. Challenges: scalability, and tension with the consistency requirements of automated systems.
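The core contrast between the first two frameworks, outcome optimization versus hard rule-based constraints, can be sketched as two decision procedures over the same candidate actions. The following is a minimal illustration, not an implementation from the thesis; the option names, welfare scores, and the single "never harm a minority group" rule are all invented for the sketch:

```python
# Hypothetical illustration of utilitarian vs. deontological selection.
# All option names, welfare values, and rules are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    total_welfare: float   # aggregate benefit across all affected people
    harms_minority: bool   # whether the action violates a protected-group rule

def utilitarian_choice(options):
    """Maximize aggregate welfare, ignoring who bears the cost."""
    return max(options, key=lambda o: o.total_welfare)

def deontological_choice(options):
    """Apply a hard rule first, then optimize among the permitted options."""
    permitted = [o for o in options if not o.harms_minority]
    if not permitted:
        # The rigidity problem: rule conflicts can leave no permitted action.
        raise ValueError("no option satisfies the rule")
    return max(permitted, key=lambda o: o.total_welfare)

options = [
    Option("A", total_welfare=10.0, harms_minority=True),
    Option("B", total_welfare=7.0, harms_minority=False),
]

print(utilitarian_choice(options).name)    # picks A: highest aggregate welfare
print(deontological_choice(options).name)  # picks B: the rule excludes A
```

The divergence on option A is exactly the minority-harm challenge noted above: the utilitarian procedure accepts a distributional cost that the deontological constraint categorically forbids.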

3. Governance Architecture

The proposed layered governance architecture comprises:

Layer 1: Technical Constraints - Fairness metrics (demographic parity, equalized odds), explainability requirements, uncertainty quantification.

Layer 2: Institutional Oversight - Algorithmic impact assessments, independent audit mechanisms, whistleblower protections.

Layer 3: Regulatory Adaptation - Iterative regulatory sandboxes, standards body coordination, international harmonization mechanisms.

The architecture acknowledges that no single ethical framework suffices and proposes dynamic calibration based on domain-specific stakes and affected populations.
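The Layer 1 fairness metrics have precise statistical definitions: demographic parity compares positive-decision rates across groups, while equalized odds compares true-positive and false-positive rates. A minimal sketch of both checks, using invented toy data rather than any real deployment:

```python
# Sketch of two Layer-1 fairness checks; the data below is invented toy data.
# y_true: actual outcomes, y_pred: model decisions, group: protected attribute.

def rate(values):
    return sum(values) / len(values) if values else 0.0

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between the two groups."""
    g0 = [p for p, g in zip(y_pred, group) if g == 0]
    g1 = [p for p, g in zip(y_pred, group) if g == 1]
    return abs(rate(g0) - rate(g1))

def equalized_odds_gaps(y_true, y_pred, group):
    """Gaps in true-positive and false-positive rates between the groups."""
    def tpr_fpr(g):
        tp = [p for t, p, gg in zip(y_true, y_pred, group) if gg == g and t == 1]
        fp = [p for t, p, gg in zip(y_true, y_pred, group) if gg == g and t == 0]
        return rate(tp), rate(fp)
    tpr0, fpr0 = tpr_fpr(0)
    tpr1, fpr1 = tpr_fpr(1)
    return abs(tpr0 - tpr1), abs(fpr0 - fpr1)

# Toy example: eight decisions across two groups.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_gap(y_pred, group))       # 0.5
print(equalized_odds_gaps(y_true, y_pred, group))  # (0.5, 0.5)
```

A gap of zero would satisfy the corresponding constraint exactly; in practice an audit threshold is set per domain, which is one concrete form the dynamic, stakes-based calibration described above can take.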


This is a sample excerpt.
