Shannon Vallor: Thinking Outside the Box: AI & the Shrinking Space of Moral Reasons

Abstract

Recent advances in machine learning have generated a new set of ethical problems regarding algorithmic opacity. Increasingly sophisticated yet opaque algorithms constrain and shape what we read, watch and hear online, who we are invited to meet or date, what medical treatments we are advised to undergo, who will hire us, how the justice system will treat us, and where we will be allowed to live. The lack of transparency in such processes raises profound ethical questions about justice, power, inequality, bias, freedom and democratic values in modern computing. Here I focus on a less commonly discussed concern: the potential for opaque algorithmic decision systems to lead to a contraction of what moral philosophers have called the space of moral reasons, a concept that underpins personal and public practices of moral reflection, moral responsibility, moral imagination, moral justification and moral appeal. Using examples from algorithmic decision systems used in jurisprudence, human resources, and law enforcement, I show how contractions of the space of moral reasons can result from decision practices mediated by such systems. I conclude with some reflections on how more ethically-informed design and use of algorithmic decision systems might help us to hold open, or even enlarge, the space of moral reasons in personal and public life.

Readings

  1. Heinrichs, B., & Knell, S. (2021). ‘Aliens in the Space of Reasons? On the Interaction Between Humans and Artificial Intelligent Agents.’ Philosophy & Technology 34, 1569–1580. (PDF)
  2. Isaac Asimov’s ‘Franchise’ short story (PDF)
