Explainability - AI and Ethics
Online in Zoom

Speaker 1: Alex John London, PhD
Professor of Ethics and Philosophy
Director of the Center for Ethics and Policy, Carnegie Mellon University

Explainability Is Not the Solution to Structural Challenges to AI in Medicine

Explainability is often treated as a necessary condition for the ethical application of artificial intelligence (AI) in medicine. In this brief talk I survey some of the structural challenges facing the development and deployment of effective AI systems in health care, and use them to illustrate the limits of explainability in addressing these challenges. The talk builds on prior work (London 2019, 2022) to show how ambitions for AI in health care likely require significant changes to key aspects of health systems.

Speaker 2: Melissa McCradden, PhD, MHSc
Director of AI in Medicine, The Hospital for Sick Children

On the Inextricability of Explainability from Ethics: Explainable AI Does Not Ethical AI Make

Explainability is embedded in a plethora of legal, professional, and regulatory guidelines, as it is often presumed that the ethical use of AI requires explainable algorithms. There is considerable controversy, however, over whether post hoc explanations are computationally reliable, what value they add to decision-making, and what relational implications their use has for shared decision-making. This talk will explore the literature across these domains and argue that while post hoc explainability may be a reasonable technical goal, it should not be granted status as a moral standard by which AI use is judged to be ‘ethical.’

Moderator: Karandeep Singh, MD, MMSc
Assistant Professor of Learning Health Sciences
Assistant Professor of Internal Medicine
University of Michigan



LHS Collaboratory

October 20, 2022
12:00 pm - 1:30 pm
Online in Zoom
Sponsored by: Department of Learning Health Sciences
Contact Information: LHSCollaboratory-info@umich.edu

More Information & Registration

