Two Types of Explainability for Machine Learning Models

This abstract is open access.
Abstract
This paper argues that there are two different types of causes we may wish to understand when we ask for machine learning models to be explainable. The first are causes located in the features a model uses to make its predictions. The second are causes in the world that have enabled those features to carry out the model's predictive function. I argue that this difference gives rise to two distinct types of explanation and explainability, and I show how the proposed distinction proves useful in a number of applications.
Submission ID: PSA2022536
Graduate Student, University of California, San Diego

Abstracts With Same Type

Submission ID | Submission Topic                | Submission Type    | Primary Author
PSA2022514    | Philosophy of Biology - ecology | Contributed Papers | Dr. Katie Morrow
PSA2022405    | Philosophy of Cognitive Science | Contributed Papers | Vincenzo Crupi
PSA2022481    | Confirmation and Evidence       | Contributed Papers | Dr. Matthew Joss
PSA2022440    | Confirmation and Evidence       | Contributed Papers | Mr. Adrià Segarra
PSA2022410    | Explanation                     | Contributed Papers | Ms. Haomiao Yu
PSA2022504    | Formal Epistemology             | Contributed Papers | Dr. Veronica Vieland
PSA2022450    | Decision Theory                 | Contributed Papers | Ms. Xin Hui Yong
PSA2022402    | Formal Epistemology             | Contributed Papers | Peter Lewis