PSA 2022 — Machine Learning (Contributed Papers)
Benedum
Nov 12, 2022, 01:15 PM - 03:15 PM (America/New_York)
Two Types of Explainability for Machine Learning Models
Contributed Papers: Machine Learning and AI | 01:15 PM - 01:45 PM (America/New_York)
This paper argues that there are two different types of causes that we can wish to understand when we talk about wanting machine learning models to be explainable. The first are causes in the features that a model uses to make its predictions. The second are causes in the world that have enabled those features to carry out the model’s predictive function. I argue that this difference should be seen as giving rise to two distinct types of explanation and explainability and show how the proposed distinction proves useful in a number of applications.
Presenter: Faron Ray, Graduate Student, University of California, San Diego
Machine-led Exploratory Experiment in Astrophysics
Contributed Papers: Machine Learning and AI | 01:45 PM - 02:15 PM (America/New_York)
The volume and variety of data in astrophysics create a need for efficient heuristics to automate the discovery of novel phenomena. Moreover, data-driven practices suggest a role for machine-led exploration in conceptual development. I argue that philosophical accounts of exploratory experiments should be amended to include the characteristics of cases involving machine learning, such as the use of automation to vary experimental parameters and the prevalence of idealized and abstracted representations of data. I consider a case study that applies machine learning to develop a novel galaxy classification scheme from a dataset of ‘low-level’ but idealized observables.
Presenter: Heather Champion, Western University
Automated Discoveries, Understanding, and Semantic Opacity
Contributed Papers: Machine Learning and AI | 02:15 PM - 02:45 PM (America/New_York)
I draw attention to an under-theorized problem for the application of machine learning models in science, which I call semantic opacity. Semantic opacity occurs when the knowledge needed to translate the output of an unsupervised model into scientific concepts depends on theoretical assumptions about the same domain of inquiry into which the model purports to grant insight. Semantic opacity is especially likely to occur in exploratory contexts, wherein experimentation is not strongly guided by theory. I argue that techniques in explainable AI (XAI) that aim to make these models more interpretable are not well suited to address semantic opacity.
Presenter: Phillip Kieval, PhD Student, University of Cambridge
Deep Learning Opacity in Scientific Discovery
Contributed Papers: Machine Learning and AI | 02:45 PM - 03:15 PM (America/New_York)
Philosophical concern with the epistemological challenges presented by opacity in deep neural networks does not align with the recent boom in optimism for AI in science, nor with the scientific breakthroughs driven by AI methods. I argue that this disconnect between philosophical pessimism and scientific optimism stems from a failure to examine how AI is actually used in science. Drawing on cases from the scientific literature, I show that examining the role deep learning plays within a wider process of discovery reveals that epistemic opacity need not diminish AI’s capacity to lead scientists to significant and justifiable breakthroughs.
Presenter: Eamon Duede, University of Chicago