Explanation, Representation, and Idealization in Machine Learning

Abstract
Machine learning (ML) and deep learning (DL) modeling applications are becoming increasingly common in science. Despite their growing pervasiveness in the sciences, philosophy of science has only begun to scratch the surface of these models' potential implications. So far, interest has largely centered on the challenges of explainable AI (XAI), especially the epistemic consequences of DL model opacity; on data bias and AI fairness among broader issues of value-laden algorithmic applications; and on contrasts between DL and expectations about artificial general intelligence (AGI). Our symposium explores other areas of philosophy of science that lie at the intersection of scientific modeling and DL: explanation versus prediction, representation, and idealization. We ask: Do DL models represent their targets and, if so, how? Are there informative comparisons to be drawn between DL models and idealizations? What are the epistemic consequences of DL representations? Can XAI methods be understood as producing idealized models? What, really, is the tradeoff between explainability and predictability?
Abstract ID: PSA202260
Submission Type: Symposium
Speaker affiliations: Auburn University; Eindhoven University of Technology

Abstracts With Same Type

Abstract ID    Abstract Topic                             Submission Type    Primary Author
PSA2022227     Philosophy of Climate Science              Symposium          Prof. Michael Weisberg
PSA2022211     Philosophy of Physics - space and time     Symposium          Helen Meskhidze
PSA2022165     Philosophy of Physics - general / other    Symposium          Prof. Jill North
PSA2022218     Philosophy of Social Science               Symposium          Dr. Mikio Akagi
PSA2022263     Values in Science                          Symposium          Dr. Kevin Elliott
PSA202234      Philosophy of Biology - general / other    Symposium          Mr. Charles Beasley
PSA20226       Philosophy of Psychology                   Symposium          Ms. Sophia Crüwell
PSA2022216     Measurement                                Symposium          Zee Perry