Abstract
Machine learning (ML) and deep learning (DL) modeling applications are becoming increasingly common in science. Despite their growing pervasiveness, philosophers of science have only begun to scratch the surface of these models' potential implications. So far, interest has largely centered on challenges raised by explainable AI (XAI), especially the epistemic consequences of DL model opacity; on data bias and AI fairness within the larger issue of value-laden algorithmic applications; and on broader contrasts between DL and expectations about artificial general intelligence (AGI). Our symposium seeks to explore other areas of philosophy of science that lie at the intersection of scientific modeling and DL: explanation vs. prediction, representation, and idealization. We ask: Do DL models represent their targets and, if so, how? Are there informative comparisons to be drawn between DL models and idealizations? What are the epistemic consequences of DL representations? Can XAI methods be understood as producing idealized models? What, exactly, is the tradeoff between explainability and predictive accuracy?