Deep Learning Opacity in Scientific Discovery

Abstract
Philosophical concern with the epistemological challenges posed by opacity in deep neural networks sits uneasily with the recent boom of optimism about AI in science and the scientific breakthroughs driven by AI methods. I argue that this disconnect between philosophical pessimism and scientific optimism stems from a failure to examine how AI is actually used in science. Drawing on cases from the scientific literature, I argue that when deep learning is examined as part of a wider process of discovery, epistemic opacity need not diminish AI’s capacity to lead scientists to significant and justifiable breakthroughs.
Abstract ID: PSA2022489
Submission Type: Contributed Papers

Speaker: University of Chicago

Abstracts With Same Type

Abstract ID    Abstract Topic                    Submission Type     Primary Author
PSA2022514     Philosophy of Biology - ecology   Contributed Papers  Dr. Katie Morrow
PSA2022405     Philosophy of Cognitive Science   Contributed Papers  Vincenzo Crupi
PSA2022481     Confirmation and Evidence         Contributed Papers  Dr. Matthew Joss
PSA2022440     Confirmation and Evidence         Contributed Papers  Mr. Adrià Segarra
PSA2022410     Explanation                       Contributed Papers  Ms. Haomiao Yu
PSA2022504     Formal Epistemology               Contributed Papers  Dr. Veronica Vieland
PSA2022450     Decision Theory                   Contributed Papers  Ms. Xin Hui Yong
PSA2022402     Formal Epistemology               Contributed Papers  Peter Lewis