Abstract
Philosophical concern over the epistemological challenges posed by opacity in deep neural networks sits uneasily alongside the recent boom of optimism about AI in science and the scientific breakthroughs driven by AI methods. I argue that this disconnect between philosophical pessimism and scientific optimism stems from a failure to examine how AI is actually used in science. Drawing on cases from the scientific literature, I show that when deep learning is examined as one part of a wider process of discovery, epistemic opacity need not diminish AI’s capacity to lead scientists to significant and justifiable breakthroughs.