Automated Discoveries, Understanding, and Semantic Opacity

Abstract
I draw attention to an under-theorized problem for the application of machine learning models in science, which I call semantic opacity. Semantic opacity occurs when the knowledge needed to translate the output of an unsupervised model into scientific concepts depends on theoretical assumptions about the same domain of inquiry into which the model purports to grant insight. Semantic opacity is especially likely to occur in exploratory contexts, wherein experimentation is not strongly guided by theory. I argue that techniques in explainable AI (XAI) that aim to make these models more interpretable are not well suited to address semantic opacity.
Abstract ID: PSA2022325
Submission Type: Contributed Papers
PhD Student, University of Cambridge

Abstracts With Same Type

Abstract ID   Abstract Topic                   Submission Type     Primary Author
PSA2022514    Philosophy of Biology - ecology  Contributed Papers  Dr. Katie Morrow
PSA2022405    Philosophy of Cognitive Science  Contributed Papers  Vincenzo Crupi
PSA2022481    Confirmation and Evidence        Contributed Papers  Dr. Matthew Joss
PSA2022440    Confirmation and Evidence        Contributed Papers  Mr. Adrià Segarra
PSA2022410    Explanation                      Contributed Papers  Ms. Haomiao Yu
PSA2022504    Formal Epistemology              Contributed Papers  Dr. Veronica Vieland
PSA2022450    Decision Theory                  Contributed Papers  Ms. Xin Hui Yong
PSA2022402    Formal Epistemology              Contributed Papers  Peter Lewis