Comparing Theories with the Ising Model of Explanatory Coherence: Methodological Advances and Theoretical Considerations

This submission has open access
Submission Summary
As Lewin (1943) already noted, “there is nothing as practical as a good theory”. However, how do we determine which theories are good and which are bad? It is hard to improve theory quality without a tool to assess it in practice. In psychology, most subfields are characterized by weak theories or a complete lack of theories. Even though the problems of bad theory have been discussed with clockwork regularity, little progress has been made so far (e.g., Borsboom et al., 2021; Gigerenzer, 1991; Meehl, 1978). A potential reason is that the discipline lacks the tools to assess the quality of theories systematically. Therefore, we (Maier et al., 2021) proposed a computational model for theory evaluation. Specifically, we implemented Thagard’s (1989) theory of explanatory coherence (TEC) in the Ising model. The Ising model, originally developed in statistical mechanics to describe the magnetization of ferromagnetic materials (Ising, 1925), is a network model that has found broad application in psychological research. We showed that a) hypotheses provided by a scientific theory and the phenomena explained by theories can be expressed as nodes of the Ising model; b) empirical evidence for (against) a phenomenon can be expressed as a positive (negative) threshold on the corresponding phenomenon node; and c) explanatory and contradictory relations between hypotheses and phenomena can be expressed as positive and negative edges. The Ising Model of Explanatory Coherence (IMEC) incorporates the TEC principles of symmetry, explanation, data, priority, contradiction, and acceptability. Unlike previous implementations of TEC, IMEC allows researchers to evaluate individual theories and is available as an R package. Maier et al. (2021) showed that this simple computational meta-theory successfully reproduces a variety of examples from the history of science. However, there is room for extension.
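The node/threshold/edge mapping above can be made concrete with a small sketch. The following Python fragment is an illustrative toy, not the IMEC R package's actual implementation: the network (hypotheses H1 and H2, phenomena P1 and P2) and all weights are invented for illustration. It scores every possible acceptance/rejection assignment by its coherence (edge agreements plus evidence thresholds) and keeps the most coherent one.

```python
from itertools import product

def coherence(state, weights, thresholds):
    """Coherence (harmony) of one +1/-1 assignment: sum of edge
    agreements plus evidence thresholds on accepted nodes."""
    h = sum(w * state[i] * state[j] for (i, j), w in weights.items())
    h += sum(thresholds.get(i, 0.0) * s for i, s in state.items())
    return h

def most_coherent(nodes, weights, thresholds):
    """Exhaustively search all +1/-1 assignments (feasible for small nets)."""
    best, best_h = None, float("-inf")
    for values in product([1, -1], repeat=len(nodes)):
        state = dict(zip(nodes, values))
        h = coherence(state, weights, thresholds)
        if h > best_h:
            best, best_h = state, h
    return best, best_h

# Toy network (hypothetical): H1 explains phenomena P1 and P2;
# rival H2 explains only P1 and contradicts H1 (negative edge).
nodes = ["H1", "H2", "P1", "P2"]
weights = {("H1", "P1"): 0.3, ("H1", "P2"): 0.3,
           ("H2", "P1"): 0.3, ("H1", "H2"): -0.5}
thresholds = {"P1": 0.5, "P2": 0.5}  # evidence supports both phenomena

state, h = most_coherent(nodes, weights, thresholds)
# The broader theory H1 is accepted (+1) and H2 rejected (-1).
```

Because H1 has greater explanatory breadth, the most coherent assignment accepts H1 and both phenomena while rejecting H2, mirroring how IMEC adjudicates between rival theories.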
In this talk, I will briefly introduce IMEC and demonstrate how it integrates considerations of explanatory breadth, refutation, simplicity, and the downplaying of potentially irrelevant evidence with respect to a theory’s hypotheses and other phenomena. In addition, I will demonstrate how to think through hypothetical scenarios and identify critical experiments, using examples of theories in psychological science. Further, I will extend the methodology employed in Maier et al. (2021) by adding sensitivity analyses to IMEC. For instance, by examining the sensitivity of theory evaluations to variations in the edge weights between hypotheses and phenomena, it is possible to improve the robustness of applied theory comparison. However, considerations about the range of values over which sensitivity needs to be assessed are deeply intertwined with fundamental questions in the philosophy of science, such as the following: How can we quantify (the strength of) evidence (for a phenomenon)? To what extent is a theory supported (refuted) by making a correct (wrong) prediction or explanation? How can we determine the number of elemental propositions that a theory consists of? I hope this talk will spark a debate around these considerations and later allow me to incorporate them into the proposed sensitivity analysis.
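One way to operationalize the proposed sensitivity analysis is to perturb the edge weights within a chosen range and record how often the verdict between rival theories survives. The sketch below is an assumption-laden illustration, not the talk's actual procedure: the toy network (H1, H2, P1, P2), the ±50% perturbation range, and the brute-force search are all invented for demonstration.

```python
from itertools import product
import random

def coherence(state, weights, thresholds):
    """Coherence of one +1/-1 assignment: edge agreements plus thresholds."""
    h = sum(w * state[i] * state[j] for (i, j), w in weights.items())
    return h + sum(thresholds.get(i, 0.0) * s for i, s in state.items())

def most_coherent(nodes, weights, thresholds):
    """Return the assignment maximizing coherence (small nets only)."""
    return max(
        (dict(zip(nodes, vals)) for vals in product([1, -1], repeat=len(nodes))),
        key=lambda st: coherence(st, weights, thresholds),
    )

# Toy network (hypothetical): H1 explains P1 and P2; H2 explains P1 only.
nodes = ["H1", "H2", "P1", "P2"]
base = {("H1", "P1"): 0.3, ("H1", "P2"): 0.3,
        ("H2", "P1"): 0.3, ("H1", "H2"): -0.5}
thresholds = {"P1": 0.5, "P2": 0.5}

random.seed(1)
n_draws = 1000
wins = 0
for _ in range(n_draws):
    # Rescale each edge weight uniformly within +/-50% of its base value
    # (the range itself is one of the contested modeling choices).
    perturbed = {e: w * random.uniform(0.5, 1.5) for e, w in base.items()}
    verdict = most_coherent(nodes, perturbed, thresholds)
    wins += verdict["H1"] == 1 and verdict["H2"] == -1

robustness = wins / n_draws  # share of perturbations where H1 beats H2
```

The resulting proportion is a simple robustness index: a verdict that flips under modest perturbations of the edge weights should be treated with more caution than one that is stable across the whole range.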
PhD student
University College London
