Eschew the heuristic-epistemic dichotomy to characterise models
Contributed Papers: Scientific Models / Modeling
03:45 PM - 04:15 PM (America/New_York) | 2022/11/12 20:45:00 UTC - 2022/11/12 21:15:00 UTC
It has been standard in the philosophy of models to distinguish between their having epistemic value and 'mere' heuristic value. This dichotomy has divided philosophers of economics: sceptics deny the epistemic value of theoretical economic models, while optimists argue that the how-possibly explanations offered by models have epistemic value. I argue that the dichotomy is historically contingent and, importantly, was drawn vis-à-vis theories; we no longer distinguish theories and models so neatly. I further suggest that the optimists' urge to defend the epistemic value of models has often led them to mischaracterise economic practice. I illustrate this with a case study.
How to measure effect sizes for rational decision-making
Contributed Papers: Philosophy of Medicine
04:15 PM - 04:45 PM (America/New_York) | 2022/11/12 21:15:00 UTC - 2022/11/12 21:45:00 UTC
Absolute and relative outcome measures quantify a treatment's effect size, purporting to inform treatment choices. I argue that absolute measures are at least as good as, if not better than, relative ones for informing rational decisions across choice scenarios. Specifically, this dominance of absolute measures holds both for choices between a treatment and a control treatment tested in the same trial and for choices between treatments tested in different trials. This distinction has hitherto been neglected, as has the role of absolute and baseline risks in decision-making that my analysis reveals. Recognizing both aspects advances the discussion on reporting outcome measures.
Presenter: Ina Jantgen (University of Cambridge)
Multiscale Modeling in Neuroethology: The Significance of the Mesoscale
Contributed Papers: Scientific Models / Modeling
04:45 PM - 05:15 PM (America/New_York) | 2022/11/12 21:45:00 UTC - 2022/11/12 22:15:00 UTC
Recent accounts of multiscale modeling investigate the ontic and epistemic constraints imposed by relations between component models at varying relative scales (macro, meso, micro). These accounts often focus especially on the role of the meso, or intermediate, relative scale in a multiscale model. We aid this effort by highlighting a novel role for mesoscale models: functioning as a focal point, and an explanation, for disagreement between researchers who otherwise share theoretical commitments. To illustrate, we present a case study in the multiscale modeling of insect behavior, arguing that the cognitive map debate in neuroethology research is best understood as a mesoscale disagreement.
Robustness and Replication: Models, Experiments, and Confirmation
Contributed Papers: Scientific Models / Modeling
05:15 PM - 05:45 PM (America/New_York) | 2022/11/12 22:15:00 UTC - 2022/11/12 22:45:00 UTC
Robustness analysis faces a confirmatory dilemma. Since all of the models in a robust set are idealized, and therefore false, the set provides no confirmation. However, if a model is de-idealized, there is no confirmatory role for robustness analysis. Against this dilemma, I draw an analogy between robustness analysis and experimental replication. Idealizations, though false, can play the role of controlled experimental conditions. Robustness, like replication, can be used to show that some means of control is not having an undue influence. I conclude by considering some concerns about this analogy regarding the ontological difference between models and experiments.