Abstract
High-powered methods, the big data revolution, and the replication crisis in medicine and the social sciences have prompted new reflections and debates, in both statistics and philosophy, about the role of traditional statistical methodology in current science. Experts do not agree on how to improve reliability, and these disagreements reflect philosophical battles, old and new, about the nature of inductive-statistical evidence and the roles of probability in statistical inference. We consider three central questions:

• How should we cope with the fact that data-driven processes, multiplicity, and selection effects can invalidate a method's control of error probabilities?
• Can the same non-experimental data be used both to search for causal relationships and to test them reliably?
• Can a method's error probabilities both control its performance and provide a relevant epistemological assessment of what can be learned from the data?

As reforms to methodology are being debated, constructed, or (in some cases) abandoned, the time is ripe to bring together the perspectives of philosophers of science (Glymour, Mayo, Mayo-Wilson) and statisticians (Berger, Thornton) to reflect on these questions.