Board Room
Nov 12, 2022, 09:00 AM - 11:45 AM (America/New_York)
Consensus and Dissent in Science: New Perspectives

Scientific consensus plays a crucial role in public life. In the face of increasing science denialism, scientists are under pressure to present a united front to combat misinformation and conspiracy theories. However, the drive for consensus also has negative epistemic consequences, such as masking expert disagreement and obscuring value judgments. Philosophers widely agree that dissent plays an important epistemic role in scientific communities: disagreements among scientists are inevitable in areas of active research, and dissent is crucial in facilitating collective inquiry. How should we understand the epistemic role of dissent and determine when it is normatively appropriate? Does scientific consensus have any intrinsic epistemic value? Which consensus-generating methods are apt, and in which circumstances? The aim of this symposium is to present new research on the social epistemology of consensus and dissent. The papers collected here address the question of how to balance the epistemic advantages and disadvantages of consensus and dissensus. Through case studies ranging from pandemic policy to medical imaging and climate science, they offer different perspectives on how scientists can better communicate disagreement when interfacing with policymakers and the public.

PSA 2022, Board Room. Contact: office@philsci.org


Commentary from Miriam Solomon
Symposium: Feminist Philosophy of Science. 09:00 AM - 11:45 AM (America/New_York)
The symposium session, Consensus and Dissent in Science: New Perspectives, will end with a commentary on the papers by Miriam Solomon. Solomon has extensively studied the social epistemology of consensus and dissent. For example, Solomon (2001) criticizes the view that consensus is an aim of, or a regulative ideal for, scientific inquiry. On her view, scientific dissent is normal, and the desirable normative situation is a distribution of views in the scientific community proportional to each view's relative empirical success. In Solomon (2015), she appreciates the importance of consensus in medicine and, more specifically, the institution of consensus conferences. Solomon will evaluate the papers in the symposium within the wider context of social epistemic critiques of consensus building in science.
Solomon, M. (2001). Social Empiricism. MIT Press.
Solomon, M. (2015). Making Medical Knowledge. Oxford University Press.
Presenter: Miriam Solomon (Moderator, Commentator), Temple University
Expert Judgment in Climate Science
Symposium: Philosophy of Climate Science. 09:00 AM - 11:45 AM (America/New_York)
Consensus is often regarded as an important criterion for laypeople or decision-makers to arbitrate between the opinions of experts. Other criteria include the track record and unbiasedness of experts, as well as the validity of evidence and soundness of arguments. Overall, these criteria aim to ensure that expert judgment is grounded in objective arguments and is not a mere subjective belief or an expression of the experts' interests. In particular, consensus is supposed to guarantee a certain intersubjectivity. In this paper, we argue that the subjective aspects of expert judgment, such as intuitions and values, which consensus and the other criteria are supposed to counteract, actually bestow epistemic power upon those judgments. To that end, we explore the role of expert judgment in climate science. We show that expert judgment can be found throughout the scientific process: in model creation and utilization, model evaluation, and data interpretation, ending ultimately with the quantification and communication of uncertainties to policymakers. We argue that expert judgment is used to supplement models and manage uncertainty. First, as no model can perfectly represent the target, expert judgment is used as an alternative cognitive resource for providing climate projections and associated probabilities. Second, expert judgment is used as a means of quantifying epistemic uncertainty surrounding both general theories and specific scientific claims, as shown by the IPCC's use of confidence and likelihood metrics for evaluating uncertainty. We further highlight that the production of an expert judgment is more epistemically opaque than computer simulations, as this production is partly internal, mental, and thereby inaccessible. How, then, can we justify that expert judgment may still supplement models and manage uncertainty?
A pessimistic view would answer that expert judgment is simply a last resort: facing high uncertainty, one has no choice but to appeal to expert judgment. An optimistic view would instead recognize that there is some quality in expert judgment that makes it a precious cognitive resource. We contend that this quality lies in its subjective aspects. First, we argue that the trustworthiness of an expert judgment rests on the expert being exceptionally well informed, not an interchangeable rational agent, owing to their education and professional experience; with experience comes tacit knowledge, and thereby insight and, to some extent, intuition. Second, we argue that, while values are possible sources of scientific disagreement, if we were to remove their influence from expert judgments, we would be left where we started: with a wealth of uncertainty and no practical way to overcome the challenge. Experts would be left as mere databases, storing information as input for later recall. Furthermore, under specific circumstances that we define, value differences within an elicited group of experts can provide the condition of independence required for rational consensus, as aimed at by the elicitation methods that score, combine, and aggregate expert judgments into structured judgments in the IPCC reports.
Presenters: Julie Jebeile, Universität Bern; Mason Majszak, Universität Bern
Minority Reports: Registering Dissent in Science
Symposium: Science Policy. 09:00 AM - 11:45 AM (America/New_York)
Consensus reporting is valuable because it allows scientists to speak with one voice and to offer the most robust scientific evidence when interfacing with policymakers. However, what should we do when consensus does not exist? In this paper, I argue that we should not always default to majority reporting or consensus building when a consensus does not exist. Majority reporting does not provide epistemically valuable information and may in fact further confuse the public, because it obscures underlying justifications and lines of evidence, which may themselves be in conflict or contested. Instead, when a consensus does not exist, I argue that minority reporting, in conjunction with majority reporting, may be a better way for scientists to give high-quality information to the public. Through a minority report, scientists can register dissenting viewpoints and give policymakers a better understanding of how science works. For an instructive epistemic model of how minority reports may work, I turn to an analogy with the U.S. Supreme Court. The Court issues majority opinions, which are legally binding, and dissenting opinions when there exists significant divergence in views. The dissenting opinion is epistemically valuable in several ways (Ginsburg 2010). The dissent can help the author of the majority opinion clarify and sharpen her own reasoning, thereby increasing the quality of the Court's reasoning in general. By laying out a diverging line of legal reasoning, the dissenting justice allows future legal cases to be brought and argued using that reasoning (Sunstein 2014). Furthermore, justices may also write concurring opinions when they agree with the ruling but for different legal reasons. I argue that this epistemic model of the Supreme Court, which allows for minority and concurring reports, can be extended to science.
As scientific societies and expert panels are increasingly called upon to produce consensus or majority reports to guide policy, these groups need an epistemic mechanism to register dissent on issues where no strong consensus exists. While the majority report should carry the most weight, minority reports can shed light on underlying reasoning and value judgments that would otherwise be hidden in a majority or consensus report. If our goal in asking scientists for guidance is to receive high-quality information on which to base decisions, then we should allow minority reporting as a mechanism for gaining a deeper understanding of the state of the science. Finally, I address some objections. The most pressing is that minority reporting may be particularly vulnerable to capture by elites or special interests that seek to undermine public action. I argue for mechanisms that can limit the capture of dissenting voices by outside interests.
Ginsburg, R. B. (2010). The role of dissenting opinions. Minnesota Law Review, 95, 1.
Sunstein, C. R. (2014). Unanimity and disagreement on the Supreme Court. Cornell Law Review, 100, 769.
Presenter: Haixin Dang, University of Nebraska Omaha
Algorithmically Manufactured Scientific Consensus
Symposium: Computer Simulation and Modeling. 09:00 AM - 11:45 AM (America/New_York)
Scientists have started to use algorithms to manufacture a consensus from divergent scientific judgments. One area in which this has been done is the interpretation of MRI images. This paper offers a normative epistemic analysis of this new practice. It examines a case study from medical imaging in which a consensus about the segmentation of the left ventricle on cardiac MRI images was algorithmically generated. Algorithms in this case performed a dual role. First, algorithms automatically delineated the left ventricle, alongside expert human delineators. Second, algorithms amalgamated the different human-generated and algorithm-generated delineations into a single segmentation, which constituted the consensus outcome. My paper analyzes the strengths and weaknesses of the algorithms and of the overall process used in this case study, draws general lessons from it, and argues that the amalgamation of different human and non-human judgments contributes to the robustness of the final consensus outcome. Yet in recent years, there has been a move away from relying on multiple algorithms to analyze the same data in favor of sole reliance on machine-learning algorithms. I argue that despite the superior performance of machine-learning algorithms compared to other types of algorithms, the move toward sole reliance on them in cases such as this ultimately damages the robustness and validity of the final outcome, because machine-learning algorithms are prone to certain kinds of errors that other types of algorithms are not prone to (and vice versa). A central apparent motivation for this project and others like it is anxiety over the existence of disagreements between different human experts about the segmentation of the same image.
At the same time, the consensus-generating method in this case and others like it faces difficulties handling, in an epistemically satisfying way, cases in which the experts' judgments significantly diverge from one another. I argue that this difficulty stems from a drive to always reach a consensus, which in turn rests on an unjustified tacit assumption that there should be just one correct segmentation. I argue that different legitimate delineations of the same data may be possible in some cases, owing to different weighings of inductive risks or different contextually appropriate theoretical background assumptions. Consensus-generating algorithms should recognize this possibility and incorporate an option to trade off values against each other for the sake of reaching a contextually appropriate outcome.
Presenter: Boaz Miller, Zefat Academic College
On Masks and Masking: Epistemic Injustice and Masking Disagreement in the COVID-19 Pandemic
Symposium: Values in Science. 09:00 AM - 11:45 AM (America/New_York)
We have previously argued that masking, censoring, or ignoring scientific dissent can be detrimental for several ethical and epistemic reasons, even when such dissent is considered normatively inappropriate (de Melo-Martín and Intemann 2018). Masking dissent can be inappropriately paternalistic, undermine trust in experts, and render policy debates less fruitful. Here we explore another concern. Focusing on the communication of scientific information during the COVID-19 pandemic, we examine the extent to which masking disagreements among experts can result in epistemic injustices against laypersons. In an emerging public health crisis, uncertainties are high and public policy action is urgently needed. In such a context, where both policymakers and members of the public look to scientific experts for guidance, there is a great temptation for experts to "speak with one voice," so as to avoid confusion and allow individuals, governments, and organizations to make evidence-based decisions rapidly (Beatty 2006). Reasonable and policy-relevant disagreements were masked during the pandemic in two central ways. First, scientific information about particular interventions was presented in ways that masked the role of value judgments, about which disagreements existed; interventions were thus presented as following directly from the scientific evidence. For example, decisions about whether to lock down countries, what degree of lockdown to implement, and for how long depend not only on scientific evidence about the severity of COVID-19 but also on ethical, social, or political judgments about, among other things, the importance of human life and health, the significance of civil liberties, the relevance of financial recovery, the distribution of risks, and the proper role of government. When policies were presented as following directly from the science, the role of value judgments in reaching those conclusions was obscured.
This denied laypersons the opportunity to assess how alternative value judgments might have led to different conclusions. In other words, it denied them rational grounds for objecting to, or following, policies that may depend on value judgments. Second, disagreements about the empirical data used in assessing the efficacy or safety of interventions were also masked. For example, concerns about the consequences that minimizing the risks to some populations could have for the public's willingness to follow recommendations led to an overemphasis on risks to children and, with it, to school closures. In such cases, masking scientific disagreement about empirical claims can deny decision-makers access to contextualizing information that can be helpful in assessing risks that could (or could not) reasonably be taken or imposed on others. We conclude by drawing some lessons for how scientists and public health officials might communicate more effectively in circumstances where there are significant uncertainties and an urgent need for action.
Beatty, J. (2006). Masking disagreement among experts. Episteme, 3(1-2), 52-67.
de Melo-Martín, I. and Intemann, K. (2018). The Fight Against Doubt: How to Bridge the Gap Between Scientists and the Public. New York: Oxford University Press.
Presenters: Kristen Intemann, Montana State University; Inmaculada de Melo-Martín, Weill Cornell Medicine--Cornell University
Additional speakers: Daisy Underhill, Graduate Student, UC Davis; Nick Byrd, Stevens Institute of Technology