Climate Sensitivity, Paleoclimate Data, & the End of Model Democracy
Nov 10, 2022, 01:30 PM - 04:15 PM (America/New_York)
Sterlings 1

Equilibrium Climate Sensitivity (ECS) characterizes the response of Earth's temperature to a doubling of atmospheric CO2 and is one of the most important and most studied metrics in climate science. For decades, estimates of ECS have been stable around 1.5°C to 4.5°C. In the most recent coupled model intercomparison project (CMIP6), however, many state-of-the-art climate models calculated ECS to be "hotter" than the upper bound of the consensus range; if correct, this would mean even more dire consequences for our planet than previously anticipated. The surprising CMIP6 results quickly became one of the highest-profile issues in climate science and a focus of intensive research, as scientists tried to determine why the models produced these unexpected results and whether they were erroneous. Our symposium explores several key epistemological and methodological issues arising from this high-profile case: the handling of discordant results; the validation of paleoclimate data used in both climate model evaluation and estimating ECS; the interpretation of climate model projections; holism and underdetermination in complex simulation models; and the end of climate science's long-standing practice of "model democracy", in which each state-of-the-art model gets equal weight in assessments of future warming.
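As a compact illustration (a standard textbook sketch, not drawn from the symposium abstracts), ECS is often introduced through an idealized zero-dimensional energy-balance relation, in which the equilibrium warming from a CO2 doubling is the doubling forcing divided by the net feedback parameter; the numerical values below are illustrative only.

```latex
% Idealized energy-balance model: top-of-atmosphere imbalance
%   N = F - \lambda \, \Delta T .
% At equilibrium N = 0, so for a CO2 doubling with forcing F_{2\times}:
\mathrm{ECS} = \frac{F_{2\times}}{\lambda}
% Illustrative values: F_{2\times} \approx 3.7~\mathrm{W\,m^{-2}} and
% \lambda \approx 1.2~\mathrm{W\,m^{-2}\,K^{-1}} give \mathrm{ECS} \approx 3~\mathrm{K}.
```

On this sketch, the "hotter" CMIP6 values correspond to a smaller net feedback parameter, which is why diagnostic attention has focused on feedbacks, cloud feedbacks in particular.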

PSA 2022


The climate science community’s response to discordant results
Symposium: Philosophy of Climate Science | 01:30 PM - 04:15 PM (America/New_York)
It is well known that the path to greater precision in physics is not smooth. Because differences in subsequent experiments often fall outside the nominal uncertainties of the prior art, science often has to deal with discordance that stimulates increased focus on what were presumed to be small effects. Examples include the history of measurements of ‘Big G’ (the gravitational constant) and the charge of the electron (Bailey, 2018). In climate science, numerous examples can also be found, ranging from the ‘global cooling’ inferred from new satellite measurements in the 1990s, to estimates of the mass balance of Antarctica in the 2000s, to the increased spread of climate sensitivity in the latest CMIP6 model intercomparison. Resolutions for these discordant results are not predictable a priori: systematic issues can affect new and old measurements alike, and comparisons may not be fully compatible. While resolutions are still pending, though, the broader community may not have the luxury of simply waiting for the reasons to be discovered. I will discuss how and why the climate science community is dealing with the “climate sensitivity issue” in the meantime.
Presenters
Gavin Schmidt
NASA GISS
Paleoclimate Proxy Data: Uncertainty, Validation, & Pluralism
Paleoclimate proxy data are playing an increasingly central role in contemporary climate science. First, proxy data about key paleoclimates in Earth’s history can be used to benchmark the performance of state-of-the-art climate models by providing crucial “out of sample” tests. Paleoclimates provide data about the response of the Earth to climate states and forcing scenarios that are very different from those provided by the limited historical (i.e., instrument) record (which has hitherto provided the basis for building, tuning, and testing current climate models). These tests, which have most recently been undertaken by the Paleoclimate Modelling Intercomparison Project, Phase 4 (PMIP4) in coordination with CMIP6, will be increasingly important for developing climate models that can reliably forecast a future where anthropogenic forcing has perturbed the Earth out of the climate state represented by the historical record (Kageyama et al. 2018). Second, paleoclimate proxy data can also be used more directly to provide an estimate for quantities such as equilibrium climate sensitivity (ECS). Although ECS used to be estimated on the basis of the values provided by climate models, since the fourth assessment report (AR4) both paleoclimate proxy data and (instrument) data from historical warming have provided additional observational constraints on ECS values. In the most recent AR6, which was published last year, model-based estimates of ECS from the CMIP6 models were for the first time excluded from the evidential base for estimating climate sensitivity. Instead, the current official estimate for ECS was derived only on the basis of the following three independent lines of evidence: process understanding about feedbacks, the historical climate record, and the paleoclimate record (Sherwood et al. 2020; IPCC AR6, Chapter 7). Given their increasing importance for climate research, paleoclimate proxy data are ripe for philosophical analysis.
Despite their role as data for testing climate models and as observational evidence for a value of climate sensitivity, it must be emphasized that paleoclimate data are themselves a complex, model-laden data product, involving many layers of data processing, data conversion, and data correction (Bokulich 2020). Hence, there are many sources of uncertainty in paleoclimate data that arise along the path from local proxy measurements of traces left in the geologic record to global paleoclimate reconstructions of Earth’s deep past. To realize their potential, questions about how to validate paleoclimate data must be confronted. In this talk I develop a multi-procedure framework for validating (or evaluating) proxy data, analogous to the frameworks used for model evaluation. I further argue that paleoclimate data must be evaluated as adequate or inadequate for particular purposes (Bokulich and Parker 2021). Finally, I highlight the importance of data pluralism in the form of multiple data ensembles derived from different possible ways of processing the data. Although developed in the context of paleoclimate proxy data, the data-validation framework I provide here can be generalized to apply to data evaluation in other scientific contexts.
Presenters
Alisa Bokulich
Boston University
A possibilistic epistemology of climate modeling and its application to the cases of sea level rise and climate sensitivity
Contributed Papers | 01:30 PM - 04:15 PM (America/New_York)
It has been argued that possibilistic assessment of climate model output is preferable to probabilistic assessment (Stainforth et al. 2007; Betz 2010, 2015; Katzav 2014; Katzav et al. 2012 and 2021). I aim to articulate a variant of a possibilistic approach to such assessment. On my variant, the output of climate models should typically be assessed in light of two questions: Is it fully epistemically possible? If the output is (fully) epistemically possible, how remote a possibility does it represent? Further, on my variant, if the output is judged to be epistemically possible, it should be taken to represent objective possibilities, specifically potentialities of the actual climate system. Having articulated my possibilistic approach, I apply it to two key issues in climate science, namely the potential contribution of marine ice cliff instability to sea level rise over the rest of the twenty-first century and climate sensitivity. Marine ice cliff instability (MICI) has been posited as a mechanism that might lead to substantially more sea level rise than had previously been projected (DeConto and Pollard 2016). I will suggest that the existing assessment of the contribution of MICI to future sea level rise illustrates the strengths of my possibilistic approach and the weaknesses of probabilistic approaches to assessing the output of climate models. I will also argue that the most recent Intergovernmental Panel on Climate Change assessment of climate sensitivity, especially its reliance on a variety of evidence considerations to address the challenges of unexpectedly high climate sensitivity projections by state-of-the-art climate models, illustrates the strengths of my possibilistic approach and the weaknesses of probabilistic approaches.
Presenters
Joel Katzav
University Of Queensland
Fixing High-ECS Models: The Problem of Holism Revisited
Equilibrium Climate Sensitivity (ECS) is a key metric when trying to understand the past, present and future behavior of Earth’s climate. Several models used in the latest IPCC report’s Coupled Model Intercomparison Project 6 (CMIP6) have failed to yield an ECS value within the consensus range estimated by several previous climate models (IPCC AR6, Chapter 7). Trying to understand why these state-of-the-art models failed to give an appropriate ECS value is no easy task. Johannes Lenhard and Eric Winsberg (2010, 2011) have argued that complex simulation models such as climate models exhibit a kind of epistemological holism that makes it extremely difficult—if not impossible—to tease apart the sources of error in a simulation and attribute them to particular modeling assumptions or components. As a result, they argue that modern, state-of-the-art climate models are “analytically impenetrable” (Lenhard & Winsberg, 2011, p. 115). They identify as a source of this impenetrability what they call “fuzzy modularity,” which arises due to the complex interactions between the modules that make up a climate model. The question remains whether a model’s analytical impenetrability undermines scientists' efforts to identify the cause of the high ECS values and fix these models through a piecemeal approach. Despite these worries about analytical impenetrability and holism, scientists use sensitivity tests which involve replacing individual parameterizations, schemes, or process representations one-by-one in a piecemeal fashion to assess their impact on a model output quantity, such as ECS. Through sensitivity tests, scientists concluded the high ECS values in many climate models were likely due to more realistic parameterizations of cloud feedback (Gettelman et al., 2019; Zelinka et al., 2020).
This is surprising because the models used in CMIP6 have a better representation of the current climate, yet the increased realism in cloud parameterization yields an unrealistic result for ECS. How is it that more realistic models can get worse results? Also, if the modules of a model are inextricably linked, how can scientists use sensitivity tests to find what is wrong and fix the model? It might be that fixing cloud parameterization only works because of compensating factors elsewhere. For example, radiative forcing may be compensating for the model’s climate sensitivity (Kiehl, 2007). The recent failure of models to yield an appropriate ECS value presents us with an opportunity to revisit concepts such as holism, realism, and underdetermination (also called equifinality) in current climate models. In this talk, I focus on attempts to diagnose the source of the high ECS in some CMIP6 models, Community Earth System Model 2 in particular, using techniques such as sensitivity testing and feedback analysis. While these techniques can go a long way towards addressing holism, there are limits to their applicability, which I discuss. I conclude by drawing some broader lessons about the more subtle relations between holism, fuzzy modularity, and underdetermination in complex simulation models.
Presenters
Leticia Castillo
Ph.D., Boston University
When is a model inadequate for a purpose?
Equilibrium climate sensitivity is a measure of the sensitivity of Earth’s near-surface temperature to increasing greenhouse gas concentrations. When numerous state-of-the-art climate models recently indicated values for climate sensitivity outside of a range that had been stable for decades, climate scientists faced a dilemma. On the one hand, these high-sensitivity models had excellent pedigrees, incorporated sophisticated representations of physical processes, and had been demonstrated to perform more than acceptably well across a range of performance metrics; their developers considered them at least as good as, or even a significant improvement upon, previous generations of models. The common practice of “model democracy” would suggest giving their results equal weight alongside those of other state-of-the-art models. On the other hand, doing so would generate estimates of climate sensitivity and future warming substantially different from – and more alarming than – estimates developed over decades of previous investigation. Faced with this situation, climate scientists sought to further evaluate the quality of the CMIP6 models. I will show how their efforts, and their subsequent decisions to downweight or exclude some models when estimating future warming, but not when estimating some other variables, illustrate an adequacy-for-purpose approach to model evaluation. I will also critically examine some of the particular evaluation strategies and tests employed, with the aim of extracting some general insights regarding the evaluation of model inadequacy.
Presenters
Wendy Parker
Virginia Tech
Dr. Carlos Santana
University of Utah