Race: Scientific Methodology and Social Impact
Benedum
Nov 10, 2022 01:30 PM - 04:15 PM (America/New_York)

Many normative questions about race have been addressed by social and political philosophers. These philosophers use philosophical approaches quite distinct from those found in the philosophy of science. The merits of these approaches notwithstanding, the complexity of social phenomena involving race raises several methodological issues that philosophers of science are well-positioned to address. For instance, the causal status of race has wide-reaching implications for which interventions can be pursued to mitigate racial injustice. Drawing from methodological discussions about causal modeling, statistical testing, and machine learning, this symposium highlights how philosophers of science can contribute more substantially to these normative issues. Race-based policymaking, police discrimination, algorithmic fairness, and racial disparities in healthcare are the chief issues discussed.

Against Racial Monism
Contributed Papers 01:30 PM - 04:15 PM (America/New_York)
Recent work in the metaphysics of race that’s focused on the nature and reality of race as understood in the dominant race talk of current American English speakers—hereafter US race talk—has produced three main categories of race theories. Biological anti-realists—like Appiah (1992), Blum (2002), and Glasgow (2009)—have argued that, in US race talk, race is an unreal biological entity. Biological realists—like Outlaw (1996), Levin (2002), Spencer (2014), and Hardimon (2017)—have argued that, in US race talk, race is a real biological entity. Non-biological realists—like Haslanger (2012), Taylor (2013), and Ásta (2017)—have argued that, in US race talk, race is a real non-biological entity. However, after decades of arguing, metaphysicians of race haven’t yet developed a US race theory that’s close to being empirically adequate (in van Fraassen’s sense). While it’s admittedly very difficult for any theory to achieve empirical adequacy, the extent of the empirical inadequacies among the US race theories so far proposed suggests that there’s a systematic error in our metaphysical theorizing about race. That error, I submit, is the metametaphysical presupposition that there’s a single essence of race to be found in US race talk, which is a presupposition I’ll call essence monism about race. Essence monism about race is one example of racial monism. In contrast, I’ll argue that there’s a plurality of essences for race in US race talk, which is a view I’ll call essence pluralism about race. After defending my argument and addressing objections, I’ll explore interesting implications of the view, such as a novel perspective on how to address unjust racial disparities in health.
Presenters
QS
Quayshawn Spencer
Associate Professor, University Of Pennsylvania
Bias Bounty
Contributed Papers 01:30 PM - 04:15 PM (America/New_York)
Notions of fair machine learning that seek to control various kinds of error across protected groups are generally cast as constrained optimization problems over a fixed model class. For all such problems, tradeoffs arise: asking for various kinds of technical fairness requires compromising on overall error, and adding more protected groups increases error rates across all groups. Our goal is to “break through” such accuracy-fairness tradeoffs, also known as Pareto frontiers. We develop a simple algorithmic framework that allows us to deploy models and then revise them dynamically when groups are discovered on which the error rate is suboptimal. Protected groups do not need to be specified ahead of time: At any point, if it is discovered that there is some group on which our current model is performing substantially worse than optimally, then there is a simple update operation that improves the error on that group without increasing either overall error or the error on any previously identified group. We do not restrict the complexity of the groups that can be identified, and they can intersect in arbitrary ways. The key insight that allows us to break through the tradeoff barrier is to dynamically expand the model class as new high-error groups are identified. The result is provably fast convergence to a model that cannot be distinguished from the Bayes optimal predictor — at least by the party tasked with finding high-error groups. We explore two instantiations of this framework: as a “bias bug bounty” design in which external auditors are invited (and monetarily incentivized) to discover groups on which our current model’s error is suboptimal, and as an algorithmic paradigm in which the discovery of groups on which the error is suboptimal is posed as an optimization problem. In the bias bounty case, when we say that a model cannot be distinguished from Bayes optimal, we mean by any participant in the bounty program. We provide both theoretical analysis and experimental validation.
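To make the update operation concrete, here is a minimal sketch, in Python, of the kind of patch-and-fallback model the abstract describes. The class name, the accept-only-if-improves check, and the validation-set interface are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a model that is patched whenever someone exhibits a
# (group, predictor) pair that beats the current model on that group.
import numpy as np


class GroupAwareModel:
    def __init__(self, base_predict):
        self.base_predict = base_predict  # the initially deployed model
        self.patches = []                 # accepted (group_fn, predict_fn) pairs, oldest first

    def predict(self, X):
        y = self.base_predict(X)
        # Apply patches oldest-to-newest, so the most recently accepted patch
        # takes precedence wherever discovered groups overlap.
        for group_fn, predict_fn in self.patches:
            mask = group_fn(X)
            y = np.where(mask, predict_fn(X), y)
        return y

    def group_error(self, X, y_true, mask):
        pred = self.predict(X)
        return float(np.mean(pred[mask] != y_true[mask]))

    def submit(self, group_fn, predict_fn, X_val, y_val):
        """Accept a proposed (group, predictor) pair only if it strictly lowers
        error on that group on held-out data (an illustrative acceptance check)."""
        mask = group_fn(X_val)
        if not mask.any():
            return False
        current = self.group_error(X_val, y_val, mask)
        proposed = float(np.mean(predict_fn(X_val)[mask] != y_val[mask]))
        if proposed < current:
            self.patches.append((group_fn, predict_fn))
            return True
        return False
```

In this sketch the newest accepted patch takes precedence wherever groups overlap, which is one simple way to realize the "revise dynamically" step; the actual framework and its convergence guarantees are developed in the paper itself.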
Presenters
MK
Michael Kearns
University Of Pennsylvania
AR
Aaron Roth
University Of Pennsylvania
The Causal Basis for Testing Police Discrimination with Statistics
Contributed Papers 01:30 PM - 04:15 PM (America/New_York)
Consider a study indicating that police performing traffic stops in Pittsburgh search minority drivers at a higher rate than non-minority drivers. This result would be insufficient for establishing discrimination against minorities. This is because it is compatible, e.g., with the hypothesis that the police make stops based on observing suspicious activities, and that minority drivers disproportionately engage in such activities. For this reason, legal and empirical studies of discrimination often employ benchmark tests. Such tests involve statistically conditioning on covariates that differentiate the relevant groups in order to determine what the disparity between the group stop-rates would be in the absence of discrimination. As Neil and Winship (2018) note, benchmark tests are fatally undermined by Simpson’s Paradox (Sprenger and Weinberger, 2021). An example of the paradox would be a case in which police stopped minorities and non-minorities at the same rate in Pittsburgh as a whole, but stopped minorities at a higher rate within every single district. Accordingly, statistical claims involving comparisons of relative rates across populations – including the rates invoked in benchmark tests – will not be robust to conditioning on additional covariates. Unfortunately, Neil and Winship’s non-causal discussion of the paradox is woefully inadequate. Presenting a better understanding of its proper interpretation is important not only because the paradox is widely discussed in the empirical discrimination literature, but also because it illuminates the role of causal assumptions in interpreting statistics relevant to discrimination. The first general lesson I will draw from my discussion of the paradox concerns the sense in which discrimination statistics provide evidence for claims about police discrimination. One might be tempted by the position that discovering that police stop non-minorities and minorities at the same rates would count as evidence against discrimination, and that subsequently learning that minorities are stopped at a higher rate within every district would count as countervailing evidence. In contrast, I argue that the statistics being cited provide no evidence for or against discrimination, absent additional substantive assumptions about the variables being modeled. Since Simpson’s paradox reveals that comparisons of relative rates across populations are not robust to conditioning on additional variables, non-statistical assumptions are required to draw any conclusions about discrimination, even tentative ones. The second lesson I draw concerns an underappreciated role of causal assumptions in empirical modeling. Causal models are often advertised as licensing inferences concerning experimental interventions. Additionally, such models can provide a framework for differentiating meaningful from non-meaningful statistical relationships. Given that statistics alone cannot provide evidence for discrimination absent additional substantive assumptions, a further framework is required for representing such assumptions in a general way. I will argue that causal models provide precisely such a framework.
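The aggregation reversal at the heart of this argument can be exhibited with a few invented numbers. The districts and counts below are hypothetical, chosen only to display the pattern the abstract describes: equal aggregate stop rates, yet a higher minority stop rate within every district.

```python
# Hypothetical counts, invented purely to illustrate the Simpson's-paradox
# pattern: aggregate stop rates are equal across groups, but the minority
# stop rate is higher within every district.
stops = {
    # district: {group: (stops, drivers encountered)}
    "District 1": {"minority": (12, 20),  "non-minority": (35, 70)},
    "District 2": {"minority": (36, 180), "non-minority": (13, 130)},
}

for district, groups in stops.items():
    for group, (s, n) in groups.items():
        print(f"{district}  {group:12s} stop rate = {s / n:.0%}")

for group in ("minority", "non-minority"):
    s = sum(stops[d][group][0] for d in stops)
    n = sum(stops[d][group][1] for d in stops)
    print(f"Aggregate   {group:12s} stop rate = {s / n:.0%}")
```

Here both aggregate rates come out to 24%, even though the minority rate is ten points higher in each district; the reversal arises because the two groups are distributed differently across districts with different baseline stop rates, which is precisely why comparisons of relative rates are not robust to conditioning on additional covariates.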
Presenters
NW
Naftali Weinberger
Munich Center for Mathematical Philosophy
On Interpreting Causal Effects of Race
Contributed Papers 01:30 PM - 04:15 PM (America/New_York)
We approach the debate over “causal effects of race” from a social constructionist perspective. Our first main thesis is that on a broad range of social constructionist views about race, an individual’s race is manipulable, i.e., it is conceptually coherent to posit counterfactuals about a person’s race without risking essentialism or debunked biological thinking. Our second main thesis is that causal effects of race are indirectly relevant to policy. Estimating the causal effects of race may be a starting point for inquiry into the plural mechanisms of racism, i.e., the various pathways by which racial disparities arise. This may inform policies which intervene on the mechanisms themselves.
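One way to picture the "plural mechanisms" point is with a toy linear simulation in which an overall disparity decomposes into a direct pathway and a pathway mediated by access to a resource. Every variable name, coefficient, and the causal structure below are invented for illustration; this is not the authors' model or method.

```python
# Toy simulation: decompose an overall disparity into a direct pathway and a
# pathway mediated by a resource variable. All numbers and the causal
# structure are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

R = rng.integers(0, 2, n)                                   # 1 = marginalized group (toy coding)
resource = 5.0 - 2.0 * R + rng.normal(0, 1, n)              # pathway 1: unequal access to a resource
outcome = 1.5 * resource - 1.0 * R + rng.normal(0, 1, n)    # pathway 2: direct disadvantage

total_gap = outcome[R == 1].mean() - outcome[R == 0].mean()
via_resource = 1.5 * (-2.0)   # contribution transmitted through the resource pathway
direct = -1.0                 # contribution along the direct pathway
print(f"observed gap  = {total_gap:.2f}")
print(f"via resource  = {via_resource:.2f}")
print(f"direct        = {direct:.2f}")
```

In such a toy model, a policy that equalizes the resource pathway closes only part of the observed gap, which is the sense in which pathway-specific estimates could inform which interventions to pursue.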
Presenters
DM
Daniel Malinsky
Columbia University
LB
Liam Kofi Bright
Presenter, London School Of Economics
Race, Causation, and Underdetermination
Contributed Papers (Philosophy of Race) 01:30 PM - 04:15 PM (America/New_York)
Race is frequently treated as an explanatory variable in causal models throughout the social sciences. Yet, there is lively disagreement about the causal status of race. This disagreement arises from three claims that jointly form a paradox: (1) all causes are manipulable; (2) race is a cause; and (3) race is not manipulable. Non-manipulationists resolve this paradox by rejecting (1). On this view, “manipulationism” is too narrow a conception of causation, so we should expand our repertoire of causal concepts such that race, despite being non-manipulable, is nevertheless causal. Causal skeptics about race resolve this paradox by rejecting (2). On this view, race is not a causal variable, in no small part because of its non-manipulability. Finally, manipulationists reject (3), holding that race is causal precisely because it is manipulable. In this paper, we offer a novel position called causal agnosticism about race. Like racial causal skeptics and manipulationists, we hold fast to the claim that all causes are manipulable (1). However, whereas skeptics insist that (2) is false and manipulationists insist that (3) is true, we claim that the social sciences underdetermine the extent to which races are causes or manipulable. We argue for our agnostic position by appeal to the literature on the modeling of causal macrovariables. A causal macrovariable summarizes an underlying finer structure of a set of microvariables. (For example, a gas’s temperature is a macrovariable with respect to its constituent particles.) If the social sciences provide adequate evidence to accept that race is either a cause or manipulable, then race is either a well-defined macrovariable or there is a “strong signal” for race, where a “strong signal” is a variable distinct from race that nevertheless tracks closely with race. However, no such macrovariables or signals exist in the social scientific models that appeal to race. Furthermore, even if there were a strong signal for race, it does not follow that race is a cause. Consequently, the social sciences fail to provide adequate evidence for the claims that race is a cause and that race is manipulable, i.e., the social sciences underdetermine both (2) and (3). Throughout our discussion, we compare our position to the alternatives canvassed above. We conclude by tracing out causal agnosticism’s policy implications. In particular, we argue that any policy intervention suggested by a non-agnostic position about race’s causal status can be reinterpreted in a manner compatible with agnosticism. We conclude from this that the ontological status of race is of marginal policy relevance.
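The "well-defined macrovariable" condition can be illustrated with a toy example; the aggregation function and numbers below are ours, purely for illustration. An aggregate of microvariables is an unambiguous target of intervention only if the outcome depends on the microstate through that aggregate alone.

```python
# Toy illustration of when a macrovariable is well defined: the aggregate
# (here, the mean of the microvariables) is an unambiguous causal variable
# for an outcome only if the outcome depends on the microstate through that
# aggregate alone. All functions and numbers are invented for illustration.
import numpy as np

def outcome_a(micro):
    # Depends only on the mean: "set the mean to m" picks out a unique effect.
    return 2.0 * micro.mean()

def outcome_b(micro):
    # Depends on more than the mean: two microstates with the same mean
    # yield different outcomes, so "set the mean to m" is underdetermined.
    return 2.0 * micro.mean() + micro.var()

state1 = np.array([1.0, 3.0])   # mean 2.0, low spread
state2 = np.array([0.0, 4.0])   # mean 2.0, high spread

print(outcome_a(state1), outcome_a(state2))   # 4.0 4.0 -> the macro value suffices
print(outcome_b(state1), outcome_b(state2))   # 5.0 8.0 -> same macro value, different outcomes
```

When two microstates share the same macro value but yield different outcomes, an intervention that "sets the macrovariable" is underdetermined, which is one way to read the abstract's appeal to well-defined macrovariables.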
Presenters
KK
Kareem Khalifa
Co-Author, UCLA
AT
Alexander Tolbert
Presenter, University Of Pennsylvania