Bias Bounty

Abstract
Notions of fair machine learning that seek to control various kinds of error across protected groups are generally cast as constrained optimization problems over a fixed model class. For all such problems, tradeoffs arise: asking for various kinds of technical fairness requires compromising on overall error, and adding more protected groups increases error rates across all groups. Our goal is to “break through” such accuracy-fairness tradeoffs, also known as Pareto frontiers. We develop a simple algorithmic framework that allows us to deploy models and then revise them dynamically when groups are discovered on which the error rate is suboptimal. Protected groups do not need to be specified ahead of time: at any point, if it is discovered that there is some group on which our current model performs substantially worse than optimally, a simple update operation improves the error on that group without increasing either overall error or the error on any previously identified group. We do not restrict the complexity of the groups that can be identified, and they can intersect in arbitrary ways. The key insight that allows us to break through the tradeoff barrier is to dynamically expand the model class as new high-error groups are identified. The result is provably fast convergence to a model that cannot be distinguished from the Bayes optimal predictor, at least by the party tasked with finding high-error groups. We explore two instantiations of this framework: a “bias bug bounty” design in which external auditors are invited (and monetarily incentivized) to discover groups on which our current model’s error is suboptimal, and an algorithmic paradigm in which the discovery of such groups is itself posed as an optimization problem. In the bias bounty case, when we say that a model cannot be distinguished from Bayes optimal, we mean indistinguishable by any participant in the bounty program. We provide both theoretical analysis and experimental validation.
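The update operation described above can be sketched in a few lines. This is a minimal illustration only, assuming the deployed predictor is kept as a base model plus a list of (group, model) patches; the names (PointerDecisionList, propose_update) are illustrative, not the authors’ API, and the sketch omits the repair step the full construction uses to restore error on previously identified groups that intersect a new one.

```python
# Minimal sketch (assumed interface, not the authors' implementation).
# A `group` is a callable returning a boolean membership mask; a `model`
# is a callable returning predictions. Later patches take precedence
# where groups overlap.

import numpy as np

class PointerDecisionList:
    def __init__(self, base_model):
        self.base_model = base_model  # initially deployed model f
        self.patches = []             # accepted (group, model) pairs

    def predict(self, X):
        preds = self.base_model(X)
        for group, model in self.patches:
            mask = group(X)                       # membership in the group
            preds = np.where(mask, model(X), preds)
        return preds

    def propose_update(self, group, model, X, y):
        """Accept (group, model) only if it strictly improves error on
        the group. Off the group, predictions are unchanged, so overall
        error cannot increase. (The repair of previously identified
        intersecting groups is omitted here for brevity.)"""
        mask = group(X)
        if not mask.any():
            return False
        current_err = np.mean(self.predict(X[mask]) != y[mask])
        candidate_err = np.mean(model(X[mask]) != y[mask])
        if candidate_err < current_err:
            self.patches.append((group, model))
            return True
        return False
```

In the bounty reading, a participant submits a (group, model) pair and something like propose_update plays the role of the automatic check that decides whether the submission improves on the deployed model and earns a reward.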
Abstract ID: PSA2022754
Submission Type: Topic 1
Affiliations: University of Pennsylvania; University of Pennsylvania; Columbia University

Abstracts With Same Type

| Abstract ID | Abstract Topic | Submission Type | Primary Author |
| --- | --- | --- | --- |
| PSA2022514 | Philosophy of Biology - ecology | Contributed Papers | Dr. Katie Morrow |
| PSA2022405 | Philosophy of Cognitive Science | Contributed Papers | Vincenzo Crupi |
| PSA2022481 | Confirmation and Evidence | Contributed Papers | Dr. Matthew Joss |
| PSA2022440 | Confirmation and Evidence | Contributed Papers | Mr. Adrià Segarra |
| PSA2022410 | Explanation | Contributed Papers | Ms. Haomiao Yu |
| PSA2022504 | Formal Epistemology | Contributed Papers | Dr. Veronica Vieland |
| PSA2022450 | Decision Theory | Contributed Papers | Ms. Xin Hui Yong |
| PSA2022402 | Formal Epistemology | Contributed Papers | Peter Lewis |