
Statistics Department Seminar Series: Emily Diana, PhD Candidate, Department of Statistics and Data Science, Wharton School, University of Pennsylvania

"Addressing Algorithmic Bias and Disclosiveness: Minimax Group Fairness and Multiaccurate Proxies for Redacted Features"
Tuesday, November 29, 2022
4:00-5:00 PM
340 West Hall
Abstract: While data science enables rapid societal advancement, deferring decisions to machines does not automatically avoid egregious equity or privacy violations. Without safeguards at every stage of the scientific process, from data collection to algorithm design to model deployment, machine learning models can easily inherit or amplify biases and vulnerabilities already present in society. My research focuses on explicitly encoding ethical norms into algorithms and on constructing frameworks that ensure statistics and machine learning methods are deployed in a socially responsible manner. In particular, I develop theoretically rigorous and empirically verified algorithms to mitigate automated bias and protect individual privacy.

I will highlight this work through two main contributions. In the first, I present a new oracle-efficient, convergent algorithm that provably achieves minimax group fairness (fairness measured by worst-case outcomes across groups) in general settings. In the second, I describe a framework for producing a proxy for a sensitive attribute, which allows one to train a fair model even when the original sensitive features are redacted or unavailable.
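The minimax objective in the first contribution, minimizing the worst group's loss rather than the average loss, is often approached through no-regret game dynamics: an adversary upweights the groups currently suffering the highest error while the learner best-responds with a weighted learning step. The sketch below illustrates that general idea in Python; the function name, the scikit-learn base learner, and all parameter choices are illustrative assumptions rather than the speaker's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def minimax_fair_train(X, y, groups, rounds=100, eta=0.5):
    """Approximate min_h max_g loss_g(h) with multiplicative-weights
    dynamics: the adversary reweights groups toward the worst-off one,
    and the learner answers with a weighted ERM step (sketch only)."""
    group_ids = np.unique(groups)                 # sorted group labels
    w = np.ones(len(group_ids)) / len(group_ids)  # adversary's weights
    models = []
    for _ in range(rounds):
        # Each example inherits its group's current weight.
        sample_w = w[np.searchsorted(group_ids, groups)]
        h = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=sample_w)
        models.append(h)
        # Exponentially upweight groups with high empirical error.
        errs = np.array([1.0 - h.score(X[groups == g], y[groups == g])
                         for g in group_ids])
        w *= np.exp(eta * errs)
        w /= w.sum()
    # The uniform mixture over the trained models acts as a randomized
    # classifier whose worst-group error tracks the minimax value.
    return models
```

Treating the weighted learner as a black box is what makes this style of algorithm "oracle-efficient": the fairness guarantee reduces to repeated calls to a standard learning subroutine.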
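The second contribution can be pictured as a multiaccuracy-style post-processing loop: a party who still holds the sensitive attribute fits a proxy whose residual is nearly uncorrelated with every auditing function in some class, after which downstream learners can substitute the proxy for the redacted feature. The following sketch assumes a binary attribute z in {0, 1} and a list of auditing functions mapping features to scores; all names and parameters are hypothetical.

```python
import numpy as np

def multiaccurate_proxy(X, z, audit_fns, steps=50, eta=0.1, tol=1e-3):
    """Fit a proxy zhat(x) for a redacted binary attribute z so that the
    residual z - zhat(x) is nearly uncorrelated with every function in
    audit_fns (a multiaccuracy condition); boosting-style sketch only."""
    zhat = np.full(len(z), z.mean())  # start from the base rate
    for _ in range(steps):
        residual = z - zhat
        # Find the auditing function most correlated with the residual.
        corrs = np.array([np.mean(residual * c(X)) for c in audit_fns])
        i = int(np.argmax(np.abs(corrs)))
        if abs(corrs[i]) < tol:
            break  # approximately multiaccurate w.r.t. the class
        # Nudge the proxy along the offending function; keep it in [0, 1].
        zhat = np.clip(zhat + eta * np.sign(corrs[i]) * audit_fns[i](X), 0.0, 1.0)
    return zhat
```

A downstream learner could then plug the returned zhat into group-fairness penalties or constraints in place of the unavailable attribute.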

Full-text versions of the two papers are available at https://dl.acm.org/doi/10.1145/3461702.3462523 (“Minimax Group Fairness: Algorithms and Experiments”) and https://dl.acm.org/doi/10.1145/3531146.3533180 (“Multiaccurate Proxies for Downstream Fairness”).
Building: West Hall
Event Type: Workshop / Seminar
Tags: seminar
Source: Happening @ Michigan from Department of Statistics, Department of Statistics Seminar Series