
Fair And Reliable Machine Learning For High-Stakes Applications: Approaches Using Information Theory

APA

Dutta, S. (2022). Fair And Reliable Machine Learning For High-Stakes Applications: Approaches Using Information Theory. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/fair-and-reliable-machine-learning-high-stakes-applicationsapproaches-using-information-theory

MLA

Dutta, Sanghamitra. Fair And Reliable Machine Learning For High-Stakes Applications: Approaches Using Information Theory. The Simons Institute for the Theory of Computing, Feb. 11, 2022, https://simons.berkeley.edu/talks/fair-and-reliable-machine-learning-high-stakes-applicationsapproaches-using-information-theory

BibTeX

          @misc{ scivideos_19612,
            doi = {},
            url = {https://simons.berkeley.edu/talks/fair-and-reliable-machine-learning-high-stakes-applicationsapproaches-using-information-theory},
            author = {Dutta, Sanghamitra},
            keywords = {},
            language = {en},
            title = {Fair And Reliable Machine Learning For High-Stakes Applications: Approaches Using Information Theory},
            publisher = {The Simons Institute for the Theory of Computing},
            year = {2022},
            month = {feb},
            note = {Talk 19612; see \url{https://scivideos.org/index.php/Simons-Institute/19612}}
          }
          
Sanghamitra Dutta (JP Morgan)
Talk Number: 19612
Source Repository: Simons Institute

Abstract

How do we make machine learning (ML) algorithms fair and reliable? This is particularly important today as ML enters high-stakes applications such as hiring and education, often adversely affecting people's lives with respect to gender, race, etc., and also violating anti-discrimination laws. When it comes to resolving legal disputes or even informing policies and interventions, merely identifying bias/disparity in a model's decisions is insufficient; we really need to dig deeper into how it arose. For example, disparities in hiring that can be explained by an occupational necessity (code-writing skills for software engineering) may be exempt by law, but a disparity arising from an aptitude test may not be (see Griggs v. Duke Power, 1971). This leads us to a question that bridges the fields of fairness, explainability, and law: How can we identify and explain the sources of disparity in ML models, e.g., did the disparity entirely arise due to the critical occupational necessities? In this talk, I propose a systematic measure of "non-exempt disparity," i.e., the bias that cannot be explained by the occupational necessities. To arrive at a measure of the non-exempt disparity, I adopt a rigorous axiomatic approach that brings together concepts in information theory (in particular, an emerging body of work called Partial Information Decomposition) with causality.
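
The measure itself is developed in the talk, but the flavor of the information-theoretic machinery can be illustrated with a small toy computation. The sketch below (plain Python, standard library only, with hypothetical numbers) builds a toy joint distribution over a sensitive attribute Z, an exempt/critical feature Xc, and a model output Yhat, and computes the Williams-Beer partial information decomposition of the disparity I(Z; Yhat) into a part redundant with Xc, a part unique to Yhat, and a synergistic part. The unique-to-Yhat term is only a rough stand-in for the kind of quantity a non-exempt disparity measure isolates; the talk's actual measure is defined axiomatically and also incorporates causal structure, so treat this strictly as an illustrative decomposition, not the proposed measure.

import math
from itertools import product

# Hypothetical joint distribution p(z, xc, yhat), chosen only for illustration:
# Z is the sensitive attribute, Xc the exempt/critical feature (mildly
# correlated with Z), and the model output Yhat = Xc OR Z, i.e., the model
# leans on the occupational necessity but also (improperly) on Z itself.
p = {}
for z, xc in product([0, 1], repeat=2):
    prob = 0.35 if z == xc else 0.15      # correlation between Z and Xc
    p[(z, xc, z | xc)] = prob

def marginal(dist, idx):
    """Marginal distribution over the listed coordinate indices."""
    out = {}
    for key, prob in dist.items():
        k = tuple(key[i] for i in idx)
        out[k] = out.get(k, 0.0) + prob
    return out

def mutual_info(dist, idx_a, idx_b):
    """I(A; B) in bits between coordinate groups idx_a and idx_b."""
    pa, pb = marginal(dist, idx_a), marginal(dist, idx_b)
    pab = marginal(dist, idx_a + idx_b)
    mi = 0.0
    for key, prob in pab.items():
        ka, kb = key[:len(idx_a)], key[len(idx_a):]
        if prob > 0:
            mi += prob * math.log2(prob / (pa[ka] * pb[kb]))
    return mi

def specific_info(dist, target_idx, source_idx, t_val):
    """Specific information I(T = t; S), the building block of the
    Williams-Beer redundancy I_min."""
    pt = marginal(dist, target_idx)
    ps = marginal(dist, source_idx)
    pts = marginal(dist, target_idx + source_idx)
    total = 0.0
    for s_val in ps:
        joint = pts.get(t_val + s_val, 0.0)
        if joint > 0:
            p_s_given_t = joint / pt[t_val]
            p_t_given_s = joint / ps[s_val]
            total += p_s_given_t * math.log2(p_t_given_s / pt[t_val])
    return total

# Coordinate indices: 0 = Z (target); 1 = Xc and 2 = Yhat (the two sources).
target, src_xc, src_yhat = (0,), (1,), (2,)

# Williams-Beer redundancy: expected minimum specific information over sources.
pz = marginal(p, target)
redundancy = sum(
    pz[t] * min(specific_info(p, target, src_xc, t),
                specific_info(p, target, src_yhat, t))
    for t in pz
)

i_z_xc = mutual_info(p, target, src_xc)
i_z_yhat = mutual_info(p, target, src_yhat)          # total disparity in Yhat
i_z_both = mutual_info(p, target, src_xc + src_yhat)

unique_yhat = i_z_yhat - redundancy   # disparity in Yhat NOT explained by Xc
unique_xc = i_z_xc - redundancy       # disparity traceable to Xc alone
synergy = i_z_both - i_z_xc - i_z_yhat + redundancy

print(f"I(Z; Yhat), total disparity          : {i_z_yhat:.4f} bits")
print(f"redundant with exempt feature Xc     : {redundancy:.4f} bits")
print(f"unique to Yhat ('non-exempt' flavor) : {unique_yhat:.4f} bits")
print(f"unique to Xc                         : {unique_xc:.4f} bits")
print(f"synergy                              : {synergy:.4f} bits")

In this toy setup the model output depends on both the exempt feature and the sensitive attribute, so the decomposition reports a nonzero unique-to-Yhat component alongside the portion of the disparity that is redundant with the occupational necessity; if Yhat were a function of Xc alone, the unique-to-Yhat term would vanish.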