Supply-Side Equilibria in Recommender Systems
Jacob Steinhardt (UC Berkeley)
Digital recommender systems such as Spotify and Netflix affect not only consumer behavior but also producer incentives: producers seek to supply content that will be recommended by the system. But what content will be produced? To understand this, we model users and content as D-dimensional vectors, and assume the system recommends the content that has the highest dot product with each user. In contrast to traditional economic models, here the producer decision space is high-dimensional and the user base is heterogeneous. This gives rise to new qualitative phenomena at equilibrium: the formation of genres, and the possibility of positive profit at equilibrium. We characterize these phenomena in terms of the geometry of the users and the structure of producer costs. At a conceptual level, our work serves as a starting point to investigate how recommender systems shape supply-side competition between producers. Joint work with Meena Jagadeesan and Nikhil Garg.
What Really Matters for Fairness in Machine Learning: Delayed Impact and Other Desiderata
Lydia Liu (Cornell University)
From education to lending, consequential decisions in society increasingly rely on data-driven algorithms. Yet the long-term impact of algorithmic decision making is largely ill-understood, and there exist serious challenges to ensuring equitable benefits, in theory and in practice. While the subject of algorithmic fairness has received much attention, algorithmic fairness criteria have significant limitations as tools for promoting equitable benefits. In this talk, we review various fairness desiderata in machine learning and when they may be in conflict. We then introduce the notion of delayed impact: the welfare impact of decision-making algorithms on populations after decision outcomes are observed, motivated, for example, by the change in average credit scores after a new loan approval algorithm is applied. We demonstrate that several statistical criteria for fair machine learning, if applied as a constraint to decision-making, can result in harm to the welfare of a disadvantaged population. We end by considering future directions for fairness in machine learning that evince a holistic and interdisciplinary approach.
Predictive Modeling in Healthcare – Special Considerations
Noa Dagan (Clalit Health Services)
Prediction models in healthcare are being used for many tasks. However, the use of these models for medical decision-making warrants special considerations that are less critical when prediction models are used in other domains. Two of these considerations, which we will discuss in the talk, are fairness and explainability. We will discuss them from the viewpoint of a large healthcare organization that makes ubiquitous, daily use of prediction models. We will also describe how academic collaborations can expand our toolbox for handling these issues in practice.
Introducing Perimeter's Strategic EDI Plan
Robert Myers Perimeter Institute for Theoretical Physics
Over the last decade, there have been many Perimeter efforts in the realm of EDI, and they have unquestionably enhanced the Institute’s culture. At the same time, some of these efforts have illuminated areas where we can do more, and others remain to be addressed.
In Perimeter’s short life, we’ve built a unique institution, with a culture characterized by intellectual fearlessness and excellence. Yet we can do even better. Our culture is connected to our research. We’re here to make breakthroughs in our understanding of our universe – and breakthroughs are made by thinking in new ways. We can’t afford to leave any great thinkers, or any great ideas, behind.
In 2020, we embarked on a project to develop a coherent, concrete strategic plan to guide Perimeter’s efforts in EDI, in partnership with experts at Shift Health and the Laurier Centre for Women in Science. All members of the Perimeter community have been consulted to ensure that the final strategy is reflective of our whole community.
Our actions to date are a step in an intentional and comprehensive effort to make Perimeter an institute where everyone can thrive and find a sense of belonging.
Zoom link: https://pitp.zoom.us/j/93399374837?pwd=QlBTSnluRk84L2x0eE0zYXlGQ0JFZz09
Bringing Order to Chaos: Navigating the Disagreement Problem in Explainable ML
Hima Lakkaraju (Harvard University)
As various post hoc explanation methods are increasingly being leveraged to explain complex models in high-stakes settings, it becomes critical to develop a deeper understanding of if and when the explanations output by these methods disagree with each other, why these disagreements occur, and how to address them in a rigorous fashion. However, there is little to no research that answers these critical questions. In this talk, I will present some of our recent research that addresses them. More specifically, I will discuss (i) a novel quantitative framework to formalize the disagreement between state-of-the-art feature attribution based explanation methods (e.g., LIME, SHAP, and gradient-based methods); I will also touch on how this framework was constructed by leveraging inputs from interviews and user studies with data scientists who utilize explanation methods in their day-to-day work; (ii) an online user study to understand how data scientists resolve disagreements in explanations output by the aforementioned methods; (iii) a novel function approximation framework to explain why explanation methods often disagree with each other; I will demonstrate that all the key feature attribution based explanation methods are essentially performing local function approximations, albeit with different loss functions and notions of neighborhood; and (iv) a set of guiding principles on how to choose explanation methods and resulting explanations when they disagree in real-world settings. I will conclude this talk by presenting a brief overview of an open source framework that we recently developed called Open-XAI, which enables researchers and practitioners to seamlessly evaluate and benchmark both existing and new explanation methods based on various characteristics such as faithfulness, stability, and fairness.
Pipeline Interventions
Juba Ziani (Georgia Tech)
We introduce the pipeline intervention problem, defined by a layered directed acyclic graph and a set of stochastic matrices governing transitions between successive layers. The graph is a stylized model for how people from different populations are presented opportunities, eventually leading to some reward. In our model, individuals are born into an initial position (i.e. some node in the first layer of the graph) according to a fixed probability distribution, and then stochastically progress through the graph according to the transition matrices, until they reach a node in the final layer of the graph; each node in the final layer has a reward associated with it. The pipeline intervention problem asks how to best make costly changes to the transition matrices governing people's stochastic transitions through the graph, subject to a budget constraint. We consider two objectives: social welfare maximization, and a fairness-motivated maximin objective that seeks to maximize the value to the population (starting node) with the least expected value. We consider two variants of the maximin objective that turn out to be distinct, depending on whether we demand a deterministic solution or allow randomization. For each objective, we give an efficient approximation algorithm (an additive FPTAS) for constant-width networks. We also tightly characterize the "price of fairness" in our setting: the ratio between the highest achievable social welfare and the highest social welfare consistent with a maximin optimal solution. Finally, we show that for polynomial-width networks, approximating the maximin objective to any constant factor is NP-hard, even for networks of constant depth. This shows that the restriction on the width in our positive results is essential.
On the system loophole of generalized noncontextuality
Victor Gitton ETH Zurich
Generalized noncontextuality is a well-studied notion of classicality that is applicable to a single system, as opposed to Bell locality. It relies on representing operationally indistinguishable procedures identically in an ontological model. However, operational indistinguishability depends on the set of operations that one may use to distinguish two procedures: we refer to this set as the reference of indistinguishability. Thus, whether or not a given experiment is noncontextual depends on the choice of reference. The choices of references appearing in the literature are seldom discussed, but typically relate to a notion of system underlying the experiment. This shift in perspective then begs the question: how should one define the extent of the system underlying an experiment? Our paper primarily aims at exposing this question rather than providing a definitive answer to it. We start by formulating a notion of relative noncontextuality for prepare-and-measure scenarios, which is simply noncontextuality with respect to an explicit reference of indistinguishability. We investigate how verdicts of relative noncontextuality depend on this choice of reference, and in the process introduce the concept of the noncontextuality graph of a prepare-and-measure scenario. We then discuss several proposals that one may appeal to in order to fix the reference to a specific choice, and relate these proposals to different conceptions of what a system really is.
arXiv link: https://arxiv.org/abs/2209.04469
Zoom link: https://pitp.zoom.us/j/97393198973?pwd=dWhCOUJQLytxeXVIVmEvOHRnRHc1QT09
Newton’s Cradle Spectra
Barbara Soda Perimeter Institute for Theoretical Physics
We present broadly applicable nonperturbative results on the behavior of eigenvalues and eigenvectors under the addition of self-adjoint operators and under the multiplication of unitary operators, in finite-dimensional Hilbert spaces. To this end, we decompose these operations into elementary 1-parameter processes in which the eigenvalues move similarly to the spheres in Newton's cradle. As special cases, we recover level repulsion and Cauchy interlacing. We discuss two examples of applications. Applied to adiabatic quantum computing, we obtain new tools to relate algorithmic complexity to computational slowdown through gap narrowing. Applied to information theory, we obtain a generalization of Shannon sampling theory, the theory that establishes the equivalence of continuous and discrete representations of information. The new generalization of Shannon sampling applies to signals of varying information density and finite length.
Zoom link: https://pitp.zoom.us/j/94120657832?pwd=SmpsWFhhVCtyeXM3a0pVQU9lMGFLdz09
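The Cauchy interlacing special case mentioned in the abstract can be checked numerically. The sketch below (our illustrative choice of a random real symmetric matrix and a rank-one positive perturbation, not an example from the talk) verifies that the eigenvalues of the perturbed operator interlace those of the original:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random self-adjoint operator on a finite-dimensional Hilbert space.
n = 6
M = rng.standard_normal((n, n))
A = (M + M.T) / 2

# A -> A + v v^T is an elementary rank-one positive perturbation, one of
# the 1-parameter processes along which eigenvalues move like the spheres
# in Newton's cradle.
v = rng.standard_normal(n)
eigs_A = np.linalg.eigvalsh(A)                 # ascending order
eigs_B = np.linalg.eigvalsh(A + np.outer(v, v))

# Cauchy interlacing: lam_i(A) <= lam_i(A + vv^T) <= lam_{i+1}(A).
tol = 1e-10
assert np.all(eigs_A <= eigs_B + tol)
assert np.all(eigs_B[:-1] <= eigs_A[1:] + tol)
```

Here the eigenvalues shift upward (the perturbation is positive semidefinite) but each can move only as far as its neighbor's original position, which is the "level repulsion" picture the talk recovers as a special case.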
Algorithmic Challenges in Ensuring Fairness at the Time of Decision
Swati Gupta (Georgia Institute of Technology)
Algorithmic decision-making in societal contexts, such as retail pricing, loan administration, and recommendations on online platforms, often involves experimentation with decisions for the sake of learning, which results in perceptions of unfairness among people impacted by these decisions. It is hence necessary to embed appropriate notions of fairness in such decision-making processes. The goal of this paper is to highlight the rich interface between temporal notions of fairness and online decision-making through a novel meta-objective of ensuring fairness at the time of decision. Given some arbitrary comparative fairness notion for static decision-making (e.g., students should pay at most 90% of the general adult price), a corresponding online decision-making algorithm satisfies fairness at the time of decision if the said notion of fairness is satisfied for any entity receiving a decision in comparison to all past decisions. We show that this basic requirement introduces new methodological challenges in online decision-making. We illustrate the novel approaches necessary to address these challenges in the context of stochastic convex optimization with bandit feedback, under a comparative fairness constraint that imposes lower bounds on the decisions received by entities depending on the decisions received by everyone in the past. The talk will showcase some novel research opportunities in online decision-making stemming from temporal fairness concerns. This is based on joint work with Vijay Kamble and Jad Salem.
Improving Refugee Resettlement
Alex Teytelboym (University of Oxford)
The current refugee resettlement system is inefficient because there are too few resettlement places and because refugees are resettled to locations where they might not thrive. I will give an overview of some recent efforts to improve the employment outcomes of refugees arriving in the United States. I will then describe some recent efforts to incorporate refugees' preferences into the processes that match them to locations.
Locality bounds on quantum dynamics with measurements
In non-relativistic systems, the Lieb-Robinson Theorem imposes an emergent speed limit (independent of the relativistic limit set by c), establishing locality under unitary quantum dynamics and constraining the time needed to perform useful quantum tasks. We have extended the Lieb-Robinson Theorem to quantum dynamics with measurements. In contrast to the general expectation that measurements can arbitrarily violate spatial locality, we find at most an (M+1)-fold enhancement to the speed of quantum information, provided the outcomes of M local measurements are known; this holds even when classical communication is instantaneous. Our bound is asymptotically optimal, and is saturated by existing measurement-based protocols (the "quantum repeater"). Our bound tightly constrains the resource requirements for quantum computation, error correction, teleportation, generating entangled resource states (Bell, GHZ, W, and spin-squeezed states), and preparing SPT states from short-range entangled states.
Zoom Link: https://pitp.zoom.us/j/95640053536?pwd=Z05oWlFRSEFTZWFRK2dwcHdsWlBBdz09
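In symbols (our schematic notation, not necessarily the authors'): if $v_{\mathrm{LR}}$ denotes the Lieb-Robinson velocity of the underlying unitary dynamics, the abstract's headline result bounds the effective speed of quantum information $v_M$, once the outcomes of $M$ local measurements are known, by
\[
v_M \le (M+1)\, v_{\mathrm{LR}} .
\]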
QFT2 - Quantum Electrodynamics - Afternoon Lecture
Cliff Burgess McMaster University
This course uses quantum electrodynamics (QED) as a vehicle for covering several more advanced topics within quantum field theory, and so is aimed at graduate students who have already taken an introductory course on quantum field theory. Topics we hope to cover include: gauge invariance for massless spin-1 particles from special relativity and quantum mechanics; Ward identities; photon scattering and loops; UV and IR divergences and why they are handled differently; effective theories and the renormalization group; and anomalies.
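As a concrete instance of one listed topic: writing a QED amplitude with an external photon of momentum $k$ and polarization $\epsilon_\mu(k)$ as $\mathcal{M} = \epsilon_\mu(k)\,\mathcal{M}^\mu(k)$, the Ward identity states that the amplitude vanishes when the polarization is replaced by the photon momentum,
\[
k_\mu \mathcal{M}^\mu(k) = 0 ,
\]
which expresses the gauge invariance of the massless spin-1 description at the level of scattering amplitudes.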