Bringing Order to Chaos: Navigating the Disagreement Problem in Explainable ML
Hima Lakkaraju (Harvard University)
As various post hoc explanation methods are increasingly being leveraged to explain complex models in high-stakes settings, it becomes critical to develop a deeper understanding of if and when the explanations output by these methods disagree with each other, why these disagreements occur, and how to address these disagreements in a rigorous fashion. However, there is little to no research that provides answers to these critical questions. In this talk, I will present some of our recent research which addresses the aforementioned questions. More specifically, I will discuss: i) a novel quantitative framework to formalize the disagreement between state-of-the-art feature attribution based explanation methods (e.g., LIME, SHAP, gradient based methods). I will also touch upon how this framework was constructed by leveraging inputs from interviews and user studies with data scientists who utilize explanation methods in their day-to-day work; ii) an online user study to understand how data scientists resolve disagreements in explanations output by the aforementioned methods; iii) a novel function approximation framework to explain why explanation methods often disagree with each other. I will demonstrate that all the key feature attribution based explanation methods are essentially performing local function approximations, albeit with different loss functions and notions of neighborhood; and iv) a set of guiding principles on how to choose explanation methods and the resulting explanations when they disagree in real-world settings. I will conclude this talk by presenting a brief overview of an open source framework that we recently developed, called OpenXAI, which enables researchers and practitioners to seamlessly evaluate and benchmark both existing and new explanation methods based on various characteristics such as faithfulness, stability, and fairness.
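As a purely illustrative sketch (not the framework from the talk), disagreement between two feature-attribution vectors can be quantified with simple metrics such as top-k feature overlap and sign agreement; the attribution values below are hypothetical.

```python
# Illustrative sketch: two simple disagreement metrics between hypothetical
# feature attributions, e.g. one vector from LIME and one from SHAP.
import numpy as np

def top_k_agreement(attr_a, attr_b, k=3):
    """Fraction of overlap between the top-k most important features."""
    top_a = set(np.argsort(-np.abs(attr_a))[:k])
    top_b = set(np.argsort(-np.abs(attr_b))[:k])
    return len(top_a & top_b) / k

def sign_agreement(attr_a, attr_b):
    """Fraction of features to which both methods assign the same sign."""
    return float(np.mean(np.sign(attr_a) == np.sign(attr_b)))

# Hypothetical attributions for a single prediction over five features.
lime_attr = np.array([0.40, -0.10, 0.05, 0.30, -0.20])
shap_attr = np.array([0.35, 0.15, -0.45, 0.25, -0.05])
print(top_k_agreement(lime_attr, shap_attr, k=3))  # ~0.67: top features differ
print(sign_agreement(lime_attr, shap_attr))        # 0.6: signs flip on 2 of 5
```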
Pipeline Interventions
Juba Ziani (Georgia Tech)
We introduce the pipeline intervention problem, defined by a layered directed acyclic graph and a set of stochastic matrices governing transitions between successive layers. The graph is a stylized model for how people from different populations are presented opportunities, eventually leading to some reward. In our model, individuals are born into an initial position (i.e., some node in the first layer of the graph) according to a fixed probability distribution, and then stochastically progress through the graph according to the transition matrices until they reach a node in the final layer of the graph; each node in the final layer has a reward associated with it. The pipeline intervention problem asks how to best make costly changes to the transition matrices governing people's stochastic transitions through the graph, subject to a budget constraint. We consider two objectives: social welfare maximization, and a fairness-motivated maximin objective that seeks to maximize the value to the population (starting node) with the least expected value. We consider two variants of the maximin objective that turn out to be distinct, depending on whether we demand a deterministic solution or allow randomization. For each objective, we give an efficient approximation algorithm (an additive FPTAS) for constant width networks. We also tightly characterize the "price of fairness" in our setting: the ratio between the highest achievable social welfare and the highest social welfare consistent with a maximin optimal solution. Finally, we show that for polynomial width networks, even approximating the maximin objective to any constant factor is NP-hard, and this holds even for networks with constant depth. This shows that the restriction on the width in our positive results is essential.
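A minimal sketch of the underlying model, with made-up matrices and rewards: expected value per starting population is obtained by pushing the birth node through the layer-to-layer transition matrices; the welfare and maximin objectives are then simple functions of these values.

```python
# Minimal sketch of the layered-pipeline model (illustrative numbers only).
import numpy as np

# Transitions layer 0 -> layer 1 -> layer 2 (final layer); row i is the
# transition distribution out of node i in the earlier layer.
T01 = np.array([[0.7, 0.3],
                [0.2, 0.8]])
T12 = np.array([[0.9, 0.1],
                [0.4, 0.6]])
rewards = np.array([1.0, 0.0])          # reward at each final-layer node

# Expected reward for each starting node (population).
value_per_start = T01 @ T12 @ rewards
print(value_per_start)                   # [0.75, 0.50]

# Social welfare under a birth distribution, and the maximin objective.
birth = np.array([0.5, 0.5])
print("welfare:", birth @ value_per_start)   # 0.625
print("maximin:", value_per_start.min())     # 0.5
```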
On the system loophole of generalized noncontextuality
Victor Gitton ETH Zurich
Generalized noncontextuality is a well-studied notion of classicality that is applicable to a single system, as opposed to Bell locality. It relies on representing operationally indistinguishable procedures identically in an ontological model. However, operational indistinguishability depends on the set of operations that one may use to distinguish two procedures: we refer to this set as the reference of indistinguishability. Thus, whether or not a given experiment is noncontextual depends on the choice of reference. The choices of references appearing in the literature are seldom discussed, but typically relate to a notion of system underlying the experiment. This shift in perspective then begs the question: how should one define the extent of the system underlying an experiment? Our paper primarily aims at exposing this question rather than providing a definitive answer to it. We start by formulating a notion of relative noncontextuality for prepare-and-measure scenarios, which is simply noncontextuality with respect to an explicit reference of indistinguishability. We investigate how verdicts of relative noncontextuality depend on this choice of reference, and in the process introduce the concept of the noncontextuality graph of a prepare-and-measure scenario. We then discuss several proposals that one may appeal to in order to fix the reference to a specific choice, and relate these proposals to different conceptions of what a system really is.
arXiv link: https://arxiv.org/abs/2209.04469
Zoom link: https://pitp.zoom.us/j/97393198973?pwd=dWhCOUJQLytxeXVIVmEvOHRnRHc1QT09
Newton’s Cradle Spectra
Barbara Soda Perimeter Institute for Theoretical Physics
We present broadly applicable nonperturbative results on the behavior of eigenvalues and eigenvectors under the addition of self-adjoint operators and under the multiplication of unitary operators, in finite-dimensional Hilbert spaces. To this end, we decompose these operations into elementary 1-parameter processes in which the eigenvalues move similarly to the spheres in Newton's cradle. As special cases, we recover level repulsion and Cauchy interlacing. We discuss two examples of applications. Applied to adiabatic quantum computing, we obtain new tools to relate algorithmic complexity to computational slowdown through gap narrowing. Applied to information theory, we obtain a generalization of Shannon sampling theory, the theory that establishes the equivalence of continuous and discrete representations of information. The new generalization of Shannon sampling applies to signals of varying information density and finite length.
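As a toy numerical check of one special case recovered by these results (Cauchy interlacing under a rank-one self-adjoint perturbation), the following sketch uses an arbitrary random matrix; it is illustrative only and not code from the work.

```python
# Toy check: eigenvalues of A and of A + v v^T interlace (rank-one PSD update).
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                       # random self-adjoint matrix
v = rng.standard_normal((n, 1))
B = A + v @ v.T                         # rank-one positive update

lam = np.linalg.eigvalsh(A)             # eigenvalues of A, ascending
mu = np.linalg.eigvalsh(B)              # eigenvalues of A + v v^T

# Interlacing: lam[k] <= mu[k] for all k, and mu[k] <= lam[k+1] for k < n-1.
tol = 1e-10
ok = np.all(lam <= mu + tol) and np.all(mu[:-1] <= lam[1:] + tol)
print("interlacing holds:", ok)
```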
Zoom link: https://pitp.zoom.us/j/94120657832?pwd=SmpsWFhhVCtyeXM3a0pVQU9lMGFLdz09
Algorithmic Challenges in Ensuring Fairness at the Time of Decision
Swati Gupta (Georgia Institute of Technology)
Algorithmic decision-making in societal contexts, such as retail pricing, loan administration, and recommendations on online platforms, often involves experimentation with decisions for the sake of learning, which results in perceptions of unfairness among people impacted by these decisions. It is hence necessary to embed appropriate notions of fairness in such decision-making processes. The goal of this paper is to highlight the rich interface between temporal notions of fairness and online decision-making through a novel meta-objective of ensuring fairness at the time of decision. Given some arbitrary comparative fairness notion for static decision-making (e.g., students should pay at most 90% of the general adult price), a corresponding online decision-making algorithm satisfies fairness at the time of decision if the said notion of fairness is satisfied for any entity receiving a decision in comparison to all the past decisions. We show that this basic requirement introduces new methodological challenges in online decision-making. We illustrate the novel approaches necessary to address these challenges in the context of stochastic convex optimization with bandit feedback, under a comparative fairness constraint that imposes lower bounds on the decisions received by entities depending on the decisions received by everyone in the past. The talk will showcase some novel research opportunities in online decision-making stemming from temporal fairness concerns. This is based on joint work with Vijay Kamble and Jad Salem.
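A minimal sketch of the meta-objective under one plausible reading of the 90% student-price example from the abstract (not the talk's algorithm): a wrapper commits a new decision only if it is comparatively fair against every past decision.

```python
# Illustrative sketch: commit a decision only if it is fair at the time of
# decision relative to all past decisions, using the 90% student-price rule.
class FairAtDecisionTime:
    def __init__(self, ratio: float = 0.9):
        self.ratio = ratio
        self.history: list[tuple[str, float]] = []   # (group, price) pairs

    def is_fair_now(self, group: str, price: float) -> bool:
        adults = [p for g, p in self.history if g == "adult"]
        students = [p for g, p in self.history if g == "student"]
        if group == "student":
            # The student must pay at most 90% of any adult price seen so far.
            return all(price <= self.ratio * p for p in adults)
        # Any student charged so far must have paid at most 90% of this price.
        return all(s <= self.ratio * price for s in students)

    def decide(self, group: str, price: float) -> bool:
        if not self.is_fair_now(group, price):
            return False                      # reject / re-optimize the price
        self.history.append((group, price))
        return True

pricing = FairAtDecisionTime()
print(pricing.decide("adult", 100.0))   # True
print(pricing.decide("student", 95.0))  # False: 95 > 0.9 * 100
print(pricing.decide("student", 85.0))  # True
```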
Improving Refugee Resettlement
Alex Teytelboym (University of Oxford)
The current refugee resettlement system is inefficient because there are too few resettlement places and because refugees are resettled to locations where they might not thrive. I will overview some recent efforts to improve the employment outcomes of refugees arriving in the United States. I will then describe some recent efforts to incorporate refugees' preferences in the processes that match them to locations.
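As a toy illustration of preference-based assignment with location capacities (not the mechanisms discussed in the talk), here is a simple serial-dictatorship pass over hypothetical families and locations.

```python
# Toy sketch: assign refugees to capacitated locations by stated preferences,
# processing refugees in a fixed priority order (serial dictatorship).
def serial_dictatorship(preferences, capacity):
    """preferences: {refugee: [locations in preference order]}
       capacity:    {location: remaining places}"""
    assignment = {}
    for refugee, prefs in preferences.items():
        for loc in prefs:
            if capacity.get(loc, 0) > 0:
                assignment[refugee] = loc
                capacity[loc] -= 1
                break
    return assignment

prefs = {
    "family_1": ["Boise", "Dallas"],
    "family_2": ["Boise", "Seattle"],
    "family_3": ["Dallas", "Boise"],
}
print(serial_dictatorship(prefs, {"Boise": 1, "Dallas": 1, "Seattle": 1}))
# {'family_1': 'Boise', 'family_2': 'Seattle', 'family_3': 'Dallas'}
```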
Locality bounds on quantum dynamics with measurements
In non-relativistic systems, the Lieb-Robinson Theorem imposes an emergent speed limit (independent of the relativistic limit set by c), establishing locality under unitary quantum dynamics and constraining the time needed to perform useful quantum tasks. We have extended the Lieb-Robinson Theorem to quantum dynamics with measurements. In contrast to the general expectation that measurements can arbitrarily violate spatial locality, we find at most an (M+1)-fold enhancement to the speed of quantum information, provided the outcomes of M local measurements are known; this holds even when classical communication is instantaneous. Our bound is asymptotically optimal and is saturated by existing measurement-based protocols (the "quantum repeater"). Our bound tightly constrains the resource requirements for quantum computation, error correction, teleportation, generating entangled resource states (Bell, GHZ, W, and spin-squeezed states), and preparing SPT states from short-range entangled states.
Zoom Link: https://pitp.zoom.us/j/95640053536?pwd=Z05oWlFRSEFTZWFRK2dwcHdsWlBBdz09
QFT2 - Quantum Electrodynamics - Afternoon Lecture
Cliff Burgess McMaster University
This course uses quantum electrodynamics (QED) as a vehicle for covering several more advanced topics within quantum field theory, and so is aimed at graduate students who have already had an introductory course on quantum field theory. The topics we hope to cover include: gauge invariance for massless spin-1 particles from special relativity and quantum mechanics; Ward identities; photon scattering and loops; UV and IR divergences and why they are handled differently; effective theories and the renormalization group; and anomalies.
Academia, Government, & Industry in the 2020 Disclosure Avoidance System
Philip LeClerc (U.S. Census Bureau)
The U.S. Census Bureau adopted formally private methods to protect the principal products released based on the 2020 Decennial Census of Population and Housing. These include the Public Law 94-171 Redistricting Data Summary File (already released), the Demographic and Housing Characteristics File (DHC; in its final phase of privacy budget tuning), as well as the Detailed Demographic and Housing Characteristics File and Supplemental Demographic and Housing Characteristics File releases (in earlier phases of design, testing, and planning). Additional, smaller product releases based on the 2020 confidential data are also expected, with sub-state releases currently required to use differentially private methods. In this talk, I describe the design and a few of the major technical issues encountered in developing the TopDown algorithm (TDA), the principal formally private algorithm used to protect the PL94-171 release and expected to be used to protect the DHC release. TDA was designed by a joint team of academic, contractor, and government employees; I discuss how this collaboration worked, what worked well and what was challenging, and briefly touch on the role of industry in algorithm design outside of TDA. I close with some general thoughts on ways to help form productive collaborations between academic, government, and industry expertise in formally private methods.
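For readers unfamiliar with formally private methods, the sketch below shows the basic primitive of adding two-sided geometric noise to a histogram of counts; the values are made up, and the actual TopDown algorithm is far more elaborate (hierarchical queries plus optimization-based post-processing for consistency).

```python
# Toy illustration of a formally private counting primitive: symmetric
# (two-sided) geometric noise added to hypothetical block-level counts.
import numpy as np

def two_sided_geometric(rng, scale, size):
    """Discrete noise with P(k) proportional to exp(-|k| / scale)."""
    # The difference of two i.i.d. geometric variables is two-sided geometric.
    p = 1 - np.exp(-1.0 / scale)
    return rng.geometric(p, size) - rng.geometric(p, size)

rng = np.random.default_rng(2020)
true_counts = np.array([1204, 87, 450, 3])    # hypothetical block counts
epsilon = 0.5                                  # assumed per-query privacy budget
noisy_counts = true_counts + two_sided_geometric(rng, scale=1 / epsilon,
                                                 size=true_counts.shape)
print(noisy_counts)   # noisy measurements; post-processing would follow
```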
The Supersymmetric Index and its Holographic Interpretation
Ohad Mamroud Weizmann Institute of Science
The supersymmetric index of N=4 SU(N) Super Yang-Mills is a well-studied quantity. In 2104.13932, using the Bethe Ansatz approach, we analyzed a family of contributions to it. In the large N limit, each term in this family has a holographic interpretation: it matches the contribution of a different Euclidean black hole to the partition function of the dual gravitational theory. By taking into account non-perturbative contributions (wrapped D3-branes, similar to Euclidean giant gravitons), we further showed a one-to-one match between the contributions of the gravitational saddles and this family of contributions to the index, both at the perturbative and non-perturbative levels. I'll end with newer results concerning the form of these terms at finite N, new solutions to the Bethe Ansatz equations (i.e., additional contributions to the index beyond the ones described in that paper), and some ongoing effort to classify all the solutions to these equations.
Zoom Link: https://pitp.zoom.us/j/95037315617?pwd=ell4WExrSXJ4YUVyaXAzRGJjdjYxUT09
A Kerfuffle: Differential Privacy and the 2020 Census
Aloni Cohen (Boston University)
Kerfuffle (/kərˈfəfəl/): a commotion or fuss, especially one caused by conflicting views. "There was a kerfuffle over the use of differential privacy for the 2020 Census." This talk will give a too-brief introduction to some of the issues that played out in tweets, court proceedings, and academic preprints. We'll also discuss approaches and challenges to understanding the effect of differential privacy on downstream policy.
Stress as a Background in Dark Matter Direct Detection Experiments and Source of Decoherence in Superconducting Qubits
Roger Romani University of California, Berkeley
With no hints of dark matter in the "classical WIMP" region of parameter space, experimentalists have begun searching in earnest for low mass (MeV-GeV scale) dark matter. However, efforts to probe this region of parameter space have been hindered by an unexpected and mysterious source of background events, dubbed the "low energy excess." Recently, mechanical stress has been shown to create a "low energy excess"-like source of events, and a microphysical picture of how stress creates this background is emerging. In addition to providing a path forward for low mass dark matter searches, these results may address several outstanding problems limiting the performance of superconducting quantum computers.
Zoom Link: https://pitp.zoom.us/j/92147233613?pwd=RW5PaUNUZlE3SUNnTlZHaVFrdnV3dz09