Search results from Simons Institute
Results 133–144 of 384
Causal Inference With Corrupted Data: Measurement Error, Missing Values, Discretization, And Differential Privacy
Rahul Singh (MIT)
Even the most carefully curated economic data sets have variables that are noisy, missing, discretized, or privatized. The standard workflow for empirical research involves data cleaning followed by data analysis that typically ignores the bias and variance consequences of data cleaning. We formulate a semiparametric model for causal inference with corrupted data to encompass both data cleaning and data analysis. We propose a new end-to-end procedure for data cleaning, estimation, and inference with data cleaning-adjusted confidence intervals. We prove consistency, Gaussian approximation, and semiparametric efficiency for our estimator of the causal parameter by finite sample arguments. The rate of Gaussian approximation is $n^{-1/2}$ for global parameters such as average treatment effect, and it degrades gracefully for local parameters such as heterogeneous treatment effect for a specific demographic. Our key assumption is that the true covariates are approximately low rank. In our analysis, we provide nonasymptotic theoretical contributions to matrix completion, statistical learning, and semiparametric statistics. We verify the coverage of the data cleaning-adjusted confidence intervals in simulations calibrated to resemble differential privacy as implemented in the 2020 US Census.
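As a rough illustration of the kind of pipeline this abstract describes (and not the authors' estimator), the sketch below denoises a corrupted covariate matrix by truncated SVD, leaning on the approximately-low-rank assumption, then computes a simple regression-adjusted average treatment effect on the cleaned covariates. The function names, rank parameter, and simulated data are all invented for illustration.

```python
import numpy as np

def clean_low_rank(X_noisy, rank):
    """Toy data-cleaning step: zero-fill missing entries, then truncated SVD.
    Only sensible under the approximately-low-rank assumption on true covariates."""
    X = np.nan_to_num(X_noisy)  # crude imputation of missing values
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def ate_regression_adjusted(X, t, y):
    """Plug-in ATE via separate linear outcome models for treated and control."""
    Xb = np.column_stack([np.ones(len(X)), X])
    beta1, *_ = np.linalg.lstsq(Xb[t == 1], y[t == 1], rcond=None)
    beta0, *_ = np.linalg.lstsq(Xb[t == 0], y[t == 0], rcond=None)
    return float(np.mean(Xb @ beta1 - Xb @ beta0))

rng = np.random.default_rng(0)
n, d, r = 500, 20, 3
X_true = rng.normal(size=(n, r)) @ rng.normal(size=(r, d))      # low-rank truth
X_obs = X_true + rng.normal(scale=0.5, size=(n, d))             # measurement error
t = rng.binomial(1, 0.5, size=n)
y = X_true @ rng.normal(size=d) + 2.0 * t + rng.normal(size=n)  # true ATE = 2

X_clean = clean_low_rank(X_obs, rank=r)
print("ATE estimate:", ate_regression_adjusted(X_clean, t, y))
```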
Causal Inference And Data-Driven Decision-Making
Angela Zhou (UC Berkeley)
I will provide a brief overview of previous work on credible inference in the context of causal inference and statistical machine learning, and discuss ongoing directions on interfaces of causal inference embedded in complex operational systems.
Information, Learning, And Incentive Design For Societal Networks
Manxi Wu (UC Berkeley)
Today's data-rich platforms are reshaping the operations of societal networks by providing information, recommendations, and matching services to a large number of users. How can we model the behavior of human agents in response to services provided by these platforms, and develop tools to improve the aggregate outcomes in a socially desirable manner? In this talk, I will briefly summarize our works that tackle this question from three aspects: 1) Game-theoretic analysis of the impact of information platforms (navigation apps) on the strategic behavior and learning processes of travelers in uncertain networks; 2) Market mechanism design for efficient carpooling and toll pricing in the presence of autonomous driving technology; 3) Security analysis and resource allocation for robustness under random or adversarial disruptions.
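The first aspect (game-theoretic analysis of routing behavior) is commonly illustrated with the classic Pigou congestion example, a textbook stand-in not taken from the talk: at the selfish (Wardrop) equilibrium all traffic piles onto the congestible route, while a planner would split it.

```python
import numpy as np

# Pigou network: unit traffic splits between route A with latency l_A(x) = x
# (x = share of traffic on A) and route B with constant latency l_B = 1.
x = np.linspace(0, 1, 1001)
avg_latency = x * x + (1 - x) * 1.0  # average latency under split x

# Wardrop equilibrium: no traveler can switch and do better -> everyone on A.
print("equilibrium avg latency:", 1.0)
# Social optimum: minimize the average latency over the split.
i = int(np.argmin(avg_latency))
print(f"optimal split x* = {x[i]:.2f}, avg latency = {avg_latency[i]:.3f}")
# Output: x* = 0.50, avg latency = 0.750, i.e., a price of anarchy of 4/3.
```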
Fair And Reliable Machine Learning For High-Stakes Applications: Approaches Using Information Theory
Sanghamitra Dutta (JP Morgan)
How do we make machine learning (ML) algorithms fair and reliable? This is particularly important today as ML enters high-stakes applications such as hiring and education, often adversely affecting people's lives with respect to gender, race, etc., and also violating anti-discrimination laws. When it comes to resolving legal disputes or even informing policies and interventions, only identifying bias/disparity in a model's decision is insufficient. We really need to dig deeper into how it arose. E.g., disparities in hiring that can be explained by an occupational necessity (code-writing skills for software engineering) may be exempt by law, but the disparity arising due to an aptitude test may not be (Ref: Griggs v. Duke Power '71). This leads us to a question that bridges the fields of fairness, explainability, and law: How can we identify and explain the sources of disparity in ML models, e.g., did the disparity entirely arise due to the critical occupational necessities? In this talk, I propose a systematic measure of "non-exempt disparity," i.e., the bias which cannot be explained by the occupational necessities. To arrive at a measure for the non-exempt disparity, I adopt a rigorous axiomatic approach that brings together concepts in information theory (in particular, an emerging body of work called Partial Information Decomposition) with causality.
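The talk's measure is built from Partial Information Decomposition, which is more involved than anything shown here. As a loose proxy only, the sketch below contrasts the mutual information I(Z; Yhat) with the conditional mutual information I(Z; Yhat | necessity): when the decision is driven entirely by the occupational necessity, the conditional term vanishes. The data-generating process and variable names are invented for illustration.

```python
import numpy as np
from collections import Counter

def entropy(items):
    n = len(items)
    return -sum((c / n) * np.log2(c / n) for c in Counter(items).values())

def mutual_info(a, b):
    return entropy(list(a)) + entropy(list(b)) - entropy(list(zip(a, b)))

def conditional_mi(a, b, c):
    # I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)
    return (entropy(list(zip(a, c))) + entropy(list(zip(b, c)))
            - entropy(list(zip(a, b, c))) - entropy(list(c)))

rng = np.random.default_rng(1)
z = rng.integers(0, 2, 2000)                             # protected attribute
skill = (rng.random(2000) < 0.5 + 0.2 * z).astype(int)   # "occupational necessity"
yhat = skill                                             # decision driven only by skill
print("I(Z; Yhat)         =", round(mutual_info(z, yhat), 3))        # disparity exists
print("I(Z; Yhat | skill) =", round(conditional_mi(z, yhat, skill), 3))  # ~0: explainable
```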
Mechanism Design Via Machine Learning: Overfitting, Incentives, and Privacy
Ellen Vitercik (UC Berkeley)
Machine learning is increasingly being used for mechanism design, with applications such as price optimization on online marketplaces and ad auction design. In this talk, I will give an overview of my research on mechanism design via machine learning, touching on statistical problems such as overfitting, incentive problems, and privacy preservation.
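As a toy instance of the statistical side (not a result from this line of work), the sketch below learns a posted price from buyer-value samples by empirical revenue maximization: with few samples the learned price overfits the sample, and the gap to the true optimal revenue shrinks as n grows. The uniform value distribution and price grid are assumptions of the example.

```python
import numpy as np

def empirical_best_price(values, grid):
    """Posted-price revenue on the sample: rev(p) = p * (fraction of values >= p)."""
    revs = [p * np.mean(values >= p) for p in grid]
    return grid[int(np.argmax(revs))]

rng = np.random.default_rng(0)
grid = np.linspace(0.01, 1.0, 100)
# Buyer values ~ Uniform[0,1]: true revenue of price p is p*(1-p), optimum p = 0.5.
for n in (10, 100, 10_000):
    sample = rng.random(n)
    p_hat = empirical_best_price(sample, grid)
    print(f"n={n:6d}  learned price={p_hat:.2f}  true revenue={p_hat * (1 - p_hat):.3f}")
```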
Efficient Universal Estimators For Symmetric Property Estimation
Kirankumar Shiragur (Stanford University)
Given i.i.d. samples from an unknown distribution, estimating its symmetric properties is a classical problem in information theory, statistics and computer science. Symmetric properties are those that are invariant to label permutations and include popular functionals such as entropy and support size. Early work on this question dates back to the 1940s when R. A. Fisher and A. S. Corbet studied this to estimate the number of distinct butterfly species in Malaysia. Over the past decade, this question has received great attention leading to computationally efficient and sample optimal estimators for various symmetric properties. All these estimators were property-specific and the design of a single estimator that is sample optimal for any symmetric property remained a central open problem in the area. In a recent breakthrough, Acharya et al. showed that computing an approximate profile maximum likelihood (PML), a distribution that maximizes the likelihood of the observed multiset of frequencies, allows statistically optimal estimation of any symmetric property. However, since its introduction by Orlitsky et al. in 2004, efficient computation of an approximate PML remained a well known open problem. In our work, we resolved this question by designing the first efficient algorithm for computing an approximate PML distribution. In addition, our investigations have led to a deeper understanding of various computational and statistical aspects of PML and universal estimators.
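The profile, the object whose likelihood PML maximizes, is just the multiset of symbol frequencies. A minimal sketch of computing it (this is not the paper's algorithm, which approximately maximizes the profile likelihood):

```python
from collections import Counter

def profile(sample):
    """Profile of a sample: how many symbols appear exactly k times, per k.

    Symmetric properties (entropy, support size, ...) depend on the sample
    only through this object, which is why one PML-based estimator can be
    universal across all such properties."""
    freqs = Counter(sample).values()   # frequency of each distinct symbol
    return dict(sorted(Counter(freqs).items()))

print(profile("abracadabra"))  # {1: 2, 2: 2, 5: 1}: c,d once; b,r twice; a five times
```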
Multi-Agent Reinforcement Learning (Part II)
Chi Jin (Princeton University)
Reinforcement learning (RL) has made substantial empirical progress in solving hard AI challenges in the past few years. Much of this progress (Go, Dota 2, Starcraft 2, economic simulation, social behavior learning, and so on) comes from multi-agent RL, that is, sequential decision making involving more than one agent. While the theoretical study of single-agent RL has a long history and rapidly growing recent interest, multi-agent RL theory is arguably a newer and less developed field, with its own unique challenges and opportunities. In this tutorial, we present an overview of recent theoretical developments in multi-agent RL. We will focus on the model of Markov games, and cover basic formulations, objectives, and learning algorithms with their theoretical guarantees. We will discuss the inefficiency of classical algorithms (self-play, fictitious play, etc.), and then proceed with recent advances in provably efficient learning algorithms under various regimes, including two-player zero-sum games, multiplayer general-sum games, exploration, and large state spaces (function approximation).
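For the simplest instance of the classical self-play mentioned above, a sketch (standard textbook material, not code from the tutorial): both players run multiplicative weights on a two-player zero-sum matrix game, and their time-averaged strategies approach a minimax equilibrium.

```python
import numpy as np

def self_play_mwu(A, T=5000, eta=0.05):
    """Self-play via multiplicative weights on a zero-sum matrix game A,
    where the row player maximizes x^T A y. Returns average strategies,
    which converge to a minimax equilibrium."""
    m, n = A.shape
    x, y = np.ones(m) / m, np.ones(n) / n
    x_sum, y_sum = np.zeros(m), np.zeros(n)
    for _ in range(T):
        x_sum += x; y_sum += y
        x = x * np.exp(eta * (A @ y));    x /= x.sum()   # row player: gains
        y = y * np.exp(-eta * (A.T @ x)); y /= y.sum()   # column player: losses
    return x_sum / T, y_sum / T

# Rock-paper-scissors payoffs for the row player.
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
x_bar, y_bar = self_play_mwu(A)
print("avg row strategy:", np.round(x_bar, 3))            # ~ [1/3, 1/3, 1/3]
print("game value:", round(float(x_bar @ A @ y_bar), 3))  # ~ 0
```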
Multi-Agent Reinforcement Learning (Part I)
Chi Jin (Princeton University)
Reinforcement learning (RL) has made substantial empirical progress in solving hard AI challenges in the past few years. Much of this progress (Go, Dota 2, Starcraft 2, economic simulation, social behavior learning, and so on) comes from multi-agent RL, that is, sequential decision making involving more than one agent. While the theoretical study of single-agent RL has a long history and rapidly growing recent interest, multi-agent RL theory is arguably a newer and less developed field, with its own unique challenges and opportunities. In this tutorial, we present an overview of recent theoretical developments in multi-agent RL. We will focus on the model of Markov games, and cover basic formulations, objectives, and learning algorithms with their theoretical guarantees. We will discuss the inefficiency of classical algorithms (self-play, fictitious play, etc.), and then proceed with recent advances in provably efficient learning algorithms under various regimes, including two-player zero-sum games, multiplayer general-sum games, exploration, and large state spaces (function approximation).
Reinforcement Learning (Part II)
Dylan Foster (Microsoft Research)
This tutorial will give an overview of the theoretical foundations of reinforcement learning, a promising paradigm for developing AI systems capable of making data-driven decisions in unknown environments. The first part of the tutorial will cover introductory concepts such as problem formulations, planning in Markov decision processes (MDPs), exploration, and generalization; no prior background will be assumed. Building on these concepts, the main aim of the tutorial will be to give a bird's-eye view of the statistical landscape of reinforcement learning (e.g., what modeling assumptions lead to sample-efficient algorithms), with a focus on algorithmic principles and fundamental limits. Topics covered will range from basic challenges and solutions (exploration in tabular RL, policy gradient methods, contextual bandits) to the current frontier of understanding. A running theme will be connections and parallels between supervised learning and reinforcement learning. Time permitting, we will touch on additional topics such as reinforcement learning with offline data.
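A minimal sketch of one of the introductory topics named above, exploration in tabular RL: epsilon-greedy Q-learning on a made-up two-state chain MDP. The environment and hyperparameters are invented for illustration.

```python
import numpy as np

def q_learning(P, R, episodes=2000, H=20, eps=0.1, alpha=0.1, gamma=0.95, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration.

    P[s, a] is a distribution over next states; R[s, a] is the mean reward."""
    rng = np.random.default_rng(seed)
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(episodes):
        s = 0
        for _ in range(H):
            a = rng.integers(A) if rng.random() < eps else int(np.argmax(Q[s]))
            s2 = rng.choice(S, p=P[s, a])
            Q[s, a] += alpha * (R[s, a] + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

# Two-state chain: action 1 moves toward the rewarding state 1.
P = np.array([[[1.0, 0.0], [0.2, 0.8]],
              [[0.8, 0.2], [0.0, 1.0]]])   # P[s, a] -> next-state distribution
R = np.array([[0.0, 0.0], [0.0, 1.0]])
Q = q_learning(P, R)
print("greedy policy:", Q.argmax(axis=1))  # expect action 1 in both states
```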
Reinforcement Learning (Part I)
Dylan Foster (Microsoft Research)
This tutorial will give an overview of the theoretical foundations of reinforcement learning, a promising paradigm for developing AI systems capable of making data-driven decisions in unknown environments. The first part of the tutorial will cover introductory concepts such as problem formulations, planning in Markov decision processes (MDPs), exploration, and generalization; no prior background will be assumed. Building on these concepts, the main aim of the tutorial will be to give a bird's-eye view of the statistical landscape of reinforcement learning (e.g., what modeling assumptions lead to sample-efficient algorithms), with a focus on algorithmic principles and fundamental limits. Topics covered will range from basic challenges and solutions (exploration in tabular RL, policy gradient methods, contextual bandits) to the current frontier of understanding. A running theme will be connections and parallels between supervised learning and reinforcement learning. Time permitting, we will touch on additional topics such as reinforcement learning with offline data.
Learning and Incentives (Part IV)
Nika Haghtalab (UC Berkeley)
Classically, the outcome of a learning algorithm is considered in isolation from the effects that it may have on the process that generates the data or the party who is interested in learning. In today's world, increasingly many people and organizations interact with learning systems, making it necessary to consider these effects. This tutorial will cover the mathematical foundation for addressing learning and learnability in the presence of economic and social forces. We will cover recent advances in the theory of machine learning and algorithmic economics. In the first half of this tutorial, we will consider strategic and adversarial interactions between learning algorithms and those affected by algorithmic actions. What makes these interactions especially powerful is that they often occur on a long-term basis and can corrupt data patterns that are essential for machine learning. In this part of the talk, we work with online decision processes (such as no-regret learning) and solution concepts (such as Stackelberg and minimax equilibria) to discuss statistical and computational aspects of learning and learnability in the presence of such interactions. In the second half of the tutorial, we will consider collaborative interactions in machine learning. What makes these interactions especially beneficial is their ability to learn across multiple stakeholders' tasks. In this part of the talk, we see how online algorithms act as a medium for effective collaborations. We also discuss challenges involved in designing efficient data sharing mechanisms that fully account for learners' incentives.
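A toy sketch of the first half's theme, strategic interaction corrupting data patterns (the setup is a standard strategic-classification example, not taken from the tutorial): agents just below a threshold classifier's boundary manipulate their feature past it whenever the manipulation cost is worth the gain, inflating the positive rate the learner observes.

```python
import numpy as np

def best_response(x, w, b, cost=1.0, benefit=1.0):
    """Agent moves its feature just past the threshold w*x >= b if the
    quadratic manipulation cost is at most the benefit of being classified
    positive; otherwise it reports truthfully."""
    gap = max(0.0, (b - w * x) / w)   # distance to the decision boundary
    return x + gap + 1e-6 if cost * gap ** 2 <= benefit else x

rng = np.random.default_rng(0)
w, b = 1.0, 0.0                       # threshold classifier: positive iff x >= 0
x = rng.normal(size=1000)
x_strategic = np.array([best_response(xi, w, b) for xi in x])
print("positive rate before gaming:", np.mean(x >= b))
print("positive rate after gaming :", np.mean(x_strategic >= b))  # inflated
```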
Learning and Incentives (Part III)
Nika Haghtalab (UC Berkeley)
Classically, the outcome of a learning algorithm is considered in isolation from the effects that it may have on the process that generates the data or the party who is interested in learning. In today's world, increasingly many people and organizations interact with learning systems, making it necessary to consider these effects. This tutorial will cover the mathematical foundation for addressing learning and learnability in the presence of economic and social forces. We will cover recent advances in the theory of machine learning and algorithmic economics. In the first half of this tutorial, we will consider strategic and adversarial interactions between learning algorithms and those affected by algorithmic actions. What makes these interactions especially powerful is that they often occur on a long-term basis and can corrupt data patterns that are essential for machine learning. In this part of the talk, we work with online decision processes (such as no-regret learning) and solution concepts (such as Stackelberg and minimax equilibria) to discuss statistical and computational aspects of learning and learnability in the presence of such interactions. In the second half of the tutorial, we will consider collaborative interactions in machine learning. What makes these interactions especially beneficial is their ability to learn across multiple stakeholders' tasks. In this part of the talk, we see how online algorithms act as a medium for effective collaborations. We also discuss challenges involved in designing efficient data sharing mechanisms that fully account for learners' incentives.