TBD
Dean Eckles (MIT) *Presenting Virtually

Instructions to join the fully virtual workshop session in the academic metaverse: https://immorlica.com/workshop.htm
**Recording Notice** Once you enter Gathertown, you consent to being recorded. If you do not wish to be recorded, you can make yourself anonymous or not enter the Gathertown space.

Games on Endogenous Networks
Evan Sadler (Columbia) *Presenting Virtually

Abstract: We study network games in which players choose both the partners with whom they associate and an action level (e.g., effort) that creates spillovers for those partners. We introduce a framework and two solution concepts, extending standard approaches for analyzing each choice in isolation: Nash equilibrium in actions and pairwise stability in links. Our main results show that, under suitable order conditions on incentives, stable networks take simple forms. The first condition concerns whether links create positive or negative payoff spillovers. The second concerns whether actions are strategic complements or strategic substitutes to links. Together, these conditions yield a taxonomy of the relationship between network structure and economic primitives, organized around two network architectures: ordered overlapping cliques and nested split graphs. We apply our model to understand the consequences of competition for status, to microfound matching models that assume clique formation, and to interpret empirical findings that highlight unintended consequences of group design.

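To make the pairwise-stability concept concrete, here is a minimal sketch that checks stability of a link set for a fixed action profile. The linear-quadratic payoff with positive spillovers and the per-link cost are illustrative assumptions, not the paper's model.

```python
# Minimal pairwise-stability check; payoff form and parameters are
# illustrative assumptions, not the paper's model.
import itertools

def payoff(i, links, actions, beta=0.5, link_cost=0.3):
    # Own concave effort benefit, plus complementarity with linked
    # partners' actions, minus a per-link maintenance cost.
    spill = sum(actions[j] for j in range(len(actions))
                if frozenset((i, j)) in links)
    degree = sum(1 for l in links if i in l)
    return (actions[i] - 0.5 * actions[i] ** 2
            + beta * actions[i] * spill - link_cost * degree)

def pairwise_stable(links, actions):
    """No player gains by cutting a link; no absent link is mutually
    beneficial (with at least one strict gain)."""
    for i, j in itertools.combinations(range(len(actions)), 2):
        e = frozenset((i, j))
        if e in links:
            if payoff(i, links - {e}, actions) > payoff(i, links, actions):
                return False
            if payoff(j, links - {e}, actions) > payoff(j, links, actions):
                return False
        else:
            gi = payoff(i, links | {e}, actions) - payoff(i, links, actions)
            gj = payoff(j, links | {e}, actions) - payoff(j, links, actions)
            if (gi > 0 and gj >= 0) or (gj > 0 and gi >= 0):
                return False
    return True

actions = [1.0, 1.0, 0.2]
links = {frozenset((0, 1))}
print(pairwise_stable(links, actions))   # True for these parameters
```
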
TBD
Asu Ozdaglar (MIT) *Presenting Virtually

Learning through the Grapevine
Suraj Malladi (Cornell)

We examine how well someone learns when information from original sources reaches them only after repeated person-to-person noisy relay. We characterize how many independent chains a learner needs access to in order to learn accurately, as these chains grow long. In the presence of random mutation of message content and transmission failures, there is a sharp threshold such that a receiver fully learns if they have access to more chains than the threshold number, and learns nothing if they have fewer. Moreover, we show that as the distance to primary sources grows, all learning comes from either the frequency or the content of received messages, so learning only from the more informative dimension is equivalent to full Bayesian learning. However, even slight uncertainty over the relative rates of mutations makes learning from long chains impossible, no matter how many distinct sources information trickles down from. This suggests that forces which lengthen chains of communication can severely disrupt social learning, even if they increase the frequency of communication.

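As a toy illustration of the threshold phenomenon (my construction, with made-up mutation and failure rates, not the paper's exact model), the simulation below relays a binary state along independent chains and has the receiver majority-vote over the messages that survive:

```python
import random

def relay_chain(state, length, eps=0.05, drop=0.02):
    # One person-to-person chain: each hop may kill the message
    # (transmission failure) or flip its content (mutation).
    msg = state
    for _ in range(length):
        if random.random() < drop:
            return None
        if random.random() < eps:
            msg = 1 - msg
    return msg

def receiver_accuracy(n_chains, length, trials=2000):
    correct = 0
    for _ in range(trials):
        state = random.randint(0, 1)
        msgs = [m for m in (relay_chain(state, length)
                            for _ in range(n_chains)) if m is not None]
        ones = sum(msgs)
        if not msgs or 2 * ones == len(msgs):
            guess = random.randint(0, 1)     # nothing arrived, or a tie
        else:
            guess = 1 if 2 * ones > len(msgs) else 0
        correct += guess == state
    return correct / trials

for n in (1, 5, 25, 125):
    print(f"{n:3d} chains of length 20: accuracy {receiver_accuracy(n, 20):.2f}")
```

Because the vote here uses only the content of arriving messages, it also illustrates the abstract's point that, far from primary sources, learning must operate through the content or frequency of what arrives.
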
Social Connectedness and Information Markets
Rachel Kranton (Duke)

This paper introduces a simple model of contemporary information markets: Consumers prefer high-quality information, judiciously sharing stories and posts. High-quality stories are costly to produce, and overall quality is endogenous. When suppliers' payoffs derive from how many consumers view their stories, quality is highest when social connectedness is neither too high nor too low. Third-party misinformation can increase high-quality output, since consumers share more judiciously. In highly connected markets, low-quality stories are widely seen and dominate. However, when suppliers' payoffs derive solely from consumer actions (e.g., votes or purchases) based on their stories and consumers are highly connected, consumers perfectly infer quality and quality is highest.

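A toy way to see how connectedness can cut both ways (my construction with invented parameters S0, N, T, COST, not the paper's model): stories spread as a branching process whose share probability equals quality q, the audience saturates at the population size, and the supplier picks q to trade expected views against a convex production cost.

```python
# Toy branching-process illustration; all parameter values are assumptions.
S0, N, T, COST = 10, 50_000, 12, 20_000   # seeds, population, rounds, cost scale

def expected_views(q, d):
    views, reach = 0.0, float(S0)
    for _ in range(T + 1):
        views += reach
        if views >= N:
            return float(N)               # audience saturates
        reach *= q * d                    # each viewer shares w.p. q to d others
    return views

def best_quality(d):
    grid = [i / 100 for i in range(101)]
    return max(grid, key=lambda q: expected_views(q, d) - COST * q ** 2)

for d in (1, 2, 3, 10, 30):
    print(f"connectedness d={d:2d}: chosen quality q* = {best_quality(d):.2f}")
```

In this toy version, quality is zero at very low connectedness (nothing spreads), peaks at intermediate connectedness, and falls again when the market is so connected that even low-quality stories saturate the audience.
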
Learning from Viral Content
Kevin He (Penn)

(This work is joint with Krishna Dasaratha.) We study learning on social media with an equilibrium model of users interacting with shared news stories. Rational users arrive sequentially and each observes an original story (i.e., a private signal) and a sample of predecessors' stories in a news feed, then decides which stories to share. The observed sample of stories depends on what predecessors share as well as the sampling algorithm, which represents a design choice of the platform. We focus on how much the algorithm relies on virality (how many times a story has been previously shared) when generating news feeds. Showing users more viral stories can increase information aggregation, but it can also generate steady states where most shared stories are wrong. Such misleading steady states self-perpetuate, as users who observe these wrong stories develop wrong beliefs, and thus rationally continue to share them. We find that these bad steady states appear discontinuously, and even a benevolent platform designer either accepts these misleading steady states or induces fragile learning outcomes in the optimal design.

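A stylized simulation in the spirit of this setup (my simplification, with invented parameters): stories carry binary guesses about a true state, each arriving user sees a private signal plus a feed sampled in proportion to (share count)^alpha, adopts the majority view of the feed, and re-shares agreeing stories. The exponent alpha stands in for the platform's reliance on virality.

```python
import random

def run(alpha, n_users=1000, feed_size=5, signal_acc=0.6, seed=0):
    # True state is 1; a story is "correct" if its value is 1.
    rng = random.Random(seed)
    stories = []                               # each story: [value, share_count]
    for _ in range(n_users):
        signal = 1 if rng.random() < signal_acc else 0
        feed = []
        if stories:
            weights = [s[1] ** alpha for s in stories]
            feed = rng.choices(stories, weights=weights,
                               k=min(feed_size, len(stories)))
        ones = sum(s[0] for s in feed)
        if 2 * ones > len(feed):
            belief = 1
        elif 2 * ones < len(feed):
            belief = 0
        else:
            belief = signal                    # break ties with own signal
        for s in feed:
            if s[0] == belief:
                s[1] += 1                      # re-share agreeing stories
        stories.append([signal, 1])            # also share the original story
    total = sum(s[1] for s in stories)
    return sum(s[1] for s in stories if s[0] == 1) / total

for alpha in (0.0, 1.0, 3.0):
    runs = [round(run(alpha, seed=k), 2) for k in range(5)]
    print(f"virality weight alpha={alpha}: fraction of shares correct {runs}")
```

Running several seeds per alpha makes the fragility visible: heavier virality weighting raises the dispersion across seeds, including runs where wrong stories dominate.
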
Re-designing Recommendation on VolunteerMatch: Theory and Practice
Vahideh Manshadi (Yale University)

In this talk, I describe our collaboration with VolunteerMatch (VM), the largest nationwide platform that connects nonprofits with volunteers. Through our work with VM, we have identified a key feature shared by many matching platforms (including Etsy, DonorsChoose, and VM): the supply side (e.g., nonprofits on the VM platform) not only relies on the platform's internal recommendation algorithm to draw traffic but also utilizes other channels, such as social media, to attract external visitors. Such visitors arrive via direct links to their intended options, thus bypassing the platform's recommendation algorithm. For example, of the 1.3 million monthly visitors to the VM platform, approximately 30% are external traffic directed to VM as a result of off-platform outreach activities, such as when nonprofits publicize volunteering opportunities on LinkedIn or Facebook. This motivated us to introduce the problem of online matching with multi-channel traffic, a variant of a canonical online matching problem. Taking a competitive analysis approach, we first demonstrate the shortcomings of a commonly used algorithm that is optimal in the absence of external traffic. Then, we propose a new algorithm that achieves a near-optimal competitive ratio in certain regimes. Beyond theoretical guarantees, we demonstrate our algorithm's practical effectiveness in simulations based on VM data. Time permitting, I will also report on implementing an improved recommendation algorithm on the VM platform and present data from our ensuing experimentation. (Joint work with Scott Rodilitz, Daniela Saban, and Akshaya Suresh)

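A toy instance of online matching with multi-channel traffic (my construction, not the paper's algorithm or data): internal visitors accept whichever nonprofit the platform recommends, while external visitors arrive via direct links and consume capacity at their target, bypassing the recommender. The sketch compares a capacity-greedy rule that ignores external traffic with a rule that protects capacity at externally popular nonprofits; all rates and capacities are invented.

```python
import random

def simulate(policy, seed=0, n_orgs=5, cap=20, n_internal=60, n_external=60):
    rng = random.Random(seed)
    remaining = [cap] * n_orgs
    ext_rate = [0.6, 0.2, 0.1, 0.05, 0.05]     # org 0 is popular off-platform
    matched = 0
    arrivals = ['int'] * n_internal + ['ext'] * n_external
    rng.shuffle(arrivals)
    for kind in arrivals:
        if kind == 'ext':
            target = rng.choices(range(n_orgs), weights=ext_rate)[0]
            if remaining[target] > 0:
                remaining[target] -= 1
                matched += 1
        else:
            choice = policy(remaining, ext_rate)
            if choice is not None:
                remaining[choice] -= 1
                matched += 1
    return matched

def greedy(remaining, ext_rate):
    # recommend the org with the most remaining capacity
    best = max(range(len(remaining)), key=lambda i: remaining[i])
    return best if remaining[best] > 0 else None

def traffic_aware(remaining, ext_rate):
    # discount capacity that external traffic will likely fill on its own
    best = max(range(len(remaining)),
               key=lambda i: remaining[i] * (1 - ext_rate[i]))
    return best if remaining[best] > 0 else None

for name, pol in (('greedy', greedy), ('traffic-aware', traffic_aware)):
    avg = sum(simulate(pol, seed=s) for s in range(200)) / 200
    print(f"{name}: average matches = {avg:.1f}")
```

The greedy rule wastes recommendations on the externally popular nonprofit, which would have filled anyway; saving that capacity for direct arrivals raises total matches.
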
Efficient and Targeted COVID-19 Border Testing via Reinforcement Learning
Kimon Drakopoulos (University of Southern California)

Throughout the coronavirus disease 2019 (COVID-19) pandemic, countries have relied on a variety of ad hoc border control protocols to allow for non-essential travel while safeguarding public health, from quarantining all travellers to restricting entry from select nations on the basis of population-level epidemiological metrics such as cases, deaths or testing positivity rates [1,2]. Here we report the design and performance of a reinforcement learning system, nicknamed Eva. In the summer of 2020, Eva was deployed across all Greek borders to limit the influx of asymptomatic travellers infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and to inform border policies through real-time estimates of COVID-19 prevalence. In contrast to country-wide protocols, Eva allocated Greece's limited testing resources on the basis of incoming travellers' demographic information and testing results from previous travellers. By comparing Eva's performance against modelled counterfactual scenarios, we show that Eva identified 1.85 times as many asymptomatic, infected travellers as random surveillance testing, with up to 2–4 times as many during peak travel, and 1.25–1.45 times as many asymptomatic, infected travellers as testing policies that utilize only epidemiological metrics. We demonstrate that this latter benefit arises, at least partially, because population-level epidemiological metrics had limited predictive value for the actual prevalence of SARS-CoV-2 among asymptomatic travellers and exhibited strong country-specific idiosyncrasies in the summer of 2020. Our results raise serious concerns about the effectiveness of country-agnostic, internationally proposed border control policies [3] that are based on population-level epidemiological metrics. Instead, our work represents a successful example of the potential of reinforcement learning and real-time data for safeguarding public health.

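Eva's deployed system is far more sophisticated, but the core idea of targeted testing under a budget can be sketched as a bandit: traveler types have unknown infection prevalence, and Thompson sampling on Beta posteriors concentrates scarce tests on types that recent results suggest are riskier. Prevalences, budgets, and arrival counts below are made-up numbers.

```python
import random

def run(policy, days=100, budget=50, arrivals=200, seed=0):
    rng = random.Random(seed)
    prevalence = [0.002, 0.01, 0.03, 0.001]    # unknown to the policy
    alpha = [1] * 4                             # Beta(alpha, beta) per type
    beta = [1] * 4
    caught = 0
    for _ in range(days):
        travelers = [rng.randrange(4) for _ in range(arrivals)]
        for t in policy(travelers, alpha, beta, budget, rng):
            positive = rng.random() < prevalence[t]
            caught += positive
            alpha[t] += positive                # update posterior with result
            beta[t] += 1 - positive
    return caught

def random_testing(travelers, alpha, beta, budget, rng):
    return rng.sample(travelers, budget)

def thompson(travelers, alpha, beta, budget, rng):
    theta = [rng.betavariate(alpha[k], beta[k]) for k in range(4)]
    return sorted(travelers, key=lambda t: theta[t], reverse=True)[:budget]

print("random  :", run(random_testing), "infected travellers caught")
print("thompson:", run(thompson), "infected travellers caught")
```
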
Unpacking the Black Box: Regulating Algorithmic Decisions
Jann Spiess (Stanford University)

We show how to optimally regulate prediction algorithms in a world where (a) high-stakes decisions such as lending, medical testing, or hiring are made by complex 'black-box' prediction functions, (b) there is an incentive conflict between the agent who designs the prediction function and a principal who oversees the use of the algorithm, and (c) the principal is limited in how much she can learn about the agent's black-box model. We show that limiting agents to prediction functions that are simple enough to be fully transparent is inefficient as long as the bias induced by misalignment between the principal's and agent's preferences is small relative to the uncertainty about the true state of the world. Algorithmic audits can improve welfare, but the gains depend on the design of the audit tools. Tools that focus on minimizing overall information loss, the focus of many post-hoc explainer tools, will generally be inefficient, since they explain the average behavior of the prediction function rather than those aspects that are most indicative of a misaligned choice. Targeted tools that focus on the source of incentive misalignment, e.g., excess false positives or racial disparities, can provide first-best solutions. We provide empirical support for our theoretical findings using an application in consumer lending.

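A minimal sketch of the targeted-audit idea (synthetic data and invented functional forms, not the paper's empirical application): rather than fitting a surrogate that minimizes overall information loss, the auditor directly measures the statistic tied to the suspected misalignment, here excess false positives by group.

```python
import random

rng = random.Random(0)

def blackbox_approve(group, noisy_score):
    # Stand-in for the agent's opaque prediction function; the +0.1 tilt
    # for group 1 is the planted misalignment the audit should surface.
    return noisy_score + 0.1 * group > 0.6

records = []
for _ in range(20_000):
    g = rng.randrange(2)
    p = rng.random()                       # latent repayment probability
    repays = rng.random() < p
    approve = blackbox_approve(g, p + rng.gauss(0, 0.1))
    records.append((g, repays, approve))

for g in (0, 1):
    fp = sum(1 for gg, r, a in records if gg == g and a and not r)
    neg = sum(1 for gg, r, a in records if gg == g and not r)
    print(f"group {g}: false-positive rate among non-repayers = {fp / neg:.3f}")
```

The gap between the two printed rates is exactly the kind of targeted statistic that a global explainer, tuned to reproduce average behavior, could easily smooth over.
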
Decision-Aware Learning for Global Health Supply Chains
Hamsa Bastani (Wharton School, University of Pennsylvania)

The combination of machine learning (for prediction) and optimization (for decision-making) is increasingly used in practice. However, a key challenge is the need to align the loss function used to train the machine learning model with the decision loss associated with the downstream optimization problem. Traditional solutions have limited flexibility in the model architecture and/or scale poorly to large datasets. We propose a "light-touch" decision-aware learning heuristic that uses a novel Taylor expansion of the optimal decision loss to derive the machine learning loss. Importantly, our approach only requires a simple re-weighting of the training data, allowing it to be incorporated flexibly and scalably into complex modern data science pipelines while producing sizable efficiency gains. We apply our framework to optimize the distribution of essential medicines in collaboration with policymakers at the Sierra Leone National Medical Supplies Agency; highly uncertain demand and limited budgets currently result in excessive unmet demand. We leverage random forests with meta-learning to learn complex cross-correlations across facilities, and apply our decision-aware learning approach to align the prediction loss with the objective of minimizing unmet demand. Out-of-sample results demonstrate that our end-to-end approach significantly reduces unmet demand across 1,000+ health facilities throughout Sierra Leone. Joint work with O. Bastani, T.-H. Chung, and V. Rostami.

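To show what "a simple re-weighting of the training data" can look like mechanically, here is an illustrative weighted least-squares sketch. The weights below are a crude stand-in (an assumed 5x penalty for unmet demand at high-demand facilities), not the paper's Taylor-derived weights, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                   # facility features
true_w = np.array([2.0, -1.0, 0.5])
demand = X @ true_w + 5 + rng.normal(0, 1, n)

def fit(weights):
    # Weighted least squares with an intercept column.
    Xb = np.hstack([X, np.ones((n, 1))])
    W = np.diag(weights)
    return np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ demand)

plain = fit(np.ones(n))
# Decision-aware stand-in: assume under-prediction at high-demand facilities
# (unmet demand) costs 5x as much as over-stocking, so up-weight those rows.
decision_weights = np.where(demand > np.median(demand), 5.0, 1.0)
aware = fit(decision_weights)

print("plain coefficients :", np.round(plain, 2))
print("aware coefficients :", np.round(aware, 2))
```

The point of the pattern is architectural: the downstream decision loss enters only through per-sample weights, so any model that accepts sample weights (random forests included) can be slotted in unchanged.
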
Supply-Side Equilibria in Recommender Systems
Jacob Steinhardt (UC Berkeley)

Digital recommender systems such as Spotify and Netflix affect not only consumer behavior but also producer incentives: producers seek to supply content that will be recommended by the system. But what content will be produced? To understand this, we model users and content as D-dimensional vectors, and assume the system recommends the content that has the highest dot product with each user. In contrast to traditional economic models, here the producer decision space is high-dimensional and the user base is heterogeneous. This gives rise to new qualitative phenomena at equilibrium: the formation of genres, and the possibility of positive profit at equilibrium. We characterize these phenomena in terms of the geometry of the users and the structure of producer costs. At a conceptual level, our work serves as a starting point to investigate how recommender systems shape supply-side competition between producers. Joint work with Meena Jagadeesan and Nikhil Garg.

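The recommendation rule in this setup is easy to state in code. Here is a small sketch (with invented dimensions and random vectors) of users and content in R^D, each user served the content with the highest dot product, and the resulting user shares captured by each producer:

```python
import numpy as np

rng = np.random.default_rng(1)
D, n_users, n_producers = 3, 8, 4
users = rng.normal(size=(n_users, D))         # one row per user
content = rng.normal(size=(n_producers, D))   # one row per producer's content

scores = users @ content.T                    # user-by-producer dot products
winner = scores.argmax(axis=1)                # recommended producer per user
for p in range(n_producers):
    share = np.mean(winner == p)
    print(f"producer {p}: captures {share:.0%} of users")
```

Producers best-responding to this rule is what drives the paper's equilibrium analysis: each producer's content vector competes for whole regions of user space, which is where genre formation comes from.
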
What Really Matters for Fairness in Machine Learning: Delayed Impact and Other Desiderata
Lydia Liu (Cornell University)

From education to lending, consequential decisions in society increasingly rely on data-driven algorithms. Yet the long-term impact of algorithmic decision making is largely ill-understood, and there exist serious challenges to ensuring equitable benefits, in theory and practice. While the subject of algorithmic fairness has received much attention, algorithmic fairness criteria have significant limitations as tools for promoting equitable benefits. In this talk, we review various fairness desiderata in machine learning and when they may be in conflict. We then introduce the notion of delayed impact: the welfare impact of decision-making algorithms on populations after decision outcomes are observed, motivated, for example, by the change in average credit scores after a new loan approval algorithm is applied. We demonstrate that several statistical criteria for fair machine learning, if applied as a constraint to decision-making, can result in harm to the welfare of a disadvantaged population. We end by considering future directions for fairness in machine learning that evince a holistic and interdisciplinary approach.
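A toy calculation of delayed impact in the spirit of the credit-score example (all numbers invented): applicants above a score threshold receive loans, repayment probability rises with score, repaying raises a score, and defaulting lowers it by more. Pushing the approval threshold down to boost selection rates can make the group's average score change negative.

```python
def avg_score_change(threshold, gain=15, loss=45):
    # Average expected score change across a uniform score distribution.
    scores = range(300, 851, 10)
    changes = []
    for s in scores:
        if s < threshold:
            changes.append(0.0)               # no loan, no score change
        else:
            p_repay = (s - 300) / 550         # assumed repayment curve
            changes.append(p_repay * gain - (1 - p_repay) * loss)
    return sum(changes) / len(changes)

for t in (500, 600, 700):
    print(f"threshold {t}: expected avg score change = {avg_score_change(t):+.2f}")
```

Lower thresholds approve more applicants with negative expected score change, so a constraint that forces higher selection rates for a group can, under these assumed dynamics, leave that group worse off on average.
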