Localizing Information in Quantum Gravity and State-dressed Local Operators in AdS/CFT
Alexandre Belin European Organization for Nuclear Research (CERN)
It is well known that quantum information can be strictly localized in quantum field theory. Similarly, one can also localize information in classical gravity up to quantities like the ADM mass which are fixed by the constraints of general relativity. On the other hand, the holographic nature of quantum gravity suggests that information can never be localized deep inside some spacetime region, and is always accessible from the boundary. This is meant to hold as a non-perturbative statement and it remains to be understood whether quantum information can be localized within G_N perturbation theory. In this talk, I will address this problem from the point of view of the AdS/CFT correspondence. I will construct candidate local operators that can be used to localize information deep inside the bulk. They have the following two properties: they act just like standard HKLL operators to leading order at large N, but commute with the CFT Hamiltonian to all orders in 1/N. These operators can only be constructed in a particular class of states which have a large energy variance, for example coherent states corresponding to semi-classical geometries. The interpretation of these operators is that they are dressed with respect to a feature of the state, rather than to the boundary. I will comment on connections with black holes and computations of the Page curve.
Zoom link: https://pitp.zoom.us/j/94678968773?pwd=NUJhOEJmRWxLa3pCVUtVVi9DdkE3QT09
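Schematically (a hedged restatement of the properties stated in the abstract, with $\widetilde{\phi}$ denoting the candidate dressed operator and $\phi_{\rm HKLL}$ the standard HKLL reconstruction), the two defining conditions are:

```latex
% Acts like the standard HKLL operator at leading order in the large-N expansion:
\widetilde{\phi}(X)\,|\Psi\rangle = \phi_{\rm HKLL}(X)\,|\Psi\rangle + O(1/N)
% ...but is invisible to the boundary Hamiltonian within perturbation theory:
[\,H_{\rm CFT},\, \widetilde{\phi}(X)\,] = 0 \quad \text{to all orders in } 1/N
```

Both conditions can only be imposed in states $|\Psi\rangle$ with large energy variance, such as coherent states dual to semi-classical geometries, which is what allows the dressing to attach to a feature of the state rather than to the boundary.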
Next Generation Axion Dark Matter Searches
Andrew Sonnenschein Fermi National Accelerator Laboratory (Fermilab)
In the early 1980s, axions and WIMPs were identified as promising dark matter candidates. The last forty years have seen a spectacularly successful experimental program attempting to discover WIMPs, with sensitivity that has by now improved by many orders of magnitude compared to the earliest results. The parallel program to search for axions has made less progress and has reached the necessary sensitivity only over a very limited mass range. However, progress has recently accelerated, with the invention of many new axion detection techniques that may eventually provide a definitive answer to the question of whether the dark matter is made of axions. I will review some of these new developments with emphasis on Fermilab’s program, including ADMX-Extended Frequency Range and the Broadband Reflector Experiment for Axion Detection (BREAD).
Zoom link: https://pitp.zoom.us/j/97234421735?pwd=UGNJRWxYMkErRmdWSnJiWTdoOFNaZz09
Re-designing Recommendation on VolunteerMatch: Theory and Practice
Vahideh Manshadi (Yale University)
In this talk, I describe our collaboration with VolunteerMatch (VM), the largest nationwide platform that connects nonprofits with volunteers. Through our work with VM, we have identified a key feature shared by many matching platforms (including Etsy, DonorsChoose, and VM): the supply side (e.g., nonprofits on the VM platform) not only relies on the platform’s internal recommendation algorithm to draw traffic but also utilizes other channels, such as social media, to attract external visitors. Such visitors arrive via direct links to their intended options, thus bypassing the platform’s recommendation algorithm. For example, of the 1.3 million monthly visitors to the VM platform, approximately 30% are external traffic directed to VM as a result of off-platform outreach activities, such as when nonprofits publicize volunteering opportunities on LinkedIn or Facebook. This motivated us to introduce the problem of online matching with multi-channel traffic, a variant of a canonical online matching problem. Taking a competitive analysis approach, we first demonstrate the shortcomings of a commonly used algorithm that is optimal in the absence of external traffic. Then, we propose a new algorithm that achieves a near-optimal competitive ratio in certain regimes. Beyond theoretical guarantees, we demonstrate our algorithm’s practical effectiveness in simulations based on VM data. Time permitting, I will also report on implementing an improved recommendation algorithm on the VM platform and present data from our ensuing experimentation. (Joint work with Scott Rodilitz, Daniela Saban, and Akshaya Suresh.)

Counting the microstates of the cosmic horizon
Vasudev Shyam Stealth Startup
I will describe a holographic model for the three-dimensional de Sitter static patch in which the boundary theory is the so-called $T\bar{T}+\Lambda_2$ deformation of the conformal field theory dual to AdS_3 quantum gravity. This identification allows us to obtain the cosmic horizon entropy from a microstate count, and the microstates themselves are a dressed version of those that account for the entropy of certain black holes in AdS space. I will also show how the effect of this dressing at the cosmic horizon is to replace the spacetime dependence of the fields of the undeformed holographic CFT with dependence on the indices of large matrices.
Zoom link: https://pitp.zoom.us/j/95396921570?pwd=NGFoOGlGY1ZDU2pnNFRwWit3b2w0Zz09
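For orientation, and up to normalization conventions that vary in the literature (a schematic sketch, not the speaker's precise definitions), the $T\bar{T}$ part of the deformation is the flow

```latex
% Flow of the action driven by the composite T-Tbar operator:
\frac{\partial S_\lambda}{\partial \lambda}
  \propto \int d^2x \,\sqrt{g}\;\mathcal{O}_{T\bar{T}},
\qquad
\mathcal{O}_{T\bar{T}} \propto T^{ab}T_{ab} - \left(T^{a}{}_{a}\right)^2 \propto -\det T
```

with the $\Lambda_2$ term adding a two-dimensional cosmological-constant contribution along the flow; it is this combined $T\bar{T}+\Lambda_2$ deformation that moves the dual geometry from AdS_3 to the dS_3 static patch.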
Efficient and Targeted COVID-19 Border Testing via Reinforcement Learning
Kimon Drakopoulos (University of Southern California)
Throughout the coronavirus disease 2019 (COVID-19) pandemic, countries have relied on a variety of ad hoc border control protocols to allow for non-essential travel while safeguarding public health, from quarantining all travellers to restricting entry from select nations on the basis of population-level epidemiological metrics such as cases, deaths or testing positivity rates. Here we report the design and performance of a reinforcement learning system, nicknamed Eva. In the summer of 2020, Eva was deployed across all Greek borders to limit the influx of asymptomatic travellers infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and to inform border policies through real-time estimates of COVID-19 prevalence. In contrast to country-wide protocols, Eva allocated Greece’s limited testing resources on the basis of incoming travellers’ demographic information and testing results from previous travellers. By comparing Eva’s performance against modelled counterfactual scenarios, we show that Eva identified 1.85 times as many asymptomatic, infected travellers as random surveillance testing, with up to 2–4 times as many during peak travel, and 1.25–1.45 times as many asymptomatic, infected travellers as testing policies that utilize only epidemiological metrics. We demonstrate that this latter benefit arises, at least partially, because population-level epidemiological metrics had limited predictive value for the actual prevalence of SARS-CoV-2 among asymptomatic travellers and exhibited strong country-specific idiosyncrasies in the summer of 2020. Our results raise serious concerns about the effectiveness of country-agnostic, internationally proposed border control policies that are based on population-level epidemiological metrics. Instead, our work represents a successful example of the potential of reinforcement learning and real-time data for safeguarding public health.

On the modeling of black hole ringdown
Naritaka Oshita Kyoto University
Gravitational waves from binary black hole mergers are an important probe for testing gravity. In particular, the observation of the ringdown may allow a robust test of gravity, as the ringdown signal is a superposition of excited quasi-normal (QN) modes of a Kerr black hole. The excitation factor is an important quantity that quantifies the excitability of each QN mode and is independent of the initial data of the black hole.
In this talk, I will show which QN modes can be important (i.e., have higher excitation factors) and will discuss how we can determine the start time of ringdown to maximally enhance the detectability of the QN modes.
I will also introduce my recent conjecture on the modeling of the ringdown waveform: the thermal ringdown model, in which the ringdown of a small-mass-ratio merger involving a spinning black hole can be modeled by the Fermi-Dirac distribution.
Zoom link: https://pitp.zoom.us/j/96739417230?pwd=Tm00eHhxNzRaOEQvaGNzTE85Z1ZJdz09
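For reference, the ringdown model discussed above is conventionally parameterized as a superposition of damped sinusoids (standard notation, not specific to this talk):

```latex
% Ringdown as a sum over quasi-normal modes labeled by (l, m, n):
h(t) \simeq \sum_{\ell m n} A_{\ell m n}\, e^{-t/\tau_{\ell m n}}
      \cos\!\left( 2\pi f_{\ell m n}\, t + \varphi_{\ell m n} \right),
\qquad
\omega_{\ell m n} = 2\pi f_{\ell m n} - i/\tau_{\ell m n}
```

The complex frequencies $\omega_{\ell m n}$ are fixed by the black hole's mass and spin alone, while the amplitudes $A_{\ell m n}$ carry the initial-data dependence weighted by the initial-data-independent excitation factors; the choice of the ringdown start time then controls how cleanly these modes can be extracted.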
Unpacking the Black Box: Regulating Algorithmic Decisions
Jann Spiess (Stanford University)
We show how to optimally regulate prediction algorithms in a world where (a) high-stakes decisions such as lending, medical testing or hiring are made by complex ‘black-box’ prediction functions, (b) there is an incentive conflict between the agent who designs the prediction function and a principal who oversees the use of the algorithm, and (c) the principal is limited in how much she can learn about the agent’s black-box model. We show that limiting agents to prediction functions that are simple enough to be fully transparent is inefficient as long as the bias induced by the misalignment between the principal’s and the agent’s preferences is small relative to the uncertainty about the true state of the world. Algorithmic audits can improve welfare, but the gains depend on the design of the audit tools. Tools that focus on minimizing overall information loss, the focus of many post-hoc explainer tools, will generally be inefficient, since they explain the average behavior of the prediction function rather than those aspects that are most indicative of a misaligned choice. Targeted tools that focus on the source of incentive misalignment, e.g., excess false positives or racial disparities, can provide first-best solutions. We provide empirical support for our theoretical findings using an application in consumer lending.

Decision-Aware Learning for Global Health Supply Chains
Hamsa Bastani (Wharton School, University of Pennsylvania)
The combination of machine learning (for prediction) and optimization (for decision-making) is increasingly used in practice. However, a key challenge is the need to align the loss function used to train the machine learning model with the decision loss associated with the downstream optimization problem. Traditional solutions have limited flexibility in the model architecture and/or scale poorly to large datasets. We propose a "light-touch" decision-aware learning heuristic that uses a novel Taylor expansion of the optimal decision loss to derive the machine learning loss. Importantly, our approach only requires a simple re-weighting of the training data, allowing it to be incorporated flexibly and scalably into complex modern data science pipelines while still producing sizable efficiency gains. We apply our framework to optimize the distribution of essential medicines in collaboration with policymakers at the Sierra Leone National Medical Supplies Agency; highly uncertain demand and limited budgets currently result in excessive unmet demand. We leverage random forests with meta-learning to learn complex cross-correlations across facilities, and apply our decision-aware learning approach to align the prediction loss with the objective of minimizing unmet demand. Out-of-sample results demonstrate that our end-to-end approach significantly reduces unmet demand across 1000+ health facilities throughout Sierra Leone. Joint work with O. Bastani, T.-H. Chung and V. Rostami.

Supply-Side Equilibria in Recommender Systems
Jacob Steinhardt (UC Berkeley)
Digital recommender systems such as Spotify and Netflix affect not only consumer behavior but also producer incentives: producers seek to supply content that will be recommended by the system. But what content will be produced? To understand this, we model users and content as D-dimensional vectors, and assume the system recommends to each user the content that has the highest dot product with that user’s vector. In contrast to traditional economic models, here the producer decision space is high-dimensional and the user base is heterogeneous. This gives rise to new qualitative phenomena at equilibrium: the formation of genres, and the possibility of positive profit at equilibrium. We characterize these phenomena in terms of the geometry of the users and the structure of producer costs. At a conceptual level, our work serves as a starting point to investigate how recommender systems shape supply-side competition between producers. Joint work with Meena Jagadeesan and Nikhil Garg.

What Really Matters for Fairness in Machine Learning: Delayed Impact and Other Desiderata
Lydia Liu (Cornell University)
From education to lending, consequential decisions in society increasingly rely on data-driven algorithms. Yet the long-term impact of algorithmic decision-making is still poorly understood, and there remain serious challenges to ensuring equitable benefits, in theory and in practice. While the subject of algorithmic fairness has received much attention, algorithmic fairness criteria have significant limitations as tools for promoting equitable benefits. In this talk, we review various fairness desiderata in machine learning and when they may be in conflict. We then introduce the notion of delayed impact: the welfare impact of decision-making algorithms on populations after decision outcomes are observed. This is motivated, for example, by the change in average credit scores after a new loan approval algorithm is applied. We demonstrate that several statistical criteria for fair machine learning, if applied as a constraint to decision-making, can result in harm to the welfare of a disadvantaged population. We end by considering future directions for fairness in machine learning that evince a holistic and interdisciplinary approach.

Predictive Modeling in Healthcare – Special Considerations
Noa Dagan (Clalit Health Services)
Prediction models in healthcare are used for many tasks. However, the use of these models for medical decision-making warrants special considerations that are less critical when prediction models are used in other domains. Two of these considerations, which we will discuss in the talk, are fairness and explainability. We will discuss these considerations from the viewpoint of a large healthcare organization that uses prediction models ubiquitously on a daily basis. We will also describe how academic collaborations can expand our toolbox for handling these issues in practice.
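As one concrete illustration of the fairness considerations mentioned above (a hedged sketch; the function and toy data are hypothetical, not Clalit's actual pipeline), a minimal per-group calibration check for a clinical risk model might look like:

```python
import numpy as np

def group_calibration_gap(y_true, y_prob, group):
    """Mean predicted risk minus observed event rate, per group.

    A positive gap means the model over-predicts risk for that group;
    a negative gap means it under-predicts. Large gaps in either
    direction are one (of many) signals of a fairness problem.
    """
    gaps = {}
    for g in np.unique(group):
        mask = group == g
        gaps[g] = float(np.mean(y_prob[mask]) - np.mean(y_true[mask]))
    return gaps

# Hypothetical toy data: two patient groups, 'a' and 'b'.
y_true = np.array([1, 0, 1, 0])           # observed outcomes
y_prob = np.array([0.9, 0.1, 0.8, 0.6])   # model risk scores
group = np.array(['a', 'a', 'b', 'b'])
gaps = group_calibration_gap(y_true, y_prob, group)
# gaps['a'] is 0.0 (well calibrated); gaps['b'] is ~0.2 (over-predicts)
```

In practice one would compute such gaps within risk bins and alongside other criteria (error rates, ranking metrics), since no single statistic captures fairness; the point is only that group-wise diagnostics like this are cheap to add to a deployed pipeline.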