Panel featuring Kimon Drakopoulos (University of Southern California), Moon Duchin (Tufts University), Philip LeClerc (U.S. Census Bureau), Samir Shah (VolunteerMatch), Alex Teytelboym (University of Oxford); moderated by Vahideh Manshadi (Yale University).
This course uses quantum electrodynamics (QED) as a vehicle for covering several more advanced topics within quantum field theory, and so is aimed at graduate students who have already taken an introductory course on quantum field theory. Among the topics to be covered (time permitting) are: gauge invariance for massless spin-1 particles from special relativity and quantum mechanics; Ward identities; photon scattering and loops; UV and IR divergences and why they are handled differently; effective theories and the renormalization group; and anomalies.
The U.S. Census Bureau adopted formally private methods to protect the principal products released based on the 2020 Decennial Census of Population and Housing. These include the Public Law 94-171 Redistricting Data Summary File (already released), the Demographic and Housing Characteristics File (DHC; in its final phase of privacy-budget tuning), and the Detailed Demographic and Housing Characteristics File and Supplemental Demographic and Housing Characteristics File (both in earlier phases of design, testing, and planning). Additional, smaller product releases based on the 2020 confidential data are also expected, with sub-state releases currently required to use differentially private methods. In this talk, I describe the design of the TopDown algorithm (TDA), the principal formally private algorithm used to protect the PL 94-171 release and expected to be used to protect the DHC release, along with a few of the major technical issues encountered in developing it. TDA was designed by a joint team of academic, contractor, and government employees; I discuss how this collaboration was organized, what worked well and what was challenging, and briefly touch on the role of industry in algorithm design outside of TDA. I close with some general thoughts on how to form productive collaborations among academic, government, and industry expertise in formally private methods.
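To make the top-down structure concrete, here is a minimal toy sketch (in Python, assuming numpy) of the general idea: take independent noisy measurements of counts at each level of a geographic hierarchy, then post-process children to be non-negative and consistent with their already-protected parent. Everything here is an illustrative stand-in rather than the Bureau's method; in particular, the two-sided geometric noise, the per-level budgets, and the proportional rebalancing replace TDA's discrete Gaussian mechanism and constrained-optimization post-processing.

```python
import numpy as np

def two_sided_geometric(scale, size, rng):
    # Integer-valued "discrete Laplace" noise; TDA itself uses the discrete
    # Gaussian mechanism, so this is only a stand-in.
    p = 1.0 - np.exp(-1.0 / scale)
    return rng.geometric(p, size) - rng.geometric(p, size)

def noisy_counts(counts, epsilon, rng):
    # Sensitivity-1 counting queries: noise scale 1/epsilon per measurement.
    return counts + two_sided_geometric(1.0 / epsilon, counts.shape, rng)

def rebalance(parent_total, noisy_children):
    # Toy post-processing: clip to non-negative values and rescale so the
    # children sum exactly to the already-protected parent total.
    c = np.maximum(noisy_children, 0).astype(float)
    if c.sum() == 0:
        c = np.ones_like(c)
    c *= parent_total / c.sum()
    out = np.floor(c).astype(int)
    leftover = int(parent_total) - int(out.sum())
    out[np.argsort(-(c - out))[:leftover]] += 1   # largest-remainder rounding
    return out

rng = np.random.default_rng(0)
county_counts = np.array([120_000, 45_000, 9_000])   # illustrative "true" data
eps_state, eps_county = 0.5, 0.5                     # illustrative per-level budgets

state_release = max(int(noisy_counts(np.array([county_counts.sum()]), eps_state, rng)[0]), 0)
county_release = rebalance(state_release, noisy_counts(county_counts, eps_county, rng))

print("state total:", state_release)
print("county totals:", county_release, "(sum:", county_release.sum(), ")")
```

The point of the sketch is only the structure: protect each geographic level with its own noisy measurements, then enforce hierarchical consistency in post-processing.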
The supersymmetric index of N=4 SU(N) Super Yang-Mills is a well-studied quantity. In arXiv:2104.13932, using the Bethe Ansatz approach, we analyzed a family of contributions to it. In the large-N limit, each term in this family has a holographic interpretation: it matches the contribution of a different Euclidean black hole to the partition function of the dual gravitational theory. By taking into account non-perturbative contributions (wrapped D3-branes, similar to Euclidean giant gravitons), we further showed a one-to-one match between the contributions of the gravitational saddles and this family of contributions to the index, at both the perturbative and non-perturbative levels. I'll end with newer results concerning the form of these terms at finite N, new solutions to the Bethe Ansatz equations (i.e., additional contributions to the index beyond those described in that paper), and an ongoing effort to classify all the solutions to these equations.
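For orientation, here is the index written schematically in one common convention (normalizations and the precise fugacity constraint vary across references, so treat this as a reminder of the object rather than the paper's exact conventions):

```latex
% Schematic form of the N=4 SU(N) superconformal index in one common convention.
\mathcal{I}(p,q,y_a)
  = \operatorname{Tr}\Big[(-1)^F\, e^{-\beta\{\mathcal{Q},\mathcal{Q}^\dagger\}}\,
      p^{J_1}\, q^{J_2}\, y_1^{Q_1} y_2^{Q_2} y_3^{Q_3}\Big],
\qquad p\,q = y_1 y_2 y_3 \ \ \text{(up to convention-dependent signs)} .
```

Here $J_{1,2}$ are angular momenta on $S^3$ and $Q_{1,2,3}$ are Cartan charges of the $SU(4)$ R-symmetry; only states annihilated by $\mathcal{Q}$ contribute, so the index is independent of $\beta$. The Bethe Ansatz approach rewrites the integral representation of this quantity as a discrete sum over solutions of the Bethe Ansatz equations, which is the expansion analyzed in the abstract.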
Kerfuffle (/kərˈfʌfəl/): a commotion or fuss, especially one caused by conflicting views. "There was a kerfuffle over the use of differential privacy for the 2020 Census." This talk will give a too-brief introduction to some of the issues that played out in tweets, court proceedings, and academic preprints. We'll also discuss approaches to, and challenges in, understanding the effect of differential privacy on downstream policy.
With no hints of dark matter in the "classical WIMP" region of parameter space, experimentalists have begun searching in earnest for low-mass (MeV–GeV scale) dark matter. However, efforts to probe this region of parameter space have been hindered by an unexpected and mysterious source of background events, dubbed the "low energy excess." Recently, mechanical stress has been shown to create a "low energy excess"-like source of events, and a microphysical picture of how stress creates this background is emerging. In addition to providing a path forward for low-mass dark matter searches, these results may address several outstanding problems limiting the performance of superconducting quantum computers.
Assuming no particular background, I'll give a high-level introduction to the problem of electoral redistricting in the U.S. and the helpful and not-so-helpful ways that algorithmic district generation has intervened on law and policy.
This talk will set up the following one, in which Aloni Cohen will discuss the panic over differential privacy in the redistricting data.
Vahideh Manshadi is an Associate Professor of Operations at Yale School of Management. She is also affiliated with the Yale Institute for Network Science, the Department of Statistics and Data Science, and the Cowles Foundation for Research in Economics. Her current research focuses on the operation of online and matching platforms in both the private and public sectors. Professor Manshadi serves on the editorial boards of Management Science, Operations Research, and Manufacturing & Service Operations Management. She received her Ph.D. in electrical engineering at Stanford University, where she also received MS degrees in statistics and electrical engineering. Before joining Yale, she was a postdoctoral scholar at the MIT Operations Research Center.
Alex Teytelboym is an Associate Professor at the Department of Economics, University of Oxford, a Tutorial Fellow at St. Catherine’s College, and a Senior Research Fellow at the Institute for New Economic Thinking at the Oxford Martin School. His research interests lie in market design and the economics of networks, as well as their applications to environmental economics and energy markets. His policy work has been on designing matching systems for refugee resettlement and environmental auctions. He is co-founder of Refugees.AI, an organization that is developing new technology for refugee resettlement.
Kimon Drakopoulos is the Robert R. Dockson Assistant Professor in Business Administration in the Data Sciences and Operations Department at the USC Marshall School of Business. His research focuses on the operations of complex networked systems, social networks, stochastic modeling, game theory, and information economics. In 2020 he served as the Chief Data Scientist of the Greek National COVID-19 Scientific Taskforce and as a Data Science and Operations Advisor to the Greek Prime Minister. He has been awarded the Wagner Prize for Excellence in Applied Analytics and the Pierskalla Award for contributions to healthcare analytics.
Moon Duchin is a Professor of Mathematics at Tufts University, and runs the MGGG Redistricting Lab, an interdisciplinary research group at Tisch College of Civic Life of Tufts University. The lab's research program centers on Data For Democracy, bridging math, CS, geography, law, and policy to build models of elections and redistricting. She has worked to support commissions, legislatures, and other line-drawing bodies and has served as an expert witness in redistricting cases around the country.
Philip Leclerc is an operations research analyst working in the Center for Enterprise Dissemination-Disclosure Avoidance (CEDDA) at the U.S. Census Bureau. He graduated with a B.A. in mathematical economics and psychology from Christopher Newport University, and later completed his Ph.D. in Systems Modeling and Analysis at Virginia Commonwealth University. He joined the U.S. Census Bureau 6 years ago, where he first learned about differential privacy, and for the last 5 years has served as the internal scientific lead on the project for modernizing the disclosure avoidance system used in the first two major releases from the Decennial Census.
Samir Shah is Vice President, Partnerships & Customer Success at VolunteerMatch, where, for over a decade, he has contributed to a vision of developing the global digital volunteering backbone. Samir uses technology, networks, and data to empower volunteers, nonprofits, governments, companies, and brands to create value from VolunteerMatch’s products and services. He has negotiated complex partnerships with Fidelity, California Volunteers, Office of the Governor, and STEM Next, and manages trusted relationships with VolunteerMatch’s Open API Network of third-party platform partners. Samir has a BA in Economics from the University of Texas at Austin, an MA in Asian Studies from the University of California, Berkeley, and an MBA from the Haas School of Business.
Deep learning algorithms that achieve state-of-the-art results on image and text recognition tasks tend to fit the entire training dataset (nearly) perfectly, including mislabeled examples and outliers. This propensity to memorize seemingly useless data and the resulting large generalization gap have puzzled many practitioners and are not explained by existing theories of machine learning. We provide a simple conceptual explanation and a theoretical model demonstrating that memorization of outliers and mislabeled examples is necessary for achieving close-to-optimal generalization error when learning from long-tailed data distributions. Image and text data are known to follow such distributions, and therefore our results establish a formal link between these empirical phenomena. We then demonstrate the utility of memorization and support our explanation empirically. These results rely on a new technique for efficiently estimating the memorization and influence of training data points. Our results allow us to quantify the cost of limiting memorization in learning and explain the disparate effects that privacy and model compression have on different subgroups.
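As a rough illustration of the kind of estimator involved (a sketch under my own assumptions, not the authors' exact procedure): memorization of an example can be estimated by training many models on random subsets of the data and comparing accuracy on that example between models that did and did not see it. The hooks `train_model` and `predict` below are hypothetical stand-ins for whatever learning pipeline is being studied.

```python
import numpy as np

def estimate_memorization(dataset, train_model, predict,
                          num_models=100, subsample_frac=0.7, seed=0):
    """Monte Carlo estimate of label memorization for each training example:
    mem(i) ~= P[f(x_i) = y_i | i in training subset]
            - P[f(x_i) = y_i | i not in training subset],
    averaged over models f trained on random subsets of the data.
    `train_model(indices)` and `predict(model, x)` are hypothetical hooks."""
    rng = np.random.default_rng(seed)
    n = len(dataset)
    correct_in, count_in = np.zeros(n), np.zeros(n)
    correct_out, count_out = np.zeros(n), np.zeros(n)

    for _ in range(num_models):
        included = rng.random(n) < subsample_frac       # random inclusion mask
        model = train_model(np.flatnonzero(included))   # train on this subset only
        for i, (x, y) in enumerate(dataset):
            hit = float(predict(model, x) == y)
            if included[i]:
                correct_in[i] += hit
                count_in[i] += 1
            else:
                correct_out[i] += hit
                count_out[i] += 1

    # Guard against examples that happened to be always included (or excluded).
    p_in = correct_in / np.maximum(count_in, 1)
    p_out = correct_out / np.maximum(count_out, 1)
    return p_in - p_out   # values near 1 indicate heavily memorized examples
```

Influence can be estimated analogously, by comparing predictions on a held-out point between models trained with and without a given training example.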
We study the concurrent composition properties of interactive differentially private mechanisms, whereby an adversary can arbitrarily interleave its queries to the different mechanisms. We prove that all composition theorems for non-interactive differentially private mechanisms extend to the concurrent composition of interactive differentially private mechanisms, for all standard variants of differential privacy including $(\varepsilon,\delta)$-DP with $\delta>0$, Rényi DP, and $f$-DP, thus answering the open question of Vadhan and Wang (2021). For $f$-DP, which captures $(\varepsilon,\delta)$-DP as a special case, we prove the concurrent composition theorems by showing that every interactive $f$-DP mechanism can be simulated by interactive post-processing of a non-interactive $f$-DP mechanism. For Rényi DP, we use a different approach, showing that the optimal adversary against the concurrent composition can be decomposed as a product of the optimal adversaries against each interactive mechanism.
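To make concrete what "all composition theorems extend" buys, here are the standard non-interactive bounds restated for orientation; the point of the result is that these continue to hold even when the adversary interleaves its queries across the interactive mechanisms.

```latex
% Basic composition: if mechanism M_i is (eps_i, delta_i)-DP, the concurrent
% composition of M_1, ..., M_k is
\Big(\textstyle\sum_{i=1}^{k} \varepsilon_i,\ \sum_{i=1}^{k} \delta_i\Big)\text{-DP}.

% Advanced composition: k mechanisms, each (eps, delta)-DP, compose to
% (eps', k*delta + delta')-DP for every delta' > 0, where
\varepsilon' = \varepsilon\sqrt{2k\ln(1/\delta')} + k\,\varepsilon\,(e^{\varepsilon}-1).
```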