Optical astronomical imaging seeks better image quality in the extreme regimes of weak signals and subdiffraction-scale features. I focus on the quantum enhancement of astronomical interferometric imaging, covering both its fundamental limits and practical issues. On the fundamental side, I ignore resource limits and noise and consider ideal imaging problems. I show that the resolution limit can be improved by more carefully chosen measurement strategies, and that general imaging quality can be enhanced by postprocessing the stellar photons with a quantum computer. On the practical side, I address the transmission loss suffered by interferometric imaging using a quantum network, consider the possibility of implementing a local scheme with better performance, and discuss the feasibility of decomposing thermal states into temporally localized pulses.
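For background, a hedged sketch of why measurement choice can beat the classical resolution limit, using the well-known two-point-source example (the Gaussian point-spread function and the mode-sorting measurement are standard illustrative ingredients we supply for orientation, not details taken from this abstract):

```latex
% Two equally bright incoherent point sources separated by d, imaged through
% a Gaussian point-spread function of width sigma (illustrative assumption):
\[
  \psi(x) \propto \exp\!\Big(-\tfrac{x^2}{4\sigma^2}\Big), \qquad
  K_{\mathrm{quantum}}(d) = \frac{1}{4\sigma^2}, \qquad
  K_{\mathrm{direct}}(d) \to 0 \ \text{ as } d \to 0 .
\]
% The quantum Fisher information per photon for estimating d stays constant
% as d -> 0, while direct intensity imaging's Fisher information vanishes
% ("Rayleigh's curse"); spatial-mode demultiplexing attains the quantum limit.
```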
Instructions to join the fully virtual workshop session in the academic metaverse: https://immorlica.com/workshop.htm
**Recording Notice**
Once you enter Gathertown, you consent to being recorded. If you do not wish to be recorded, you can:
- Make yourself anonymous
- Not enter the Gathertown space
We study network games in which players choose both the partners with whom they associate and an action level (e.g., effort) that creates spillovers for those partners. We introduce a framework and two solution concepts, extending standard approaches for analyzing each choice in isolation: Nash equilibrium in actions and pairwise stability in links. Our main results show that, under suitable order conditions on incentives, stable networks take simple forms. The first condition concerns whether links create positive or negative payoff spillovers. The second concerns whether actions are strategic complements to links, or strategic substitutes. Together, these conditions yield a taxonomy of the relationship between network structure and economic primitives organized around two network architectures: ordered overlapping cliques and nested split graphs. We apply our model to understand the consequences of competition for status, to microfound matching models that assume clique formation, and to interpret empirical findings that highlight unintended consequences of group design.
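As a purely illustrative instance of such a framework (the linear-quadratic form below is a standard textbook specification supplied for concreteness, not the paper's model), imagine player $i$ choosing an action $x_i \ge 0$ and links $g_{ij} \in \{0,1\}$:

```latex
\[
  u_i(x,g) \;=\; x_i - \tfrac{1}{2}x_i^2
  \;+\; \beta \sum_{j \neq i} g_{ij}\, x_i x_j
  \;-\; c \sum_{j \neq i} g_{ij} .
\]
% beta > 0: links create positive spillovers and actions are strategic
% complements to links; beta < 0 gives the negative-spillover, substitutes
% case. Nash equilibrium disciplines the actions x_i; pairwise stability
% disciplines the links g_ij.
```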
Scrambling of quantum information is an important feature at the root of randomization and benchmarking protocols, the onset of quantum chaos, and black-hole physics.
Unscrambling this information is possible given perfect knowledge of the scrambler [arXiv:1710.03363].
We show that one can retrieve the scrambled information without any prior knowledge of the scrambler, via a learning algorithm that builds an efficient decoder. Surprisingly, complex quantum scramblers admit Clifford decoders: the salient properties of a scrambling unitary can be described efficiently even if it is exponentially complex, as long as it is not fully chaotic. This is possible because all the redundant complexity can be characterized as an entropy and, for non-chaotic black holes, can be efficiently pushed away, much as in a refrigerator. This entropy is not due to thermal fluctuations but to the non-stabilizer behavior of the scrambler.
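One standard way to quantify "non-stabilizer behavior" is the stabilizer Rényi entropy of Leone, Oliviero, and Hamma; we note it here as a plausible reading of the entropy being described, not as a quantity this abstract itself defines. For an $n$-qubit pure state $|\psi\rangle$ and the Pauli group $\mathcal{P}_n$:

```latex
\[
  M_2(\psi) \;=\; -\log_2 \sum_{P \in \mathcal{P}_n}
  \frac{\langle \psi | P | \psi \rangle^{4}}{2^{n}} ,
\]
% M_2 vanishes exactly on stabilizer states and is invariant under Clifford
% unitaries, making it a natural measure of the "magic" (non-stabilizerness)
% that separates Clifford-decodable scramblers from fully chaotic ones.
```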
In the holographic approach to quantum gravity, quantum information theory plays a fundamental role in understanding how semiclassical gravity emerges from the microscopic description. The map (sometimes called the dictionary) between these two descriptions has the structure of a quantum error correcting code. In the context of an evaporating black hole, this code can be arbitrarily far from an isometry. Such codes are novel from a quantum information standpoint, and their properties are not yet well understood. I will describe a simple toy model of an evaporating black hole which allows for an explicit construction of the dictionary using the Euclidean gravity path integral. I will also describe the sense in which this dictionary is a non-isometric code, explain its basic properties, and comment on implications for semiclassical physics in the black hole interior.
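For orientation, the generic definition being invoked (standard usage, not the speaker's specific construction): an encoding map $V$ is an isometry when $V^\dagger V$ is the identity, which requires the target space to be at least as large as the source. For an evaporating black hole, the effective (semiclassical) description can have more states than the fundamental one, forcing

```latex
\[
  V : \mathcal{H}_{\mathrm{eff}} \longrightarrow \mathcal{H}_{\mathrm{fund}},
  \qquad
  V^{\dagger} V \neq \mathbb{1}_{\mathrm{eff}}
  \quad \text{whenever} \quad
  \dim \mathcal{H}_{\mathrm{fund}} < \dim \mathcal{H}_{\mathrm{eff}} ,
\]
% so states that are distinct in the effective description need not remain
% distinct, or even normalized, after encoding into the fundamental one.
```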
We examine how well someone learns when information from original sources reaches them only after repeated, noisy person-to-person relay. We characterize how many independent chains a learner needs to access in order to learn accurately, as these chains grow long. In the presence of random mutation of message content and transmission failures, there is a sharp threshold such that a receiver fully learns if they have access to more chains than the threshold number, and learns nothing if they have fewer. Moreover, we show that as the distance to primary sources grows, all learning comes from either the frequency or the content of received messages, so learning only from the more informative dimension is equivalent to full Bayesian learning. However, even slight uncertainty over the relative rates of mutations makes learning from long chains impossible, no matter how many distinct sources information trickles down from. This suggests that forces which lengthen chains of communication can severely disrupt social learning, even if they increase the frequency of communication.
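A back-of-the-envelope illustration of why chain length matters (a binary-symmetric-channel toy supplied for intuition; the paper's model is richer): if each relay independently flips a binary message with probability $\delta \in (0, \tfrac12)$, then after $t$ relays

```latex
\[
  \Pr[\text{message}_t \neq \text{message}_0]
  \;=\; \tfrac{1}{2}\Big(1 - (1-2\delta)^{t}\Big)
  \;\to\; \tfrac{1}{2} \ \text{ as } t \to \infty ,
\]
% so the correlation of a single chain's output with the truth decays like
% (1-2delta)^t, and the number of independent chains needed to compensate
% grows accordingly; this is consistent with a threshold in chain count.
```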
This paper introduces a simple model of contemporary information markets: consumers prefer high-quality information and share stories and posts judiciously. High-quality stories are costly to produce, and overall quality is endogenous. When suppliers' payoffs derive from how many consumers view their stories, quality is highest when social connectedness is neither too high nor too low. Third-party misinformation can increase high-quality output, since it leads consumers to share more judiciously. In highly connected markets, low-quality stories are widely seen and dominate. However, when suppliers' payoffs derive solely from consumer actions (e.g., votes or purchases) based on their stories and consumers are highly connected, consumers perfectly infer quality, and quality is highest.
(This work is joint with Krishna Dasaratha.) We study learning on social media with an equilibrium model of users interacting with shared news stories. Rational users arrive sequentially and each observes an original story (i.e., a private signal) and a sample of predecessors' stories in a news feed, then decides which stories to share. The observed sample of stories depends on what predecessors share as well as the sampling algorithm, which represents a design choice of the platform. We focus on how much the algorithm relies on virality (how many times a story has been previously shared) when generating news feeds. Showing users more viral stories can increase information aggregation, but it can also generate steady states where most shared stories are wrong. Such misleading steady states self-perpetuate, as users who observe these wrong stories develop wrong beliefs, and thus rationally continue to share them. We find that these bad steady states appear discontinuously, and even a benevolent platform designer either accepts these misleading steady states or induces fragile learning outcomes in the optimal design.
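A minimal toy simulation of the virality mechanism (our own illustrative sketch: binary stories, a naive majority heuristic, and a made-up virality exponent `alpha`; nothing below is the authors' model or code):

```python
import random

def simulate(n_users=5000, q=0.6, sample_size=5, alpha=1.0, seed=0):
    """Toy model: users share binary stories; feeds oversample viral stories.

    theta is the true state; each story claims a value of theta.
    alpha = 0 samples shared stories uniformly; larger alpha weights a
    story's chance of being shown by (1 + its share count) ** alpha.
    """
    rng = random.Random(seed)
    theta = 1
    stories = []  # each story is [claimed_value, share_count]

    for _ in range(n_users):
        # Private signal is correct with probability q.
        signal = theta if rng.random() < q else 1 - theta

        # Virality-weighted news feed (sampled with replacement for simplicity).
        feed = []
        if stories:
            weights = [(1 + s[1]) ** alpha for s in stories]
            feed = rng.choices(stories, weights=weights, k=sample_size)

        # Naive majority heuristic: the private signal counts as one extra vote.
        votes = sum(s[0] for s in feed) + signal
        belief = 1 if votes * 2 > len(feed) + 1 else 0

        # Share feed stories that agree with the belief; post the signal as new.
        for s in feed:
            if s[0] == belief:
                s[1] += 1
        stories.append([signal, 0])

    shares = sum(s[1] for s in stories)
    correct = sum(s[1] for s in stories if s[0] == theta)
    return correct / max(shares, 1)  # fraction of shares that were correct

if __name__ == "__main__":
    for alpha in (0.0, 1.0, 3.0):
        print(f"alpha={alpha}: correct share fraction = {simulate(alpha=alpha):.3f}")
```

Sweeping `alpha` shows the qualitative tension described above: more weight on virality amplifies whichever stories circulate early, for better or worse. The quantitative behavior of this toy should not be read as the paper's results.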
This course uses quantum electrodynamics (QED) as a vehicle for covering several more advanced topics within quantum field theory, and so is aimed at graduate students who have already taken an introductory course on quantum field theory. Among the topics we hope to cover are: gauge invariance for massless spin-1 particles from special relativity and quantum mechanics; Ward identities; photon scattering and loops; UV and IR divergences and why they are handled differently; effective theories and the renormalization group; and anomalies.
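As a one-line orientation for one of these topics (standard textbook material, not specific to this course): the photon Ward identity states that an amplitude with an external photon of momentum $k$, written $\mathcal{M} = \epsilon_\mu(k)\,\mathcal{M}^\mu(k)$, obeys

```latex
\[
  k_\mu \, \mathcal{M}^{\mu}(k) \;=\; 0 ,
\]
% a consequence of gauge invariance (current conservation) that guarantees
% the unphysical photon polarizations decouple from S-matrix elements.
```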
We revamp the constructive enumeration of 1/16-BPS states in maximally supersymmetric Yang-Mills theory in four dimensions, and search for states that are not of multi-graviton form. A handful of such states are found for gauge group SU(2) at relatively high energies, resolving a decade-old enigma. Along the way, we clarify various subtleties in the literature and prove a non-renormalization theorem about the exactness of the cohomological enumeration in perturbation theory. We point out a giant-graviton-like feature in our results, and we envision that a deeper analysis of our data will elucidate the fundamental properties of black hole microstates.
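For context, the standard fact behind "cohomological enumeration" (general supersymmetric lore, not a result of this paper): with respect to a chosen supercharge $Q$, BPS states are exactly the harmonic representatives of $Q$-cohomology, since positivity of the anticommutator gives

```latex
\[
  \langle \psi |\, \{Q, Q^{\dagger}\} \,| \psi \rangle
  \;=\; \| Q |\psi\rangle \|^{2} + \| Q^{\dagger} |\psi\rangle \|^{2} ,
  \qquad
  \mathcal{H}_{\mathrm{BPS}} \;\cong\; H^{*}(Q)
  \;=\; \ker Q \,/\, \operatorname{im} Q .
\]
% A state saturates the BPS bound iff it is annihilated by both Q and
% Q^dagger, and each Q-cohomology class contains exactly one such harmonic
% representative (a Hodge-theory argument).
```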