Abstract
The dynamics of an infectious disease are usually approached at the population scale. However, an epidemic outbreak depends on the existence of an active infection at the level of the individual. A full study of the interaction between the infection of hosts and its transmission in the population requires the incorporation of many factors, such as physiological age, age of infection, risk conditions, contact structures and other variables involving different spatial and temporal scales. Nevertheless, simple models can still give some insight into the intricate mechanisms of interaction necessary for an epidemic outbreak to occur; in particular, one can explore the roles played by the reproductive numbers at the between-host and within-host levels. In this talk I will review some results on the epidemiology of between-host and within-host interactions.
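As a schematic illustration of the two scales (a standard SIR model and a standard target-cell-limited viral dynamics model, chosen here for concreteness rather than taken from the talk), the two threshold quantities read
\[
R_0^{\mathrm{between}} = \frac{\beta N}{\gamma},
\qquad
R_0^{\mathrm{within}} = \frac{\lambda k p}{d\,\delta\,c},
\]
where $\beta$ and $\gamma$ are the population-level transmission and recovery rates, and the within-host number comes from the system $\dot T = \lambda - dT - kVT$, $\dot I = kVT - \delta I$, $\dot V = pI - cV$ for target cells, infected cells and free virus. An outbreak at either scale requires the corresponding number to exceed one.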
Abstract
We will discuss some adaptations of the standard epidemic models that incorporate various kinds of restrictions on the interaction between susceptible and infected individuals, and study the effect of such restrictions on the prognosis of an epidemic. In one case, we study the effect of avoiding known infected neighbors on the persistence of a recurring infection process. In another, we develop a flexible mathematical framework for a pool-testing and badging protocol for controlling contagious epidemics and tackling the far-reaching associated challenges, including understanding and evaluating the individual and collective risks of returning previously infected individuals to normal society, as well as other economic and social arrangements and interventions to protect against disease.
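As a minimal sketch of the pooled-testing idea (classical two-stage Dorfman pooling, used here only to illustrate the kind of trade-off such a framework formalizes; the function name and parameter values are ours, not the speakers'):

    def dorfman_tests_per_person(pool_size, prevalence):
        """Expected tests per person under two-stage Dorfman pooling:
        one pooled test, plus individual retests whenever the pool is
        positive (probability 1 - (1 - prevalence)**pool_size)."""
        return 1.0 / pool_size + 1.0 - (1.0 - prevalence) ** pool_size

    # At 2% prevalence the optimal pool size is about 8, cutting the
    # testing load to roughly 0.27 tests per person.
    p = 0.02
    best = min(range(2, 50), key=lambda n: dorfman_tests_per_person(n, p))
    print(best, dorfman_tests_per_person(best, p))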
Abstract
COVID-19, the disease caused by the novel coronavirus (SARS-CoV-2) that emerged in December 2019, is the greatest public health challenge humans have faced since the 1918 influenza pandemic: it has so far caused over 615 million confirmed cases and 6.5 million deaths. In this talk, I will present some mathematical models for assessing the population-level impact of the various intervention strategies (pharmaceutical and non-pharmaceutical) being used to control and mitigate the burden of the pandemic. Continued human interference with natural ecosystems, such as through anthropogenic climate change, environmental degradation and land-use changes, makes us increasingly vulnerable to the emergence, re-emergence and resurgence of infectious diseases (particularly respiratory pathogens with pandemic potential). I will discuss some of the lessons learned from our COVID-19 modeling studies and propose ways to mitigate the next respiratory pandemic.
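For concreteness, a minimal sketch of the kind of compartmental model such assessments build on (a basic SIR system with a time-limited transmission-reducing intervention; the parameter values are illustrative assumptions, not the speaker's calibrated models):

    import numpy as np
    from scipy.integrate import solve_ivp

    def sir(t, y, beta, gamma, control):
        # control(t) in [0, 1] scales transmission to mimic an NPI
        # such as masking or a lockdown window.
        S, I, R = y
        new_inf = control(t) * beta * S * I
        return [-new_inf, new_inf - gamma * I, gamma * I]

    beta, gamma = 0.3, 0.1                           # R0 = 3 with no intervention
    npi = lambda t: 0.5 if 30 <= t <= 90 else 1.0    # 50% cut on days 30-90
    sol = solve_ivp(sir, (0, 180), [0.999, 0.001, 0.0],
                    args=(beta, gamma, npi), dense_output=True)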
Abstract
Throughout the coronavirus disease 2019 (COVID-19) pandemic, countries have relied on a variety of ad hoc border control protocols to allow for non-essential travel while safeguarding public health, from quarantining all travellers to restricting entry from select nations on the basis of population-level epidemiological metrics such as cases, deaths or testing positivity rates. Here we report the design and performance of a reinforcement learning system, nicknamed Eva. In the summer of 2020, Eva was deployed across all Greek borders to limit the influx of asymptomatic travellers infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and to inform border policies through real-time estimates of COVID-19 prevalence. In contrast to country-wide protocols, Eva allocated Greece’s limited testing resources on the basis of incoming travellers’ demographic information and testing results from previous travellers. By comparing Eva’s performance against modelled counterfactual scenarios, we show that Eva identified 1.85 times as many asymptomatic, infected travellers as random surveillance testing, with up to 2–4 times as many during peak travel, and 1.25–1.45 times as many asymptomatic, infected travellers as testing policies that utilize only epidemiological metrics. We demonstrate that this latter benefit arises, at least partially, because population-level epidemiological metrics had limited predictive value for the actual prevalence of SARS-CoV-2 among asymptomatic travellers and exhibited strong country-specific idiosyncrasies in the summer of 2020. Our results raise serious concerns about the effectiveness of country-agnostic, internationally proposed border control policies that are based on population-level epidemiological metrics. Instead, our work represents a successful example of the potential of reinforcement learning and real-time data for safeguarding public health. Joint work with Kimon Drakopoulos, Vishal Gupta, Ioannis Vlachogiannis, Christos Hadjichristodoulou, Pagona Lagiou, Gkikas Magiorkinis, Dimitrios Paraskevis and Sotirios Tsiodras.
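As a hedged caricature of such an allocation loop (a generic Beta-Binomial Thompson-sampling bandit over traveller types; Eva's actual estimator is an empirical-Bayes design, and all names and numbers below are our illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    n_types = 4                                          # e.g. country-of-origin groups
    alpha, beta = np.ones(n_types), np.ones(n_types)     # Beta priors on prevalence
    true_prev = np.array([0.002, 0.010, 0.030, 0.005])   # unknown to the allocator

    for day in range(30):
        draws = rng.beta(alpha, beta, size=(500, n_types))   # budget of 500 tests/day
        n = np.bincount(draws.argmax(axis=1), minlength=n_types)
        pos = rng.binomial(n, true_prev)                 # test results come back
        alpha += pos                                     # posterior update: positives
        beta += n - pos                                  # ... and negatives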
Abstract
Motivated by the discovery of hard-to-find social networks (such as MSM or PWIDs) or by finding contact-tracing strategies, we consider the question of exploring the topology of random structures (such as a random graph G) by random walks. A natural and well-known way to do so is the usual random walk, which jumps from a vertex of G to a neighboring vertex, providing information on the connected components of the graph G. The number of these connected components is the Betti number beta_0. To gather further information on the higher Betti numbers that describe the topology of the graph, we can consider the simplicial complex C associated with the graph G: a k-simplex (edge for k=1, triangle for k=2, tetrahedron for k=3, etc.) belongs to C if all the (k-1)-simplices that constitute it also belong to C. For example, a triangle belongs to C if its three edges are in the graph G. Several random walks have recently been proposed to explore these structures, mostly in theoretical computer science. We propose a new random walk whose generator is related to a higher-order Laplacian of the graph and to the Betti number beta_k. A rescaling of the walk for k=2 (a cycle-valued random walk) is also detailed when the random walk visits a regular triangulation of the torus. We embed the space of chains into spaces of currents to establish the limit theorem.
Joint work with T. Bonis, L. Decreusefond and Z. Zhang.
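As a toy illustration of the quantities being explored (real-coefficient Betti numbers of a small clique complex, computed from ranks of boundary matrices; the code is ours, not the authors'):

    import numpy as np

    vertices = [0, 1, 2, 3]
    edges = [(0, 1), (0, 2), (1, 2), (2, 3)]   # a triangle plus a pendant edge
    triangles = [(0, 1, 2)]                    # the filled 2-simplex of C

    # Boundary of edges: vertices x edges, -1 at the tail, +1 at the head.
    d1 = np.zeros((len(vertices), len(edges)))
    for e, (i, j) in enumerate(edges):
        d1[i, e], d1[j, e] = -1.0, 1.0

    # Boundary of triangles: edges x triangles, with alternating signs.
    idx = {e: m for m, e in enumerate(edges)}
    d2 = np.zeros((len(edges), len(triangles)))
    for t, (i, j, k) in enumerate(triangles):
        d2[idx[(j, k)], t], d2[idx[(i, k)], t], d2[idx[(i, j)], t] = 1.0, -1.0, 1.0

    r1, r2 = np.linalg.matrix_rank(d1), np.linalg.matrix_rank(d2)
    beta0 = len(vertices) - r1        # connected components: 1
    beta1 = len(edges) - r1 - r2      # unfilled cycles: 0 here,
    print(beta0, beta1)               # 1 if the triangle is removed from C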
Abstract
We study a non-Markovian individual-based stochastic spatial epidemic model where the number of locations and the number of individuals at each location both grow to infinity while satisfying a certain growth condition.
Each individual is associated with a random infectivity function, which depends on the age of infection.
The rate of infection at each location averages the infectivity from all the locations.
The epidemic dynamics at each location are described by the total force of infection, the number of susceptible individuals, the number of individuals who were infected at each time and have been infected for a certain amount of time, and the number of recovered individuals. These processes can be described using a time-space representation.
We prove a functional law of large numbers for these time-space processes, and in the limit, we obtain a set of time-space integral equations together with the limit of the number of infected individuals tracking the age of infection as a time-age-space integral equation.
Joint work with G. Pang (Rice University).
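Schematically (our notation, not the authors' exact limit equations), the limiting aggregate force of infection at location $x$ takes a form such as
\[
\bar{\mathfrak F}(t,x) \;=\; \sum_{x'} \kappa(x,x') \int_0^\infty \bar\lambda(a)\, \bar{\mathfrak I}(t,\mathrm{d}a,x'),
\]
where $\kappa$ encodes the averaging weights across locations, $\bar\lambda(a)$ is the mean infectivity at age of infection $a$, and $\bar{\mathfrak I}(t,\cdot,x')$ is the limiting measure of infected individuals at location $x'$ indexed by their age of infection.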
Abstract
People's interaction networks play a critical role in epidemics. However, precisely mapping the network structure is often expensive or even impossible. I will show that it is unnecessary to map the entire network: instead, contact tracing a few samples from the population is enough to estimate the likelihood and size of an outbreak.
More precisely, I start by studying a simple epidemic model in which one node is initially infected and an infected node transmits the disease to each of its neighbors independently with probability p. In this model, I will present a nonparametric estimator of the likelihood of an outbreak based on local graph features and give theoretical guarantees on the estimator's accuracy for a large class of networks. Finally, I will extend the result to the general SIR model with random recovery times: local graph features are enough to predict the time evolution of epidemics on a large class of networks.
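For reference, a minimal sketch of the brute-force baseline such a local estimator avoids: a Monte Carlo bond-percolation simulation over the fully known network (the function name and threshold convention are our assumptions):

    import random
    from collections import deque

    def outbreak_probability(adj, p, threshold, trials=2000):
        """Estimate P(final size >= threshold) when a uniformly random
        node is infected and each edge transmits independently w.p. p."""
        n, hits = len(adj), 0
        for _ in range(trials):
            infected = {random.randrange(n)}
            queue = deque(infected)
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in infected and random.random() < p:
                        infected.add(v)
                        queue.append(v)
            hits += len(infected) >= threshold
        return hits / trials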
Abstract
Contact tracing, whether manual (questioning diagnosed individuals about recent contacts) or digital (an app keeping track of close contacts), is one of many measures to reduce the spread of an epidemic. Mathematically it is hard to analyse because future infections of an individual are no longer independent of earlier contacts. In the talk I will describe a stochastic model for a simplified situation, allowing for both manual and digital contact tracing, for which it is possible to obtain results for the initial phase of the epidemic, with focus on the effective reproduction number $R_E$, which determines whether contact tracing will prevent an outbreak. (Joint work with Dongni Zhang)
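As a deliberately crude toy (a thinned branching process that ignores exactly the dependence between an individual's infections that makes the real model hard; all parameters are our assumptions, not the speaker's):

    import numpy as np

    def effective_R(R0=3.0, p_diag=0.5, p_trace=0.6, trials=200_000):
        """Mean offspring number when each secondary case is averted iff
        its infector is diagnosed (p_diag) and the contact is traced in
        time (p_trace). In this toy, R_E = R0 * (1 - p_diag * p_trace)."""
        rng = np.random.default_rng(0)
        offspring = rng.poisson(R0, size=trials)
        averted = rng.binomial(offspring, p_diag * p_trace)
        return (offspring - averted).mean()

    print(effective_R())   # about 3 * (1 - 0.3) = 2.1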
Making use of modern black-box AI tools such as deep reinforcement learning is potentially transformational for safety-critical systems such as data centers, the electricity grid, transportation, and beyond. However, such machine-learned algorithms typically do not have formal guarantees on their worst-case performance, stability, or safety, and are often difficult to deploy in distributed, networked settings. So, while their performance may improve upon traditional approaches in “typical” cases, they may perform arbitrarily worse in scenarios where the training examples are not representative (due to, e.g., distribution shift) or where global information is unavailable to local controllers. These are significant drawbacks when considering the use of AI tools in safety-critical networked systems. Thus, a challenging open question emerges: Is it possible to provide guarantees that allow black-box AI tools to be used in safety-critical applications? In this talk, I will provide an overview of a variety of projects from my group that seek to develop robust and localizable tools combining model-free and model-based approaches to yield AI tools with formal guarantees on performance, stability, safety, and sample complexity.
A recent line of work has established uncoupled learning dynamics such that, when employed by all players in a game, each player's regret after T repetitions grows polylogarithmically in T, an exponential improvement over the traditional guarantees within the no-regret framework. However, so far these results have been limited to certain classes of games with structured strategy spaces---such as normal-form and extensive-form games. Whether O(polylog T) regret bounds can be obtained for general convex and compact strategy sets---which occur in many fundamental models in economics and multiagent systems---while retaining efficient strategy updates has remained an important open question. In this talk, we answer it in the positive by establishing the first uncoupled learning algorithm with O(log T) per-player regret in general convex games, that is, games with concave utility functions supported on arbitrary convex and compact strategy sets. Our learning dynamics are based on an instantiation of optimistic follow-the-regularized-leader over an appropriately lifted space using a self-concordant regularizer that is, peculiarly, not a barrier for the feasible region. Further, our learning dynamics are efficiently implementable given access to a proximal oracle for the convex strategy set, leading to O(loglog T) per-iteration complexity; we also give extensions when access to only a linear optimization oracle is assumed. Finally, we adapt our dynamics to guarantee O(sqrt(T)) regret in the adversarial regime. Even in those special cases where prior results apply, our algorithm improves over the state-of-the-art regret bounds either in terms of the dependence on the number of iterations or on the dimension of the strategy sets. Based on joint work with Ioannis Anagnostides, Haipeng Luo, Chung-Wei Lee, Christian Kroer, and Tuomas Sandholm. Paper link: https://arxiv.org/abs/2206.08742
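For intuition, a minimal instance of optimistic FTRL in the best-understood special case (the entropic regularizer on the simplex in a zero-sum matrix game; the paper's dynamics instead use a self-concordant regularizer over general convex sets, so this is a sketch of the template, not the algorithm itself):

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def optimistic_ftrl(A, T=1000, eta=0.1):
        """x maximizes x^T A y, y minimizes it; each player plays FTRL on
        cumulative gradients plus an optimistic one-step prediction."""
        m, n = A.shape
        Gx, Gy = np.zeros(m), np.zeros(n)     # cumulative gradients
        mx, my = np.zeros(m), np.zeros(n)     # last-round predictions
        for _ in range(T):
            x = softmax(eta * (Gx + mx))
            y = softmax(-eta * (Gy + my))
            gx, gy = A @ y, A.T @ x           # this round's payoff gradients
            Gx += gx; Gy += gy
            mx, my = gx, gy
        return x, y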
A central problem in machine learning is as follows: How should we train models using data generated from a collection of clients/environments, if we know that these models will be deployed in a new and unseen environment? In the setting of few-shot learning, two prominent approaches are: (a) develop a modeling framework that is “primed” to adapt, such as Model-Agnostic Meta-Learning (MAML), or (b) develop a common model using federated learning (such as FedAvg), and then fine-tune the model for the deployment environment. We study both approaches in the multi-task linear representation setting. We show that models trained through either approach generalize to new environments because the dynamics of training induce the models to evolve toward the common data representation among the clients' tasks. In both cases, the structure of the bi-level update at each iteration (an inner and outer update with MAML, and a local and global update with FedAvg) holds the key: the diversity among client data distributions is exploited via the inner/local updates, which induce the outer/global updates to bring the representation closer to the ground truth. In both settings, these are the first results that formally show representation learning, and they establish exponentially fast convergence to the ground-truth representation. Based on joint work with Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sewoong Oh. Papers: https://arxiv.org/abs/2202.03483 , https://arxiv.org/abs/2205.13692
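A minimal sketch of the FedAvg-style bi-level loop in the multi-task linear representation setting (noiseless data, exact inner head refits, and all dimensions and step sizes are our illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, clients, n = 20, 3, 10, 50
    B_star = np.linalg.qr(rng.normal(size=(d, k)))[0]   # ground-truth representation
    W = rng.normal(size=(clients, k))                   # per-client true heads
    B = np.linalg.qr(rng.normal(size=(d, k)))[0]        # shared estimate

    for _ in range(300):
        grad = np.zeros((d, k))
        for i in range(clients):
            X = rng.normal(size=(n, d))
            y = X @ B_star @ W[i]                          # client i's data
            w = np.linalg.lstsq(X @ B, y, rcond=None)[0]   # inner/local: refit head
            grad += np.outer(X.T @ (X @ B @ w - y), w) / n
        B = np.linalg.qr(B - 0.05 * grad / clients)[0]     # outer/global step on B

    # subspace distance to the ground truth (expected to shrink across rounds)
    print(np.linalg.norm((np.eye(d) - B @ B.T) @ B_star, 2))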
In several applications of real-time matching of demand to supply in online marketplaces---for example, matching delivery requests to dispatching centers in Amazon or allocating video ads to users in YouTube---the platform allows for some latency (or there is an inherent allowance for latency) in order to batch the demand and improve the efficiency of the resulting matching. Motivated by these scenarios, in this talk I investigate the optimal trade-off between batching and inefficiency in the context of designing robust online allocations. In particular, I consider K-stage variants of the classic vertex-weighted bipartite b-matching and AdWords problems in the adversarial setting, where online vertices arrive stage-wise in K batches, in contrast to fully online arrival. Our main result for both problems is an optimal (1-(1-1/K)^K)-competitive (fractional) matching algorithm, improving the classic (1-1/e) competitive-ratio bound known for the online variants of these problems (Mehta et al., 2007; Aggarwal et al., 2011). Our main technique, at a high level, is developing algorithmic tools to vary the trade-off between “greediness” and “hedging” of the matching algorithm across stages. We rely on a particular family of convex-programming-based matchings that distribute the demand in a specifically balanced way among the supply in different stages, while carefully modifying the balancedness of the resulting matching across stages. More precisely, we identify a sequence of polynomials with decreasing degrees that can be used as strictly concave regularizers of the maximum-weight matching linear program to form these convex programs. By providing a structural decomposition of the underlying graph using the optimal solutions of these convex programs and recursively connecting the regularizers together, we develop a new multi-stage primal-dual framework to analyze the competitive ratio of this algorithm. We extend our results to integral allocations in the vertex-weighted b-matching problem with large budgets, and in the AdWords problem with small bid-to-budget ratios. I will also briefly mention a recent extension of these results to the multi-stage configuration allocation problem and its applications to video ads. The talk is based on the following two working papers: (i) "Batching and Optimal Multi-stage Bipartite Allocations", https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3689448 (ii) "Optimal Multi-stage Configuration Allocation with Applications to Video Advertising", https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3924687
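The guarantee interpolates cleanly between the offline and fully online extremes, as a two-line check of the abstract's own formula shows (illustrative only):

    # K = 1 is a single offline batch (ratio 1); K -> infinity recovers the
    # classic online bound 1 - 1/e ~ 0.632 of Mehta et al. / Aggarwal et al.
    for K in (1, 2, 3, 5, 10, 100):
        print(K, 1 - (1 - 1 / K) ** K)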