Abstract
Consider longitudinal networks whose edges turn on and off according to a discrete-time Markov chain with exponential-family transition probabilities. We characterize when their joint distributions are also exponential families with the same parameter, and show that the permutation-uniform subclass of these chains permits interpretation as an independent, identically distributed sequence on the same state space. We apply these ideas to temporal exponential random graph models, for which permutation uniformity is well suited, and discuss mean-parameter convergence, dyadic independence, and exchangeability. The framework makes it possible to bring standard tools, whether from asymptotic theory or from single-observation exponential random graph models, to bear on longitudinal-network Markov chains.
Such single-observation models are often in log-linear form, which allows us to frame the problem of testing model fit as an exact conditional test whose p-value can be approximated efficiently for networks of both small and moderately large size. An important extension of this theory is to latent-variable blockmodels, an application we will briefly discuss.
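Returning to the Markov-chain setup above, a schematic illustration may help fix ideas (generic notation, not taken from the talk): such a chain $(X_t)$ on a space of networks has one-step transitions of exponential-family form
\[
\Pr_{\theta}\bigl(X_t = y \mid X_{t-1} = x\bigr) \;=\; \exp\bigl\{\theta^{\top} s(y,x) - \psi(\theta,x)\bigr\},
\]
where $s(y,x)$ collects sufficient statistics of the current and previous networks, $\theta$ is the natural parameter, and $\psi(\theta,x)$ is the log-normalizing constant; the question is then when the joint law of $(X_1,\dots,X_T)$ is again an exponential family in the same $\theta$.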
This talk is based on joint work with William K. Schwartz, Hemanshu Kaul, Despina Stasi, Elizabeth Gross, Debdeep Pati, and Vishesh Karwa.
Abstract
Many of today’s most promising technological systems involve very large numbers of autonomous agents that influence each other and make strategic decisions within a network structure. Examples include opinion dynamics, targeted marketing in social networks, economic exchange and international trade, product adoption, and social contagion. While traditional tools for the analysis of these systems assume that a social planner has full knowledge of the underlying game, two issues emerge when we turn to very large networks. First, collecting data about the exact network of interactions becomes very expensive, or outright impossible because of privacy concerns. Second, methods for designing optimal interventions that rely on the exact network structure typically do not scale well with the population size.
To circumvent these issues, in this talk I will consider a framework in which the social planner designs interventions based on probabilistic rather than exact information about the agents' interactions. I will introduce the tool of “graphon games” as a way to formally describe strategic interactions in this setting, and I will illustrate how this tool can be exploited to design asymptotically optimal interventions for general classes of network games, beyond the linear-quadratic setting.
Abstract
An embedded vertexon sequence is the sequence of vertex sets of a sequence of graphs in a connected compact set $M$ in $\mathbb{R}^{m}$, together with an asymptotically dense partition hierarchy of $M$. Vertexon sequences and the associated graph sequences have subsequential limit measures in $M$ and $M^2$ respectively, independent of the partition hierarchy within a large class. One may define the differentiation of functions on vertexon limits with open support in $M$, as well as certain directional derivatives in the support of the associated graphon measure in $M^2$. Consequently, subject to conditions, one may seek stationary node-dependent Nash values for Embedded Graphon Mean Field Games (EGMFGs). An example will be given in the case of a class of Linear Quadratic Gaussian EGMFGs.
Work with Rinel Foguen-Tschuendom, Shuang Gao, and Minyi Huang.
Abstract
Consider the subgraph sampling model, where we observe a random subgraph of a given (possibly non-random) large graph $G_n$ obtained by choosing vertices of $G_n$ independently at random with probability $p_n$. In this setting, we study the question of estimating the number of copies $N(H,G_n)$ of a fixed motif/small graph $H$ (think of $H$ as an edge, a two-star, or a triangle) in the big graph $G_n$. We derive necessary and sufficient conditions for the consistency and the asymptotic normality of a natural Horvitz-Thompson (HT) type estimator.
As it turns out, the asymptotic normality of the HT estimator exhibits an interesting fourth-moment phenomenon, which asserts that the HT estimator (appropriately centered and rescaled) converges in distribution to the standard normal whenever its fourth moment converges to 3. We apply our results to several natural graph ensembles, such as sparse graphs with bounded degree, Erdős-Rényi random graphs, random regular graphs, and dense graphons.
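For orientation, here is a sketch of the estimator in this setting (our notation, not necessarily that of the talk): write $\widehat{V}$ for the random set of sampled vertices, each vertex of $G_n$ retained independently with probability $p_n$. The HT-type estimator inflates the motif count observed in the induced subgraph $G_n[\widehat{V}]$ by the inverse inclusion probability of a copy of $H$,
\[
\widehat{N}(H, G_n) \;=\; \frac{1}{p_n^{|V(H)|}}\, N\bigl(H, G_n[\widehat{V}]\bigr),
\]
which is unbiased for $N(H, G_n)$ because each copy of $H$ survives the vertex sampling with probability $p_n^{|V(H)|}$.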
This talk is based on joint work with Bhaswar B. Bhattacharya and Sayan Das.
Abstract
Cographs are by definition the $P_4$-free graphs, i.e. graphs avoiding the path $P_4$ as an induced subgraph. In this talk, we will consider a uniform random cograph with $n$ vertices, for large $n$. We shall describe the (random) graphon limit of this object, which is constructed using a Brownian excursion. Motivated by some probabilistic work around the Erdős-Hajnal conjecture, we also consider large independent sets in uniform cographs. For both aspects, cographs behave differently from most other $H$-free random graphs.
Based on joint work with F. Bassino, M. Bouvel, M. Drmota, L. Gerin, M. Maazoun and A. Pierrot.
Abstract
We will present our attempts thus far to develop a theory of graphon-valued stochastic processes. We will give a brief review of the theory of graphons and of dynamics constructed on the space of graphons. We shall then construct and analyse a natural class of such processes arising from population genetics. In conclusion, we shall present the challenges in our ongoing work on constructing dynamics in which the edges and vertices interact with each other. This is joint work with Frank den Hollander and Adrian Roellin.
Abstract
In this talk, I will present the multivariate normal approximation for the centered subgraph counts in dense random graphs. The main motivation to investigate these statistics is the fact that they are key to understanding fluctuations of regular subgraph counts, since they act as an orthogonal basis of a corresponding $L^2$ space. We also identify the resulting limiting Gaussian stochastic measures by means of the theory of generalised U-statistics and Gaussian Hilbert spaces, which we think is a suitable framework to describe and understand higher-order fluctuations in dense random graph models.
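To fix notation (an illustrative sketch in the homogeneous case $G(n,p)$; the dense random graph models of the talk are more general), the centered subgraph count attached to a graph $H$ is, up to normalization,
\[
\sum_{\phi}\ \prod_{(u,v) \in E(H)} \bigl( A_{\phi(u)\phi(v)} - p \bigr),
\]
where $A$ is the adjacency matrix and the sum runs over injective maps $\phi : V(H) \to \{1,\dots,n\}$; centering each edge indicator, rather than the raw subgraph count, is what produces the orthogonality in the corresponding $L^2$ space.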
This is joint work with Adrian Roellin.
Abstract
For any given graph $H$, one may define a natural corresponding functional $\|\cdot\|_H$ for real- (or complex-) valued functions by using the homomorphism density of $H$. We say that $H$ is \emph{norming} if $\|\cdot\|_H$ is a norm. This generalises the Gowers octahedral norms, a widely used tool in extremal combinatorics to quantify quasirandomness.
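Concretely (standard notation from the theory of dense graph limits, included here for orientation): for a bounded symmetric measurable function $f : [0,1]^2 \to \mathbb{R}$ (or $\mathbb{C}$), the homomorphism density of $H$ in $f$ and the associated functional are
\[
t_H(f) \;=\; \int_{[0,1]^{V(H)}} \prod_{ij \in E(H)} f(x_i, x_j) \prod_{v \in V(H)} \mathrm{d}x_v,
\qquad
\|f\|_H \;:=\; \bigl| t_H(f) \bigr|^{1/|E(H)|}.
\]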
In 2008, Lov\'{a}sz asked what graphs are norming. This has been a central question in the theory of dense graph limits and also connects to Sidorenko's conjecture, a major open problem in extremal graph theory.
Over the past few years, there have been interesting developments on the Lovász question, where algebraic combinatorics, extremal graph theory, and functional analysis meet. In this talk, I will discuss some of this progress.
Based on joint work with David Conlon, Frederik Garbe, Jan Hladk\'{y}, Bjarne Sch\"{u}lke, and Sasha Sidorenko.
Abstract
The degree-restricted random process is a natural algorithmic model for generating graphs with degree sequence $D_n=(d_1,\dots,d_n)$: starting with an empty $n$-vertex graph, it sequentially adds new random edges so that the degree of each vertex $v_i$ remains at most $d_i$.
It is natural to ask whether the final graph of this process is similar to a uniform random graph with degree sequence $D_n$ (for $d$-regular degree sequences $D_n$ this was already raised by Wormald in the mid-1990s).
We show that, for degree sequences $D_n$ that are not nearly regular, the final graph of the degree-restricted random process differs substantially from a uniform random graph with degree sequence $D_n$.
Our proof uses the switching method, which is usually only applied to uniform random graph models -- rather than to stochastic processes.
Based on joint work with Mike Molloy and Erlang Surya.
Abstract
For a given graph $H$, we are interested in the critical value of $p$ at which a sample of an Erdős-Rényi random graph contains a copy of $H$ with high probability. In 2006, Kahn and Kalai conjectured that this critical value should be given (up to a logarithmic factor) by the minimum $p$ such that, in expectation, all subgraphs $H'$ of $H$ appear in $G(n,p)$.
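In symbols (our paraphrase of the statement above), writing $N(H', G(n,p))$ for the number of copies of $H'$ in $G(n,p)$, the expectation threshold is
\[
p_{\mathbb{E}}(H) \;:=\; \min\Bigl\{\, p \;:\; \mathbb{E}\bigl[N(H', G(n,p))\bigr] \ge 1 \ \text{for every subgraph } H' \subseteq H \,\Bigr\},
\]
and the conjecture asserts that the critical value $p_c(H)$ exceeds $p_{\mathbb{E}}(H)$ by at most a logarithmic factor.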
In this work, we will present a proof of a modified version of this conjecture. Our proof is based on a powerful "spread lemma", which played a key role in the recent breakthroughs on the sunflower conjecture and in the proof of the fractional Kahn-Kalai conjecture. Time permitting, we will discuss a new proof of the spread lemma using tools from Bayesian inference.
This is joint work with Elchanan Mossel, Jonathan Niles-Weed, and Nike Sun.
Abstract
It is well known that sparse random graphs (where the number of edges is of the order of the number of vertices) contain only a small number of triangles. On the other hand, many networks observed in real life are sparse but contain a large number of triangles. Motivated by this discrepancy, we ask the following two questions: How (un)likely is it that a sparse random graph contains a large number of triangles? What does the graph look like when it contains a large number of triangles? We also ask a related question: What is the probability that in a sparse random graph a large number of vertices are part of some triangle? We discuss these questions, as well as some applications to exponential random graph models.