With the release of the third gravitational-wave transient catalog (GWTC-3), the LIGO and Virgo detectors have reported nearly 100 gravitational waves from colliding black holes and neutron stars. Among these detections there have been numerous surprises, such as the exceptionally heavy GW190521, the confidently asymmetric GW190412, and the exceptionally light secondary of GW190814. In addition to analyzing each individual source's properties, such as its masses and spins, one can also summarize the collective properties of the colliding objects as population probability distributions over these parameters. As catalog sizes continue to grow, they enable both finer-grained investigations into the population properties of merging compact objects and more robust tests of GR in the strong-gravity regime. In this talk I will present data-driven statistical models that search for deviations from underlying theoretical expectations, both for individual gravitational waveform models and for population models describing the astrophysical distributions of merging compact binaries. I will present the results of applying this novel data-driven model to the 11 compact binary mergers in GWTC-1, then move to hierarchical models, inferring the binary black hole mass distribution with similar data-driven methods. I will conclude by showing new results from the LVK population analyses of GWTC-3 and by motivating the development of further data-driven statistical models for the wealth of observations expected in the fourth observing run, which, as we have seen, will likely continue to challenge theoretical expectations.
This session will in part focus on the challenge of unmeasured confounding and some select approaches for meeting this challenge, e.g., learning mixed graphical models. We will also discuss more “modern” methods for causal discovery including ones that exploit semiparametric assumptions to perform model selection.
An overview of the classical strategies (constraint-based and score-based algorithms) for learning causal DAGs. Relevant graphical and statistical concepts will be discussed, including Markov equivalence, faithfulness, conditional independence testing, consistency of the BIC score for model selection, and theoretical properties of methods such as the PC and GES algorithms.
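The workhorse primitive of constraint-based methods like PC is a conditional independence test. A minimal sketch of one common choice, a partial-correlation test with Fisher's z-transform for Gaussian data, is below; the function name `ci_test_fisher_z` and the synthetic chain X → Y → Z are illustrative assumptions, not part of any particular package.

```python
import numpy as np
from scipy import stats

def ci_test_fisher_z(data, i, j, cond, alpha=0.05):
    """Test X_i _||_ X_j | X_cond via partial correlation and Fisher's z.

    Returns True if independence is NOT rejected at level alpha.
    """
    n = data.shape[0]
    sub = data[:, [i, j] + list(cond)]
    corr = np.corrcoef(sub, rowvar=False)
    prec = np.linalg.inv(corr)  # precision matrix of the selected variables
    # partial correlation of X_i and X_j given the conditioning set
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    r = np.clip(r, -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))  # Fisher z-transform
    stat = np.sqrt(n - len(cond) - 3) * abs(z)
    p = 2 * (1 - stats.norm.cdf(stat))
    return bool(p > alpha)

# Synthetic data from the chain X -> Y -> Z: X and Z are marginally
# dependent but conditionally independent given Y.
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
z = -1.5 * y + rng.normal(size=n)
data = np.column_stack([x, y, z])

print(ci_test_fisher_z(data, 0, 2, []))   # marginal test: dependence detected
print(ci_test_fisher_z(data, 0, 2, [1]))  # conditioning on Y blocks the path
```

PC runs exactly such tests over growing conditioning sets to delete edges from a complete graph, so the test's level and power directly control the accuracy of the recovered skeleton.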
Sociologists have interesting things to say about the practice of natural science. I will discuss the sociological phenomenon of multiples in scientific discoveries, with examples drawn from how the ΛCDM cosmology grew, and examples of possible multiple discoveries to come from issues arising in our present well-tested but certainly incomplete cosmology.
This lecture will introduce Bayesian networks and their causal interpretation as causal graphical models, d-separation, the do-calculus, and the Shpitser-Pearl ID algorithm. We'll start by introducing Bayesian networks, causal graphical models, and interventions. We'll then show that two Bayesian networks with the same skeletons and v-structures represent the same conditional independence assumptions and prove that the d-separations present in any Bayesian network exhaust all the conditional independence assumptions that are guaranteed to hold in any distribution that factorizes according to that network. We then turn to causal models and introduce the do-calculus. We show the soundness of each of the rules of the do-calculus. Finally, we describe the Shpitser-Pearl algorithm for identifying causal effects in semi-Markovian models and, time-permitting, prove that the ID algorithm is complete for identifying causal effects; that is, a causal effect is identifiable if and only if the ID algorithm terminates successfully.
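The d-separation criterion described above can be checked mechanically via the standard moralization trick: restrict the DAG to the ancestors of X ∪ Y ∪ Z, "marry" co-parents, drop edge directions, delete Z, and test ordinary graph separation. A self-contained sketch (function and variable names are illustrative, not from any library) with the classic collider example:

```python
def ancestors(parents, nodes):
    """All ancestors of `nodes` (inclusive) in a child -> parents map."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, xs, ys, zs):
    """Check X _||_d Y | Z in a DAG via the moralization criterion."""
    anc = ancestors(parents, set(xs) | set(ys) | set(zs))
    # Build the undirected moral graph on the ancestral set.
    adj = {v: set() for v in anc}
    for child in anc:
        ps = [p for p in parents.get(child, []) if p in anc]
        for p in ps:  # keep parent-child edges, undirected
            adj[p].add(child)
            adj[child].add(p)
        for a in ps:  # marry co-parents of each child
            for b in ps:
                if a != b:
                    adj[a].add(b)
    # Delete Z, then search for any path from X to Y.
    blocked = set(zs)
    stack = [x for x in xs if x not in blocked]
    seen = set(stack)
    while stack:
        v = stack.pop()
        if v in ys:
            return False  # connected, hence not d-separated
        for w in adj[v] - blocked:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return True

# Collider X -> C <- Y: the path is blocked until we condition on C.
par = {"C": ["X", "Y"], "X": [], "Y": []}
print(d_separated(par, {"X"}, {"Y"}, set()))   # blocked at the collider
print(d_separated(par, {"X"}, {"Y"}, {"C"}))   # conditioning opens the path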
This lecture will introduce Bayesian networks and their causal interpretation as causal graphical models, d-separation, the do-calculus, and the Shpitser-Pearl ID algorithm. We'll start by introducing Bayesian networks, causal graphical models, and interventions. We'll then show that two Bayesian networks with the same skeletons and v-structures represent the same conditional independence assumptions and prove that the d-separations present in any Bayesian network exhaust all the conditional independence assumptions that are guaranteed to hold in any distribution that factorizes according to that network. We then turn to causal models and introduce the do-calculus. We show the soundness of each of the rules of the do-calculus. Finally, we describe the Shpitser-Pearl algorithm for identifying causal effects in semi-Markovian models and, time-permitting, prove that the ID algorithm is complete for identifying causal effects; that is, a causal effect is identifiable if and only if the ID algorithm terminates successfully.
This lecture will introduce Bayesian networks and their causal interpretation as causal graphical models, d-separation, the do-calculus, and the Shpitser-Pearl ID algorithm. We'll start by introducing Bayesian networks, causal graphical models, and interventions. We'll then show that two Bayesian networks with the same skeletons and v-structures represent the same conditional independence assumptions and prove that the d-separations present in any Bayesian network exhaust all the conditional independence assumptions that are guaranteed to hold in any distribution that factorizes according to that network. We then turn to causal models and introduce the do-calculus. We show the soundness of each of the rules of the do-calculus. Finally, we describe the Shpitser-Pearl algorithm for identifying causal effects in semi-Markovian models and, time-permitting, prove that the ID algorithm is complete for identifying causal effects; that is, a causal effect is identifiable if and only if the ID algorithm terminates successfully.