This session will introduce the grad student seminar series, include an interactive discussion of ideas for effective preparation and presentation, and gauge students' interest in upcoming science outreach activities.
Numerical simulations of collision events within the ATLAS experiment have played a pivotal role in shaping the design of future experiments and analyzing ongoing ones. However, accurately describing Large Hadron Collider (LHC) collisions comes at an imposing computational cost, with projections estimating a need for millions of CPU-years annually during the High Luminosity LHC (HL-LHC) run. Simulating a single LHC event with Geant4 currently takes around 1,000 CPU-seconds, with calorimeter simulations accounting for a substantial share of that demand. To address this challenge, we propose a quantum-assisted deep generative model. Our model couples a variational autoencoder (VAE) on the exterior with a Restricted Boltzmann Machine (RBM) in the latent space, delivering greater expressiveness than conventional VAEs. The RBM nodes and connections are engineered to map onto the qubits and couplers of D-Wave's Pegasus Quantum Annealer. We also provide preliminary insights into the infrastructure required for large-scale deployment.
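Since the abstract only outlines the architecture, a minimal sketch may help make the coupling concrete. The PyTorch snippet below is an illustration, not the authors' code: all layer sizes, names, and the straight-through Bernoulli sampling are assumptions, chosen so that binary latent units could in principle map onto annealer qubits (RBM units) and couplers (RBM weights).

```python
# Minimal sketch of a VAE with an RBM prior over binary latents.
# NOT the authors' implementation: layer sizes, names, and the
# straight-through Bernoulli sampling are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RBMPrior(nn.Module):
    """RBM energy E(z, h) = -z^T W h - a^T z - b^T h over binary z, h.
    W plays the role of annealer couplers; a and b of qubit biases."""
    def __init__(self, n_visible, n_hidden):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(n_visible, n_hidden))
        self.a = nn.Parameter(torch.zeros(n_visible))
        self.b = nn.Parameter(torch.zeros(n_hidden))

    def free_energy(self, z):
        # F(z) = -a^T z - sum_j softplus(b_j + (zW)_j); lower F = more probable.
        return -(z @ self.a) - F.softplus(z @ self.W + self.b).sum(dim=1)

class QVAE(nn.Module):
    def __init__(self, n_input=368, n_latent=64, n_rbm_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_input, 256), nn.ReLU(), nn.Linear(256, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 256), nn.ReLU(), nn.Linear(256, n_input))
        self.prior = RBMPrior(n_latent, n_rbm_hidden)

    def forward(self, x):
        q = torch.sigmoid(self.encoder(x))        # Bernoulli posterior probs
        z = torch.bernoulli(q) + q - q.detach()   # straight-through sample
        return self.decoder(z), z, q
```

Training would minimize reconstruction error plus a term involving the RBM free energy, and at generation time the RBM could be sampled on the annealer rather than with Gibbs chains; both steps are omitted here for brevity.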
How did the universe begin? How did it evolve to what we see now?
There was a time when few people believed such questions could even be posed in scientific terms. Now, as increasingly precise instruments deliver their treasure trove of data, the answers may be within reach.
On Wednesday, October 25, Perimeter Director Emeritus Neil Turok will tackle this intriguing topic in a Perimeter Institute Public Lecture, “Secrets of the Universe: Hiding in Plain Sight?”
The performance of neural networks like large language models (LLMs) is governed by "scaling laws": the error of the network, averaged across the whole dataset, drops as a power law in the number of network parameters and the amount of data the network was trained on. While the mean error drops smoothly and predictably, scaled-up LLMs seem to have qualitatively different (emergent) capabilities than smaller versions when one evaluates them on specific tasks. So how does scaling change what neural networks learn? We propose the "quantization model" of neural scaling, where smooth power laws in mean loss are understood as averaging over many small discrete jumps in network performance. Inspired by Max Planck's 1900 assumption that energy is quantized, we assume that the knowledge or skills that networks must learn are quantized, coming in discrete chunks which we call "quanta". In our model, neural networks can be understood as implicitly comprising a large number of modules, and scaling simply adds modules to the network. In this talk, I will discuss evidence for and against this hypothesis, its implications for interpretability and for further scaling, and how it fits into a broader vision for a "science of deep learning".
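To see how many discrete jumps can average into a smooth power law, consider this toy numerical sketch (an illustration of the idea, not the speaker's code). It assumes quanta are used with Zipfian frequencies and that a model of a given capacity has learned exactly the most frequent ones; the Zipf exponent and counts are arbitrary choices.

```python
# Toy sketch of the quantization picture: skills ("quanta") have Zipfian
# use frequencies, a model that has learned the top-n quanta incurs loss
# only on the rest, and the resulting mean loss follows a power law in n.
# Exponent and counts are illustrative assumptions.
import numpy as np

alpha = 0.5                       # assumed Zipf tail exponent
k = np.arange(1, 100_001)         # quanta ranked by how often they are used
p = k ** -(alpha + 1.0)
p /= p.sum()                      # frequency with which quantum k is needed

def mean_loss(n_learned):
    # Each unlearned quantum contributes a unit per-use loss, so the
    # dataset-averaged loss is the probability mass of unlearned quanta.
    return p[n_learned:].sum()

for n in [10, 100, 1_000, 10_000]:
    print(n, mean_loss(n))        # falls roughly as n ** -alpha
```

In this setup the smooth curve in n conceals many individual per-quantum drops, which is exactly the averaging the abstract describes.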
Microbes get sick too. These viral infections transform the fates of microbial cells, populations, and communities. The infection and lysis of individual microbes releases new virus particles and redirects carbon and nutrients back into the environment. Yet, there is increasing evidence that the ecological outcome of infections is often nuanced and includes a spectrum of fates beyond rapid infection and lysis. This talk combines insights from mathematical models, in vivo experiments, and field data to explore how inefficient infection and non-lytic outcomes shape the use of phage as therapeutics and the impact of phage on marine ecosystems.
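For readers who want a concrete handle on the kind of mathematical model the talk draws on, here is a minimal sketch (an illustration, not the speaker's model): a classic susceptible-infected-virus system with an infection-efficiency parameter, so that not every adsorption produces a lytic infection. All rates and initial densities are arbitrary placeholders.

```python
# Toy susceptible/infected/virus model with inefficient infection.
# All parameter values and initial conditions are illustrative placeholders.

r, K = 0.5, 1e7        # host growth rate (1/h), carrying capacity (cells/mL)
phi = 1e-8             # adsorption rate (mL/cell/h)
eps = 0.3              # fraction of adsorptions that become lytic infections
eta, beta = 1.0, 50.0  # lysis rate (1/h) and burst size (virions/cell)
m = 0.1                # free-virion decay rate (1/h)

def step(S, I, V, dt=0.001):
    """One crude forward-Euler step; populations clamped at zero."""
    ads = phi * S * V                            # adsorption events per mL per h
    dS = r * S * (1.0 - (S + I) / K) - ads
    dI = eps * ads - eta * I                     # only a fraction eps turn lytic
    dV = beta * eta * I - ads - m * V
    return (max(S + dS * dt, 0.0),
            max(I + dI * dt, 0.0),
            max(V + dV * dt, 0.0))

S, I, V = 1e6, 0.0, 1e4                          # initial densities per mL
for _ in range(int(24 / 0.001)):                 # simulate 24 hours
    S, I, V = step(S, I, V)
print(f"after 24 h: S={S:.3g} cells/mL, I={I:.3g} cells/mL, V={V:.3g} virions/mL")
```

Lowering eps, or adding a non-lytic branch for adsorbed virions, shifts the dynamics away from a rapid crash of the host population, which is the kind of nuanced outcome the abstract highlights.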