ICTS:32497

Mean-Field Theory Insights into Neural Feature Dynamics, Infinite-Scale Limits, and Scaling Laws

APA

Pehlevan, C. (2025). Mean-Field Theory Insights into Neural Feature Dynamics, Infinite-Scale Limits, and Scaling Laws. SciVideos. https://scivideos.org/icts-tifr/32497

MLA

Pehlevan, Cengiz. "Mean-Field Theory Insights into Neural Feature Dynamics, Infinite-Scale Limits, and Scaling Laws." SciVideos, 12 Aug. 2025, https://scivideos.org/icts-tifr/32497.

BibTeX

@misc{scivideos_ICTS:32497,
  url = {https://scivideos.org/icts-tifr/32497},
  author = {Pehlevan, Cengiz},
  language = {en},
  title = {Mean-Field Theory Insights into Neural Feature Dynamics, Infinite-Scale Limits, and Scaling Laws},
  year = {2025},
  month = {aug},
  note = {ICTS:32497, see \url{https://scivideos.org/icts-tifr/32497}}
}
          
Cengiz Pehlevan
Talk number: ICTS:32497
Source Repository: ICTS-TIFR

Abstract

When a neural network becomes extremely wide or deep, its learning dynamics simplify and can be described by the same “mean-field” ideas that explain magnetism and fluids. I will walk through these ideas step by step, showing how they suggest practical recipes for initialization and optimization that scale smoothly from small models to cutting-edge transformers. I will also discuss neural scaling laws—empirical power-law rules that relate model size, data, and compute—and illustrate them with solvable toy models.
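
The "practical recipes" mentioned in the abstract can be illustrated with a width-scaling experiment. Below is a minimal NumPy sketch of a two-layer network in a mean-field parametrization: O(1) weight initialization, a 1/width output scaling, and a learning rate proportional to the width. The teacher task, function names, and hyperparameters are illustrative assumptions, not material from the talk; the point is that under this scaling the training dynamics, and hence good hyperparameter choices, become approximately width-independent.

import numpy as np

def train_mean_field(width, n_steps=300, base_lr=0.5, d=8, seed=0):
    """SGD on a two-layer net f(x) = (1/width) * a . tanh(W x).

    Mean-field scaling: O(1) initialization for W and a, a 1/width
    output scaling, and a learning rate proportional to the width,
    so per-neuron feature updates stay O(1) as the width grows.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((width, d))   # O(1) entries
    a = rng.standard_normal(width)        # O(1) entries
    lr = base_lr * width                  # mean-field learning-rate scaling

    # Illustrative teacher task: y = tanh(x . beta) on synthetic inputs.
    beta = rng.standard_normal(d) / np.sqrt(d)
    X = rng.standard_normal((256, d))
    y = np.tanh(X @ beta)
    m = len(X)

    for _ in range(n_steps):
        phi = np.tanh(X @ W.T)            # hidden activations, shape (m, width)
        r = phi @ a / width - y           # prediction residuals
        grad_a = phi.T @ r / (width * m)
        grad_W = ((r[:, None] * (1.0 - phi**2)) * (a / width)).T @ X / m
        a -= lr * grad_a
        W -= lr * grad_W

    r = np.tanh(X @ W.T) @ a / width - y
    return 0.5 * np.mean(r**2)

# Under this parametrization the final loss is roughly width-independent,
# so hyperparameters tuned on a small model transfer to wider ones.
for width in (128, 512, 2048):
    print(f"width={width:5d}  final loss={train_mean_field(width):.4f}")

The lr proportional to width is the key choice: it keeps per-neuron feature updates O(1) as the network grows, which is what lets a learning rate tuned at small width carry over to larger widths.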
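
The scaling-law part of the abstract can likewise be made concrete with a toy fit. The sketch below, with purely illustrative constants rather than measured values, generates losses from an assumed power law L(N) = L_inf + a * N^(-alpha) and recovers the exponent by linear regression in log-log coordinates, which is how such empirical power laws are typically read off.

import numpy as np

# Hypothetical scaling-law data: loss(N) = L_inf + a * N**(-alpha).
# All constants here are illustrative, not measurements from the talk.
rng = np.random.default_rng(1)
L_inf, a, alpha = 0.05, 3.0, 0.4
N = np.logspace(6, 9, 12)                           # model sizes (parameters)
noise = np.exp(0.02 * rng.standard_normal(N.size))  # small log-normal jitter
loss = L_inf + a * N ** (-alpha) * noise

# Recover the exponent by linear regression in log-log coordinates,
# after subtracting the irreducible loss (assumed known here).
slope, intercept = np.polyfit(np.log(N), np.log(loss - L_inf), 1)
print(f"fitted exponent alpha = {-slope:.3f} (true 0.4)")
print(f"fitted prefactor a    = {np.exp(intercept):.3f} (true 3.0)")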