From LLMs to LRMs: The Rise of Reasoning Models
APA
(2025). From LLMs to LRMs: The Rise of Reasoning Models. SciVideos. https://scivideos.org/index.php/icts-tifr/33035
MLA
From LLMs to LRMs: The Rise of Reasoning Models. SciVideos, Oct. 13, 2025, https://scivideos.org/index.php/icts-tifr/33035
BibTeX
@misc{scivideos_ICTS:33035,
  url = {https://scivideos.org/index.php/icts-tifr/33035},
  language = {en},
  title = {From LLMs to LRMs: The Rise of Reasoning Models},
  year = {2025},
  month = {oct},
  note = {ICTS:33035; see \url{https://scivideos.org/index.php/icts-tifr/33035}}
}
Abstract
When large language models became the dominant machine learning paradigm in 2022, their performance surprised almost everyone, including many experts. LLMs showed “emergent” behavior: bigger models could do tasks that identically trained smaller models had failed at. Empirical scaling laws suggested that models would get predictably better with increasing model size, more training data, and more training-time compute. But these laws began to saturate, only for a new scaling regime to enter the picture. In this talk, Anil Ananthaswamy will chart the ongoing transition to so-called large reasoning models, which spend extra compute at inference time to ostensibly “think” and “reason” before answering, exploring multiple pathways to the final answer.
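The empirical scaling laws mentioned above are commonly written as a power-law fit of pretraining loss against model size and data (a Chinchilla-style sketch for illustration; the functional form and symbols below are an assumption, not taken from the talk):

```latex
% L = loss, N = parameter count, D = training tokens,
% E = irreducible loss; A, B, \alpha, \beta fit empirically
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Under such a fit, loss falls predictably as $N$ and $D$ grow, which is the "predictably better" behavior the abstract describes; the saturation it mentions corresponds to the diminishing returns of these power-law terms.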