What's Happening?
A study conducted by researchers at the Massachusetts Institute of Technology (MIT) has revealed that the rapid advancements in large language models (LLMs) are primarily driven by access to extensive computing power rather than proprietary techniques.
The research analyzed 809 LLMs released between October 2022 and March 2025, focusing on factors contributing to AI capability improvements. The study identified four components influencing progress: the amount of training compute used, shared algorithmic progress, developer-specific techniques, and model-specific design choices. The findings suggest that while company-specific advantages exist, they have a limited impact on the most advanced models. Instead, the dominant factor is the scale of computing resources used to train these models, with 80-90% of frontier model performance attributed to large-scale compute.
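The four-component decomposition described above can be sketched as a toy additive model. Everything below is a hypothetical illustration of the *structure* of such an attribution, not the study's actual model or coefficients:

```python
import math

# Hypothetical decomposition of model capability into the four components
# named above. The weights are illustrative placeholders, NOT the MIT
# study's estimates.
def capability_score(train_flop, years_since_2022,
                     developer_bonus=0.0, design_bonus=0.0):
    compute_term = 0.85 * math.log10(train_flop)   # scale of training compute
    shared_algo_term = 0.10 * years_since_2022     # shared algorithmic progress
    # developer_bonus: developer-specific techniques
    # design_bonus: model-specific design choices
    return compute_term + shared_algo_term + developer_bonus + design_bonus

# Under these made-up weights, a 10x increase in compute moves the score
# far more than plausible firm-specific tweaks do.
base = capability_score(1e24, 2.0)
scaled = capability_score(1e25, 2.0)                                  # 10x compute
tweaked = capability_score(1e24, 2.0, developer_bonus=0.05,
                           design_bonus=0.05)                         # firm tweaks
print(scaled - base)    # contribution of 10x compute: 0.85
print(tweaked - base)   # contribution of firm-specific tweaks: 0.10
```

The point of the sketch is only that when the compute term dominates the weights, scale differences swamp company-specific ones, which is the shape of the study's finding.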
Why It's Important?
The study's findings highlight the critical role of computing infrastructure in the development of advanced AI systems. As AI models continue to evolve, the availability of advanced chips and data-center capacity could become decisive factors in shaping the future of artificial intelligence. This shift emphasizes the importance of investing in computing resources to maintain a competitive edge in AI development. The research also underscores the role of efficiency gains: shared algorithmic progress has significantly enhanced compute efficiency, allowing developers to achieve similar performance with less training compute. This trend could influence how companies allocate resources and prioritize investments in AI technology.
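To see what improved compute efficiency means in practice, suppose, purely as a hypothetical rate (not a figure from the study), that shared algorithmic progress halves the compute needed to reach a fixed capability every year:

```python
# Hypothetical illustration: if algorithmic progress halves the training
# compute needed for a fixed capability every year, the requirement after
# t years is C / 2**t.
def compute_needed(initial_flop, years, halving_time_years=1.0):
    return initial_flop / (2 ** (years / halving_time_years))

# Reaching a fixed capability that initially costs 1e25 FLOP:
for year in range(4):
    print(year, f"{compute_needed(1e25, year):.2e}")
# year 0: 1.00e+25, year 1: 5.00e+24, year 2: 2.50e+24, year 3: 1.25e+24
```

Under any such halving rate, the same capability becomes steadily cheaper to train, which is why efficiency gains partially offset, but in the study's data do not overturn, the advantage of raw scale.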
What's Next?
If the trend identified in the study continues, the global race to build the most advanced AI systems may increasingly depend on access to large-scale computing infrastructure. Companies may need to focus on expanding their data-center capacities and acquiring advanced chips to remain competitive. Additionally, the study suggests that while engineering improvements and algorithmic innovations are important, they are overshadowed by the dramatic increase in computing power. This could lead to a reevaluation of strategies in AI development, with a greater emphasis on scaling computing resources.