Scaling AI: From massive models to specialized solutions

Ram Devanathan

Data-driven leader advancing US leadership in energy technologies, smart manufacturing, and national security. Opinions here are my own and do not represent my employer.

Are we at a scaling plateau in AI? After rapid, headline-grabbing progress in large language models (LLMs), we are now seeing diminishing returns from simply making models bigger. This trend, coupled with the scarcity of high-quality training data and the risk of "model collapse" from training on synthetic data, suggests that the 'one gigantic model to rule them all' approach may not be the future.

The real opportunity is shifting. Instead of chasing ever-larger general models, the focus is moving toward smaller, task-specific models that are far less hungry for compute, energy, and water. This shift is a return to what truly matters: deep domain expertise.

In science and engineering, it's time to move past the hype of massive acceleration, such as claims of 100-times-faster materials discovery. The real work ahead involves:

Reasoning: Developing new approaches to reasoning.
Humans-in-the-Loop: Leveraging human expertise to guide AI in tackling complex problems and messy workflows (see the sketch below).
Data Curation: Creating and sharing high-quality, domain-specific datasets.
Workflow Integration: Embedding AI solutions into our existing scientific processes to augment human creativity.

This is a powerful moment for our field. Our specialized knowledge is critical to unlock AI's true potential. What specific problems in your field do you believe are ripe for this approach?

#AI #GenerativeAI #GenAI #datascience #engineering #innovation
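To make the humans-in-the-loop point concrete, here is a minimal sketch (not from the post) of how a small, task-specific model can ask a domain expert for labels on the samples it is least certain about, rather than relying on ever more data or a larger model. The dataset, model choice, and query budget below are illustrative assumptions only.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Stand-in for a domain dataset; in practice this would be curated lab or plant data.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

# Start with a handful of labeled samples; treat the rest as unlabeled.
labeled = rng.choice(len(X), size=20, replace=False)
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

model = RandomForestClassifier(n_estimators=100, random_state=0)

for round_ in range(5):                              # small, fixed expert budget
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)            # least-confident sampling
    query = unlabeled[np.argsort(uncertainty)[-10:]] # 10 samples sent to the expert
    # In a real workflow a human expert supplies these labels; here we look them up.
    labeled = np.concatenate([labeled, query])
    unlabeled = np.setdiff1d(unlabeled, query)
    print(f"round {round_}: {len(labeled)} labeled samples, "
          f"accuracy on remaining pool {model.score(X[unlabeled], y[unlabeled]):.3f}")
```

The point of the sketch is the loop structure, not the specific model: expert effort is spent only where the model is uncertain, which is one way to learn more from limited, high-quality domain data.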

Sergei Kalinin

Weston Fulton chair professor, University of Tennessee, Knoxville


+1. Ideally, we need more data (obviously), but far more importantly we need ways to learn more from existing data and to plan experiments.

