AI Scaling Race Faces a Bold New Challenger, Sara Hooker

Image Credits: Cohere

AI companies are spending billions to build data centers the size of small cities. Each new facility is part of a global AI scaling race, driven by the belief that more computing power will eventually produce superintelligent systems capable of doing anything humans can.

But a growing number of researchers say this approach may have already peaked. Among them is Sara Hooker, former VP of AI research at Cohere and a Google Brain alumna. She recently launched a new startup called Adaption Labs, built around a different idea: that scaling large language models has become an inefficient way to improve artificial intelligence.

Hooker co-founded Adaption Labs with Sudip Roy, another veteran of Cohere and Google. Their company is developing AI systems that can continuously adapt and learn from real-world experiences. The goal is to make machines that improve themselves over time without requiring endless compute power or retraining.

“The formula of just scaling these models, scaling-pilled approaches, which are attractive but extremely boring, hasn’t produced intelligence that can navigate or interact with the world,” Hooker said. She believes that real learning comes from adaptation. When a person stubs their toe on a table, they learn to walk more carefully next time. Machines, she argues, should be able to do the same.

AI labs have tried to capture this process through reinforcement learning, where systems learn from trial and error in controlled environments. But once deployed in the real world, these models don’t keep learning. They repeat the same mistakes, never improving from experience. Hooker wants to change that.

Some major labs, like OpenAI, offer expensive consulting services to help companies fine-tune their models. Reports suggest that clients must spend upwards of $10 million to access those services. Hooker sees this as wasteful and believes AI can learn from its environment far more efficiently. She wants to prove that adaptive systems can match or exceed today’s largest models at a fraction of the cost, reshaping who gets to build and benefit from advanced AI.

Adaption Labs’ approach stands in contrast to the current industry obsession with scaling. A recent MIT study found that the world’s largest AI models are already showing diminishing returns. Even leading voices in AI have begun questioning whether size alone can drive progress. Richard Sutton, known as the father of reinforcement learning, recently said that LLMs cannot truly scale because they don’t learn from real-world experience. Andrej Karpathy, one of OpenAI’s earliest engineers, also expressed doubts about reinforcement learning’s long-term potential.

These warnings echo concerns that first surfaced in late 2024, when researchers began to suspect that pretraining scaling, once the secret behind OpenAI and Google’s breakthroughs, was hitting its limits. The data now supports those fears. Yet rather than slow down, major labs have shifted their focus to reasoning models and large-scale reinforcement learning. Both require enormous computing resources, making the AI scaling race even more expensive.

OpenAI’s new reasoning model, o1, was built because researchers believed it would “scale up well.” Meta and Periodic Labs have also explored massive reinforcement learning projects, including one study that cost more than $4 million. For Hooker, this is a sign that the industry is chasing scale instead of substance. Adaption Labs aims to prove there’s a smarter, more efficient path forward.

The company has already raised between $20 million and $40 million in seed funding, according to investors familiar with its pitch decks. Hooker declined to confirm the final amount but described the startup as “very ambitious.”

Before founding Adaption Labs, Hooker led Cohere Labs, where she built small AI models designed for enterprise applications. These compact systems often outperform much larger models on tasks such as coding, math, and reasoning. She believes that trend will continue as efficiency and adaptability take center stage in AI research.

Hooker is also known for her work in expanding global access to AI research. At Cohere, she recruited talent from underrepresented regions like Africa, helping diversify the field. Although Adaption Labs plans to open a San Francisco office, she says hiring will remain global.

If Hooker and her team are right, the implications could be massive. The industry has already poured billions into the AI scaling race, assuming that bigger models will eventually lead to general intelligence. But true progress might come from building adaptive AI systems that learn directly from experience, systems that are not just powerful but profoundly efficient.

The next major AI breakthrough may not depend on building bigger models at all. It might come from teaching machines how to learn the way humans do, by adapting to the world around them.