In a major step toward building AI that continually learns and improves itself, Google researchers said that they have developed a new machine learning model with a self-modifying architecture. Called ‘HOPE’, the new model is said to be better at long-context memory management than existing state-of-the-art AI models.
It is meant to serve as a proof-of-concept for a novel approach known as ‘nested learning’ devised by Google researchers, where a single model is treated as a “system of interconnected, multi-level learning problems that are optimized simultaneously” instead of one continuous process, the search giant said in a blog post on Saturday, November 8.
Google said that the new concept of ‘nested learning’ could help address limitations of modern large language models (LLMs), such as their lack of continual learning, which is a crucial stepping stone on the path to artificial general intelligence (AGI), or human-like intelligence.

Last month, Andrej Karpathy, a widely respected AI/ML research scientist who formerly worked at Google DeepMind, said that AGI was still a decade away primarily because no one has been able to develop an AI system that learns continually — at least so far. “They don’t have continual learning. You can’t just tell them something and they’ll remember it. They’re cognitively lacking and it’s just not working. It will take about a decade to work through all of those issues,” Karpathy said in an appearance on a podcast.
“We believe the Nested Learning paradigm offers a robust foundation for closing the gap between the limited, forgetting nature of current LLMs and the remarkable continual learning abilities of the human brain,” Google said. The findings of the researchers were published in a paper titled ‘Nested Learning: The Illusion of Deep Learning Architectures’ at NeurIPS 2025.
What is continual learning? Why is it a challenge?
LLMs that power AI chatbots are currently capable of writing sonnets and generating code in a matter of seconds. However, they do not yet possess the rudimentary ability to learn from experience.
Unlike the human brain, which continually learns and improves, today’s LLMs cannot gain new knowledge or skills without forgetting what they already know. This inability is referred to as ‘catastrophic forgetting’ (CF).

For years, researchers have been looking to address CF by making adjustments to a model’s architecture or coming up with better optimisation techniques. However, Google’s researchers argue that a model’s architecture and the rules used to train it (i.e., the optimisation algorithm) are fundamentally the same concept.
“By recognising this inherent structure, Nested Learning provides a new, previously invisible dimension for designing more capable AI, allowing us to build learning components with deeper computational depth, which ultimately helps solve issues like catastrophic forgetting,” the researchers wrote.
What is nested learning?
According to the researchers, the concept of Nested Learning looks at a complex ML model as “a set of coherent, interconnected optimization problems nested within each other or running in parallel.” “Each of these internal problems has its own context flow — its own distinct set of information from which it is trying to learn,” they added.
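As a rough illustration of this idea — and emphatically not Google’s actual implementation — one can picture two nested optimisation problems that update at different speeds, each learning from its own context flow. In this toy Python sketch (all names, learning rates, and data are invented for illustration), an inner level updates at every step on the residual left behind by an outer level, which accumulates its own context and updates only once every few steps:

```python
# Toy sketch of "nested optimisation problems" -- NOT Google's implementation.
# Two levels jointly learn a 1-D linear model y = 3x: an inner level updates
# every step on its own context flow (the residual left by the outer level),
# while the outer level accumulates context and updates every k steps.

def grad(w, x, y):
    # Gradient of the squared error (w*x - y)^2 with respect to w.
    return 2 * (w * x - y) * x

fast_w = 0.0        # inner level: fast update cycle
slow_w = 0.0        # outer level: slow update cycle
k = 4               # outer level updates once every k inner steps
slow_buffer = []    # the outer level's own context flow

data = [(1.0, 3.0), (2.0, 6.0)] * 10  # toy data stream with target slope 3

for step, (x, y) in enumerate(data, start=1):
    # Inner problem: learn the residual the outer level has not captured.
    residual = y - slow_w * x
    fast_w -= 0.05 * grad(fast_w, x, residual)

    # Outer problem: sees the raw targets, but updates less frequently.
    slow_buffer.append((x, y))
    if step % k == 0:
        avg = sum(grad(slow_w, xi, yi) for xi, yi in slow_buffer) / len(slow_buffer)
        slow_w -= 0.02 * avg
        slow_buffer.clear()

# The combined prediction is slow_w * x + fast_w * x = (slow_w + fast_w) * x,
# so the summed slopes should end up close to the target slope of 3.
print(round(slow_w + fast_w, 2))
```

The point of the sketch is only that the two levels are distinct optimisation problems running at different frequencies on different information streams, which is the structure the researchers describe.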

By drawing on these principles, developers will be able to build learning components in LLMs with deeper computational depth, Google said. “The resulting models, like the Hope architecture, show that a principled approach to unifying these elements can lead to more expressive, capable, and efficient learning algorithms,” it further said.
The proof-of-concept model, HOPE, demonstrated lower perplexity and higher accuracy than modern LLMs when tested on a diverse set of commonly used, public language modelling and common-sense reasoning tasks, according to the company.
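For context on the metric: perplexity is the exponential of the average negative log-probability a model assigns to each token of a test text, so lower values mean the model predicts the text better. A minimal sketch, with made-up per-token probabilities standing in for real model outputs:

```python
import math

# Perplexity = exp of the average negative log-probability the model
# assigns to each token. Lower is better. The probabilities below are
# invented for illustration, not outputs of any real model.

def perplexity(token_probs):
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

confident = perplexity([0.9, 0.8, 0.85, 0.9])   # model predicts tokens well
uncertain = perplexity([0.2, 0.1, 0.3, 0.25])   # model is often surprised
print(confident, uncertain)  # the confident model scores lower (better)
```

A model that assigns every token probability 0.5 has a perplexity of exactly 2, which is one way to read the number: roughly how many equally likely choices the model is hedging between at each step.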


