
Google Research has successfully simulated human-like adaptive, flexible memory capabilities in AI.

Google Research has just announced a significant breakthrough in machine learning: a new technique called Nested Learning, designed to address one of AI's long-standing Achilles' heels: the phenomenon of 'catastrophic forgetting' in continual learning.


Simply put, 'catastrophic forgetting' occurs when an AI model is trained to acquire new knowledge or skills, but in the process inadvertently forgets what it already knew. This makes the model 'unstable,' much like a student learning a new lesson but completely forgetting the old one.

Inspired by the human brain

According to Google's research team, Nested Learning was developed based on how the human brain works, specifically neuroplasticity: the brain's ability to change and adapt when it receives new information, without erasing old memories.


Google describes the technique as a solid foundation for bridging the gap between today's large language models (LLMs) and the flexible memory capabilities of humans.

What makes Nested Learning unique is that it completely rethinks how a model's architecture and its optimization algorithm relate. Previously, developers treated model architecture and optimization algorithms as two separate components; Nested Learning combines them into a unified whole.



A learning model with 'multi-tiered speeds'

With Nested Learning, an AI model is broken down into 'nested optimization problems'. Each of these smaller parts is allowed to learn and update its knowledge at a separate rate—a technique known as multi-time-scale updates.

This mechanism mirrors how the human brain processes information: when a new experience arrives, only part of the brain changes to accommodate it, while the rest retains old memories.
As a result, the AI can learn new things without 'erasing' previous knowledge, creating a more layered, flexible, and durable learning system.
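The multi-time-scale idea can be pictured with a small toy sketch. The code below is an illustrative assumption, not Google's published Nested Learning implementation: a model's parameters are split into groups, and each group takes gradient steps at its own frequency, so slow groups preserve older knowledge while fast groups adapt to new data.

```python
import numpy as np

# Toy illustration of multi-time-scale updates (an assumption for
# illustration, not Google's actual Nested Learning code): each
# parameter group has its own learning rate and update period, so
# "slow" groups change rarely and retain older knowledge while
# "fast" groups keep adapting to new data.
rng = np.random.default_rng(0)
init = rng.standard_normal(4)           # shared starting parameters

groups = {
    # name: [params, learning_rate, update_every_n_steps]
    "fast":   [init.copy(), 0.10, 1],   # updates every step
    "medium": [init.copy(), 0.05, 4],   # updates every 4th step
    "slow":   [init.copy(), 0.01, 16],  # updates every 16th step
}

def loss_grad(params):
    # Gradient of the quadratic loss 0.5 * ||params||^2, standing in
    # for a real task gradient that pulls parameters toward new data.
    return params

for step in range(1, 33):
    for params, lr, period in groups.values():
        if step % period == 0:          # this group's turn to update
            params -= lr * loss_grad(params)

# After 32 steps the fast group has moved far more than the slow one,
# which still sits close to its original (old-knowledge) state.
for name, (params, _, _) in groups.items():
    print(name, round(float(np.linalg.norm(params)), 4))
```

Because each group starts from the same parameters, the distance each has drifted reflects only its update schedule: the fast group converges toward the new objective while the slow group barely moves, which is the intuition behind letting different parts of a network learn at different rates.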

The 'Hope' experimental model

Based on this principle, Google Research developed an experimental model called Hope, an acronym for Hierarchical Optimization and Plasticity Engine. Hope is a self-modifying recurrent architecture capable of optimizing its own memory.

This model utilizes Continuum Memory Systems, which treat memory not as a simple split between 'short-term' and 'long-term', but as a continuous spectrum of memory layers, each updated at its own pace. This lets Hope store, process, and retain richer information over time, while preventing old knowledge from being forgotten when new content is learned.
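A spectrum of memory layers can likewise be sketched in a few lines. This is a hedged illustration under our own assumptions (the layer count, decay rates, and update rule are invented here, not taken from Hope): each layer is an exponential moving average of the input stream with its own decay constant, so fast layers track the newest inputs while slow layers retain a long-horizon summary.

```python
# Toy "spectrum of memory layers" sketch: an illustrative assumption,
# not the actual Continuum Memory Systems used in Hope. Each layer is
# an exponential moving average (EMA) of the input stream with its own
# decay constant. Fast layers follow recent inputs; slow layers keep a
# long-horizon summary, so new data never fully overwrites old state.
decays = [0.1, 0.5, 0.9, 0.99]          # fast -> slow memory spectrum
memories = [0.0] * len(decays)

def observe(x):
    """Feed one observation x into every memory layer."""
    for i, d in enumerate(decays):
        memories[i] = d * memories[i] + (1.0 - d) * x

# Phase 1: a long stream of "old knowledge" (values near 1.0).
for _ in range(200):
    observe(1.0)

# Phase 2: a short burst of "new knowledge" (values near 5.0).
for _ in range(5):
    observe(5.0)

# Fast layers have jumped toward 5.0, while the slowest layer still
# sits near 1.0, preserving the old signal instead of overwriting it.
print(memories)
```

After the short burst of new data, the fastest layer reflects the new values almost entirely while the slowest layer remains close to the old ones, which is the behaviour the article attributes to updating each memory layer at its own pace.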

Test results show that Hope outperforms current models in long-context memory tasks, particularly the 'Needle-In-Haystack' task, where the AI has to retrieve a small detail hidden deep within a long text. The model also demonstrates higher accuracy and performance on general language benchmarks.

Although Google hasn't announced a specific rollout date, the research team says that improvements from Nested Learning and the Hope model will soon be integrated into future versions of Google Gemini. If successful, this could be a turning point, bringing AI closer to the ability to remember and learn flexibly like humans.

Lesley Montoya
Update 24 January 2026