
Dec. 3, 2025 Research Highlight Biology

Taming chaos in neural networks

A biologically plausible way to control chaos in artificial neural networks could provide insights into how the brain works


Figure 1: The Lorenz attractor is an example of a complex dynamical system. Two neuroscientists have discovered a way to learn complex dynamics in a biologically plausible manner. Reproduced from Ref. 1 under a CC BY 4.0 license. © 2025 T. Asabuki et al.

A new framework that causes artificial neural networks to mimic how real neural networks operate in the brain has been developed by a RIKEN neuroscientist and his collaborator1.

As well as shedding light on how the brain works, this development could help inspire new AI systems that learn in a brain-like way.

Humans and animals can learn and perform complex tasks thanks to the brain’s remarkable ability to take in sensory information and produce complex outputs.

Toshitake Asabuki of the RIKEN Center for Brain Science is fascinated by the brain and wants to discover how it works.

“My team investigates how the brain learns efficiently and robustly,” he says. “We aim to identify learning rules that could, in principle, be implemented by real neural circuits in the brain.”

A promising way to mimic real neural circuits in the brain is to use artificial ones known as recurrent neural networks. The neurons in these networks influence each other in loops.

“Unlike simple feedforward models, recurrent networks can store traces of past activity, enabling them to represent time, memory and context,” explains Asabuki. “That’s why they’re often seen as the closest mathematical analog to real brain circuits.”
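The contrast Asabuki draws can be seen in a toy comparison (an illustrative sketch, not a model from the study): a feedforward unit only reflects its current input, while a recurrent loop keeps a decaying trace of past input.

```python
import numpy as np

T = 30
inp = np.zeros(T)
inp[0] = 1.0                # a single input pulse at t = 0

ff = inp.copy()             # feedforward: output simply mirrors the input
rec = np.zeros(T)           # recurrent unit with a self-connection of 0.9
for t in range(1, T):
    rec[t] = 0.9 * rec[t - 1] + inp[t - 1]

# Long after the pulse, the feedforward response is gone,
# but the recurrent unit still carries a trace of the past.
print(ff[10], rec[10])
```

Because the recurrent unit's state at time t depends on its own past activity, it can represent time, memory and context in a way a purely feedforward unit cannot.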

Recurrent neural networks often generate chaotic output that can vary dramatically with just a small change in the input. This is both a good and bad thing.

“Chaos gives the system rich dynamics that can support flexible learning and generalization,” says Asabuki. “But it also makes the system unstable and difficult to train. So there’s a trade-off between richness and control.”
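This sensitivity is easy to demonstrate with a standard random rate network (a generic textbook model, not the authors' exact setup): when the coupling gain g exceeds 1, two trajectories starting from nearly identical states drift far apart.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                     # number of neurons
g = 1.8                     # coupling gain; g > 1 typically yields chaos
J = rng.normal(0, 1 / np.sqrt(N), (N, N))   # random recurrent weights

def simulate(x0, steps=2000, dt=0.05):
    """Euler-integrate x' = -x + g * J @ tanh(x) from initial state x0."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + g * J @ np.tanh(x))
    return x

x0 = rng.normal(0, 1, N)
x_a = simulate(x0)
x_b = simulate(x0 + 1e-6)   # perturb the initial state by one part in a million

# The tiny perturbation is amplified enormously: a hallmark of chaos.
print(np.linalg.norm(x_a - x_b))
```

The same richness that makes these dynamics useful for flexible computation is what makes them so hard to train, since any learning signal is scrambled by this exponential sensitivity.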

So far, harnessing such complexity has been the main challenge. Several learning rules have been proposed, but they are implausible from a biological perspective.

Now, Asabuki and Claudia Clopath of Imperial College London, UK, have found a biologically plausible way to do just that.

“Our study shows that a neural network can stabilize its chaotic activity through a biologically plausible mechanism, without relying on unrealistic computations,” says Asabuki.

The pair started from a simple question: if the brain is constantly predicting the future, could prediction itself be used as a stabilizing force?

“We designed a learning rule that allows each neuron to predict the future output of the network,” says Asabuki. “By aligning these predictions with the actual feedback signals, the network gradually learns to suppress its chaotic dynamics.” Through simulation, they found that this rule led to remarkably smooth transitions. “We were surprised by how efficiently the network stabilized itself,” says Asabuki. “It was both simple and powerful.”
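The general idea that prediction can act as a stabilizing force can be sketched in a toy model (a loose illustration under simplified assumptions, not the paper's predictive-alignment rule): a chaotic network learns a matrix P that predicts its own recurrent drive, and subtracting that prediction quenches the fluctuations.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
g = 1.8
J = rng.normal(0, 1 / np.sqrt(N), (N, N))
W = g * J                   # true recurrent weights (chaotic regime, g > 1)
P = np.zeros((N, N))        # learned predictor of the recurrent drive

x = rng.normal(0, 1, N)
norms = []
for t in range(500):
    r = np.tanh(x)
    x = (W - P) @ r         # actual drive minus its current prediction
    # Delta-rule update: nudge P so that P @ r better matches W @ r.
    # The residual x is exactly the prediction error for this step.
    P += np.outer(x, r) / (r @ r + 1e-12)
    norms.append(np.linalg.norm(x))

# As P aligns with W, the unpredicted residual shrinks and activity settles.
print(norms[0], norms[-1])
```

In this sketch the update only uses quantities local to each step (the current activity and the prediction error), which is the flavor of biological plausibility the quote describes; the actual rule in the paper aligns neuron-level predictions with feedback signals.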


Toshitake Asabuki and a co-worker have found a biologically plausible way to control chaos in recurrent neural networks. © 2025 RIKEN


Reference

  • 1. Asabuki, T. & Clopath, C. Taming the chaos gently: a predictive alignment learning rule in recurrent neural networks. Nature Communications 16, 6784 (2025). doi: 10.1038/s41467-025-61309-9
