How the brain forms habits with a dual learning system
15 May 2025
The brain uses a dual system for learning through trial and error, according to a new study in mice led by UCL researchers.
This is the first time this second learning system has been identified. The finding could help explain how habits are formed, and provide a scientific basis for new strategies to address conditions related to habitual learning, such as addictions and compulsions.
Published in Nature, the study could also have implications for developing therapeutics for Parkinson’s.
The study’s lead author Dr Marcus Stephenson-Jones (Sainsbury Wellcome Centre at UCL) said: “Essentially, we have found a mechanism that we think is responsible for habits. Once you have developed a preference for a certain action, then you can bypass your value-based system and just rely on your default policy of what you’ve done in the past. This might then allow you to free up cognitive resources to make value-based decisions about something else.”
The researchers uncovered a dopamine signal in the brain that acts as a different kind of teaching signal to the one previously known. Dopamine signals in the brain were already understood to encode reward prediction errors, which signal to the animal whether an actual outcome is better or worse than expected.
In this new study, the scientists discovered that, in parallel to reward prediction errors (RPE), there is an additional dopamine signal, called action prediction error (APE), which updates how often an action is performed. These two teaching signals give animals two different ways of learning to make a choice, learning to choose either the most valuable option or the most frequent option.
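The two teaching signals described above can be pictured as two simple update rules. The sketch below is our illustration of the general idea, not the study's actual model: the reward prediction error moves an action's estimated value toward the outcome, while the action prediction error moves a "default policy" toward how often each action is actually taken. The action names and learning rate are arbitrary.

```python
# Minimal sketch (illustrative, not the published model) of two
# parallel teaching signals in a two-choice task.

ALPHA = 0.1                               # learning rate (arbitrary)
values = {"left": 0.0, "right": 0.0}      # value-based system, taught by RPE
policy = {"left": 0.5, "right": 0.5}      # default policy, taught by APE

def rpe_update(action, reward):
    """Reward prediction error: was the outcome better than expected?"""
    rpe = reward - values[action]
    values[action] += ALPHA * rpe
    return rpe

def ape_update(action):
    """Action prediction error: was each action taken more often than expected?"""
    for a in policy:
        taken = 1.0 if a == action else 0.0
        policy[a] += ALPHA * (taken - policy[a])
```

Under these rules, repeatedly choosing and being rewarded for one option drives both its value and its default-policy probability toward 1, so the animal could reach the same choice by either route.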
Dr Stephenson-Jones explained: “Imagine going to your local sandwich shop. The first time you go, you might take your time choosing a sandwich and, depending on which you pick, you may or may not like it. But if you go back to the shop on many occasions, you no longer spend time wondering which sandwich to select and instead start picking one you like by default. We think it is the APE dopamine signal in the brain that is allowing you to store this default policy.”
The newly discovered learning system provides a much simpler way of storing information than having to directly compare the value of different options. This might free up the brain to multi-task. For example, once you have learned to drive, you can also hold a conversation with someone during your journey. While your default system is doing all the repetitive tasks to drive the car, your value-based system can decide what to talk about.
Previous research discovered that the dopamine neurons needed for learning reside in three areas of the midbrain: the ventral tegmental area, substantia nigra pars compacta, and substantia nigra pars lateralis. Some studies showed that these neurons were involved in coding for reward, but earlier research found that half of them code for movement, and the reason remained a mystery.
RPE neurons project to all areas of a brain region called the striatum (a critical component of the movement control and reward systems) apart from one, called the tail of the striatum, whereas the movement-specific neurons project to all areas apart from the nucleus accumbens. This suggests that the nucleus accumbens exclusively signals reward, and the tail of the striatum exclusively signals movement.
By investigating the tail of the striatum, the team were able to isolate the movement neurons and discover their function. To test this, the researchers used an auditory discrimination task in mice, originally developed by scientists at Cold Spring Harbor Laboratory in the US. Co-first authors Dr Francesca Greenstreet, Dr Hernando Martinez Vergara and Dr Yvonne Johansson used a genetically encoded dopamine sensor, which showed that dopamine release in this area was related not to reward but to movement.
Dr Stephenson-Jones explained: “When we lesioned the tail of the striatum, we found a very characteristic pattern. We observed that lesioned mice and control mice initially learn in the same way, but once they get to about 60-70% performance, i.e. when they develop a preference (for example, for a high tone go left, for a low tone, go right), then the control mice rapidly learn and develop expert performance, whereas the lesioned mice only continue to learn in a linear fashion. This is because the lesioned mice can only use RPE, whereas the control mice have two learning systems, RPE and APE, which contribute to the choice.”
To further understand this, the team silenced the tail of the striatum in expert mice and found that this had a catastrophic effect on their performance in the task. This showed that while in early learning animals form a preference using the value-based system based on RPE, in late learning they switch to relying exclusively on APE in the tail of the striatum to store these stable associations and drive their choice. The team also used extensive computational modelling, led by Dr Claudia Clopath at UCL, to understand how the two systems, RPE and APE, learn together.
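One loose way to picture this handover, as a toy illustration of the idea rather than the published model: let the habit system's influence on the choice grow as the value system becomes confident, so that early choices are value-driven and late choices are habit-driven. Everything here (the blending rule, the deterministic tie-break, the reward schedule) is an assumption for illustration.

```python
# Deterministic toy of the early-RPE / late-APE handover (illustrative only).
# 'left' is always the rewarded option in this made-up task.

ALPHA = 0.1
value = {"left": 0.0, "right": 0.0}   # taught by RPE
habit = {"left": 0.5, "right": 0.5}   # taught by APE

habit_weights = []
for trial in range(100):
    w = max(value.values())           # habit weight grows with value confidence
    # blended preference for 'left'; ties broken in favour of 'left'
    pref_left = (1 - w) * (0.5 + 0.5 * (value["left"] - value["right"])) \
                + w * habit["left"]
    action = "left" if pref_left >= 0.5 else "right"
    reward = 1.0 if action == "left" else 0.0
    value[action] += ALPHA * (reward - value[action])          # RPE update
    for a in habit:                                            # APE update
        habit[a] += ALPHA * ((1.0 if a == action else 0.0) - habit[a])
    habit_weights.append(w)
```

At the start `habit_weights` is zero (choices are purely value-driven); by the end it is near one, so the stable preference is carried almost entirely by the default policy, which is one way to read the lesion result: an animal without the APE system can still learn via RPE, but loses the fast, habit-driven route.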
These findings hint at why it is so hard to break bad habits and why replacing an action with something else may be the best strategy. If you replace an action consistently enough, such as chewing on nicotine gum instead of smoking, the action prediction error system may be able to take over and form a new habit on top of the other one.
Dr Stephenson-Jones commented: “Now that we know this second learning system exists in the brain, we have a scientific basis for developing new strategies to break bad habits. Up until now, most research on addictions and compulsions has focused on the nucleus accumbens. Our research has opened up a new place to look in the brain for potential therapeutic targets.”
This research also has potential implications for Parkinson’s, which is known to be caused by the death of midbrain dopamine neurons, specifically in the substantia nigra pars compacta. The type of cells that have been shown to die are movement-related dopamine neurons, which may be responsible for coding APE. This may explain why people with Parkinson’s experience deficits in habitual behaviours such as walking; however, they do not experience deficits in more flexible behaviours such as ice skating.
Dr Stephenson-Jones concluded: “Suddenly, we now have a theory for paradoxical movement in Parkinson’s. The movement related neurons that die are the ones that drive habitual behaviour. And so, movement that uses the habitual system is compromised, but movement that uses your value-based flexible system is fine. This gives us a new place to look in the brain and a new way of thinking about Parkinson’s.”
The research team is now testing whether action prediction error is really needed for habits. They are also exploring what exactly is being learned in each system and how the two work together.
This research was supported by EMBO, Swedish Research Council, the Sainsbury Wellcome Centre Core Grant from the Gatsby Charity Foundation and Wellcome, and the European Research Council.
Links
- Research paper in Nature
- Dr Marcus Stephenson-Jones’s academic profile
- Stephenson-Jones lab
- Sainsbury Wellcome Centre at UCL
Image
- Triggered cell death in the tail of the striatum in the mouse brain using a viral strategy. Each image is an average projection of a part of the striatum, colour-coded according to the absence of neurons. Each animal is assigned a random colour, so in the averaged projections the mix of colours indicates how many animals lack cells in a particular region.
Credit: Hernando Martinez Vergara
Media contacts
Chris Lane
tel: +44 20 7679 9222 / +44 (0) 7717 728648
E: chris.lane [at] ucl.ac.uk
April Cashin-Garbutt
Head of Research Communications and Engagement, Sainsbury Wellcome Centre at UCL
T: +44 (0)20 3108 8028
E: a.cashin-garbutt [at] ucl.ac.uk
