Neurophysiology of decision-making and action selection
Neural basis of evidence integration and decision-making processes:
Lead Investigator: Sean Cavanagh
Decision-making can be thought of as a process of evidence accumulation in favour of different alternatives over time. This process is terminated once a decision boundary is reached and we commit to a categorical choice. While this framework has proven useful in explaining decision accuracy and neural activity within the posterior parietal cortex, several important questions remain unanswered.
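As a toy illustration (not the model fitted in this project), the accumulation-to-bound process described above can be simulated in a few lines. All parameter values here are made up for demonstration:

```python
import random

def diffusion_decision(drift, bound, noise_sd=1.0, dt=0.001, max_t=5.0, seed=0):
    """Simulate one trial of bounded evidence accumulation (drift-diffusion).

    Returns (choice, reaction_time): choice is +1 if the upper bound is
    reached first, -1 for the lower bound, 0 if no bound is reached.
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while t < max_t:
        # Accumulate momentary evidence: deterministic drift plus Gaussian noise.
        x += drift * dt + rng.gauss(0.0, noise_sd) * dt ** 0.5
        t += dt
        if abs(x) >= bound:                  # decision boundary reached:
            return (1 if x > 0 else -1), t   # commit to a categorical choice
    return 0, max_t                          # no commitment within the deadline
```

In this framework, decision accuracy depends on the drift (evidence quality) relative to the noise, while the bound controls the speed–accuracy trade-off.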
To explore the neural mechanisms that support evidence accumulation, we record single-neuron activity in different parts of the prefrontal cortex (PFC) and the lateral intraparietal area (LIP) while subjects integrate evidence over time. This will provide a better understanding of how different brain areas causally interact to generate decisions.
Understanding evidence accumulation is especially important because many neuropsychiatric conditions are characterised by decision-making deficits. At the behavioural level, the decision-making impairments associated with psychosis have been extensively attributed to impaired accumulation of evidence. By studying evidence accumulation in a pharmacological model of psychosis, we hope to shed new light on the neural mechanisms underlying the impairments experienced by patients.
Information-gathering strategies inform neural computations of value-based decision-making:
Lead Investigator: Nish Malalasekera
Optimal decision-making depends on gathering sufficient information to determine the best outcome. Patients with damage to the ventromedial prefrontal cortex (vmPFC) not only make poor choices, they also use different information to make a choice (Fellows, Brain, 2006). Whereas normal subjects might choose between different housing options using a within-attribute strategy (e.g., first compare the attribute “price” across all options, then the attribute “location”, and so on), patients with vmPFC damage tend to use a within-option comparison strategy (e.g., determine the cost, size, location, etc. of one house before collecting any information about another). Potentially as a consequence of this different information-gathering strategy, vmPFC patients tend to make choices that are inconsistent with their needs and goals. These findings suggest that a critical function of vmPFC may be to regulate the process by which choices are compared, and to determine what information is most important in making a choice.
To explore the neural mechanisms that support value-based decision-making, we record activity in different parts of prefrontal cortex (PFC) while subjects freely gather information using either “within-attribute” or “within-option” comparison strategies. This allows us not only to examine the neural representation of these two comparison strategies, but also to track the evolution of the decision process from its components (attributes) into an integrated final (option) choice. We also build normative models that describe the optimal information-gathering strategy in different contexts. Our aim is to identify the unique and essential neural computations served by different areas of PFC, and to link these computations to cases where rational or irrational decisions are made.
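The two gathering strategies amount to different orderings over the same options-by-attributes grid, which can be made concrete with a small sketch (the house/feature names below are purely hypothetical):

```python
def within_attribute_order(options, attributes):
    """Inspect one attribute at a time across every option
    (e.g. compare "price" for all houses, then "location", ...)."""
    return [(opt, att) for att in attributes for opt in options]

def within_option_order(options, attributes):
    """Inspect every attribute of one option before moving on
    (e.g. learn everything about house A, then house B, ...)."""
    return [(opt, att) for opt in options for att in attributes]

houses = ["A", "B"]
features = ["price", "location"]
# within-attribute: [('A','price'), ('B','price'), ('A','location'), ('B','location')]
# within-option:    [('A','price'), ('A','location'), ('B','price'), ('B','location')]
```

Both orderings visit the same information; they differ only in whether comparisons become available attribute-by-attribute or option-by-option, which is exactly the dissociation the recordings are designed to track.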
The neural basis of reinforcement learning models:
Lead Investigator: Dr. Bruno Miranda
Humans and animals can learn to influence their environment using at least two different learning strategies. In model-free (MF) learning, action selection is influenced only by cached values estimated through trial-and-error experience. Model-based (MB) learning, on the other hand, is prospective: it uses a learned structure of the environment to make informed predictions (simulations) about the possible consequences of different choice options. Both strategies can run in parallel and compete or cooperate to produce adaptive choices. Taking advantage of computational models from reinforcement learning theory, this project investigates the neural representation and interaction of MB and MF reinforcement learning in the prefrontal cortex and basal ganglia.
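The core distinction can be sketched in a few lines; this is a generic MF/MB caricature with illustrative names and parameters, not the specific computational models used in the project:

```python
def mf_update(Q, state, action, reward, alpha=0.2):
    """Model-free: nudge a cached value toward the experienced reward
    via the reward prediction error; no model of the environment is used."""
    Q[(state, action)] += alpha * (reward - Q[(state, action)])
    return Q[(state, action)]

def mb_value(T, R, state, action):
    """Model-based: prospectively combine a learned transition model
    T[(state, action)] -> {next_state: probability} with current
    reward estimates R to simulate the consequences of an action."""
    return sum(p * R[s2] for s2, p in T[(state, action)].items())
```

The behavioural signature follows directly: if the reward estimates R change, `mb_value` reflects this immediately through simulation, whereas the cached values in `Q` only catch up through further trial-and-error experience.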
Network modelling of decision making:
Lead Investigator: Dr. Laurence Hunt
It has recently become possible to estimate value correlates that emerge from network models of interconnected spiking neurons. These networks explain how a cortical region may support a choice via mutually inhibitory neural populations. Specifically, the computational model yields predictions of decision-related activity in both single-unit data and the local field potential (LFP) signal. The LFP is closely related to the magnetoencephalographic (MEG) signal that can be recorded in humans, and MEG has been used to identify regions supporting value-guided choice (Hunt et al., Nature Neuroscience, 2012).
The aim of this project is twofold: first, to bridge microscopic observations made at the level of single neurons with macroscopic observations in LFPs and whole-brain functional neuroimaging in humans, by directly comparing neural spiking, LFP and MEG data; and second, to use these different data types to constrain biophysical models that describe how prefrontal cortex neurons coordinate and compete to guide value-based decision-making.
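As a much-reduced sketch of the mutual-inhibition idea, two competing firing-rate populations already produce winner-take-all dynamics. This is a deterministic caricature with made-up parameters, not the biophysical spiking model used in the project:

```python
import math

def f(x, gain=10.0, theta=0.5):
    """Sigmoidal firing-rate function (illustrative parameters)."""
    return 1.0 / (1.0 + math.exp(-gain * (x - theta)))

def competition_trial(in_a, in_b, w_exc=2.0, w_inh=2.0,
                      tau=0.02, dt=0.0005, steps=5000):
    """Two populations with recurrent self-excitation and mutual
    inhibition, each receiving a value-related input. The population
    with the larger input suppresses its competitor (winner-take-all).
    Returns the final firing rates (ra, rb)."""
    ra = rb = 0.0
    for _ in range(steps):
        da = (-ra + f(w_exc * ra - w_inh * rb + in_a)) / tau
        db = (-rb + f(w_exc * rb - w_inh * ra + in_b)) / tau
        ra, rb = ra + da * dt, rb + db * dt
    return ra, rb
```

Feeding the two options' values in as `in_a` and `in_b`, the network commits to the higher-valued option; it is from reduced models of this kind that single-unit and LFP-level predictions of value correlates can be derived.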
For more details on Dr. Hunt’s research, please see:
Hunt LT, Kolling N, Soltani A, Woolrich MW, Rushworth MFS, Behrens TEJ (2012). Mechanisms underlying cortical activity during value-guided choice. Nature Neuroscience, 15: 470-476.