Research

Research at the Gatsby Computational Neuroscience Unit focuses on the mathematical principles of learning, perception and action in brains and machines.

We study the mathematical basis of intelligent behaviour in natural and artificial systems. 

In neuroscience, we work with experimentalists to reveal computational principles from neural activity and pursue theories that connect principles of learning and computation to neural circuits. Our work in machine learning is similarly directed to the understanding of fundamental computational principles, elaborating the mathematics that underlies data-based discovery of structure, predictability and causality.

Below is a brief overview of some common research themes in the Unit; visit each faculty member’s webpage to learn more.

How activity in neural populations reflects properties of stimuli, actions and internal cognitive variables is one of the most fundamental questions in neuroscience. We tackle this question in two ways: (1) we work with empirical data (particularly from large neuronal populations) to understand, process and formalise the information within them, and (2) we address theoretical issues associated with sophisticated versions of population codes. A common thread in much of our work is the robustness of perceptual and motor systems in the presence of unexpected noise, non-stationary environments and the concomitant uncertainty: a robustness that sets them apart from even the best artificial systems. We also study how neural representations may account for uncertainty in internal variables to achieve such robustness.
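As a toy illustration of probabilistic population decoding (not any specific model from the Unit), the sketch below recovers a stimulus value by maximum likelihood from independent Poisson spike counts in a population of Gaussian-tuned neurons. The tuning curves, preferred stimuli and spike counts are all invented for the example.

```python
import math

# Hypothetical tuning curve: Gaussian tuning plus a small baseline
# so the log-likelihood is always finite.
def tuning(pref, s, gain=10.0, width=1.0):
    """Mean firing rate of a neuron preferring stimulus `pref`."""
    return gain * math.exp(-0.5 * ((s - pref) / width) ** 2) + 0.1

prefs = [-2.0, -1.0, 0.0, 1.0, 2.0]   # preferred stimuli (made up)
counts = [1, 4, 9, 5, 1]              # observed spike counts (made up)

def log_likelihood(s):
    # Poisson log-likelihood, dropping the count-factorial constant
    return sum(k * math.log(tuning(p, s)) - tuning(p, s)
               for p, k in zip(prefs, counts))

# Grid search for the maximum-likelihood stimulus estimate
grid = [i / 100.0 for i in range(-300, 301)]
s_hat = max(grid, key=log_likelihood)
print(round(s_hat, 2))
```

With these counts the estimate lands near zero, pulled slightly towards the more active flank; a full probabilistic treatment would keep the whole likelihood as a representation of uncertainty rather than just its peak.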

Biological neural networks exhibit rich dynamical behaviours. We study computations achieved by recurrent dynamical systems in varying degrees of biological realism, looking for general principles of computation-through-dynamics. These include data-driven models of motor cortex, dynamics in coupled excitatory-inhibitory systems, models of olfactory processing, etc. We also study the dynamical properties of active membrane processes associated with spiking.
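A minimal sketch of computation-through-dynamics, assuming a two-population excitatory-inhibitory rate model with threshold-linear units and Euler integration. The weights, input and time constant are illustrative, not fitted to any data.

```python
def phi(x):
    """Firing-rate nonlinearity (threshold-linear)."""
    return max(0.0, x)

# Illustrative connection weights: E excites both populations,
# I inhibits both.
w_ee, w_ei = 1.2, -1.0
w_ie, w_ii = 1.0, -0.5
tau, dt = 10.0, 0.1       # time constant and Euler step (ms)

rE, rI = 0.0, 0.0         # rates start at zero
for _ in range(5000):     # integrate for 500 ms
    drE = (-rE + phi(w_ee * rE + w_ei * rI + 1.0)) / tau   # +1.0: external drive
    drI = (-rI + phi(w_ie * rE + w_ii * rI)) / tau
    rE += dt * drE
    rI += dt * drI

# the system spirals into a stable E-I balanced fixed point
print(round(rE, 3), round(rI, 3))
```

Here the coupled system settles to a fixed point; richer choices of weights give oscillations or transient trajectories of the kind studied in data-driven models of motor cortex.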

Neural systems are remarkable in their ability to adapt to and learn from experience. We seek to understand the principles that guide this learning in many settings: based on sparse reinforcement or on rich teaching signals, or from the structure of the environment alone. Behavioural studies help identify the capabilities and limitations of biological learning. By looking for biologically plausible algorithmic solutions, theoretical work, cross-referenced to experimental data, addresses difficult problems in learning such as credit assignment (which synapse should adapt to improve prediction?) and structure identification (how is the environment best parsed into its constituent causal components?).
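Credit assignment in its simplest form can be sketched with the delta rule: each synapse changes in proportion to the prediction error times its own input, so credit for the error is divided among the synapses that contributed to it. The target weights and data below are synthetic.

```python
import random

random.seed(0)
true_w = [0.5, -0.3, 0.8]   # hypothetical "teacher" weights

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

w = [0.0, 0.0, 0.0]
eta = 0.1                   # learning rate
for _ in range(2000):
    x = [random.uniform(-1, 1) for _ in range(3)]
    err = predict(true_w, x) - predict(w, x)        # prediction error
    # delta rule: each synapse updates by (error x its own input)
    w = [wi + eta * err * xi for wi, xi in zip(w, x)]

print([round(wi, 2) for wi in w])
```

In a one-layer linear system this local rule suffices; the hard versions of credit assignment arise in deep or recurrent circuits, where the error signal reaching a synapse is no longer directly observable.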

At the circuit level, learning has measurable physiological correlates in terms of changes at individual synapses and modifications of the stimulus-response properties of individual neurons. We study the theoretical significance of these changes, including the interpretation of spike-timing update rules for synaptic strength, the interaction of reinforcement and neuromodulation with receptive field plasticity, and the consequences of plastic changes on perceptual learning.
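The spike-timing update rules mentioned above are often summarised by an exponential STDP window: pre-before-post pairings potentiate, post-before-pre pairings depress, with magnitude decaying in the spike-time difference. The amplitudes and time constants below are conventional textbook values, not measurements.

```python
import math

A_plus, A_minus = 0.01, 0.012      # update amplitudes (illustrative)
tau_plus, tau_minus = 20.0, 20.0   # decay time constants (ms)

def stdp(dt):
    """Weight change for post-minus-pre spike-time difference dt (ms)."""
    if dt > 0:    # pre fires before post: potentiation
        return A_plus * math.exp(-dt / tau_plus)
    else:         # post fires before pre: depression
        return -A_minus * math.exp(dt / tau_minus)

for dt in (-40, -10, 10, 40):
    print(dt, round(stdp(dt), 5))
```

Interpreting what such a window computes, for example whether it implements a form of temporal prediction, is exactly the kind of theoretical question raised in the paragraph above.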

Although the principles of neural computation may apply broadly, theories can only be evaluated experimentally by considering specific neural systems. We develop theory and data analysis methods to investigate the organisational and computational principles behind physiological, anatomical and psychophysical observations in different subsystems of the brain. These include sensory/perceptual systems (vision, audition and olfaction); control systems underlying motor action; systems that effect choices and learning from reinforcement signals; and systems underlying more elaborate cognition such as context-driven decision making, mapping and contextual awareness, attention, and planning. See also Neural data analysis below.

Realistic models often require representing the dependencies between many random variables. Graphical models provide an elegant formalism for representing these dependencies and for implementing efficient probabilistic inference and decision making. We study novel algorithms for approximate inference and methods for learning both parameters and the structure of graphical models from data.
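A toy example of what a graphical model buys you: in a three-variable chain A → B → C over binary variables, the joint distribution factorises along the graph, and a posterior such as P(A | C) follows by summing out B. The conditional probability tables are made up, and real applications replace this enumeration with approximate inference.

```python
from itertools import product

# Made-up conditional probability tables for the chain A -> B -> C
pA = {0: 0.7, 1: 0.3}
pB_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
pC_given_B = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}

def joint(a, b, c):
    # the graph structure IS this factorisation
    return pA[a] * pB_given_A[a][b] * pC_given_B[b][c]

# posterior P(A = 1 | C = 1): sum out the hidden variable B
num = sum(joint(1, b, 1) for b in (0, 1))
den = sum(joint(a, b, 1) for a, b in product((0, 1), repeat=2))
print(round(num / den, 3))   # 0.507
```

Enumeration is exponential in the number of variables; exploiting the factorisation (message passing) or approximating the posterior (variational methods, sampling) is what makes inference feasible at scale.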

Difficult real-world pattern recognition and function learning problems require that the learning system be highly flexible. Kernel methods such as Gaussian processes and support vector machines are one way of defining highly flexible non-parametric models based on similarities between data points. Gaussian processes, which correspond to neural networks with infinitely many hidden neurons, have proved powerful at avoiding some of the common pitfalls of learning such as overfitting. We focus on how to make kernel methods even more flexible and efficient, how to learn the kernel from data, and how to use them in various applications. 
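A minimal Gaussian-process regression sketch with an RBF kernel on a tiny synthetic dataset: the posterior mean at a test point is a kernel-weighted combination of the training targets. The kernel lengthscale and noise level here are arbitrary; in practice they would be learned from data, e.g. by maximising the marginal likelihood.

```python
import math

def k(x1, x2, ell=1.0):
    """RBF (squared-exponential) kernel with lengthscale ell."""
    return math.exp(-0.5 * ((x1 - x2) / ell) ** 2)

X = [-1.0, 0.0, 1.0]     # training inputs (synthetic)
y = [0.2, 1.0, 0.3]      # training targets (synthetic)
noise = 1e-2             # assumed observation noise variance

# Gram matrix K + noise * I
K = [[k(xi, xj) + (noise if i == j else 0.0)
      for j, xj in enumerate(X)] for i, xi in enumerate(X)]

def solve(A, b):
    """Gaussian elimination with partial pivoting (fine for tiny systems)."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

alpha = solve(K, y)      # alpha = (K + noise I)^{-1} y
x_star = 0.5
mean = sum(a * k(xi, x_star) for a, xi in zip(alpha, X))
print(round(mean, 3))
```

The prediction interpolates smoothly between the nearby targets, and the noise term on the diagonal is what keeps the fit from passing exactly through every point, one guard against overfitting.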

Bayesian statistics is a framework for doing inference by combining prior knowledge and data. It has been influential in the understanding of intelligent learning systems. We work on many areas of Bayesian statistics, including using variational methods to do inference efficiently in complex domains, model selection and non-parametric modelling, novel Markov chain methods, semi-supervised learning, and modelling temporal sequences.
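The simplest worked instance of combining prior knowledge and data is the conjugate Beta-Bernoulli model: a Beta prior over a coin's bias plus Bernoulli observations gives a Beta posterior in closed form, with the prior acting as pseudo-counts. The prior parameters and flips below are invented for illustration.

```python
# Beta(2, 2) prior: mildly favours a fair coin
alpha, beta = 2.0, 2.0
observations = [1, 1, 0, 1, 1, 1, 0, 1]   # synthetic coin flips

heads = sum(observations)
tails = len(observations) - heads

# conjugacy: posterior is Beta(alpha + heads, beta + tails)
alpha_post = alpha + heads
beta_post = beta + tails

posterior_mean = alpha_post / (alpha_post + beta_post)
print(posterior_mean)   # (2 + 6) / (2 + 6 + 2 + 2) = 8/12
```

Most models of interest have no such closed form, which is where the variational and Markov chain methods mentioned above come in: they approximate posteriors that cannot be written down exactly.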

Reinforcement learning studies how systems can actively learn about the transition and reward structure of their environments and come to choose appropriate actions. Apart from the links with conditioning and neuromodulation, we have studied aspects of the trade-off between exploration and exploitation, the effects of approximation and the divination of hierarchical structure.
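The exploration-exploitation trade-off can be seen in miniature in an epsilon-greedy two-armed bandit: the agent mostly exploits its current value estimates but occasionally explores, which is what lets it discover the better arm. The reward probabilities and epsilon are arbitrary choices for the sketch.

```python
import random

random.seed(1)
p_reward = [0.3, 0.7]   # true (unknown to the agent) success probabilities
Q = [0.0, 0.0]          # action-value estimates
N = [0, 0]              # pull counts
epsilon = 0.1           # exploration rate

for _ in range(5000):
    if random.random() < epsilon:
        a = random.randrange(2)            # explore: random arm
    else:
        a = 0 if Q[0] >= Q[1] else 1       # exploit: current best arm
    r = 1.0 if random.random() < p_reward[a] else 0.0
    N[a] += 1
    Q[a] += (r - Q[a]) / N[a]              # incremental mean update

print([round(q, 2) for q in Q])
```

With epsilon = 0 the agent can lock onto the inferior arm forever; the small exploration rate guarantees both value estimates keep improving, at the cost of occasionally forgoing reward.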

Someone hands you a dataset that represents a small part of a large network, say, a social network or a synaptic network. What can you learn about the network as a whole from this dataset? In order to be informative, how should sample data be selected from a network in the first place? Such questions are fundamental, but much harder to answer than one might expect. And where we have answers, they are often far from obvious. They lead to a rich nexus at the intersection of machine learning, statistics and probability. Ingredients range from Bayesian modelling and empirical risk minimisation, through old favourites like sufficient statistics and convex analysis, to symmetry properties and dynamical systems.
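One concrete reason naive network samples mislead, shown on a synthetic random graph: in the induced subgraph on a uniform node sample, an edge survives only if both its endpoints are sampled, so the observed mean degree shrinks by roughly the sampling fraction. All numbers here are made up for the simulation.

```python
import random

random.seed(0)
n, p = 2000, 0.005   # Erdos-Renyi G(n, p): expected mean degree ~ 10
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < p]
true_mean_degree = 2 * len(edges) / n

frac = 0.2                                          # sample 20% of nodes
sample = set(random.sample(range(n), int(frac * n)))
# induced subgraph: keep an edge only if BOTH endpoints were sampled
kept = [e for e in edges if e[0] in sample and e[1] in sample]
observed_mean_degree = 2 * len(kept) / len(sample)

print(round(true_mean_degree, 2), round(observed_mean_degree, 2))
```

Each edge is retained with probability roughly frac squared while the node count shrinks only by frac, so the naive estimate is biased downward by about a factor of frac; correcting for the sampling design is exactly the kind of inferential problem described above.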

Neural data analysis

The brain is perhaps the most complex subject of empirical investigation in scientific history. The scale is staggering: over 10^11 neurons, each making on average 10^3 synapses, with computation occurring on scales from one dendritic spine to an entire cortical area. Experimental tools have enabled the collection of the massive amounts of data needed to characterise this system. However, to understand and interpret these data will require substantial strides in inferential and statistical techniques. In collaborations with experimentalists, we have adapted machine learning techniques to characterise data from multiple extracellular electrodes, from identified single cells, as well as from local-field/magnetoencephalographic recordings. These studies have the potential to introduce powerful, theoretically motivated ways of looking at neural data.
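One of the most basic analyses applied to such recordings is the peri-stimulus time histogram (PSTH): spike times pooled over trials are binned and normalised to estimate a firing rate over time. The spike trains below are synthetic, generated from an assumed rate with a response bump around 100 ms.

```python
import random

random.seed(42)
n_trials, t_max, bin_ms = 50, 200.0, 20.0

def rate(t):
    """Assumed underlying rate (Hz): 5 Hz baseline, 50 Hz from 80-120 ms."""
    return 5.0 + (45.0 if 80.0 <= t < 120.0 else 0.0)

# generate spikes as independent Bernoulli draws at 1 ms resolution
trials = [[t for t in range(int(t_max))
           if random.random() < rate(t) / 1000.0]
          for _ in range(n_trials)]

# bin spike times across trials
n_bins = int(t_max / bin_ms)
counts = [0] * n_bins
for spikes in trials:
    for t in spikes:
        counts[int(t // bin_ms)] += 1

# normalise counts to firing rate in Hz
psth = [c / (n_trials * bin_ms / 1000.0) for c in counts]
print([round(r, 1) for r in psth])
```

The estimated rate recovers the baseline and the response bump; trial averaging trades temporal detail for variance reduction, and the more sophisticated methods alluded to above aim to do better, for example by estimating single-trial latent structure rather than averaging it away.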
