UCL Institute of Ophthalmology


Vision@UCL Talks

Vision@UCL Talks is a monthly talk series featuring eminent national and international vision science speakers, designed to stimulate connections in vision research.

Why and What For?

Vision science is an inherently interdisciplinary field that spans psychophysics, imaging, and computational modelling methods. Work in this area has high societal impact through its crucial contributions to clinical research on eye disease, computer vision, and many other applied fields. To celebrate UCL's leading role in vision science, UCL institutes at the forefront of this field have joined forces to organise a new talk series called Vision@UCL Talks. The series features eminent international vision science speakers who bridge the fields of psychophysics, fMRI, and computational modelling. Funding for these sessions has kindly been provided by the NIHR Moorfields Biomedical Research Centre, the UCL Centre for Computation, Mathematics and Physics in the Life Sciences and Experimental Biology, and Cambridge Research Systems.

Vision@UCL Talks are free events open to everyone (PhD students, researchers, clinicians, members of the public interested in science, etc.). Each talk is preceded by a science workshop for PhD students from all UCL departments, and followed by drinks.

Upcoming Talks

9 November, 12-1pm: Prof Jochen Triesch, Frankfurt Institute for Advanced Studies. Learning to see without supervision 

Abstract: How do infants learn to make sense of the visual world around them? In contrast to how today’s computer vision systems are typically trained, infants learn to see in a much more autonomous and active fashion. In this talk, I will present our efforts to better understand infants' learning processes through computational modelling and to develop computer vision systems that can learn in a similarly autonomous fashion. In particular, I will focus on active binocular and motion vision, and on object perception.

23 November, 2-3pm: Dr Meg Schlichting (University of Toronto). Developmental differences in memory reactivation during learning

Abstract: Adults can reactivate their related existing memories while they learn new things, which allows for the formation of flexible new connections in memory. In this talk I will describe our recent fMRI work showing that children and adolescents do not reactivate memories during encoding, which might help to explain their greater mnemonic rigidity. 

7 December, 12-1pm - Dr Reshanne Reeder (Edge Hill University) - Ganzflicker: Past, present, and future of light-induced psychedelic experience

Abstract: Rhythmic flashing light, or “Ganzflicker”, can create altered states of consciousness and pseudo-hallucinations, bringing your mind’s eye out into the real world. In this talk, I will discuss how Ganzflicker has been used in art and science, how it can help us understand individual differences, and even tap into the neural basis of subjective perceptual experience.

Find out more about Dr Reeder's upcoming Ganzflicker exhibition.

Subscribe to our mailing list

Past Talks


26 November 2019 at 1pm - Prof. Simon Rushton (Cardiff University) - Why are we sensitive to optic flow? In science fiction films, the movement of stars seen from inside the spaceship gives the viewer a vivid sensation of forward movement. The movement of the stars creates “optic flow”; a similar pattern of image motion is generated whenever we move. The human brain is exquisitely sensitive to optic flow. Why? I will outline some potential answers.

The standard answer is that we use it to guide locomotion, an intuitive idea that originated with Grindley in the 1940s and was popularised by Gibson.  In early work, I challenged this idea and suggested that humans use the egocentric direction of the target to guide locomotion.  More recently, with colleagues, I have been looking at the role of “allocentric location” cues in guiding locomotion in enclosed spaces.  This work leaves the standard answer looking rather shaky.
With Paul Warren I put forward an alternative suggestion for why we are sensitive to optic flow - to help in the identification of object movement during self-movement (“flow-parsing”).  We, and others, have now put together a fairly substantial body of work that addresses this hypothesis.  I’ll give a brief overview of this work.
I’ll finish with a tentative idea. If we were to see the images on our retinas directly, we would see a world in continual motion. This is obviously not what we perceive: we see a stable scene. I suggest that the brain’s sensitivity to optic flow could provide a mechanism for predicting, and hence perceptually stabilising, the retinal image.
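As a side note for readers new to the idea, the sketch below (an illustrative toy model, not taken from the talk) shows the core arithmetic of flow parsing: forward self-motion towards a scene at depth Z produces a radial flow field, and subtracting that predicted field from the retinal motion reveals the motion of an independently moving object.

```python
"""Toy illustration of 'flow parsing': subtracting the optic flow caused by
self-motion to expose the independent motion of an object.

Assumes a pinhole camera moving forward with speed Tz towards a fronto-
parallel cloud of points at depth Z. Under this geometry a static point at
image position (x, y) moves with velocity (x, y) * Tz / Z, i.e. radial
expansion from the focus of expansion at the image centre.
"""
import numpy as np

def self_motion_flow(xy, Tz, Z):
    """Radial flow field generated by forward translation Tz at depth Z."""
    return xy * (Tz / Z)

rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))   # image positions of scene points
Tz, Z = 1.0, 5.0                             # forward speed and scene depth

retinal_flow = self_motion_flow(xy, Tz, Z)   # flow from self-motion alone

# One point also moves independently in the world (e.g. leftwards).
object_idx = 0
object_motion = np.array([-0.3, 0.0])
retinal_flow[object_idx] += object_motion

# Flow parsing: subtract the predicted self-motion component everywhere.
parsed = retinal_flow - self_motion_flow(xy, Tz, Z)

print("recovered object motion:", parsed[object_idx])        # ~(-0.3, 0.0)
print("residual for static points:", np.abs(parsed[1:]).max())  # ~0
```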

More information on Prof Simon Rushton and his work.

22 October 2019 - Prof. Julie Harris (University of St. Andrews, UK) - Countershading camouflage: can shape from shading in nature be hidden? Shape-from-shading is a powerful source of visual information about depth and shape. It has been argued that some animals have evolved a form of camouflage to counter the effects of shape from shading. This patterning is called countershading: a darker colour on the back and a lighter one on the belly, and it appears in many taxa and environments. The countershading pattern has been proposed to contribute to visual camouflage by counterbalancing the gradient of illumination on the body, thereby reducing shading. But the actual countershading pattern on animals has not been adequately quantified, and empirical studies that explore it have not attempted to predict optimal shading patterns. We have examined the camouflage countershading hypothesis in a number of ways: (i) by developing a physical light model of countershaded animals in realistic environments and using the model to make specific predictions, (ii) by testing empirically the effect of different countershading patterns on visual detection performance, and (iii) by measuring the three-dimensional form and reflectance of a countershaded species of caterpillar to determine how the real pattern could contribute to concealing shape. My talk will describe how we have been able to demonstrate that countershading can provide a form of camouflage that conceals three-dimensional shape, and how vision science can successfully intersect with behavioural ecology.
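As a rough illustration of the physical intuition behind the camouflage hypothesis (a simplified sketch, not the speaker's light model), the snippet below assumes a Lambertian body lit from directly overhead: received illumination falls off with the cosine of the angle from vertical, so a reflectance pattern proportional to the inverse of that gradient (darker dorsally, lighter ventrally) flattens the visible shading.

```python
"""Sketch: optimal countershading for a Lambertian cylinder lit from above.

Simplifying assumptions (not the speaker's model): purely diffuse overhead
light of unit intensity, no ambient term, no self-shadowing. The radiance
seen from the side is reflectance * max(cos(theta), 0), where theta is the
angle between the surface normal and vertical. Choosing reflectance roughly
proportional to 1 / cos(theta) cancels the shading gradient.
"""
import numpy as np

theta = np.linspace(0.0, np.pi / 2, 7)        # 0 = back (top), pi/2 = flank
illumination = np.cos(theta)                  # Lambertian shading from above

uniform_reflectance = 0.5
shaded = uniform_reflectance * illumination   # visible shading gradient

# Countershading: cancel the gradient (clipped to a physical 0..1 range).
counter_reflectance = np.clip(0.5 / np.maximum(illumination, 1e-3), 0.0, 1.0)
countershaded = counter_reflectance * illumination

print("plain animal radiance:        ", np.round(shaded, 3))
print("countershaded animal radiance:", np.round(countershaded, 3))
```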

More information on Prof Julie Harris and her work.

26 September 2019 - Prof Janette Atkinson & Prof Oliver Braddick (UCL & University of Oxford, UK) - Is visual attention the key to unanswered questions about children's development of refraction, strabismus, and cortical function?
Forty years of progress in analysing children’s visual development have provided key data on many questions, but pose some unresolved and possibly linked questions on the path to normal and atypical visual outcomes. We will discuss:

  1. Why does hyperopia lead to strabismus? Here we use data from our large Cambridge Infant Refractive Screening Programmes, aimed at preventing strabismus and amblyopia (Atkinson et al, Optom Vis Sci, 2007).

  2. What do neurodevelopmental disorders, both acquired and genetic, have in common? Here we use data from our Early Childhood Attention Battery (ECAB; Breckenridge, Atkinson, Braddick, Br J Dev Psychol, 2013) in children with Williams Syndrome, Down Syndrome, Fragile X, premature birth and perinatal brain injury, who share a cluster of deficits associated with the dorsal cortical stream (Atkinson, J Vis, 2017).

  3. What role do recurrent pathways play in visual development? Here we use data on the development of motion processing, using our behavioural, EEG and neuroimaging measures of infant and child vision to understand how recurrent pathways modulate the visual input (Wattam-Bell et al, Current Biol, 2010).

We will discuss the role of visual attention, and its neural underpinnings, in determining children’s functional abilities, and argue that deficits in specific components of attention are a key factor in developmental disorders.

16 April 2019 - Dr Benjamin De Haas (Justus-Liebig-Universität Gießen, Germany) - 'Where' in the ventral stream? Pointers from feature-location tuning and individual salience.
Influential models of vision propose a cortical division of labor, with spatial attention ('where') being processed in dorsal areas and the identity of visual objects ('what') in a separate, ventral stream. However, a number of recent findings suggest tight links between object identity and gaze behavior. For instance, 1) where we look in a natural scene is most strongly driven by semantic object attributes and 2) how we study visual objects of expertise matches contingencies between spatial and feature tuning. Here, I will present fMRI data on 2) showing that neuronal populations in the human inferior occipital gyrus (IOG) are organized along correlated gradients of spatial and face-part tuning. I will also present work on 1), showing systematic individual differences in gaze behavior, which ongoing work exploits to probe the neural basis of semantic salience. Finally, I will discuss the hypothesis that the ventral stream is crucial for 'where' we look, as well as 'what' we see. 

For more information about Dr. Benjamin de Haas and his work: https://bendehaas.wordpress.com/

1 May 2019 - Dr Christopher Henry (Kohn Lab, Albert Einstein College of Medicine, USA) - Crowding alters feature encoding in macaque visual cortex
We are good at judging the visual features of objects in peripheral vision when they are viewed in isolation. However, when these same objects are surrounded by adjacent stimuli in the visual field, local feature discrimination is often impaired. This phenomenon, known as visual crowding, has been extensively studied at the behavioral level in humans, revealing many factors that affect local feature perception. In contrast, little is known at a mechanistic level about how crowding affects the activity of single neurons or populations of neurons at distinct stages of the visual system, which together ultimately give rise to these altered sensory percepts. We studied crowding behaviorally and neurophysiologically in macaque monkeys, an excellent animal model of the human visual system. Animals were trained to perform a fine orientation discrimination task for peripheral targets. Overall, monkeys exhibited strong perceptual crowding; effect size and error patterns were similar to those of human observers.

We then asked how crowding altered the representation of target orientation in neuronal populations at the first stage of cortical processing, primary visual cortex (V1). Crowding stimuli strongly modulated the average firing rate of individual V1 neurons, suppressing most but facilitating others, which resulted in changes to the tuning curves for target orientation. In addition, crowding produced modest but significant changes in both individual neuronal variability (Fano factor) and shared variability (pairwise spike count correlation). At the V1 population level, this resulted in moderate losses of encoded target-orientation information under crowding, driven largely by the varied changes in average neuronal firing rates. We show that both increases and decreases in overall firing under crowding can result in losses of feature information, provided these modulations are of the right form. The extent to which changes in V1 populations under crowding limit local feature perception will depend on how these signals are read out and integrated in downstream cortical circuits. Our results show that the effects of crowding in peripheral vision are already evident in the first stages of cortical processing.
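For readers unfamiliar with the two variability measures mentioned above, the sketch below (illustrative only, using simulated spike counts rather than the study's data) shows how the Fano factor and pairwise spike-count correlations are typically computed.

```python
"""Minimal sketch of two common variability measures: the Fano factor of
single neurons and pairwise spike-count ("noise") correlations.
Illustrative only; simulated Poisson-like counts stand in for real data.
"""
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 50

# Simulated spike counts with a shared trial-to-trial gain fluctuation.
mean_rates = rng.uniform(5, 20, size=n_neurons)
shared_gain = 1.0 + 0.2 * rng.standard_normal((n_trials, 1))
counts = rng.poisson(np.clip(shared_gain, 0.1, None) * mean_rates)

# Fano factor: variance / mean of spike counts, computed per neuron.
fano = counts.var(axis=0, ddof=1) / counts.mean(axis=0)

# Pairwise spike-count correlations: off-diagonal entries of the
# trial-by-trial correlation matrix across neurons.
corr = np.corrcoef(counts.T)
pairwise = corr[np.triu_indices(n_neurons, k=1)]

print(f"mean Fano factor: {fano.mean():.2f}")
print(f"mean pairwise spike-count correlation: {pairwise.mean():.2f}")
```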

 26 March 2019 - Wired Health 2019 (London, UK) 
Location: The Francis Crick Institute, 1 Midland Rd, London NW1 1ST, UK

Our scientists from the Child Vision Lab, sponsored by the National Institute for Health Research (NIHR) Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, presented one of their latest projects: simulations of visual impairments in Augmented Reality (AR).

More information about the event


25 January 2018 - Dr Tomas Knapen (Free University Amsterdam, Netherlands) - Mapping the dark side: retinotopic organization in the default mode network. The brain's default network (DN) consists of a set of brain regions that consistently show decreases in BOLD signal during task engagement, when most areas show increases in BOLD signal. Recent findings indicate that these deactivations play a role in visual processing, but the nature and function of this well-known property of the DN remain unclear. We conducted a population receptive field (pRF) mapping experiment at 7T, in which participants directed their attention to a visual mapping stimulus during strict fixation, performing a colour discrimination task that was titrated to have equal difficulty regardless of stimulus position. This kept task load identical whenever a stimulus was on the screen, and allowed us to contrast task-related and spatially specific signals.

      We demonstrate that signals in the DN carry retinotopic visual information. Specifically, BOLD decreases in several nodes of the DN are specific to the appearance of a visual stimulus in a circumscribed region of retinotopic space. We estimated population receptive fields of negative amplitude from BOLD time courses in the DN, and show that a subset of these regions each contains a coherent retinotopic map along the cortical surface. Moreover, this description of spatial preferences in the DN, combined with ongoing activation patterns, allows us to reconstruct (decode) the position of a visual stimulus with a fidelity comparable to that of the known retinotopic maps of the intraparietal and precentral sulci.

      Our results indicate that spatially specific activations and deactivations synergistically subserve the processing of visual information. DN regions have been shown to selectively activate for social information, autobiographical memory and mind wandering, types of cognition that require an emphasis on processing of internally sourced information. Thus, the balance between activations and deactivations could underpin the preferential processing of externally vs internally sourced information. Furthermore, it is likely that representations in retinotopic and other reference frames coincide in these regions. This overlap would allow local computations to integrate information processing from different reference frames. 
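As background on the method, a pRF is commonly modelled as a 2D Gaussian over visual space whose overlap with the stimulus aperture, scaled by a signed amplitude, predicts the BOLD time course; a negative amplitude simply means the signal drops when the stimulus covers the pRF. The snippet below is a minimal illustrative sketch under those assumptions, not the authors' analysis pipeline.

```python
"""Minimal sketch of a 2D Gaussian population receptive field (pRF) with a
signed amplitude, as used in pRF mapping. Assumed illustration, not the
authors' pipeline; the haemodynamic response is ignored for brevity.
"""
import numpy as np

def prf_prediction(stim, xs, ys, x0, y0, sigma, amplitude):
    """Predicted response time course: amplitude * overlap of the stimulus
    aperture (time x height x width, values 0/1) with a 2D Gaussian pRF."""
    gauss = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return amplitude * (stim * gauss).sum(axis=(1, 2))

# Toy stimulus: a bright bar sweeping left to right across a 21 x 21 grid.
grid = np.linspace(-10, 10, 21)
xs, ys = np.meshgrid(grid, grid)
stim = np.zeros((21, 21, 21))
for t in range(21):
    stim[t, :, t] = 1.0                      # vertical bar at column t

# A default-network-like voxel: pRF centred at (3, 0), negative amplitude.
pred = prf_prediction(stim, xs, ys, x0=3.0, y0=0.0, sigma=2.0, amplitude=-1.0)
print("deepest deactivation when the bar crosses x =", grid[np.argmin(pred)])
```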

More information on Dr Tomas Knapen and his work.

21 February 2018 - Prof. Kalanit Grill-Spector (Stanford University, USA) - Neural Mechanisms of the Development of Face Perception: How do brain mechanisms develop from childhood to adulthood to support better face recognition? There is extensive debate about whether brain development is driven by pruning or by growth. Here I will describe results from a series of recent experiments using new MRI methods in children and adults, together with analysis of postmortem histology, that tested these competing theories.

      Anatomically, we examined whether there are developmental increases or decreases in macromolecular tissue in the gray matter, and how anatomical development impacts function and behavior. Functionally, we examined if and how neural sensitivity to faces, as well as spatial computations by population receptive fields, develop from childhood to adulthood. Critically, we tested how these neural developments relate to perceptual discriminability of face identity and to looking behavior, respectively.

      Together, our data reveal a tripartite relationship between anatomical, functional, and behavioral development and suggest that emergent brain function and behavior during childhood result from cortical tissue growth rather than pruning. 

More information on Prof Kalanit Grill-Spector and her work.

22 February 2018 - Prof. David Brainard (University of Pennsylvania, USA) - UCL WS Stiles memorial lecture. For more information about Prof. David Brainard and his work: https://color.psych.upenn.edu

23 April 2018 - Prof. Peter Bex (Northeastern University, USA) - Assessment, Simulation and Correction of Binocular Vision Impairment. Current clinical binocular assessment methods depend primarily on insensitive tests of stereoacuity (e.g. Stereo Fly), suppression (e.g. Worth 4 dot) and ocular alignment (e.g. Cover Test). Recent virtual reality-based approaches to the treatment of binocular vision impairment have enabled much greater control of therapeutic stimuli but require more sensitive assessments of their efficacy over current treatments. We have developed a range of novel tests that quantify the spatial frequency-dependence of contrast sensitivity, inter-ocular suppression and stereo-acuity; and the eye posture-dependence of ocular alignment. These tests take less than 5 minutes each to complete yet show high sensitivity and reliability.

      We show that treatments based on interocular manipulations of blur, contrast, and luminance have profound consequences for oculomotor control and depth perception. Furthermore, these approaches do not address ocular misalignment, which may limit treatment outcomes. We show that dichoptic saccade adaptation can transiently induce and reverse changes in interocular alignment and alter depth perception. This work aims to provide a comprehensive framework for the assessment and correction of binocular vision deficits.

For more information about Prof. Peter Bex and his work: http://www.northeastern.edu/bexlab/

24 May 2018 - Dr Wei Wang (Institute of Neuroscience, Chinese Academy of Sciences, China) - Cortical mechanisms underlying integration of local visual cues to form global representations. Human and non‐human primates effortlessly see both global and local features of objects in great detail. However, how the cortex integrates local visual cues to form global representations along visual hierarchies remains mysterious, particularly given a long-standing paradox in vision: as neurally encoded complexity increases along the visual hierarchy, the known acuity or resolving power dramatically decreases. Put simply, how do we simultaneously recognise the face of our child while still resolving the individual hairs of her or his eyelashes? Many models of visual processing, including cutting-edge deep learning models, follow the idea that low-level resolution and position information is discarded to yield high-level representations. These themes are fundamental to understanding how the brain performs sensory transformations.

      Combining large-scale, high-spatial-resolution imaging, which records the transformation of information across three visual areas (V1, V2 and V4) simultaneously, with electrophysiological multi-site laminar recordings, we found a bottom-up cascade of cortical integration of local visual cues as a general cortical mechanism for global representations in the primate ventral and dorsal streams. The integrated neural responses depend on the sizes and preferences of the neurons' receptive fields. Recently, we revealed an unexpected neural clustering that preserves visual acuity from V1 into V4, enabling a detailed spatiotemporal separation of local and global features along the object-processing hierarchy and suggesting that higher acuities are retained at later stages, where more detailed cognitive behaviour occurs. The study reinforces the point that neurons in V4 (and most likely also in infero-temporal cortex) do not necessarily have only low visual acuity, which may begin to resolve the long-standing paradox concerning fine visual discrimination. Thus, our research will prompt further studies to probe how preservation of low-level information is useful for higher-level vision and provide new ideas to inspire the next generation of deep neural network architectures.

For more information about Dr. Wei Wang and his work: http://www.ion.ac.cn/laboratories/int.asp?id=44

20 September 2018 - Dr Valérie Goffaux (Université Catholique de Louvain, Belgium) - Orientation and spatial frequency encoding in the ventral visual pathway: the case of face perception in humans. Visual perception results from a complex chain of processes, starting with the selective encoding of spatial frequency and orientation in primary visual cortex. More anterior, high-level regions have increasingly large receptive fields, making them selective to increasingly complex shape properties or to specific visual categories such as faces. This high-level specialization has led researchers to focus their investigations on the high-level properties of face perception.

      Our work combines multiple investigation techniques: psychophysics, electrophysiology (scalp EEG, steady-state and temporal generalization), and neuroimaging (fMRI of V1 and high-level visual regions). It suggests that the specialization of face processing, though emerging at high-level stages of visual processing, is rooted in selective ranges of orientation and SF information in V1. These findings encourage the adoption of more integrative approaches to face perception, and to vision in general, encompassing both low- and high-level visual mechanisms.

For more information about Dr. Valérie Goffaux and her work: https://sites.uclouvain.be/goffauxlab/index.html

25 October 2018 - Prof Steven Dakin (The University of Auckland, New Zealand) - Using technology to improve assessment and treatment of vision in children. Measuring vision is important in a range of clinical settings, from monitoring the effectiveness of gene therapies for blinding eye conditions through to early detection of age-related eye diseases like glaucoma.  My lab uses technologies such as infrared eye tracking and virtual reality to measure and correct vision in groups that can be difficult to assess and/or treat.

In this talk I will give overviews of several projects – inspired by basic psychophysical research – that deliver more reliable visual assessment, and novel treatment platforms, for children. These include a new set of symbols and a tablet-based testing system for measuring acuity, the use of involuntary eye movements to estimate contrast sensitivity, and an amblyopia-therapy system delivered on a handheld gaming device.

For more information about Prof. Steven Dakin and his work: http://www.homepages.ucl.ac.uk/~smgxscd/DakinLab/Main.html

29 November 2018 - Prof. Joshua Solomon (Centre for Applied Vision Research, City University of London, UK) - Visual categorisation of simple stimuli. Stimuli that vary along quantitative (or ‘prothetic’) continua can be categorised on the basis of how much neural activity they elicit. Loud and bright stimuli elicit more activity than quiet and dim stimuli. Stimuli that vary qualitatively (either along metathetic continua or between prothetic continua) cannot be categorised on this basis. It is conceivable that our brains contain homunculi who read tiny signs attached to each fibre describing their neurons’ preferred stimuli, but contemporary theorists believe category information to be inherent in the cerebral positions of active neurons. This helps explain why stimulus preferences vary systematically across the cortex, forming multi-dimensional ‘maps’ of position, orientation, and possibly other attributes such as spatial frequency, binocular disparity, and chromaticity. Nonetheless, neurons and channels that can be distinguished on the basis of the stimuli they prefer are still called ‘labelled lines’. Various psychophysical methods exist for quantifying channel selectivity and deciding whether those channels qualify as labelled lines.

      The focus of my talk will be a new model for the 2 x 2 FC paradigm, in which observers must both detect and identify (or categorise) modulations in a visual stimulus. Heretofore, results from this paradigm have been interpreted either without reference to any model (e.g. ‘labelled lines are implied when the threshold for identification equals the threshold for detection’) or in terms of easily falsifiable high-threshold theories of detection. My new model is based on signal-detection theory and accommodates a wide range of relationships between detection and identification.
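To make the paradigm concrete, the sketch below is a toy signal-detection simulation of a 2 x 2 FC trial (an assumed illustration, not the speaker's model): two labelled channels respond with Gaussian noise, the signal adds d' to the matching channel in the target interval, detection is decided by the interval containing the larger maximum channel response, and identification by which channel produced it.

```python
"""Toy signal-detection model of a 2 x 2 FC (detect-and-identify) trial.
Assumed illustration, not the speaker's model: two labelled channels respond
with unit-variance Gaussian noise; the signal adds d' to the matching channel
in the target interval. Detection = choose the interval with the larger
maximum channel response; identification = report that channel's label.
"""
import numpy as np

def simulate_2x2fc(d_prime, n_trials=100_000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    # responses[interval, channel, trial]
    responses = rng.standard_normal((2, 2, n_trials))
    target_interval = rng.integers(2, size=n_trials)
    target_channel = rng.integers(2, size=n_trials)
    responses[target_interval, target_channel, np.arange(n_trials)] += d_prime

    max_per_interval = responses.max(axis=1)              # (2, n_trials)
    chosen_interval = max_per_interval.argmax(axis=0)
    chosen_channel = responses[chosen_interval, :, np.arange(n_trials)].argmax(axis=1)

    detect_correct = (chosen_interval == target_interval).mean()
    identify_correct = ((chosen_interval == target_interval)
                        & (chosen_channel == target_channel)).mean()
    return detect_correct, identify_correct

for d in (0.5, 1.0, 2.0):
    pd, pi = simulate_2x2fc(d)
    print(f"d' = {d}: P(detect) = {pd:.2f}, P(detect & identify) = {pi:.2f}")
```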

For more information about Prof. Joshua Solomon and his work: http://www.staff.city.ac.uk/~solomon/ 

20 September 2018 - New Scientist Live 2018 (London, UK)

Location: ExCeL London, Royal Victoria Dock, 1 Western Gateway, London E16 1XL

Our scientists from the Child Vision Lab presented some cool activities, involving virtual reality (VR), colour vision, optical illusions, visuomotor/economic games, and more!

For more information about the event, follow this link: https://live.newscientist.com

25 July 2018 - Science Museum Lates: Medical Marvels (London, UK)

Location: Science Museum, Exhibition Road, South Kensington, London, SW7 2DD

For more information about the event, follow this link: https://www.sciencemuseum.org.uk/see-and-do/lates  

17-23 May 2018 - Vision Science Society 2018 Symposium (St Pete Beach, Florida USA)

Location: TradeWinds Island Grand Resort, 5500 Gulf Blvd, St Pete Beach, FL 33706, USA

Every year, the Vision Science Society holds its meeting at the TradeWinds Island Grand Resort in St. Pete Beach (Florida, USA). For a week, researchers from all over the globe meet to share and discuss their most recent findings!

In 2018, two of our researchers (Dr Imogen Large and Mr Hugo Chow-Wing-Bom) presented talks and posters about our work in virtual reality, visuomotor decision-making, and contrast sensitivity:

- Body positioning in realistic ball interception account for visuomotor idiosyncrasies (Talk by Dr. Imogen Large) 
- The contrast sensitivity function in children: Bayesian adaptive estimation using QUEST+ (Poster by Ms. Mahtab Farahbakhsh)
- Virtual Reality [VR] as a tool to assess the effect of asymmetrical vision loss on visual search performance (Poster by M. Hugo Chow-Wing-Bom)

For more information about the event, follow this link: https://www.visionsciences.org
For reprints or further information about the talk and posters, email us at ioo-cvl@ucl.ac.uk

17-18 April 2018 - Vision for the Commonwealth: Bringing vision to everyone, everywhere (London, UK)

Location: The Queen Elizabeth Diamond Jubilee Trust, 128 Buckingham Palace Road, London SW1W 9SA

For more information about our involvement in this event, follow this link: http://www.brcophthalmology.org/dr-pete-jones-demonstrates-pioneering-virtual-reality-technology-heads-government

For more information about the event, follow this link: https://www.visionforthecommonwealth.com


1 December 2017 - Friday Late Spectacular: Your Reality is Broken at The Wellcome Collection (London, UK)

Location: Wellcome Collection, 183 Euston Rd, London NW1 2BE

For more information, follow this link: https://wellcomecollection.org/events/friday-late-spectacular-your-reality-broken

26 October 2017 - See Science Festival at St. Luke's Community Centre (London, UK)

Location: St Luke's Community Centre, 90 Central Street, London EC1V 8AJ

Engage with researchers from UCL Institute of Ophthalmology in a fun, interactive and accessible way!