
UCL Institute of Ophthalmology


Vision@UCL Talks

Vision@UCL Talks is a monthly talk series featuring eminent national and international vision science speakers, aimed at stimulating connections in vision research.

Why and What For?

Vision science is an inherently interdisciplinary field that spans psychophysics, imaging, and computational modelling methods. Work in this area has high societal impact through its crucial contributions to clinical research on eye disease, computer vision, and many other applied fields. To celebrate UCL's leading role in vision science, the UCL institutes at the forefront of this field have joined forces to organise a new talk series called Vision@UCL Talks. The series features eminent international vision science speakers who bridge the fields of psychophysics, fMRI, and computational modelling. Funding for these sessions has kindly been provided by the NIHR Moorfields Biomedical Research Centre, the UCL Centre for Computation, Mathematics and Physics in the Life Sciences and Experimental Biology, and Cambridge Research Systems.

Vision@UCL Talks is a free event and is open to everyone (PhD students, researchers, clinicians, members of the public with an interest in science, etc.). Each talk is preceded by a science workshop for PhD students from all UCL departments, and followed by drinks.

Subscribe to our mailing list!

If you want to be added to our mailing list and receive all our updates about the talks, please follow this link: http://eepurl.com/dhA525

When and Where?

The time and location of each talk are given under Upcoming Talks.

Upcoming Talks

1st May 2019 at 4:00pm - Dr. Christopher Henry (Kohn Lab, Albert Einstein College of Medicine, USA) - Crowding alters feature encoding in macaque visual cortex

Location: Room G03, UCL Psychology and Language Sciences, 26 Bedford Way, London WC1H 0AP
Time: 4.00pm-5.00pm

We are good at judging the visual features of objects in peripheral vision when they are viewed in isolation. However, when these same objects are surrounded by adjacent stimuli in the visual field, local feature discrimination is often impaired. This phenomenon, known as visual crowding, has been extensively studied at the behavioral level in humans, revealing many factors that affect local feature perception. In contrast, little is known at a mechanistic level about how crowding affects the activity of single neurons or populations of neurons at distinct stages of the visual system, which together ultimately give rise to these altered sensory percepts. We studied crowding behaviorally and neurophysiologically in macaque monkeys, an excellent animal model of the human visual system. Animals were trained to perform a fine orientation discrimination task for peripheral targets. Overall, monkeys exhibited strong perceptual crowding; effect size and error patterns were similar to those of human observers.

We then asked how crowding altered the representation of target orientation in neuronal populations at the first stage of cortical processing, primary visual cortex (V1). Crowding stimuli strongly modulated the average firing rate of individual V1 neurons, suppressing most but facilitating others, which resulted in changes in the tuning curves for target orientation. In addition, crowding produced modest but significant changes in both individual neuronal variability (Fano factor) and shared variability (pairwise spike count correlation). At the V1 population level, this resulted in moderate losses of information about encoded target orientation under crowding, driven largely by the varied changes in average neuronal firing rates. We show that both increases and decreases in overall firing under crowding can result in losses of feature information, provided these modulations are of the right form. The extent to which changes in V1 populations under crowding limit local feature perception will depend on how these signals are read out and integrated in downstream cortical circuits. Our results show that the effects of crowding in peripheral vision are already evident in the first stages of cortical processing.
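
For readers unfamiliar with the two variability statistics mentioned above, both are simple functions of trial-by-trial spike counts. A minimal illustrative sketch in Python (the counts below are random and purely hypothetical, not data from the study):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical spike counts: 200 repeats of one stimulus condition
    # for two simultaneously recorded neurons.
    counts = rng.poisson(lam=5.0, size=(200, 2))  # (n_trials, n_neurons)

    # Fano factor: trial-to-trial variance of the spike count divided by
    # its mean, per neuron (1.0 for an ideal Poisson process).
    fano = counts.var(axis=0, ddof=1) / counts.mean(axis=0)

    # Pairwise spike-count ("noise") correlation: Pearson correlation of
    # the two neurons' counts across repeats of the same stimulus.
    r_sc = np.corrcoef(counts[:, 0], counts[:, 1])[0, 1]

    print(f"Fano factors: {fano}, spike-count correlation: {r_sc:.3f}")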

Past Talks

2019

16th April 2019 - Dr. Benjamin De Haas (Justus-Liebig-Universität Gießen, Germany) - 'Where' in the ventral stream? Pointers from feature-location tuning and individual salience

Influential models of vision propose a cortical division of labor, with spatial attention ('where') being processed in dorsal areas and the identity of visual objects ('what') in a separate, ventral stream. However, a number of recent findings suggest tight links between object identity and gaze behavior. For instance, 1) where we look in a natural scene is most strongly driven by semantic object attributes and 2) how we study visual objects of expertise matches contingencies between spatial and feature tuning. Here, I will present fMRI data on 2) showing that neuronal populations in the human inferior occipital gyrus (IOG) are organized along correlated gradients of spatial and face-part tuning. I will also present work on 1), showing systematic individual differences in gaze behavior, which ongoing work exploits to probe the neural basis of semantic salience. Finally, I will discuss the hypothesis that the ventral stream is crucial for 'where' we look, as well as 'what' we see. 

For more information about Dr. Benjamin de Haas and his work: https://bendehaas.wordpress.com/


2018

25th January 2018 - Dr. Tomas Knapen (Free University Amsterdam, Netherlands) - Mapping the dark side: retinotopic organization in the default mode network

The brain's default network (DN) consists of a set of brain regions that consistently show decreases in BOLD signal during task engagement, when most areas show increases in BOLD signal. Recent findings indicate that these deactivations play some role in visual processing, but the nature and function of this well-known property of the DN remain unclear. We conducted a population receptive field (pRF) mapping experiment at 7T, in which participants directed their attention to a visual mapping stimulus during strict fixation, performing a colour discrimination task that was titrated to have equal difficulty regardless of stimulus position. This kept task load identical whenever a stimulus was on the screen, and allowed us to contrast task-related and spatially specific signals.

We demonstrate that signals in the DN carry retinotopic visual information. Specifically, BOLD decreases in several nodes of the DN are specific to the appearance of a visual stimulus in a circumscribed region of retinotopic space. We estimated population receptive fields of negative amplitude from BOLD time courses in the DN, and show that a subset of these regions each contains a coherent retinotopic map along the cortical surface. Moreover, this description of spatial preferences in the DN, combined with ongoing activation patterns, allows us to reconstruct (decode) the position of a visual stimulus with a fidelity comparable to that of the known retinotopic maps of the intraparietal and precentral sulcus.

Our results indicate that spatially specific activations and deactivations synergistically subserve the processing of visual information. DN regions have been shown to selectively activate for social information, autobiographical memory and mind wandering, types of cognition that require an emphasis on processing of internally sourced information. Thus, the balance between activations and deactivations could underpin the preferential processing of externally vs internally sourced information. Furthermore, it is likely that representations in retinotopic and other reference frames coincide in these regions. This overlap would allow local computations to integrate information processing from different reference frames.
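
For readers unfamiliar with pRF mapping: a pRF is commonly modelled as a 2D Gaussian in visual space whose overlap with the stimulus aperture, scaled by an amplitude parameter, predicts the response time course; a negative amplitude captures stimulus-locked decreases of the kind described above. A minimal sketch (grid, stimulus, and parameter values are hypothetical, and HRF convolution is omitted):

    import numpy as np

    def prf_prediction(stim, x0, y0, sigma, amplitude, xgrid, ygrid):
        # 2D Gaussian pRF; its per-frame overlap with the binary stimulus
        # aperture movie (n_timepoints, ny, nx), scaled by the amplitude,
        # gives the predicted (pre-HRF) response. A negative amplitude
        # yields stimulus-locked decreases, as in the DN responses above.
        gauss = np.exp(-((xgrid - x0) ** 2 + (ygrid - y0) ** 2)
                       / (2 * sigma ** 2))
        return amplitude * (stim * gauss).sum(axis=(1, 2))

    # Hypothetical 20 x 20 deg field of view on a coarse 41 x 41 grid.
    xs = np.linspace(-10, 10, 41)
    xgrid, ygrid = np.meshgrid(xs, xs)

    # Toy mapping stimulus: a one-pixel-wide vertical bar sweeping left
    # to right, one position per time point.
    stim = np.zeros((41, 41, 41))
    for t in range(41):
        stim[t, :, t] = 1.0

    pos = prf_prediction(stim, x0=2.0, y0=0.0, sigma=1.5, amplitude=1.0,
                         xgrid=xgrid, ygrid=ygrid)
    neg = prf_prediction(stim, x0=2.0, y0=0.0, sigma=1.5, amplitude=-1.0,
                         xgrid=xgrid, ygrid=ygrid)
    # neg == -pos: identical spatial tuning, responses below baseline.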

For more information about Dr. Tomas Knapen and his work: tknapen.github.io 


21st February 2018 - Prof. Kalanit Grill-Spector (Stanford University, USA) - Neural Mechanisms of the Development of Face Perception: How do brain mechanisms develop from childhood to adulthood, leading to better face recognition?

There is extensive debate about whether brain development is driven by pruning or by growth. Here I will describe results from a series of recent experiments using new MRI methods in children and adults, together with analysis of postmortem histology, that tested these competing theories.

Anatomically, we examined whether there are developmental increases or decreases in macromolecular tissue in the gray matter, and how anatomical development impacts function and behavior. Functionally, we examined if and how neural sensitivity to faces, as well as spatial computations by population receptive fields, develop from childhood to adulthood. Critically, we tested how these neural developments relate to perceptual discriminability of face identity and to looking behavior, respectively.

Together, our data reveal a tripartite relationship between anatomical, functional, and behavioral development, and suggest that emergent brain function and behavior during childhood result from cortical tissue growth rather than pruning.

For more information about Prof. Kalanit Grill-Spector and her work: http://vpnl.stanford.edu/


22nd February 2018 - Prof. David Brainard (University of Pennsylvania, USA) - UCL W. S. Stiles Memorial Lecture

For more information about Prof. David Brainard and his work: https://color.psych.upenn.edu


23rd April 2018 - Prof. Peter Bex (Northeastern University, USA) - Assessment, Simulation and Correction of Binocular Vision Impairment

Current clinical binocular assessment methods depend primarily on insensitive tests of stereoacuity (e.g. Stereo Fly), suppression (e.g. Worth 4 dot) and ocular alignment (e.g. Cover Test). Recent virtual reality-based approaches to the treatment of binocular vision impairment have enabled much greater control of therapeutic stimuli, but require more sensitive assessments of their efficacy relative to current treatments. We have developed a range of novel tests that quantify the spatial frequency-dependence of contrast sensitivity, inter-ocular suppression and stereoacuity, and the eye posture-dependence of ocular alignment. These tests take less than 5 minutes each to complete, yet show high sensitivity and reliability.

We show that treatments based on interocular manipulations of blur, contrast, and luminance have profound consequences for oculomotor control and depth perception. Furthermore, these approaches do not address ocular misalignment, which may limit treatment outcomes. We show that dichoptic saccade adaptation can transiently induce and reverse changes in interocular alignment and can alter depth perception. This work aims to provide a comprehensive framework for the assessment and correction of binocular vision deficits.

For more information about Prof. Peter Bex and his work: http://www.northeastern.edu/bexlab/


24th May 2018 - Dr. Wei Wang (Institute of Neuroscience, Chinese Academy of Sciences, China) - Cortical mechanisms underlying integration of local visual cues to form global representations

Human and non-human primates effortlessly see both global and local features of objects in great detail. However, how the cortex integrates local visual cues to form global representations along visual hierarchies remains mysterious, particularly in light of a long-standing paradox in vision: as neurally encoded complexity increases along the visual hierarchy, the known acuity or resolving power dramatically decreases. Put simply, how do we recognize the face of our child while simultaneously resolving the individual hairs of his or her eyelashes? Many models of visual processing, including cutting-edge deep learning models, follow the idea that low-level resolution and position information is discarded to yield high-level representations. These themes are fundamental to understanding how the brain performs sensory transformations.

Combining large-scale, high-spatial-resolution imaging, which records the transformation of information across three visual areas (V1, V2, and V4) simultaneously, with electrophysiological multi-site laminar recordings, we found a bottom-up cascade of cortical integration of local visual cues, a general cortical mechanism for forming global representations in the primate ventral and dorsal streams. The integrated neural responses depend on the sizes and preferences of the neurons' receptive fields. More recently, we revealed an unexpected neural clustering that preserves visual acuity from V1 into V4, enabling a detailed spatiotemporal separation of local and global features along the object-processing hierarchy and suggesting that high acuity is retained at later stages, where more detailed cognitive behaviour occurs. The study reinforces the point that neurons in V4 (and most likely also in infero-temporal cortex) need not have only low visual acuity, which may begin to resolve the long-standing paradox concerning fine visual discrimination. Our research should prompt further studies probing how the preservation of low-level information serves higher-level vision, and may provide new ideas to inspire the next generation of deep neural network architectures.

For more information about Dr. Wei Wang and his work: http://www.ion.ac.cn/laboratories/int.asp?id=44


20th September 2018 - Dr. Valérie Goffaux (Université Catholique de Louvain, Belgium) - Orientation and spatial frequency encoding in the ventral visual pathway: the case of face perception in humans

Visual perception results from a complex chain of processes, starting with the selective encoding of spatial frequency and orientation in primary visual cortex. More anterior, high-level regions have increasingly large receptive fields, making them selective to increasingly complex shape properties or to specific visual categories such as faces. This high-level specialization has led researchers to focus their investigations on the high-level properties of face perception.

Our work combines multiple techniques: psychophysics, electrophysiology (scalp EEG, steady-state and temporal generalization), and neuroimaging (fMRI of V1 and of high-level visual regions). It suggests that the specialization of face processing, though emerging at high-level stages of visual processing, is rooted in selective ranges of orientation and spatial frequency (SF) information in V1. These findings encourage the adoption of more integrative approaches to face perception, and to vision in general, encompassing both low- and high-level visual mechanisms.
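
The orientation- and SF-selective encoding in V1 referred to here is classically modelled with a bank of Gabor filters. A minimal sketch (all sizes and parameter values are hypothetical):

    import numpy as np

    def gabor(size, sf, theta, sigma, phase=0.0):
        # V1-style Gabor: a sinusoidal carrier of spatial frequency sf
        # (cycles/pixel) at orientation theta (radians), windowed by a
        # Gaussian envelope of width sigma (pixels).
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)  # rotated coordinate
        envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
        return envelope * np.cos(2 * np.pi * sf * xr + phase)

    # A small bank spanning 4 orientations x 2 spatial frequencies.
    bank = [gabor(size=31, sf=sf, theta=th, sigma=5.0)
            for sf in (0.05, 0.15)
            for th in np.linspace(0, np.pi, 4, endpoint=False)]

    # "Responses" of the bank to an image patch: one dot product each.
    patch = np.random.default_rng(1).standard_normal((31, 31))
    responses = [float((f * patch).sum()) for f in bank]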

For more information about Dr. Valérie Goffaux and her work: https://sites.uclouvain.be/goffauxlab/index.html


25th October 2018 - Prof. Steven Dakin (The University of Auckland, New Zealand) - Using technology to improve assessment and treatment of vision in children

Measuring vision is important in a range of clinical settings, from monitoring the effectiveness of gene therapies for blinding eye conditions through to early detection of age-related eye diseases like glaucoma. My lab uses technologies such as infrared eye tracking and virtual reality to measure and correct vision in groups that can be difficult to assess and/or treat.

In this talk I will give overviews of several projects, inspired by basic psychophysical research, that deliver more reliable visual assessment, and novel platforms for treatment, of children. These include a new set of symbols and a tablet-based testing system for measuring acuity, the use of involuntary eye movements to estimate contrast sensitivity, and an amblyopia-therapy system built on a handheld gaming system.
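
Acuity and contrast thresholds in tests like these are commonly estimated with adaptive staircases. As a generic illustration only (not the lab's actual method), here is a minimal 2-down-1-up contrast staircase run against a hypothetical simulated observer:

    import numpy as np

    rng = np.random.default_rng(2)

    def observer_correct(contrast, threshold=0.02, slope=3.0):
        # Hypothetical 2AFC observer: probability correct follows a
        # Weibull psychometric function with 50% chance performance.
        p = 0.5 + 0.5 * (1.0 - np.exp(-(contrast / threshold) ** slope))
        return rng.random() < p

    # 2-down-1-up: contrast drops after two consecutive correct answers
    # and rises after each error, converging near 70.7% correct.
    contrast, step, run, direction, reversals = 0.1, 1.25, 0, -1, []
    for _ in range(200):
        if observer_correct(contrast):
            run += 1
            if run == 2:
                run = 0
                if direction == +1:
                    reversals.append(contrast)  # peak: down after up
                direction = -1
                contrast /= step
        else:
            run = 0
            if direction == -1:
                reversals.append(contrast)      # trough: up after down
            direction = +1
            contrast *= step

    print(f"Contrast threshold estimate: {np.mean(reversals[-8:]):.4f}")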

For more information about Prof. Steven Dakin and his work: http://www.homepages.ucl.ac.uk/~smgxscd/DakinLab/Main.html


29th November 2018 - Prof. Joshua Solomon (Centre for Applied Vision Research, City University of London, UK) - Visual categorisation of simple stimuli

Stimuli that vary along quantitative (or 'prothetic') continua can be categorized on the basis of how much neural activity they elicit: loud and bright stimuli elicit more activity than quiet and dim stimuli. Stimuli that vary qualitatively (either along metathetic continua or between prothetic continua) cannot be categorized on this basis. It is conceivable that our brains contain homunculi who read tiny signs attached to each fibre describing their neurons' preferred stimuli, but contemporary theorists believe category information to be inherent in the cerebral positions of active neurons. This helps explain why stimulus preferences vary systematically across the cortex, forming multi-dimensional 'maps' of position, orientation, and possibly other attributes such as spatial frequency, binocular disparity, and chromaticity. Nonetheless, neurons and channels that can be distinguished on the basis of the stimuli they prefer are still called 'labelled lines'. Various psychophysical methods exist for quantifying channel selectivity and deciding whether those channels qualify as labelled lines.

The focus of my talk will be a new model for the 2 x 2 FC paradigm, in which observers must both detect and identify (or categorize) modulations in a visual stimulus. Heretofore, results from this paradigm have been interpreted either in the absence of any model (e.g. 'labelled lines are implied when the threshold for identification equals the threshold for detection') or in terms of easily falsifiable high-threshold theories of detection. My new model is based on signal-detection theory and accommodates a wide range of relationships between detection and identification.
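
To give the flavour of a signal-detection account of joint detection and identification, here is a generic toy simulation (not Prof. Solomon's actual model): two labelled channels carry independent Gaussian noise, detection compares the larger channel output against that of a blank interval, and identification picks the more active channel.

    import numpy as np

    rng = np.random.default_rng(3)
    n_trials, d_prime = 100_000, 1.5

    # On each trial one of two modulation types is present; it adds a
    # signal of strength d_prime to its labelled channel.
    which = rng.integers(0, 2, n_trials)
    resp = rng.standard_normal((n_trials, 2))
    resp[np.arange(n_trials), which] += d_prime

    # Detection (2-interval style): the signal interval wins if its
    # larger channel output exceeds that of a noise-only interval.
    blank = rng.standard_normal((n_trials, 2))
    detected = resp.max(axis=1) > blank.max(axis=1)

    # Identification: report whichever labelled channel responded more.
    identified = resp.argmax(axis=1) == which

    print(f"P(detect) = {detected.mean():.3f}, "
          f"P(identify) = {identified.mean():.3f}")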

For more information about Prof. Joshua Solomon and his work: http://www.staff.city.ac.uk/~solomon/