UCL Institute of Ophthalmology

Vision@UCL Talks

Vision @UCL Talks is a monthly talk series featuring eminent national and international vision science speakers, intended to stimulate connections in vision research.

Why and What For?

      Vision science is an inherently interdisciplinary field that spans psychophysics, imaging, and computational modelling. Work in this area has high societal impact through its crucial contributions to clinical research on eye disease, computer vision, and many other applied fields. To celebrate UCL's leading role in vision science, UCL institutes at the forefront of this field have joined forces to organise a new talk series called Vision @UCL Talks. The series features eminent international vision science speakers who bridge psychophysics, fMRI, and computational modelling. Funding for these sessions has kindly been provided by the NIHR Moorfields Biomedical Research Centre, the UCL Centre for Computation, Mathematics and Physics in the Life Sciences and Experimental Biology, and Cambridge Research Systems.

      Vision @UCL Talks is a free event, open to everyone (PhD students, researchers, clinicians, members of the public interested in science, etc.). Each talk is preceded by a science workshop for PhD students from all UCL departments and followed by drinks.

Subscribe to our mailing list!

      If you want to be added to our mailing list and receive all our updates about the talks, please follow this link: http://eepurl.com/dhA525

When and Where?

     The time and location of each talk are given under Upcoming Talks.

Upcoming Talks

25th January 2018 at 4pm - Dr. Tomas Knapen (Free University Amsterdam, Netherlands) - Mapping the dark side: retinotopic organization in the default mode network

Location: UCL Division of Psychology and Language Sciences - 26 Bedford Way, Bloomsbury, London WC1H 0AP - Room G03
Time: 16:00-17:30

      The brain's default network (DN) consists of a set of brain regions that consistently show decreases in BOLD signal during task engagement, when most areas show increases in BOLD signal. Recent findings indicate that these deactivations play some role in visual processing, but the nature and function of this well-known property of the DN remain unclear. We conducted a population receptive field (pRF) mapping experiment at 7T, in which participants directed their attention to a visual mapping stimulus during strict fixation, performing a colour discrimination task that was titrated to have equal difficulty regardless of stimulus position. This kept task load identical whenever a stimulus was on the screen, and allowed us to contrast task-related and spatially specific signals.

      We demonstrate that signals in the DN carry retinotopic visual information. Specifically, BOLD decreases in several nodes of the DN are specific to the appearance of a visual stimulus in a circumscribed region of retinotopic space. We estimated population receptive fields of negative amplitude from BOLD time courses in the DN, and show that a subset of these regions each contains a coherent retinotopic map along the cortical surface. Moreover, this description of spatial preferences in the DN, combined with ongoing activation patterns, allows us to reconstruct (decode) the position of a visual stimulus with a fidelity comparable to that of the known retinotopic maps of the intraparietal and precentral sulci.
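
      The negative-amplitude pRF estimation described above follows the general logic of the standard pRF model: a 2D Gaussian receptive field whose overlap with the stimulus aperture, scaled by a (possibly negative) amplitude and convolved with a haemodynamic response function, predicts the BOLD time course. The sketch below is only a toy Python illustration of that idea, not the speaker's analysis code; the grid size, bar stimulus, HRF shape, and all parameter values are made up for the example.

    import numpy as np

    def gaussian_prf(x0, y0, sigma, xs, ys):
        # 2D Gaussian population receptive field on a visual-field grid (degrees).
        return np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))

    def predict_bold(stimulus, prf, amplitude, hrf):
        # Overlap of the stimulus aperture with the pRF at each TR, scaled by a
        # (possibly negative) amplitude and convolved with an HRF.
        overlap = np.tensordot(stimulus, prf, axes=([1, 2], [0, 1]))
        return amplitude * np.convolve(overlap, hrf)[: len(overlap)]

    # Toy example (all values hypothetical).
    n_trs = 120
    grid = np.linspace(-8, 8, 41)                      # visual field, degrees
    xs, ys = np.meshgrid(grid, grid)

    # Binary stimulus aperture per TR: a vertical bar sweeping left to right.
    stimulus = np.zeros((n_trs,) + xs.shape)
    for t in range(n_trs):
        bar_x = -8 + 16 * t / n_trs
        stimulus[t] = (np.abs(xs - bar_x) < 1.0).astype(float)

    # Crude gamma-shaped stand-in for an HRF (not a validated HRF model).
    t_hrf = np.arange(20)
    hrf = t_hrf ** 5 * np.exp(-t_hrf) / 120.0

    prf = gaussian_prf(x0=2.0, y0=-1.0, sigma=1.5, xs=xs, ys=ys)
    bold = predict_bold(stimulus, prf, amplitude=-1.0, hrf=hrf)     # negative pRF: BOLD dips
    bold += 0.05 * np.random.default_rng(0).standard_normal(n_trs)  # measurement noise

      Fitting such a model would then amount to searching over (x0, y0, sigma, amplitude) for the prediction that best matches each voxel's measured time course.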

      Our results indicate that spatially specific activations and deactivations synergistically subserve the processing of visual information. DN regions have been shown to selectively activate for social information, autobiographical memory and mind wandering, types of cognition that require an emphasis on processing of internally sourced information. Thus, the balance between activations and deactivations could underpin the preferential processing of externally vs internally sourced information. Furthermore, it is likely that representations in retinotopic and other reference frames coincide in these regions. This overlap would allow local computations to integrate information processing from different reference frames. 

For more information about Dr. Tomas Knapen and his work: tknapen.github.io 

21st February 2018 at 4pm - Prof. Kalanit Grill-Spector (Stanford University, USA) - Neural Mechanisms of the Development of Face Perception

Location: UCL Division of Psychology and Language Sciences - 26 Bedford Way, Bloomsbury, London WC1H 0AP - Room G03
Time: 16:00-17:30

How do brain mechanisms develop from childhood to adulthood, leading to better face recognition?

      There is extensive debate about whether brain development is driven by pruning or by growth. Here I will describe results from a series of recent experiments that tested these competing theories, using new MRI methods in children and adults together with analyses of postmortem histology.

      Anatomically, we examined whether there are developmental increases or decreases in macromolecular tissue in the gray matter, and how anatomical development impacts function and behavior. Functionally, we examined whether and how neural sensitivity to faces, as well as spatial computations by population receptive fields, develop from childhood to adulthood. Critically, we tested how these neural developments relate to perceptual discriminability of face identity and to looking behavior, respectively.

      Together, our data reveal a tripartite relationship between anatomical, functional, and behavioral development and suggest that emergent brain function and behavior during childhood result from cortical tissue growth rather than pruning. 

For more information about Prof. Kalanit Grill-Spector and her work: http://vpnl.stanford.edu/

22nd February 2018 at 5pm - Prof. David Brainard (University of Pennsylvania, USA) - UCL W. S. Stiles Memorial Lecture

Location: UCL Union - 25 Gordon Street, London WC1H 0AY  - Room E28 (Harrie Massey LT)
Time: 17:00-19:30

For more information about Prof. David Brainard and his work: https://color.psych.upenn.edu/

23rd April 2018 at 4.30pm - Prof. Peter Bex (Northeastern University, USA) - Assessment, Simulation and Correction of Binocular Vision Impairment

Location: UCL Division of Psychology and Language Sciences - 26 Bedford Way, Bloomsbury, London WC1H 0AP - Room G03
Time: 16:30-18:00


      Current clinical binocular assessment methods depend primarily on insensitive tests of stereoacuity (e.g. Stereo Fly), suppression (e.g. Worth 4 dot), and ocular alignment (e.g. Cover Test). Recent virtual reality-based approaches to the treatment of binocular vision impairment have enabled much greater control of therapeutic stimuli, but require more sensitive assessments to establish their efficacy over current treatments. We have developed a range of novel tests that quantify the spatial frequency-dependence of contrast sensitivity, inter-ocular suppression, and stereoacuity, and the eye posture-dependence of ocular alignment. These tests take less than 5 minutes each to complete yet show high sensitivity and reliability.
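
      As a rough illustration of what the spatial frequency-dependence of contrast sensitivity means in practice, the sketch below evaluates a log-parabola contrast sensitivity function at a few spatial frequencies. This is a generic textbook-style model, not the speaker's actual test; the parameterisation and all parameter values are assumptions for the example.

    import numpy as np

    def log_parabola_csf(freq, peak_gain, peak_freq, bandwidth):
        # One common log-parabola form: log10 sensitivity falls off
        # quadratically in log10 spatial frequency around the peak.
        log_s = np.log10(peak_gain) - (
            (np.log10(freq) - np.log10(peak_freq)) / (0.5 * bandwidth)
        ) ** 2
        return 10.0 ** log_s

    # Hypothetical parameters; real values would be estimated per observer and per eye.
    freqs = np.array([0.5, 1, 2, 4, 8, 16])          # cycles per degree
    sensitivity = log_parabola_csf(freqs, peak_gain=150.0, peak_freq=3.0, bandwidth=1.2)
    thresholds = 1.0 / sensitivity                    # contrast detection thresholds
    for f, s, c in zip(freqs, sensitivity, thresholds):
        print(f"{f:>4.1f} c/deg  sensitivity {s:8.1f}  threshold contrast {c:.4f}")

      Comparing such curves between eyes, or before and after treatment, is one way a spatial frequency-resolved measure can carry more information than a single acuity score.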

      We show that treatments based on interocular manipulations of blur, contrast, and luminance have profound consequences for oculomotor control and depth perception. Furthermore, these approaches do not address ocular misalignment, which may limit treatment outcomes. We show that dichoptic saccade adaptation can transiently induce and reverse interocular alignment and alter depth perception. This work aims to provide a comprehensive framework for the assessment and correction of binocular vision deficits.

For more information about Prof. Peter Bex and his work: http://www.northeastern.edu/bexlab/

24th May 2018 at 4.00pm - Dr. Wei Wang (Institute of Neuroscience, Chinese Academy of Sciences, China) - Cortical mechanisms underlying integration of local visual cues to form global representations

Location: UCL Division of Psychology and Language Sciences - 26 Bedford Way, Bloomsbury, London WC1H 0AP - Room G03
Time: 16:00-17:30


Human and non-human primates effortlessly see both global and local features of objects in great detail. However, how the cortex integrates local visual cues to form global representations along visual hierarchies remains mysterious, particularly given a long-standing paradox in vision: as neurally encoded complexity increases along the visual hierarchy, the known acuity or resolving power dramatically decreases. Put simply, how do we simultaneously recognize the face of our child while still resolving the individual hairs of his or her eyelashes? Many models of visual processing, including cutting-edge deep learning models, follow the idea that low-level resolution and position information is discarded to yield high-level representations. These themes are fundamental to understanding how the brain performs sensory transformations.

Combining large-scale, high-spatial-resolution imaging, which records the transformation of information across three visual areas (V1, V2, and V4) simultaneously, with multi-site laminar electrophysiological recordings, we found a bottom-up cascade of cortical integration of local visual cues as a general cortical mechanism for global representations in the primate ventral and dorsal streams. The integrated neural responses depend on the sizes and preferences of their receptive fields. Recently, we revealed an unexpected neural clustering that preserves visual acuity from V1 to V4, enabling a detailed spatiotemporal separation of local and global features along the object-processing hierarchy and suggesting that higher acuities are retained at later stages, where more detailed cognitive behaviour occurs. The study reinforces the point that neurons in V4 (and most likely also in infero-temporal cortex) do not necessarily have only low visual acuity, which may begin to resolve the long-standing paradox concerning fine visual discrimination. Thus, our research will prompt further studies probing how the preservation of low-level information is useful for higher-level vision, and provide new ideas to inspire the next generation of deep neural network architectures.

For more information about Dr. Wei Wang and his work: http://www.ion.ac.cn/laboratories/int.asp?id=44

21st September 2018 - Dr. Valérie Goffaux (Université Catholique de Louvain, Belgium) - TBA

Location: TBA
Time: TBA

For more information about Dr. Valérie Goffaux and her work: https://sites.uclouvain.be/goffauxlab/index.html