
UCL Institute of Ophthalmology


Current projects

Here you will find information about the current research areas we investigate at the Child Vision Lab.


Atypical development of vision

Visual development is shaped by our early visual experiences.

  • For children born with eye conditions that limit their visual experience, vision can develop atypically.
  • The Child Vision Lab works with patients who have retinal conditions to find out how these conditions affect a range of visual skills, such as form and motion perception.


We are currently exploring how the absence of cone cells impacts on a variety of visual skills and on brain development.

  • Cone cells are light-detecting cells at the back of the eye that provide us with detailed colour vision. Patients lacking these cells have low vision and no colour perception.
  • By examining how this impacts on visual skills known to be controlled by specific brain regions, we can learn more about how the eye and brain interact.
  • Comparing those born without functioning cones with those who lose them later in life also allows us to understand which visual skills are most susceptible to early vision loss.
  • We are working with clinicians to pioneer our child-friendly MRI and eye-tracking technique in child patient groups.

How do children combine sensory information to interact safely with the world around them?

We use a mixture of psychophysics and brain imaging to investigate how children combine the information they receive from their eyes with that from their other senses.

  • Our sensory systems continually bombard us with information from multiple sources of varying reliability.
  • Adults can combine different types of sensory information (e.g., visual and auditory information about location) into an integrated whole.
  • The combined estimate is more reliable than information from one sense alone; a minimal sketch of this principle follows this list.
  • The ability to integrate sensory information does not emerge until surprisingly late in childhood, at around 10 years old.
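
As an illustration of the principle in the last two bullets, here is a minimal sketch (in Python) of the standard maximum-likelihood model of cue combination. Each cue is weighted by its reliability (the inverse of its variance), and the combined estimate is less variable than either cue alone. All numbers are made up for illustration and are not data from our experiments.

    # Minimal sketch of reliability-weighted cue combination.
    def combine_cues(est_v, sigma_v, est_a, sigma_a):
        """Combine a visual and an auditory estimate of location (degrees).

        Each cue is weighted by its reliability (inverse variance), so the
        noisier cue contributes less to the combined estimate.
        """
        w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
        w_a = 1 - w_v
        combined = w_v * est_v + w_a * est_a
        # The combined estimate is less variable than either cue alone:
        sigma_c = (sigma_v**2 * sigma_a**2 / (sigma_v**2 + sigma_a**2)) ** 0.5
        return combined, sigma_c

    # Example: vision (sd 1 degree) is more precise than hearing (sd 4
    # degrees), so the combined estimate sits close to the visual one.
    loc, sd = combine_cues(est_v=10.0, sigma_v=1.0, est_a=14.0, sigma_a=4.0)
    print(f"combined location = {loc:.2f} deg, sd = {sd:.2f} deg")

Adults behave roughly in line with this prediction; children below about 10 typically do not show the reduction in variability, which is what makes the late emergence of integration so striking.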


We are interested in how children use the sensory information they have about the world around them when making visuomotor decisions.

  • The best decisions about how to act maximise the chance of success and minimise the chance of accidents.
  • We use fun touchscreen computer games to investigate children’s ability to consider all of these factors when choosing action strategies.
  • Making such decisions involves a complex interplay between perceptual, motor and “cost” variables; a toy example follows this list.
  • This relates to real-world problems of choosing safe courses of action when engaging in what is known as movement under risk (e.g. crossing a road or playing sports).
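
To make that interplay concrete, here is a toy, one-dimensional version of a movement-under-risk problem, in the spirit of (but much simpler than) our touchscreen games. An aim point is chosen for a movement that lands with Gaussian motor noise; landing on the target earns points, landing in an overlapping penalty region loses them. The radii, payoffs and noise level are invented for illustration.

    # Toy movement-under-risk task: choose where to aim, given motor
    # noise, a rewarded target region and an overlapping penalty region.
    import random

    def expected_gain(aim_x, n=20000, motor_sd=0.5,
                      target=(0.0, 1.0, +10),     # (centre, radius, points)
                      penalty=(-1.2, 1.0, -40)):  # (centre, radius, points)
        total = 0
        for _ in range(n):
            x = random.gauss(aim_x, motor_sd)     # noisy movement endpoint
            if abs(x - target[0]) < target[1]:
                total += target[2]
            if abs(x - penalty[0]) < penalty[1]:
                total += penalty[2]
        return total / n

    # The best aim point shifts away from the penalty region, trading a
    # few missed targets for far fewer costly "accidents".
    best_gain, best_aim = max((expected_gain(a / 10), a / 10)
                              for a in range(-5, 11))
    print(f"best expected gain {best_gain:.2f} aiming at x = {best_aim:.1f}")

Comparing the aim points children actually choose with the one that maximises expected gain is one way of asking how well they weigh perceptual, motor and cost variables together.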

Testing babies' vision

  • It is hard to test a baby's vision, as babies can't yet talk and have short attention spans!
  • We have developed a computerised test of infant vision which makes use of eye-tracking software; a simplified sketch of the general approach follows this list.
  • This test can quickly assess babies' visual acuity.
  • We want to use this method to further investigate how infant vision develops, and hope that it will be used when evaluating the effectiveness of new treatments.
  • We are expanding the scope of this test so that it could also be used to measure babies' contrast sensitivity or visual fields.
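
For readers curious how such a test can work, here is a heavily simplified sketch of the preferential-looking logic that eye-tracking acuity tests build on: a grating is shown on one side of the screen and a blank on the other, and the grating is made finer after reliable looks and coarser after misses. The staircase rule, step sizes and the simulated baby below are illustrative only, not our actual procedure.

    # Schematic of an adaptive preferential-looking acuity test.
    import random

    def infant_looked_at(side, freq, true_acuity=8.0):
        # Stand-in for the eye-tracker measurement: simulates a baby whose
        # probability of looking at the grating falls from ~0.95 to ~0.05
        # around her true acuity limit (cycles/degree). 'side' is which
        # screen side shows the grating (unused in this simple simulation).
        p_look = 0.05 + 0.90 / (1 + (freq / true_acuity) ** 4)
        return random.random() < p_look

    def staircase_acuity(start_freq=1.0, step=1.3, reversals_needed=6):
        # 2-down/1-up staircase: two looks in a row make the grating
        # finer; a single miss makes it coarser.
        freq, streak, last_move, reversals, history = start_freq, 0, 0, 0, []
        while reversals < reversals_needed:
            side = random.choice(["left", "right"])   # grating vs blank
            if infant_looked_at(side, freq):
                streak += 1
                if streak < 2:
                    continue               # need two looks before stepping up
                move, streak = +1, 0
            else:
                move, streak = -1, 0
            if last_move and move != last_move:
                reversals += 1             # direction change: a reversal
                history.append(freq)
            last_move = move
            freq = freq * step if move > 0 else freq / step
        return sum(history) / len(history)  # acuity estimate, cycles/degree

    print(f"estimated acuity: {staircase_acuity():.1f} cycles/degree")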

Multisensory perception after vision loss

The brain develops and changes throughout life, not just in childhood. We are looking at multisensory perception in adults who are adapting to changes in their vision:

  • reduction of vision due to degenerative disease
  • improvement in vision due to treatment 

Navigation with the 'bionic eye'

  • We developed a navigation task to assess patients who had been implanted with a retinal prosthesis, or ‘bionic eye’.
  • These patients have been blind for a number of years and have thus learned to rely on their non-visual senses.
  • We want to know whether this new prosthetic vision can be combined with information from their other senses to improve performance in multisensory tasks.

Audio-visual localisation after vision loss

  • We have a purpose-built speaker and LED array to investigate how patients combine auditory and visual information in localisation tasks.
  • We have used this setup to examine how the sensory integration mechanism changes in response to a progressive loss of vision (see the sketch below).
  • We want to know whether the way patients use audio or visual information changes across their visual field, depending on their specific visual impairment.
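
Connecting back to the cue-combination sketch earlier on this page: if integration is reliability-weighted, the predicted reliance on vision should fall as visual noise rises, for example in regions of the visual field affected by retinal disease. The noise values below are illustrative only.

    # Predicted weight on vision as visual noise grows, under the
    # reliability-weighted model sketched earlier. Values are illustrative.
    sigma_a = 6.0                                # auditory noise, degrees
    for sigma_v in [1, 2, 4, 8, 16]:             # visual noise, degrees
        w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)
        print(f"visual sd {sigma_v:>2} deg -> predicted visual weight {w_v:.2f}")

Mapping where patients' measured weights depart from this prediction, location by location, is one way of characterising how integration adapts to vision loss.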