VR/AR simulations of sight loss
I am using virtual/augmented reality devices, such as Google Cardboard, to create gaze-contingent simulations of visual impairment. The technology is compatible with all major phones and VR devices, and supports both virtual environments and 'Augmented Reality' (e.g., via the phone's camera). A selection of screenshots is shown on the right, indicating the sorts of image processing techniques that we have developed.
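To give a flavour of how this works, below is a minimal Python sketch of one such effect: a gaze-contingent 'scotoma', in which each frame is blended with a blurred copy of itself, weighted by a Gaussian mask centred on the current gaze position. The function name and parameter values are illustrative only, not our actual implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_scotoma(frame, gaze_xy, radius_px=80, blur_sigma=8):
    """Blend a frame with a blurred copy of itself, using a Gaussian
    opacity mask centred on the current gaze position (gaze-contingent).
    `frame` is an HxWx3 float array in [0, 1]; values are illustrative."""
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    # Opacity mask: 1 at fixation, falling off with distance from gaze
    mask = np.exp(-((xx - gx) ** 2 + (yy - gy) ** 2) / (2 * radius_px ** 2))
    blurred = np.dstack([gaussian_filter(frame[..., c], blur_sigma)
                         for c in range(3)])
    return mask[..., None] * blurred + (1 - mask[..., None]) * frame

# Example: a random 'camera frame', with gaze at the image centre
frame = np.random.rand(480, 640, 3)
out = simulate_scotoma(frame, gaze_xy=(320, 240))
```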
Overview and validation:
I am studying whether glaucoma patients are willing and able to perform visual field testing at home, using our own custom perimeter:
Eyecatcher is a button-less 'eye-movement' perimeter, in which the patient simply has to look at lights as they appear on the screen. A near-infrared camera tracks their eyes and determines where to position the stimulus (relative to the current point of fixation), and whether or not a light was seen (looked at).
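The core trial logic can be sketched as follows (illustrative Python only; `get_gaze` and `draw_stimulus` are hypothetical stand-ins for the eye-tracker and display routines):

```python
import numpy as np

def run_trial(get_gaze, draw_stimulus, offset_deg, radius_deg=2.5,
              timeout_s=1.5, fps=60):
    """One eye-movement perimetry trial (illustrative logic only).
    `get_gaze()` returns the current gaze position (x, y) in degrees;
    `draw_stimulus(xy)` presents a light at the given location."""
    fixation = np.asarray(get_gaze(), float)
    target = fixation + np.asarray(offset_deg, float)  # place relative to fixation
    draw_stimulus(target)
    for _ in range(int(timeout_s * fps)):  # poll the tracker frame by frame
        gaze = np.asarray(get_gaze(), float)
        if np.linalg.norm(gaze - target) < radius_deg:
            return True   # patient looked at the light -> 'seen'
    return False          # no saccade to the target -> 'not seen'

# Toy usage: a simulated observer who saccades to the target after ~10 frames
state = {"t": 0, "target": (0.0, 0.0)}
def get_gaze():
    state["t"] += 1
    return state["target"] if state["t"] > 10 else (0.0, 0.0)
def draw_stimulus(xy):
    state["target"] = tuple(xy)

print(run_trial(get_gaze, draw_stimulus, offset_deg=(6.0, 3.0)))  # -> True
```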
I am exploring Eyecatcher as a way of quickly triaging patients on arrival at busy glaucoma clinics:
The pCSF is a fun, tablet-based test of the Contrast Sensitivity Function [CSF]. The child's task is simply to 'pop' (press) Gabor patches as they bounce around the screen. Behind the scenes, a Bayesian adaptive algorithm (QUEST+) is used to estimate detection thresholds at each spatial frequency.
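For the curious, a Gabor patch is simply a sinusoidal grating windowed by a Gaussian envelope, and can be generated in a few lines (a sketch; the parameter values are illustrative):

```python
import numpy as np

def gabor(size=128, spatial_freq=8.0, contrast=0.5, theta=0.0, sigma_frac=0.15):
    """Gabor patch: a sinusoidal grating windowed by a Gaussian envelope.
    `spatial_freq` is in cycles per image; all values are illustrative."""
    half = size / 2
    y, x = np.mgrid[-half:half, -half:half] / size
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate the grating axis
    grating = np.sin(2 * np.pi * spatial_freq * xr)  # sinusoidal carrier
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_frac**2))
    return 0.5 + 0.5 * contrast * grating * envelope  # on a mean-grey background

patch = gabor(spatial_freq=12.0, contrast=0.2)  # a finer, fainter patch
```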
Lapses in concentration can result in misleading test data, and are a particular problem in children and certain clinical populations:
I am interested in using affective computing techniques (e.g., machine learning) to detect and adjust for lapses in concentration in real time:
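As a very simple illustration of the general idea, the sketch below flags trials whose reaction times drift well beyond a rolling baseline of recent trials. This is a deliberately crude stand-in for the machine-learning detector itself; the threshold values are placeholders.

```python
import numpy as np

def flag_lapses(reaction_times, window=10, z_crit=2.5):
    """Flag probable lapses online: a trial is suspect if its reaction
    time is far above a rolling baseline of the preceding trials."""
    rts = np.asarray(reaction_times, float)
    flags = np.zeros(len(rts), dtype=bool)
    for i in range(window, len(rts)):
        base = rts[i - window:i]
        z = (rts[i] - base.mean()) / (base.std() + 1e-9)
        flags[i] = z > z_crit
    return flags

# 30 attentive trials, one lapse, then 10 more attentive trials
rng = np.random.default_rng(0)
rts = np.r_[rng.normal(0.45, 0.05, 30), [1.8], rng.normal(0.45, 0.05, 10)]
print(np.where(flag_lapses(rts))[0])   # -> trial 30 is flagged
```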
The world's first fully-automated infant acuity test, using a computer monitor and remote eye-tracking to perform preferential looking:
I am interested in developing rapid behavioural measures of vision, suitable for use in infants and children. Remote eye-tracking can be used to locate stimuli precisely on the retina, and to record eye-movement responses. Self-calibrating monitors can be used to generate precisely calibrated stimuli, while efficient (Maximum A Posteriori) psychophysical algorithms can be used to rapidly and accurately determine various perceptual thresholds (e.g., the dimmest light that the observer is able to detect/discriminate).
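To illustrate the general principle of such MAP estimators, here is a minimal sketch: a grid posterior over candidate thresholds is updated after each trial, and each new stimulus is placed at the current best estimate (a QUEST-style rule; all values are illustrative):

```python
import numpy as np

def p_seen(intensity, threshold, slope=3.0, guess=0.02, lapse=0.02):
    """Probability of reporting 'seen', given a candidate threshold."""
    p = 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))
    return guess + (1.0 - guess - lapse) * p

thresholds = np.linspace(-10, 10, 201)                        # candidate thresholds
posterior = np.full(thresholds.shape, 1.0 / len(thresholds))  # flat prior

def update(intensity, seen):
    """Bayes' rule: multiply the posterior by the likelihood of the response."""
    global posterior
    likelihood = p_seen(intensity, thresholds)
    posterior *= likelihood if seen else (1.0 - likelihood)
    posterior /= posterior.sum()

# Simulated observer with a true threshold of 2.0
rng = np.random.default_rng(0)
for _ in range(40):
    x = thresholds[np.argmax(posterior)]      # test at the current MAP estimate
    update(x, seen=rng.random() < p_seen(x, threshold=2.0))

print(thresholds[np.argmax(posterior)])       # converges towards ~2.0
```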
Acuity refers to the finest spatial detail an observer can resolve. In adults, this is typically measured by asking the patient to read letters of diminishing size. An 'infant friendly' alternative is to find the finest black-and-white grating that can be distinguished from an equiluminant grey background. For more details: Jones et al. (2014). Automated measurement of resolution acuity in infants using remote eye-tracking, IOVS, 55(12):8102-8110. pdf
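The core decision rule of a preferential-looking trial can be sketched in a few lines: if gaze samples are clearly lateralised towards one side of the screen, the infant is judged to have fixated that side (illustrative code; the threshold values are placeholders):

```python
import numpy as np

def preferential_looking(gaze_x, screen_width_px=1920, min_frac=0.6):
    """Classify a trial from raw horizontal gaze samples: return 'left' or
    'right' if looking was clearly lateralised, else None (inconclusive)."""
    x = np.asarray(gaze_x, float)
    x = x[~np.isnan(x)]                       # drop lost-tracking samples
    frac_left = np.mean(x < screen_width_px / 2)
    if frac_left >= min_frac:
        return 'left'
    if frac_left <= 1.0 - min_frac:
        return 'right'
    return None

# Toy trial: the grating was on the right, and mostly attracted fixation
rng = np.random.default_rng(1)
samples = np.r_[rng.normal(1500, 60, 80), rng.normal(960, 400, 20)]
print(preferential_looking(samples))          # -> 'right'
```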
A more advanced use of the eye-tracker is to position stimuli relative to the patient's current point of fixation. In this way, the patient's sensitivity to light can be mapped out across their visual field. This produces a 'heatmap' of vision loss. We all have one 'blind spot': an area of the mammalian retina where there are no functioning photoreceptors. Some patients may exhibit additional 'scotomata', due to illness or injury.
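A sketch of how such a heatmap can be assembled: fixation-relative stimulus locations and seen/missed outcomes are binned into a grid giving the fraction seen at each visual field location (illustrative only):

```python
import numpy as np

def sensitivity_map(rel_xy, seen, extent_deg=30, bins=12):
    """Bin fixation-relative trial outcomes into a coarse visual-field map:
    the fraction of presentations seen at each location (NaN = untested)."""
    xy = np.asarray(rel_xy, float)
    edges = np.linspace(-extent_deg, extent_deg, bins + 1)
    hits, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=[edges, edges],
                                weights=np.asarray(seen, float))
    n, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=[edges, edges])
    with np.errstate(invalid='ignore'):
        return hits / n

# 500 simulated trials; stimuli within 6 deg of (15, 0) are never seen
rng = np.random.default_rng(2)
pts = rng.uniform(-30, 30, size=(500, 2))
seen = np.linalg.norm(pts - [15.0, 0.0], axis=1) >= 6.0   # a 'scotoma'
heatmap = sensitivity_map(pts, seen)
```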
The same basic technology can also measure chromatic sensitivity functions. A background of random noise prevents observers from exploiting small differences in luminance, while large patches allow observers with low acuity to perform the test.
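As a rough illustration of the stimulus (in uncalibrated RGB; the real test operates in a calibrated colour space so that the modulation is genuinely isoluminant):

```python
import numpy as np

def chromatic_patch(size=256, patch_radius=80, chroma=0.15, noise=0.2):
    """A red-green increment on a field of random luminance noise.
    Simplified RGB sketch; all parameter values are illustrative."""
    rng = np.random.default_rng(3)
    lum = 0.5 + rng.uniform(-noise, noise, (size, size))  # pixelwise luminance noise
    img = np.repeat(lum[..., None], 3, axis=2)            # start as a grey field
    y, x = np.mgrid[:size, :size] - size / 2.0
    inside = x**2 + y**2 < patch_radius**2                # a large circular patch
    img[inside, 0] += chroma   # push red up...
    img[inside, 1] -= chroma   # ...and green down, roughly preserving luminance
    return np.clip(img, 0.0, 1.0)

stimulus = chromatic_patch()
```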
The CSF measures the smallest modulation in contrast (the faintest black-and-white lines) that can be detected for various levels of spatial frequency (the fineness of the lines). To make the test as rapid and reliable as possible, we are using the new QUEST+ algorithm to efficiently estimate thresholds across multiple spatial scales (the red dashed line). For more details: Farahbakhsh et al. (2019). Psychophysics with children: Evaluating the use of maximum likelihood estimators in children aged 4–15 years (QUEST+), J. Vis., 19:22. link pdf
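For reference, the fitted curve is often summarised with a parametric form such as the log-parabola below (one common choice, not necessarily the exact form used in our work; the parameter values are illustrative):

```python
import numpy as np

def log_parabola_csf(f, peak_gain=150.0, peak_freq=3.0, bandwidth_oct=3.0):
    """Contrast sensitivity as a function of spatial frequency f (cyc/deg).
    `bandwidth_oct` is the full width, in octaves, at half the peak gain."""
    dev = (np.log10(f) - np.log10(peak_freq)) / (bandwidth_oct * np.log10(2) / 2)
    return peak_gain * 10.0 ** (-np.log10(2) * dev ** 2)

freqs = np.logspace(-0.5, 1.5, 50)        # ~0.3 to ~32 cyc/deg
sensitivity = log_parabola_csf(freqs)     # the fitted curve, on a log-spaced grid
```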
The temporal equivalent of the CSF: the tCSF measures the most rapid modulation in time that the observer can detect (after which point, the light appears to stay constant). We can use Silent Substitution to target particular classes of photoreceptor.
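In its linearised form, Silent Substitution reduces to solving a small linear system: find the change in display primaries that produces the desired change in excitation for the targeted photoreceptor class, and zero change for all the others. The sensitivity matrix below is hypothetical; real values are derived from measured primary spectra and photoreceptor fundamentals.

```python
import numpy as np

# Hypothetical excitation of L, M and S cones (rows) by the display's
# R, G and B primaries (columns); placeholder values only.
S = np.array([[0.60, 0.35, 0.05],
              [0.30, 0.55, 0.15],
              [0.02, 0.10, 0.88]])

def silent_substitution(target_excitation):
    """Find the primary modulation producing the requested change in
    photoreceptor excitation, e.g. [1, 0, 0] modulates L cones while
    M and S cone excitation is held constant ('silenced')."""
    return np.linalg.solve(S, np.asarray(target_excitation, float))

dp = silent_substitution([1.0, 0.0, 0.0])   # an L-cone isolating modulation
print(np.round(S @ dp, 6))                  # -> [1, 0, 0]: M and S are silent
```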
I am interested in understanding how information from multiple sensory signals (e.g., two sounds, or sight and sound) is combined (the standard model is sketched below):
In particular, how the ability to combine information develops in childhood:
And how the ability to combine information is affected by sensory impairment:
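The benchmark in this literature is the maximum-likelihood ('optimal') model of cue combination (e.g., Ernst & Banks, 2002), in which each cue is weighted by its reliability. A minimal sketch:

```python
import numpy as np

def combine_cues(estimates, sigmas):
    """Maximum-likelihood cue combination: each cue is weighted by its
    reliability (inverse variance), and the combined estimate is more
    precise than either cue alone."""
    est = np.asarray(estimates, float)
    var = np.asarray(sigmas, float) ** 2
    w = (1.0 / var) / np.sum(1.0 / var)       # reliability-based weights
    return np.sum(w * est), np.sqrt(1.0 / np.sum(1.0 / var))

# Vision says 10.0 (sd 1.0), audition says 12.0 (sd 2.0): the combined
# estimate sits nearer the more reliable cue, with a smaller sd than either
print(combine_cues([10.0, 12.0], [1.0, 2.0]))   # -> (10.4, ~0.89)
```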
Practice improves performance on many basic auditory tasks. However, while the phenomenon of auditory perceptual learning is well established, little is known about the mechanisms underlying such improvements. What is learned during auditory perceptual learning? In my PhD, I attempted to answer this question by applying models of performance to behavioural response data, and examining which parameters change with practice.
On a simple pure tone discrimination task, learning was shown to represent a reduction in internal noise:
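The logic of this internal-noise account can be illustrated with a simple simulation: holding the stimulus fixed, shrinking the observer's internal noise alone is enough to improve discrimination accuracy (illustrative values):

```python
import numpy as np

def discrimination_accuracy(delta, internal_noise, n_trials=20000, seed=0):
    """2AFC pure-tone discrimination under a simple internal-noise model:
    the observer compares two noisy internal representations and picks
    the larger one."""
    rng = np.random.default_rng(seed)
    standard = rng.normal(0.0, internal_noise, n_trials)
    comparison = rng.normal(delta, internal_noise, n_trials)
    return np.mean(comparison > standard)

# The stimulus is fixed; only the internal noise shrinks with 'practice'
for noise in (2.0, 1.0, 0.5):
    print(noise, discrimination_accuracy(delta=1.0, internal_noise=noise))
```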
However, in a more complex auditory detection task, learning and development were shown to also involve improvements in listening strategy, with listeners becoming better able to selectively attend to task-relevant information.
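One standard way to quantify listening strategy is a 'perceptual weights' (reverse correlation) analysis: the level of each frequency band is perturbed from trial to trial, and each band's perturbations are correlated with the listener's responses. A sketch with simulated data (this names the general technique, not necessarily the exact analysis used in the thesis):

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_bands = 5000, 6

# Trial-by-trial level perturbations in each frequency band
perturb = rng.normal(0.0, 1.0, (n_trials, n_bands))

# A listener who has partly learned to weight the task-relevant band (band 2)
true_weights = np.array([0.2, 0.2, 1.0, 0.2, 0.2, 0.2])
responses = perturb @ true_weights + rng.normal(0.0, 1.0, n_trials) > 0

# Recover the listening strategy: correlate each band's perturbations
# with the responses (a simple 'perceptual weights' estimate)
est = np.array([np.corrcoef(perturb[:, b], responses)[0, 1]
                for b in range(n_bands)])
print(np.round(est / est.max(), 2))   # peaks at the task-relevant band
```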
Finally, task performance was shown to be constrained not just by the strength of the sensory evidence, but also by non-sensory factors such as bias and attentiveness.
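These non-sensory factors can be captured by extra parameters in the psychometric function: a lower asymptote ('guess' rate, reflecting chance performance and response bias) and a lapse rate, which caps performance below 100% no matter how strong the stimulus. A sketch:

```python
import numpy as np

def psychometric(x, threshold, slope, guess=0.5, lapse=0.0):
    """Psychometric function with non-sensory parameters: `guess` sets the
    lower asymptote (chance performance / bias) and `lapse` caps the upper
    asymptote, reflecting inattentiveness."""
    p = 1.0 / (1.0 + np.exp(-slope * (x - threshold)))
    return guess + (1.0 - guess - lapse) * p

x = np.linspace(-3, 3, 7)
print(np.round(psychometric(x, 0.0, 2.0, lapse=0.00), 2))  # asymptotes at 1.0
print(np.round(psychometric(x, 0.0, 2.0, lapse=0.10), 2))  # capped at 0.90
```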