4-YEAR PhD IN NEUROSCIENCE
Department of Psychology
My research group works on vision, with a particular focus on the perception of visual motion. We combine computational modelling of visual processes with detailed psychophysical experiments to tackle problems such as how we encode local velocities from the changing images on the retina, how local motion vectors are grouped together to deliver object motion, how motion allows spatial prediction and influences apparent spatial location, and how motion adaptation affects time perception. At a higher level, we also study the encoding of complex motion patterns in natural scenes and in the communicative and expressive non-rigid movement of the face. Potential PhD projects include:
1. The neural basis of the encoding of identity from face motion using expressions projected onto photorealistic average avatars.
2. Forward models in perception: the role of spatial prediction in motion coding.
3. Fusion, the complement to disparity: how do we fuse the two eyes' views?
Arnold, D.H. & Johnston, A. (2003). Motion induced spatial conflict. Nature, 425, 181-184.
Johnston, A., Arnold, D.H. & Nishida, S. (2006). Spatially localised distortions of event time. Current Biology, 16, 472-477.
Roach, N.W., McGraw, P.V. & Johnston, A. (2011). Visual motion induces a forward prediction of spatial pattern. Current Biology, 21(9), 740-745.
Cook, R., Johnston, A. & Heyes, C. (2012). Facial self-imitation: objective measurement reveals no improvement without visual feedback. Psychological Science, published online 29 November 2012. DOI: 10.1177/0956797612452568