At UCL Computer Science, our Immersive and Visual Computing research combines advanced theory with creativity and a deep understanding of how people perceive the world. Together, we’re transforming how people connect with and navigate digital environments.
Immersive and visual research is about creating digital experiences that feel natural and meaningful to people. It focuses on understanding how we see, hear and interact with the world, and on how we can use that knowledge to build technologies that improve our abilities and experiences – and open up new possibilities in areas like healthcare and entertainment.
Our research spans the entire technology stack – from hardware to algorithms to user experience – encompassing the capture, modelling and display of multisensory worlds. We focus on building rich, expressive environments that users can step into and explore, whether scanned from real life or created from scratch.
This work focuses on modelling only the phenomena most relevant to human perception, from real-time capture for augmented reality to offline capture of complex scenes that can be brought to life and expanded for creative applications.
Collaboration is key. We work with anthropologists and artists to help them tell stories, we partner with global businesses to help them develop new products, and we push the boundaries of accessibility, harnessing open-source technology so everyone can get involved.

What kind of projects are we working on at UCL Computer Science?
AI-generated imagery and video
We're pioneering techniques for creating AI-generated content from simple text prompts. Our research has also made it possible to create low-cost 3D models of people from 2D images.
This has contributed to advances that companies like Synthesia – a £2 billion unicorn co-founded by UCL Computer Science professor Lourdes Agapito – have used to develop platforms that allow video creation in 140+ languages.
It’s also a great example of how we’re nurturing the entrepreneurial aspirations of our staff and students and building bridges between academia and business.
Immersive virtual environments
How do you make virtual reality feel truly real? What makes people feel present in digital spaces? How can they work together naturally in virtual worlds?
Through our Immersive Virtual Environments Laboratory, we’re developing open-source tools like Ubiq that create more meaningful virtual experiences.
Our software has enabled pharmacy students to explore virtual labs and visualise molecular structures in 3D, while cardiology applications are helping medical professionals better understand heart conditions and shape the future of cardiovascular science.
Multi-sensory interfaces and acoustic technologies
We're developing novel ways to weave together computer graphics with physics, psychology and engineering. Our Multi-Sensory Devices group, for example, is advancing acoustic levitation technology, using sound waves to make particles float in mid-air.
This research goes beyond visual displays to create true multi-sensory experiences with applications ranging from holographic displays to targeted drug delivery.
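To give a flavour of the physics behind acoustic levitation: in a simple single-axis levitator, particles are trapped near the pressure nodes of a standing ultrasound wave, which sit half a wavelength apart. The sketch below is purely illustrative (the frequency, cavity geometry and function names are our own assumptions, not the group's actual tooling), using the common 40 kHz ultrasonic emitter frequency.

```python
# Illustrative sketch only – not the Multi-Sensory Devices group's code.
# In an idealised single-axis acoustic levitator, small particles are
# trapped near the pressure nodes of a standing wave, spaced half a
# wavelength apart along the axis between emitter and reflector.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C


def node_spacing(frequency_hz: float) -> float:
    """Distance between adjacent pressure nodes, in metres."""
    wavelength = SPEED_OF_SOUND / frequency_hz
    return wavelength / 2


def node_positions(frequency_hz: float, cavity_length_m: float) -> list[float]:
    """Axial positions of pressure nodes, assuming an idealised
    cavity with a node at the reflector (position 0)."""
    spacing = node_spacing(frequency_hz)
    n_nodes = int(cavity_length_m / spacing) + 1
    return [i * spacing for i in range(n_nodes)]


spacing_mm = node_spacing(40_000) * 1000
print(f"Node spacing at 40 kHz: {spacing_mm:.2f} mm")  # ~4.29 mm
```

At 40 kHz the nodes sit a little over 4 mm apart, which is why acoustic levitators of this kind handle only small, lightweight particles.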
What’s coming next?
We're exploring ways to make advanced visual software and hardware more cost-effective, and therefore accessible to more people.
Among other things, our research into virtual collaboration will, ultimately, mean less travel and a lower environmental impact, while our ground-breaking work in perceptual rendering aims to usher in a new generation of ever-more realistic and efficient displays.
Through industry partnerships, academic collaborations and cross-disciplinary approaches, we’re also sparking innovation in how people understand and interact with digital information across applications that span healthcare, education, entertainment and beyond.