
Touchless Computing: UCL MotionInput 3

MotionInput (MI3) is a pioneering suite of touchless computing technologies developed by UCL Computer Science students in collaboration with leading industry partners: Intel, Microsoft, IBM, Google, the NHS with Great Ormond Street Hospital for Children (GOSH DRIVE unit), and the UCLH Institute for Child Health.

Video: https://www.youtube.com/watch?v=jtKWr7FQ-bo

Video credit: Intel

Latest updates

MotionInput v.3.4 featured on BBC News and BBC Click

The latest version of MotionInput, v3.4, was recently highlighted on BBC News and BBC Click (13:45). The update introduces 24 new AI tools that improve how the software interprets users' movements and sounds, making it more useful for both accessibility and gaming. It now includes technologies that analyse where a user is looking and how their body moves.

Version 3.4 will be available globally in May 2024. For more details on these AI-enhanced features, please visit our new spin-out company, MotionInput Games.

Visit MotionInput Games

MI3 Facial Navigation v3.2 available on the Microsoft Store

UCL Computer Science academics and students have developed free Assistive Technology software that lets users navigate their Windows PCs with a webcam. It tracks the user's nose as the mouse pointer, with either facial gestures or speech commands used for mouse clicks (and more!).

For further instructions and guidance on how to use the software, visit the app's homepage www.facenav.org.

Download via facenav.org

What is UCL MotionInput?

UCL MotionInput, now in its 3rd generation, is a major software package, launched globally, that enables touchless computer interaction spanning nearly the full range of human movement using just a webcam and a PC.

The platform allows users to control and interact with a computer using physical gestures captured by their web camera. It replaces the input functionality normally provided by a keyboard, mouse or joypad with movements of the hands, head, eyes, mouth, or even the full body's joints.
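As a rough illustration of the underlying idea (a minimal sketch, not UCL MotionInput's actual implementation), open-source libraries such as MediaPipe and PyAutoGUI can map a webcam-tracked index fingertip to the mouse pointer in a few lines of Python:

```python
# Sketch only: webcam-to-pointer control with MediaPipe Hands + PyAutoGUI.
# This is NOT the MotionInput codebase, just the general technique.
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror so movement feels natural
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Landmark 8 is the index fingertip; coordinates are normalised 0..1.
        tip = results.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(int(tip.x * screen_w), int(tip.y * screen_h))
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
```

A production system adds smoothing, calibration and gesture recognition on top of this loop, but the core pipeline — camera frame in, landmark out, input event dispatched — is the same.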

How UCL MotionInput started

UCL MotionInput was first created in response to the global COVID-19 pandemic.

As lockdown forced staff and students away from campus, UCL launched a range of initiatives to make remote teaching work for all. In particular, UCL began conversations with the UK's NHS, Intel, Microsoft and IBM about possible additions to the NEWS2 protocol, a system used by the NHS to triage patients quickly but which required physical touch and examination.

At the same time, machine learning frameworks such as TensorFlow, and computer vision techniques such as convolutional neural network (CNN) training models, were developing rapidly.

This led to UCL MotionInput being proposed as a final year project, through UCL Computer Science’s Industry Exchange Network (IXN), in September 2020.

Version 3

The 3rd generation of MotionInput enables a wide range of combinations of human movements, together with federated and offline speech processing for live captioning and speech commands, all targeting existing applications. It delivers several orders of magnitude improvement in efficiency over earlier versions, has an architecture designed to support the growth of an ecosystem of touchless computing apps, and is beginning the journey towards building for multiple platforms.

Video playlist: https://www.youtube.com/watch?v=dbFXT8nJvX4&list=PLItOc9xhnCnidFTWh95oh2...

What are the benefits?

The software has many features and benefits.

Any screen is now a multi-touch touchscreen

MotionInput turns any screen into a multi-touch touchscreen that can be used anywhere: in schools and hospitals as well as at home.

A lecturer can control PowerPoint without reaching for the laptop. A chef can navigate pages without touching the screen. A surgeon can review and annotate CT scan images without needing assistance to navigate the computer.

Facial navigation

Facial navigation enables a greater degree of accessibility by letting a user navigate with their nose (for example, users with spinal conditions) or with their eyes (for more severe conditions).

The user can mix this with various keywords and customise their own applications with shortcut phrases.

For example, saying “click”, “right click” or “double click” performs those actions. The same actions can be triggered with facial gestures such as smiling, raising an eyebrow or opening the mouth.
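To give a feel for how a facial gesture might become a click (a hypothetical sketch, not the project's code), the vertical distance between MediaPipe Face Mesh lip landmarks can be thresholded to detect an open mouth:

```python
# Sketch: trigger a mouse click when the mouth opens.
# Landmarks 13 (upper inner lip) and 14 (lower inner lip) are standard
# MediaPipe Face Mesh indices; the 0.05 threshold is an illustrative guess.
import cv2
import mediapipe as mp
import pyautogui

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)
mouth_was_open = False

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        mouth_open = (lm[14].y - lm[13].y) > 0.05  # normalised units
        if mouth_open and not mouth_was_open:
            pyautogui.click()  # fire once per open-mouth gesture
        mouth_was_open = mouth_open
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
```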

VR without the VR

It enables you to control your existing Windows games with movements and speech, either by mimicking your game control pad with your two hands or by placing hit targets in the space around you.

It puts you into your games: you can walk on the spot to move your game character and respond with in-game actions.
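One plausible way to turn stepping on the spot into in-game movement (an illustrative sketch under assumed thresholds, not the shipped implementation) is to hold a movement key while pose-estimated knee landmarks alternate in height:

```python
# Sketch: hold the 'w' key while the player walks on the spot.
# Landmarks 25/26 are MediaPipe Pose's left/right knees; 0.05 is an
# illustrative threshold, not a value from the real software.
import cv2
import mediapipe as mp
import pyautogui

pose = mp.solutions.pose.Pose()
cap = cv2.VideoCapture(0)
walking = False

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        lm = results.pose_landmarks.landmark
        # A large height difference between knees suggests stepping in place.
        stepping = abs(lm[25].y - lm[26].y) > 0.05
        if stepping and not walking:
            pyautogui.keyDown('w')
        elif not stepping and walking:
            pyautogui.keyUp('w')
        walking = stepping
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
```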

In-air keyboard and drawing

For industrial settings, it lets you type on a virtual keyboard, or draw and annotate in the air with digital inking and depth.

Live captioning, dictation and phrase shortcuts 

Users can make use of live captioning, and can call out phrases that trigger keyboard shortcuts, in any existing Windows application or game. The phrases can be defined by the user.
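A user-defined phrase table could be as simple as a dictionary routing recognised phrases to key combinations. The sketch below is hypothetical — the phrase strings, the mapping format, and the recognition step (left abstract here) are all illustrative, not MotionInput's actual configuration:

```python
# Sketch: map user-defined spoken phrases to keyboard shortcuts.
# A real system would feed this from an offline speech recogniser.
import pyautogui

PHRASE_SHORTCUTS = {
    "save file": ("ctrl", "s"),
    "next slide": ("right",),
    "undo that": ("ctrl", "z"),
}

def handle_phrase(phrase: str) -> None:
    """Press the shortcut mapped to a recognised phrase, if any."""
    keys = PHRASE_SHORTCUTS.get(phrase.lower().strip())
    if keys:
        pyautogui.hotkey(*keys)

# Example: handle_phrase("next slide") presses the right-arrow key.
```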

Who is involved in the project?

[Photo: the UCL MotionInput team standing in front of the UCL portico building]

Academic

UCL MotionInput 3 has the following project academics:

  • Prof Dean Mohamedally (Project lead and Director)
  • Prof Graham Roberts
  • Mrs Sheena Visram
  • Dr Atia Rafiq (honorary)
  • Prof Joseph Connor (honorary) at UCL

Industry

  • Prof Lee Stott (Microsoft)
  • Prof John McNamara (IBM)
  • Prof Neil Sebire (NHS/GOSH DRIVE)
  • Prof Costas Stylianou (Intel)
  • Prof Phillippa Chick (Intel)
  • Cathy Cummings (Intl. Alliance of ALS/MND Associations)

Senior directors, engineers, designers and testers from Intel, IBM, Microsoft, Google and the NHS (including GOSH DRIVE) have participated at various stages over several years of development, engaging with the students, enabling them to present their findings, and giving feedback to refine the project's solutions.

UCL Students

University College London students, as part of the UCL Industry Exchange Network programme:

Version 1 (Summer 2020)

  • Lu Han
  • Emil Almazov

Version 2 (Summer 2021)

  • Ashild Kummen
  • Ali Hassan
  • Chenuka Ratwatte
  • Guanlin Li
  • Quianying Lu
  • Robert Shaw
  • Teodora Ganeva
  • Yang Zou

Version 3 (Oct 2021-June 2022)

Version 3.0-3.02

  • Sinead Tattan – Lead Student Project Architect (Final Year CS)
  • Carmen Meinson – Lead Software Development Architect (Second Year CS)
  • Aaisha Niraula
  • Abinav Baskar
  • Adi Bozzhanov
  • Alexandros Theofanous
  • Ali Amiri Souri
  • Andrzej Szablewski
  • Aryan Jani
  • Aryan Nevgi
  • Ben Threader
  • Chris Zhang
  • Clarissa Sandejas
  • Daniel Rempel
  • Eesha Irfan
  • Eva Miah
  • Fawziyah Hussain
  • Felipe Jin Li
  • James Zhong
  • Jason Ho
  • Jianxuan Cao
  • Jiaying Huang
  • Kaiwen Xue
  • Karunya Selvaratnam
  • Keyur Narotomo
  • Kyujin Sim (Chris)
  • Lama Alluwaymi
  • Mari Takeuchi
  • Michelle Chan
  • Oluwaponmile Femi-Sunmaila
  • Phoenix Sun
  • Pun Kamthornthip
  • Radu-Bogdan Priboi
  • Rakshita Kumar
  • Raquel Sofia Fernandes Silva
  • Samuel Emilolorun
  • Elyn See Kailin
  • Siam Islam
  • Sibghah Khan
  • Sricharan Sanakkayala
  • Thomas Langford
  • Tianhao Chen
  • Yadong(Adam) Liu
  • Yan Tung Cheryl Lai
  • Zemiao Huang

Versions 3.03-3.1 (June 2022-September 2022)

  • Anelia Gardarzhieva (Lead Architect 3.1)
  • Gincarlo Grasso
  • Thomas Langford
  • Jiahui Shi
  • Vivek Vijay
  • Elynor Kamil

Version 3.11 Windows Store Build (December 2022)

  • Alex Clarke (Lead Architect 3.11 In-Air Multitouch and 3.2 Facial Navigation)
  • Joseph Marcillo-Coronado (Compiler and Windows Store Team Lead)
  • Nerea Sainz De La Maza Melon (Compiler and Windows Store Team Lead 3.2)
  • Anelia Gardarzhieva (Lead Teaching Assistant and MFC Base Code)
  • Abriele Qudsi
  • Chaitu Nookala

Version 3.2 Full Team (Sept 2022-April 2023)

  • Anelia Gardarzhieva and Mohseen Hussain (Joint Lead Architects v3.2)
  • Aaryaman Sharma
  • Abdullah Ahmed
  • Abdurrahmaan Ali
  • Abid Ali
  • Abriele Qudsi
  • Adil Omar-Mufti
  • Aishwarya Bandaru
  • Ajay Mahendrakumaran
  • Akram Ziane
  • Alex Clarke
  • Alexandra Irimia
  • Amir Solanki
  • Arvind Sethu
  • Aryan Agarwal
  • Baixu Chen
  • Calin Hadarean
  • Can Ertugrul
  • Chaitu Nookala
  • Chan Lee
  • Chidinma Ezeji
  • Chris Zhang
  • Damian Ziaber
  • David Nentwig
  • Donghyun Lee
  • Dongyeon Park
  • Eloise Vriesman
  • Fabian Bindley
  • Filip Trhlik
  • Filipp Kondrashov
  • Gauri Desai
  • Ghalia Alsayed
  • Hadi Khan
  • Harryn Oh
  • Imaad Zaffar
  • Jason Kim
  • Jianheng Huo
  • Jie Li
  • Joseph Marcillo-Coronado
  • Julia Xu
  • Kaartik Nagarajan
  • Liv Urwin
  • Luis Rodrigues Vieira
  • Luke Jackson
  • Maria Toma
  • Mateusz Krawczynski
  • Minghui Cai
  • Molly Zhu
  • Nandini Chavda
  • Nerea Sainz De La Maza Melon
  • Peter Xu
  • Pooja Chhaya
  • Rebecca Conforti
  • Robin Stewart
  • Setareh Eskandari
  • Taha Chowdhury
  • Takuya Boehringer
  • Tina Hou
  • Vincent King
  • Weiyi Zhang
  • Yidan Zhu
  • Youngwoo Jung
  • Zhaoyan Dong
  • Zineb Flahy