
Touchless Computing: UCL MotionInput 3

MotionInput is a pioneering touchless computing technology developed by UCL Computer Science students in collaboration with leading industry partners: Intel, Microsoft, IBM, Google and the NHS, with Great Ormond Street Hospital for Children (GOSH DRIVE unit) and the UCLH Institute for Child Health.

UCL MotionInput screenshot

What is UCL MotionInput?

UCL MotionInput, now in its third generation, is a major software package launched globally that enables touchless computer interaction across nearly the full range of human movement, using just a webcam and a PC.

The platform allows users to control and interact with a computer using physical gestures captured by their webcam. It replaces input normally provided by a keyboard, mouse or joypad with movements of the hands, head, eyes, mouth, or even the full body's joints.
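
The details below are not from the MotionInput codebase; they are a minimal sketch of the general idea, assuming the MediaPipe hand-tracking solution and the PyAutoGUI library to turn an index-fingertip landmark into mouse movement.

```python
# Minimal sketch (not MotionInput code): map an index fingertip, tracked by
# MediaPipe from webcam frames, onto the mouse cursor with PyAutoGUI.
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror the image so movement feels natural
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Landmark 8 is the index fingertip; coordinates are normalised to [0, 1].
        tip = results.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(int(tip.x * screen_w), int(tip.y * screen_h))
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```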

How UCL MotionInput started

UCL MotionInput was first created in response to the global COVID-19 pandemic.

As lockdown forced staff and students away from campus, UCL launched a range of initiatives to make remote teaching work for all. In particular, UCL began conversations with the UK’s NHS, Intel, Microsoft and IBM about possible additions to the NEWS2 protocol, a system used by the NHS to triage patients quickly but which required physical touch and examination.

At the same time, machine learning technologies such as TensorFlow and computer vision techniques such as convolutional neural network (CNN) training were advancing rapidly.

This led to UCL MotionInput being proposed as a final year project, through UCL Computer Science’s Industry Exchange Network (IXN), in September 2020.

How has MotionInput been developed?

Version 1 

Version 1 (V1) was released in January 2021 and was developed by two final-year Computer Science BSc students, Lu Han and Emil Almazov, supervised by Prof Dean Mohamedally.

MotionInput V1 supported two modes of interaction: a desk mode, in which the user replaced mouse control by holding a pen in the air while wearing coloured surgical gloves, and an exercises mode, which let the user interact through exercises such as walking on the spot.
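
As a rough illustration of the colour-based tracking that the glove approach implies (not the original V1 code), the following OpenCV sketch thresholds each webcam frame for an assumed glove colour and uses the centroid of the matching region as a pointer position; the HSV range is a placeholder that would need tuning.

```python
# Illustrative colour-based tracking sketch: find the glove-coloured region in
# each frame and treat its centroid as the pointer position.
import cv2
import numpy as np

LOWER_HSV = np.array([100, 120, 70])    # assumed range for a blue glove
UPPER_HSV = np.array([130, 255, 255])

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    mask = cv2.inRange(cv2.cvtColor(frame, cv2.COLOR_BGR2HSV), LOWER_HSV, UPPER_HSV)
    moments = cv2.moments(mask)
    if moments["m00"] > 0:
        # Centroid of the glove-coloured pixels acts as the tracked point.
        cx = int(moments["m10"] / moments["m00"])
        cy = int(moments["m01"] / moments["m00"])
        cv2.circle(frame, (cx, cy), 8, (0, 255, 0), -1)
    cv2.imshow("glove tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```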

Version 2 - Summer 2021

The project was then taken over by a group of Computer Science MSc students (see Microsoft article below), leading to the release of MotionInput V2, which redeveloped and expanded the functionality to four modes of interaction: hands, full body, eyes and head. Surgical gloves were no longer needed.

Individual modes of interaction were designed and demonstrated to designers and engineers on the Microsoft Windows 11 team.

YouTube: https://youtu.be/mg-fugUSpdI

Version 3 – 2021-22

The third generation of MotionInput enables a wide range of combinations of human movements for existing applications, together with federated and offline speech processing for live captioning and speech commands. It brings efficiency improvements of several orders of magnitude and an architecture designed to support the growth of an ecosystem of touchless computing apps. It is also starting the journey towards building for multiple platforms.

YouTube: https://www.youtube.com/watch?v=dbFXT8nJvX4&list=PLItOc9xhnCnidFTWh95oh2...

What are the benefits?

The software has many features and benefits.

Any screen is now a multi-touch touchscreen

Any screen effectively becomes a multi-touch touchscreen that can be used anywhere: in schools and hospitals as well as at home.

A lecturer can control PowerPoint without reaching for the laptop. A chef can navigate pages without touching the screen. A surgeon can review and annotate CT scan images without needing assistance to navigate the computer.

Facial navigation

Facial navigation enables a greater degree of accessibility by letting users navigate with their nose, for example if they have a spinal condition, or with their eyes for more severe conditions.

The user can mix this with various keywords and customise their own applications with shortcut phrases.

For example, saying “click”, “right click” or “double click” performs those actions. The same actions can be triggered with facial gestures such as smiling, raising an eyebrow or opening the mouth.
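
A hedged sketch of how a facial gesture might be mapped to a click, assuming MediaPipe Face Mesh and PyAutoGUI; the lip landmark indices and the open-mouth threshold are assumptions for illustration, not MotionInput's actual logic.

```python
# Illustrative sketch only: detect an "open mouth" gesture with MediaPipe Face
# Mesh and fire a single mouse click per opening. Landmarks 13/14 (upper/lower
# inner lip) and the 0.05 threshold are assumptions for this example.
import cv2
import mediapipe as mp
import pyautogui

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)
mouth_was_open = False

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        mouth_open = abs(lm[13].y - lm[14].y) > 0.05  # normalised lip distance
        if mouth_open and not mouth_was_open:
            pyautogui.click()  # edge-trigger: one click per mouth opening
        mouth_was_open = mouth_open
    cv2.imshow("facial navigation", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```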

VR without the VR

It enables you to control your existing Windows games with movement and speech, either by mimicking your game control pad with your two hands or by placing hit targets in the space around you.

It puts you into your games: you can walk on the spot to move your character and respond with in-game actions.

In-air keyboard and drawing

For industrial settings, it lets you type on a virtual keyboard or draw and annotate with digital inking and depth, all in the air.

Live captioning, dictation and phrase shortcuts 

It lets users make use of live captioning and call out phrases that trigger keyboard shortcuts in any existing Windows application or game. The phrases can be defined by the user.
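
A minimal sketch of user-defined phrase shortcuts, assuming the offline Vosk speech-recognition library, PyAudio for microphone capture and PyAutoGUI to replay key presses; the phrase table and model path below are invented for the example and are not MotionInput's configuration format.

```python
# Sketch: recognise spoken phrases offline and replay them as keyboard shortcuts.
import json
import pyaudio
import pyautogui
from vosk import Model, KaldiRecognizer

PHRASE_SHORTCUTS = {            # user-definable phrase -> key sequence (example values)
    "next slide": ["right"],
    "previous slide": ["left"],
    "copy that": ["ctrl", "c"],
}

model = Model("vosk-model-small-en-us-0.15")   # path to a downloaded offline model
rec = KaldiRecognizer(model, 16000)

mic = pyaudio.PyAudio().open(format=pyaudio.paInt16, channels=1,
                             rate=16000, input=True, frames_per_buffer=4000)

while True:
    data = mic.read(4000, exception_on_overflow=False)
    if rec.AcceptWaveform(data):
        text = json.loads(rec.Result()).get("text", "")
        if text in PHRASE_SHORTCUTS:
            pyautogui.hotkey(*PHRASE_SHORTCUTS[text])  # replay as a key press
```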

Who is involved in the project?

UCL MotionInput team standing in front of the UCL portico building

Academic

UCL MotionInput 3 has the following project academics at UCL:

  • Prof Dean Mohamedally (Project lead and Director)
  • Prof Graham Roberts
  • Mrs Sheena Visram
  • Dr Atia Rafiq (honorary)
  • Prof Joseph Connor (honorary)

Industry

The industry clients are:

  • Prof Lee Stott (Microsoft)
  • Prof John McNamara (IBM)
  • Prof Neil Sebire (NHS/GOSH DRIVE)
  • Costas Stylianou (Intel)
  • Phillippa Chick (Intel)

Senior directors, engineers, designers and testers from Intel, IBM, Microsoft, Google and the NHS (including GOSH DRIVE) have participated at various stages over the two years of development, engaging with the students, enabling them to present their findings and giving them feedback to refine the project's solutions.

UCL Students for Version 3

54 students across undergraduate and postgraduate programmes and year groups formed the first Touchless Computing group at UCL Computer Science and contributed to this project:

  • Sinead Tattan – Lead Student Project Architect (Final Year CS)
  • Carmen Meinson – Lead Software Development Architect (Second Year CS)
  • Aaisha Niraula
  • Abinav Baskar
  • Adi Bozzhanov
  • Alexandros Theofanous
  • Ali Amiri Souri
  • Andrzej Szablewski
  • Aryan Jani
  • Aryan Nevgi
  • Ben Threader
  • Chris Zhang
  • Clarissa Sandejas
  • Daniel Rempel
  • Eesha Irfan
  • Eva Miah
  • Fawziyah Hussain
  • Felipe Jin Li
  • James Zhong
  • Jason Ho
  • Jianxuan Cao
  • Jiaying Huang
  • Kaiwen Xue
  • Karunya Selvaratnam
  • Keyur Narotomo
  • Kyujin Sim (Chris)
  • Lama Alluwaymi
  • Mari Takeuchi
  • Michelle Chan
  • Oluwaponmile Femi-Sunmaila
  • Phoenix Sun
  • Pun Kamthornthip
  • Radu-Bogdan Priboi
  • Rakshita Kumar
  • Raquel Sofia Fernandes Silva
  • Samuel Emilolorun
  • Elyn See Kailin
  • Siam Islam
  • Sibghah Khan
  • Sricharan Sanakkayala
  • Thomas Langford
  • Tianhao Chen
  • Yadong (Adam) Liu
  • Yan Tung Cheryl Lai
  • Zemiao Huang