Chengxu Zhou, an Associate Professor in UCL Computer Science, has been awarded an NVIDIA Academic Grant to support ongoing work in humanoid robotics, focused on real-time, audio-driven whole-body motion.
The award provides dedicated compute resources for both training and deployment, including two NVIDIA RTX PRO 6000 GPUs and two Jetson AGX Orin devices. This additional hardware will accelerate experimentation by shortening training cycles and enabling faster iteration between simulation and real-robot testing.
Audio-driven movement for humanoid robots
The grant supports the Beat-to-Body project, which explores how humanoid robots can listen to sound and respond with expressive, physically plausible and safe whole-body movement. Rather than relying on pre-scripted choreography, the system allows a robot to adapt live to rhythmic and tonal cues such as tempo, accents and changes in loudness.
For example, a steady beat from clapping or music can influence a robot’s stepping rhythm and upper-body motion, while variations in timbre and loudness can shape different movement styles, from sharper, percussive actions to smoother, more fluid motion. The aim is responsive behaviour that adjusts in real time as the audio changes.
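To make this concrete, the sketch below shows one way audio cues of this kind could be turned into motion style parameters. It is an illustrative example only, not the project's code: the function name, parameter names and mappings are assumptions, and it uses the open-source librosa library to estimate tempo, accent strength and loudness from a short clip.

```python
# Illustrative sketch only: maps basic audio cues (tempo, accents, loudness)
# to hypothetical whole-body motion style parameters. Not the project's code.
import librosa
import numpy as np


def audio_to_motion_params(wav_path: str) -> dict:
    """Estimate rhythmic and loudness cues from an audio clip and map them
    to example motion style parameters."""
    y, sr = librosa.load(wav_path, mono=True)

    # Tempo and beat positions could drive the stepping rhythm.
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)

    # Onset strength captures accents; RMS energy captures loudness changes.
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)
    rms = librosa.feature.rms(y=y)[0]

    # Example mappings: sharper accents -> more percussive motion,
    # louder passages -> larger movement amplitude.
    return {
        "step_frequency_hz": float(np.atleast_1d(tempo)[0]) / 60.0,
        "sharpness": float(np.clip(onset_env.mean() / (onset_env.max() + 1e-8), 0, 1)),
        "amplitude": float(np.clip(rms.mean() / (rms.max() + 1e-8), 0, 1)),
        "beat_times_s": librosa.frames_to_time(beat_frames, sr=sr).tolist(),
    }
```

In a live setting the same cues would be extracted from streaming audio rather than a file, so that the motion parameters can update as the sound changes.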
From simulation to real-time interaction
A key aspect of the work is its audio-first, on-robot execution. Training is carried out at scale in simulation using GPU compute, while the Jetson hardware enables low-latency inference directly on the robot, reducing reliance on off-board processing and supporting responsive reactions to sound cues.
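As a rough illustration of what on-robot execution involves, the sketch below shows a fixed-rate control loop that queries a small policy network exported after simulation training. It is a sketch under stated assumptions, not the project's implementation: the file name, observation and action sizes, control rate and helper functions are all hypothetical, and it uses PyTorch's TorchScript loading as one plausible deployment route on a Jetson device.

```python
# Minimal sketch (assumptions, not the project's implementation) of a
# low-latency on-robot inference loop: a policy trained in simulation is
# exported (here via TorchScript) and queried at a fixed control rate,
# with audio features folded into the observation.
import time
import numpy as np
import torch

CONTROL_HZ = 50            # hypothetical control rate
OBS_DIM, ACT_DIM = 64, 23  # hypothetical observation/action sizes

# Load a policy exported from the training pipeline; path is illustrative.
policy = torch.jit.load("policy_jetson.pt").eval()


def get_observation() -> np.ndarray:
    """Placeholder: concatenate robot state with the latest audio features."""
    return np.zeros(OBS_DIM, dtype=np.float32)


def send_action(action: np.ndarray) -> None:
    """Placeholder: forward joint targets to the low-level controller."""
    pass


period = 1.0 / CONTROL_HZ
with torch.no_grad():
    while True:
        start = time.monotonic()
        obs = torch.from_numpy(get_observation()).unsqueeze(0)
        action = policy(obs).squeeze(0).numpy()
        send_action(action)
        # Sleep for the remainder of the control period to keep latency bounded.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```

Running inference of this kind directly on the robot is what removes the dependence on off-board processing and keeps the reaction to sound cues fast.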
Applications and future work
Within robotics research, the project contributes to expressive whole-body control and human–robot interaction. Potential applications extend to interactive exhibits and performance contexts, and in the longer term could support simple coordination between multiple robots using shared audio cues.
The research is being led within UCL’s Humanoid Robotics Lab, which focuses on learning, control and interaction for humanoid systems. Immediate next steps include scaling up simulation-based training and demonstrating early closed-loop, audio-responsive behaviours on a humanoid platform.
More information about the group’s work is available on the Humanoid Robotics Lab website.