
Rough Path and Machine Learning Research Group


Advancements in Time Series Analysis for Computer Vision: Techniques, Applications, and Challenges

This workshop focuses on cutting-edge research on sequential machine learning methodologies and their applications in the field of computer vision.

Time series analysis plays a crucial role in various computer vision applications, including but not limited to action recognition, video surveillance, human behaviour understanding, SLAM and video generation. Analysing temporal data in these applications requires specialised techniques that can effectively capture temporal dependencies, motion dynamics, and spatiotemporal patterns.

This workshop will provide an overview of recent advancements in time series analysis techniques tailored to computer vision tasks. It will cover a broad spectrum of advanced methodologies along with their strengths and limitations in handling temporal data. Furthermore, it will delve into applications of time series analysis in computer vision, showcasing real-world use cases and success stories across various domains. Attendees will gain insights into how recent time series analysis techniques address challenges across computer vision, and will engage in discussions on potential solutions and future research directions.

Registration Link

https://forms.gle/XgxXc8pYwtRRhZgD7

Event information

Time: 13:00 - 17:00, 20 August 2024 (Tuesday).

Location: 25 Gordon Street (fifth floor), Lecture Theatre Maths 500, UCL.

Funding acknowledgement

The workshop is supported by the EPSRC programme grant EP/S026347/1 (DataSig).

Organisers

Hao Ni & Lei Jiang (UCL), Xin Zhang (SCUT) and Jingjing Deng (Durham University)

Speakers & Schedule

12:30    Arrivals 

13:00 -13:45     Prof. Yizhe Song (Surrey University)

TBC

13:45 - 14:20    Dr. Oya Celiktutan (King's College London)

The Role of Human Behaviour in Building Social Robots

Robots are progressively moving out of research laboratories and into real-world human environments, envisioned to provide companion care for elderly people, teach children with special needs, assist people in their day-to-day tasks at home or work, and offer service in public spaces. All these practical applications require that humans and robots work together in human environments, where interaction is unavoidable. There has therefore been an increasing effort towards advancing the perception and interaction capabilities of robots, making them recognise, adapt to, and respond to human behaviours and actions with increasing levels of autonomy. Given this context, in this talk I will give an overview of our ongoing research activities on the understanding and generation of human behaviour from a robotics perspective. In particular, I will present examples from our ongoing work on how to make robots navigate crowded social spaces, learn new tasks by simply watching humans, and learn how to adapt to their interaction partners. I will conclude by highlighting the challenges and open problems.

14:20 - 14:55    Dr. Xin Zhang (South China University of Technology)

Path Signature: A Time Series Analysis Approach and its Various Applications

This presentation explores the use of the Path Signature (PS) for time series analysis, focusing on its applications in gesture recognition and predicting infant cognitive scores. It highlights the effectiveness of PS features in overcoming challenges such as temporal variation and high-dimensional data. The gesture recognition method integrates PS with a Temporal Transformer Module, significantly improving accuracy. In predicting cognitive scores, PS features help in understanding brain region connectivity and developmental trajectories, leading to more accurate predictions. Moreover, how to organise or cluster the data to form meaningful paths is crucial, and recent work demonstrates its importance in both of the above applications. The findings demonstrate the versatility and efficacy of PS in diverse and complex data analysis tasks.
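As background for this talk (and not part of the speaker's material), the short Python sketch below illustrates the kind of feature extraction a path signature provides, using the open-source iisignature package; the toy data stream and truncation level are illustrative assumptions.

import numpy as np
import iisignature

# Toy data stream: 100 time steps of a 3-dimensional series
# (e.g. x/y/z coordinates of a tracked hand across a gesture clip).
path = np.cumsum(np.random.randn(100, 3), axis=0)

level = 3                            # truncation depth of the signature
features = iisignature.sig(path, level)

# The feature vector collects the iterated integrals of the path up to the
# chosen depth: 3 + 3**2 + 3**3 = 39 values for a 3-dimensional path at level 3.
print(features.shape)                # (39,)

The resulting fixed-length vector summarises the order and interaction of changes in the stream, which is why signature features are robust to temporal variation such as differences in gesture speed.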

14:55 - 15:30    Coffee Break

15:30 - 16:05    Dr. Michael Wray (Bristol University)

Unlocking the Temporal Dimension from the Egocentric Perspective

In this talk I will explore the use of context and the temporal dimension within egocentric videos. With devices such as the Meta Ray-Ban glasses and the Apple Vision Pro, wearables that record videos of the wearer's day-to-day life are becoming commonplace. I will highlight the difficulty of searching through long, untrimmed egocentric videos and why it is important. Next, I will present how the context of what comes before and after can help with user search. Finally, I will present how temporal consistency in videos can be key to unlocking image editing and alteration.

16:05 - 16:40    Dr. Kevin Schlegel (Vicon Motion Systems)

How to measure progress: Know your data and metrics!

We train a model, compute some metrics and use those to decide if it improves on what we had previously. But does this really mean progress? And what does it tell us about using the model in a real product? In this talk I will discuss how your data influences how meaningful your metrics are, hidden issues in widely used metrics, and the gap between the metrics used in publications and what is required to use a model in the real world.

16:40 - 17:00    General Discussion

17:00    Drinks & Food