
UCLIC Seminars

Robert J. K. Jacob – Tufts University and UCL Interaction Centre

Wednesday 22nd October 3pm, Malet Place Engineering Building 1.03

Title

Reality-Based Interaction, Next Generation User Interfaces, and Brain-Computer Interfaces

Abstract

I will begin with the notion of Reality-Based Interaction (RBI) as a unifying concept that ties together a large subset of the emerging generation of new, non-WIMP user interfaces.  It attempts to connect current paths of research in HCI and to provide a framework that can be used to understand, compare, and relate these new developments. Viewing them through the lens of RBI can provide insights for designers and allow us to find gaps or opportunities for future development.  I will briefly discuss some past work in my research group on a variety of next generation interfaces such as tangible interfaces and eye movement-based interaction techniques. Then I will discuss our current work on brain-computer interfaces and the more general area of implicit interaction.

Bio

Robert Jacob is a Professor of Computer Science at Tufts University, where his research interests are new interaction modes and techniques and user interface software; his current work focuses on adaptive brain-computer interfaces. He is currently a visiting professor at the University College London Interaction Centre; he has also been a visiting professor at the Université Paris-Sud and at the MIT Media Laboratory. Before coming to Tufts, he was in the Human-Computer Interaction Lab at the Naval Research Laboratory. He received his Ph.D. from Johns Hopkins University, and he is a member of the editorial boards of Human-Computer Interaction and the International Journal of Human-Computer Studies and a founding editorial board member of ACM Transactions on Computer-Human Interaction. He is Vice-President of ACM SIGCHI, and he has served as Papers Co-Chair of the CHI and UIST conferences and as Co-Chair of UIST and TEI. He was elected to the ACM CHI Academy in 2007, an honorary group of the principal leaders of the field of HCI, whose efforts have shaped the discipline and industry and have led research and innovation in human-computer interaction.

Seminars Location:

UCLIC research seminars are on Wednesdays at 3pm during term-time. Please see notices for confirmation of the room number for each seminar.

If you would like to come and give a seminar talk, or would like further details on any seminars listed here, please contact Ana Tajadura-Jiménez or Sandy Gould.

Rebecca Fiebrink, Goldsmiths, University of London – 15th October 2014

Title

Interactive Machine Learning for End-User Systems Building in Music Composition & Performance

Abstract

I build, study, teach about, and perform with new human-computer interfaces for real-time digital music performance. Much of my research concerns the use of supervised learning as a tool for musicians, artists, and composers to build digital musical instruments and other real-time interactive systems. Through the use of training data, these algorithms offer composers and instrument builders a means to specify the relationship between low-level, human-generated control signals (such as the outputs of gesturally-manipulated sensor interfaces, or audio captured by a microphone) and the desired computer response (such as a change in the parameters driving computer-generated audio). The task of creating an interactive system can therefore be formulated not as a task of writing and debugging code, but rather one of designing and revising a set of training examples that implicitly encode a target function, and of choosing and tuning an algorithm to learn that function.
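
To make this formulation concrete, here is a minimal sketch of what specifying a mapping by example might look like. It uses scikit-learn rather than the Wekinator itself, and the sensor features, synthesis parameters, choice of regressor, and example values are illustrative assumptions, not details from the talk.

    # A sketch of "programming by example": the mapping from control signals to
    # synthesis parameters is specified through training data, not explicit code.
    # Feature names, parameter names, and values are made up for illustration.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    # Each row pairs a low-level control signal (e.g. three sensor readings from
    # a gestural controller) with the desired computer response (e.g. pitch in Hz
    # and a normalised filter cutoff driving a synthesiser).
    X_train = np.array([
        [0.1, 0.9, 0.2],
        [0.8, 0.1, 0.5],
        [0.4, 0.4, 0.9],
    ])
    y_train = np.array([
        [220.0, 0.2],   # low pitch, mostly closed filter
        [440.0, 0.8],   # mid pitch, open filter
        [880.0, 0.5],   # high pitch, half-open filter
    ])

    # "Designing the instrument" becomes choosing and revising these examples and
    # tuning the learner; any multi-output regressor could stand in here.
    model = KNeighborsRegressor(n_neighbors=1)
    model.fit(X_train, y_train)

    # At performance time, each incoming sensor frame is mapped to synth parameters.
    frame = np.array([[0.6, 0.3, 0.7]])
    pitch_hz, cutoff = model.predict(frame)[0]
    print(f"pitch={pitch_hz:.1f} Hz, cutoff={cutoff:.2f}")   # pitch=880.0 Hz, cutoff=0.50

Revising the instrument then means editing the training examples (or demonstrating new ones) and refitting, rather than rewriting the mapping function by hand.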

In this talk, I will provide a brief introduction to interactive computer music and the use of supervised learning in this field. I will show a live musical demo of the software that I have created to enable non-computer-scientists to interactively apply standard supervised learning algorithms to music and other real-time problem domains. This software, called the Wekinator, supports human interaction throughout the entire supervised learning process, including the generation of training data by real-time demonstration and the evaluation of trained models through hands-on application to real-time inputs.

Drawing on my work with users applying the Wekinator to real-world problems, I'll discuss how data-driven methods can enable more effective approaches to building interactive systems, through supporting rapid prototyping and an embodied approach to design, and through “training” users to become better machine learning practitioners. I'll also discuss some of the remaining challenges at the intersection of machine learning and human-computer interaction that must be addressed for end users to apply machine learning more efficiently and effectively, especially in interactive contexts.

Bio

Rebecca Fiebrink is a Lecturer in Graphics and Interaction at Goldsmiths, University of London. As both a computer scientist and a musician, she is interested in creating and studying new technologies for music composition and performance. Much of her current work focuses on applications of machine learning to music: for example, how can machine learning algorithms help people to create new digital musical instruments by supporting rapid prototyping and a more embodied approach to design? How can these algorithms support composers in creating real-time, interactive performances in which computers listen to or observe human performers, then respond in musically appropriate ways? She is interested both in how techniques from computer science can support new forms of music-making, and in how applications in music and other creative domains demand new computational techniques and bring new perspectives to how technology might be used and by whom.

Fiebrink is the developer of the Wekinator system for real-time interactive machine learning, and she frequently collaborates with composers and artists on digital media projects. She has worked extensively as a co-director, performer, and composer with the Princeton Laptop Orchestra, which performed at Carnegie Hall and has been featured in the New York Times, the Philadelphia Inquirer, and NPR's All Things Considered. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule, where she helped to build the #1 iTunes app “I Am T-Pain.” Recently, Rebecca has enjoyed performing as the principal flutist in the Timmins Symphony Orchestra, as the keyboardist in the University of Washington computer science rock band “The Parody Bits,” and as a laptopist in the Princeton-based digital music ensemble Sideband. She holds a PhD in Computer Science from Princeton University and a Master's in Music Technology from McGill University.



Katja Hofmann, Microsoft Research – 8th October 2014

Abstract

Query Auto Completion (QAC) suggests possible queries to web search users from the moment they start entering a query. This popular feature of web search engines is thought to reduce physical and cognitive effort when formulating a query. Perhaps surprisingly, despite QAC being widely used, users’ interactions with it are poorly understood. This paper begins to address this gap. We present the results of an in-depth user study of user interactions with QAC in web search. While study participants completed web search tasks, we recorded their interactions using eye-tracking and client-side logging. This allows us to provide a first look at how users interact with QAC. We specifically focus on the effects of QAC ranking, by controlling the quality of the ranking in a within-subject design.
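
As a rough illustration of the feature under study (and not of the search engine, ranking model, or data examined in the paper), a basic QAC backend can be sketched as ranking stored queries that share the typed prefix, for example by past popularity. The query log and counts below are invented.

    # Toy query auto completion: suggest stored queries that start with the typed
    # prefix, most popular first. Queries and popularity counts are invented.
    from bisect import bisect_left, bisect_right

    query_log = [
        ("weather berlin", 120),
        ("weather london", 340),
        ("weather london tomorrow", 95),
        ("wedding venues", 60),
    ]
    queries = sorted(q for q, _ in query_log)
    popularity = dict(query_log)

    def complete(prefix, k=3):
        """Return up to k stored queries beginning with `prefix`, most popular first."""
        lo = bisect_left(queries, prefix)
        hi = bisect_right(queries, prefix + "\uffff")  # end of the prefix range
        candidates = queries[lo:hi]
        return sorted(candidates, key=lambda q: -popularity[q])[:k]

    print(complete("weather l"))  # ['weather london', 'weather london tomorrow']

In a ranking like this, the ordering of suggestions is exactly the kind of quality factor the study manipulates when examining how users attend to and select from the QAC list.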

We identify a strong position bias that is consistent across ranking conditions. Due to this strong position bias, ranking quality affects QAC usage. We also find an effect on task completion, in particular on the number of result pages visited. We show how these effects can be explained by a combination of searchers’ behavior patterns, namely monitoring or ignoring QAC, and searching for spelling support or complete queries to express a search intent. We conclude the paper with a discussion of the important implications of our findings for QAC evaluation.

Bio

Dr. Katja Hofmann is a postdoctoral researcher in the Machine Learning and Perception group at Microsoft Research Cambridge. Her research focuses on online evaluation and online learning, with the goal of developing interactive systems that learn directly from their users. This work is highly interdisciplinary, and brings together and expands insights from information retrieval, reinforcement learning, and human-computer interaction.

Duncan Brumby, UCL Interaction Centre – 1st October 2014

Title

Improving the everyday interactions with your phone, and maybe medical devices too

Abstract

Smartphones are a pretty big deal. Many of us now begin our day with our phone’s alarm clock. On the way to work we read email while listening to music. We use our phone to navigate novel cities. At the end of the day, we relax by queuing up content on our phone to watch on a connected television. All of this is done on a small computer, which weighs the same as 12 coins and has a tiny 4-inch screen. Smartphones are a pretty big deal. In this talk, I will describe our recent work that has investigated how low-level design decisions influence the way that people use and interact with their phone. First, I will consider how the auto-locking feature on a phone can dissuade users from regularly interleaving attention between their phone and other ongoing activities (Brumby & Seyedi, mobileHCI 2012). Second, I will consider how current generation smartphones handle incoming calls, and explore alternatives to the dominant full-screen notification model, which forcibly interrupts whatever activity the user was already engaged in (Böhmer et al., CHI 2014). Finally, I will discuss our recent work investigating how people search for content on a display (Brumby et al., CHI 2014).

Bio

Duncan Brumby is a Senior Lecturer at University College London, working in the UCL Interaction Centre. He received his doctorate in Psychology from Cardiff University in 2005, after which he was a post-doc in Computer Science at Drexel University until joining UCL in 2007. Dr. Brumby’s research has been published in leading HCI and Cognitive Science outlets. His work on multitasking has received best paper nominations at CHI (2014, 2012, 2007), and his work on interactive search is among the most-cited articles published in the journal Human-Computer Interaction from 2008-2010. To support his work, Dr. Brumby has attracted funding from the EPSRC. He is an Associate Editor for the International Journal of Human-Computer Studies and an Associate Chair for the ACM CHI conference (2012-2015) and the ACM mobileHCI conference (2012-2013).
