UCLIC Seminars

Barry Brown, Mobile Life, Stockholm University

Wednesday 26th November 3pm, Bentham House, B10 Seminar Room 1


On the iPhone: studying the co-present use of mobile devices


Over the last three years we have collected hundreds of hours of recordings of mobile device use in diverse settings. We have recorded drivers using GPS to navigate, iPhone use captured with wearable cameras, and remote recordings of mobile phone screens with ambient audio. These videos let us document how mobile devices have become threaded into diverse worlds of activity and how reliant we have become on them. In this talk I will focus on the interaction and talk around mobile devices, arguing that this can be as important as interaction with mobile devices. A web search might be shared with a friend, a GPS's instructions can become the subject of a joke, or the composition of a text message discussed with a partner. Our videos let us see how conversations are influenced by mobile devices, through providing topics and interruptions, but also how device use is co-ordinated to fit with conversation, such as showing or narrating on-phone activity.


Barry is a Professor of Human-Computer Interaction at Stockholm University, and is Research Director of the Mobile Life VINN Excellence Centre. His recent work has focused on the sociology and design of leisure technologies - computer systems for leisure and pleasure. In over 100 publications he has discussed activities as diverse as games, tourism, museum visiting, the use of maps, television watching and sport spectating. He recently co-edited the Sage Handbook of Digital Technology Research, and his new book, “Enjoying Machines”, is out next year with MIT Press.

Seminars Location:

UCLIC research seminars are on Wednesdays at 3pm during term-time. Please see notices for confirmation of the room number for each seminar.

If you would like to come and give a seminar talk, or would like further details on any seminars listed here, please contact Ana Tajadura-Jiménez or Sandy Gould.

Sam Gilbert, University College London – 19th November 2014


Strategic ‘offloading’ of delayed intentions into the external environment


In everyday life, we frequently use external tools such as diaries and smartphone reminders to help us to remember delayed intentions. In this way, our intentions are represented in distributed systems extending beyond our brains and bodies. However, surprisingly, there has been little empirical research into the causes and consequences of ‘intention offloading’. In this talk I will present a series of studies - using online web-based tasks and functional neuroimaging - that address these questions. These studies show that participants are highly sensitive to task demands when deciding whether or not to offload their intentions, and point to a critical influence of metacognitive confidence evaluations. Understanding these factors can inform the efficient design of artefacts to promote behavioural independence.


Sam Gilbert completed his PhD and postdoctoral research at the UCL Institute of Cognitive Neuroscience and is now a Royal Society University Research Fellow. He has also worked at New York University. His main research interests are the cognitive neuroscience of executive functions, prospective memory, and social cognition, and the functional architecture of the human frontal lobes.

Homepage: samgilbert.net

Carolyn Canfield, University of British Columbia – 12th November 2014


Experience based co-design for system transformation led by older persons moving in, out of and across Ontario’s healthcare system


Experience based co-design mobilizes the unique expertise of patients, their families and carers for healthcare improvement. The clients of care systems can best identify needs and direct development for successful solutions to close service gaps and improve outcomes that matter to patients.

Is it really possible to defer to those dependent on care to identify priorities for transformation, prepare specifications for technological aids and then co-produce innovative design, development, testing, implementation, evaluation and spread? If you’re not involving the end user in your technological solution, you are likely missing the problem.

Within the context of Canadian healthcare, with its similarities and contrasts to the NHS, Ontario's Northumberland PATH project is empowering 300 older persons with complex health conditions to reflect, redefine and reconfigure health services using technology to assist. Patient pursuit of wellbeing crosses system silos and targets user-identified drivers for an improved quality of life. I’ll introduce you to this innovative project’s origins, realization and achievement as a leading model of experience-based co-design that is rapidly implementing customer-led health delivery. Yes, there’s an app for that!


Carolyn Canfield is an independent citizen-patient collaborating with healthcare teams, patients, families and provider organizations to embed the patient voice in improvement processes. She champions patient expertise as the creativity driver for system transformation, aiming to fulfill the aspirations of clients and practitioners for care excellence. Her energetic full-time commitment arises from premature widowhood in 2008 following preventable harm. She has recently accepted an appointment as honorary lecturer in the Department of Family Practice, Faculty of Medicine, at the University of British Columbia. Carolyn has just been named Canada's inaugural Patient Safety Champion by the Canadian Patient Safety Institute and Accreditation Canada.

For more information ca.linkedin.com/pub/carolyn-canfield/23/a5/b54/

The slides of this presentation are available to download.

Neil Maiden, City University London – 29th October 2014


Computer Science Research to Support the Residential Care of Older People with Dementia


Caring for older people with dementia has become a strategic national challenge, yet it continues to be afforded low social status, and has high staff turnover and numbers of inexperienced carers. Increasing the quality of care given in such constraining environments has become a pressing issue, and digital technologies have capabilities to support the delivery of increased care quality at reasonable cost. However, there has been little computer science research dedicated to supporting the delivery of this care. In particular, digital technologies can be applied to support person-centred care, a paradigm that seeks an individualised approach, recognising the uniqueness of each resident and seeking to understand the world from the perspective of the person with dementia. This seminar will report recent research that has developed computerised support for two tasks central to delivering person-centred care - creativity and reflective learning. It will report the development of new descriptive models of creative thinking and reflection in care that informed technology development, then describe three new software solutions to support creative thinking and reflective learning by carers for people with dementia: (i) technology-based serious games to train care staff in person-centred care techniques; (ii) digital life history apps that provide interactive support for reflective learning and creative thinking about daily resident care; and (iii) a new mobile app to provide creative support for resolving challenging behaviours. Each app will be presented, and results from evaluations of each in different care settings will be summarised.


Neil Maiden is Professor of Systems Engineering at City University London. He is and has been a principal and co-investigator on numerous EPSRC- and EU-funded research projects with a total value of £30 million. He has published over 160 peer-reviewed papers in academic journals, conference and workshop proceedings. He was Program Chair for the 12th IEEE International Conference on Requirements Engineering in Kyoto in 2004, and was editor of IEEE Software’s Requirements column from 2005 to 2013. Since 2010 he has been leading computing research dedicated to supporting the residential care of older people with dementia.

Robert J.K. Jacob, Tufts University and UCL Interaction Centre – 22nd October 2014


Reality-Based Interaction, Next Generation User Interfaces, and Brain-Computer Interfaces


I will begin with the notion of Reality-Based Interaction (RBI) as a unifying concept that ties together a large subset of the emerging generation of new, non-WIMP user interfaces.  It attempts to connect current paths of research in HCI and to provide a framework that can be used to understand, compare, and relate these new developments. Viewing them through the lens of RBI can provide insights for designers and allow us to find gaps or opportunities for future development.  I will briefly discuss some past work in my research group on a variety of next generation interfaces such as tangible interfaces and eye movement-based interaction techniques. Then I will discuss our current work on brain-computer interfaces and the more general area of implicit interaction.


Robert Jacob is a Professor of Computer Science at Tufts University, where his research interests are new interaction modes and techniques and user interface software; his current work focuses on adaptive brain-computer interfaces. He is currently a visiting professor at the University College London Interaction Centre; he has also been a visiting professor at the Université Paris-Sud and at the MIT Media Laboratory. Before coming to Tufts, he was in the Human-Computer Interaction Lab at the Naval Research Laboratory. He received his Ph.D. from Johns Hopkins University, and he is a member of the editorial boards of Human-Computer Interaction and the International Journal of Human-Computer Studies and a founding member of ACM Transactions on Computer-Human Interaction. He is Vice-President of ACM SIGCHI, and he has served as Papers Co-Chair of the CHI and UIST conferences, and Co-Chair of UIST and TEI. He was elected to the ACM CHI Academy in 2007, an honorary group of the principal leaders of the field of HCI, whose efforts have shaped the discipline and industry, and have led research and innovation in human-computer interaction.

Rebecca Fiebrink, Goldsmiths, University of London – 15th October 2014


Interactive Machine Learning for End-User Systems Building in Music Composition & Performance


I build, study, teach about, and perform with new human-computer interfaces for real-time digital music performance. Much of my research concerns the use of supervised learning as a tool for musicians, artists, and composers to build digital musical instruments and other real-time interactive systems. Through the use of training data, these algorithms offer composers and instrument builders a means to specify the relationship between low-level, human-generated control signals (such as the outputs of gesturally-manipulated sensor interfaces, or audio captured by a microphone) and the desired computer response (such as a change in the parameters driving computer-generated audio). The task of creating an interactive system can therefore be formulated not as a task of writing and debugging code, but rather one of designing and revising a set of training examples that implicitly encode a target function, and of choosing and tuning an algorithm to learn that function.
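To make the "training examples instead of code" idea concrete, here is a minimal, hypothetical sketch (not the Wekinator's actual code or API): a tiny nearest-neighbour regressor that maps demonstrated control-signal vectors, such as hand positions from a sensor, to synthesis parameters, such as pitch and amplitude. All names and values are illustrative assumptions.

```python
def train(examples):
    """'Train' by storing demonstrated (control_vector, parameter_vector) pairs."""
    return list(examples)

def predict(model, control, k=2):
    """Map a new control vector to parameters by averaging the k nearest demonstrations."""
    # Sort stored examples by squared distance to the incoming control vector.
    ranked = sorted(
        model,
        key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], control)),
    )
    nearest = ranked[:k]
    n_params = len(nearest[0][1])
    # Average the parameter vectors of the nearest demonstrations.
    return [sum(ex[1][i] for ex in nearest) / len(nearest) for i in range(n_params)]

# Two demonstrated mappings: hand position -> (frequency in Hz, amplitude)
examples = [
    ([0.0, 0.0], [220.0, 0.1]),  # low hand position -> low pitch, quiet
    ([1.0, 1.0], [880.0, 0.9]),  # high hand position -> high pitch, loud
]
model = train(examples)
print(predict(model, [0.1, 0.0], k=1))  # close to the first demonstration
```

Revising the instrument's behaviour then means adding or editing demonstrations and retraining, rather than rewriting mapping code by hand.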

In this talk, I will provide a brief introduction to interactive computer music and the use of supervised learning in this field. I will show a live musical demo of the software that I have created to enable non-computer-scientists to interactively apply standard supervised learning algorithms to music and other real-time problem domains. This software, called the Wekinator, supports human interaction throughout the entire supervised learning process, including the generation of training data by real-time demonstration and the evaluation of trained models through hands-on application to real-time inputs.

Drawing on my work with users applying the Wekinator to real-world problems, I'll discuss how data-driven methods can enable more effective approaches to building interactive systems, through supporting rapid prototyping and an embodied approach to design, and through “training” users to become better machine learning practitioners. I'll also discuss some of the remaining challenges at the intersection of machine learning and human-computer interaction that must be addressed for end users to apply machine learning more efficiently and effectively, especially in interactive contexts.


Rebecca Fiebrink is a Lecturer in Graphics and Interaction at Goldsmiths, University of London. As both a computer scientist and a musician, she is interested in creating and studying new technologies for music composition and performance. Much of her current work focuses on applications of machine learning to music: for example, how can machine learning algorithms help people to create new digital musical instruments by supporting rapid prototyping and a more embodied approach to design? How can these algorithms support composers in creating real-time, interactive performances in which computers listen to or observe human performers, then respond in musically appropriate ways? She is interested both in how techniques from computer science can support new forms of music-making, and in how applications in music and other creative domains demand new computational techniques and bring new perspectives to how technology might be used and by whom.

Fiebrink is the developer of the Wekinator system for real-time interactive machine learning, and she frequently collaborates with composers and artists on digital media projects. She has worked extensively as a co-director, performer, and composer with the Princeton Laptop Orchestra, which performed at Carnegie Hall and has been featured in the New York Times, the Philadelphia Inquirer, and NPR's All Things Considered. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule, where she helped to build the #1 iTunes app “I Am T-Pain”. Recently, Rebecca has enjoyed performing as the principal flutist in the Timmins Symphony Orchestra, as the keyboardist in the University of Washington computer science rock band “The Parody Bits”, and as a laptopist in the Princeton-based digital music ensemble Sideband. She holds a PhD in Computer Science from Princeton University and a Master's in Music Technology from McGill University.

Katja Hofmann, Microsoft Research – 8th October 2014


Query Auto Completion (QAC) suggests possible queries to web search users from the moment they start entering a query.

This popular feature of web search engines is thought to reduce physical and cognitive effort when formulating a query. Perhaps surprisingly, despite QAC being widely used, users’ interactions with it are poorly understood. This talk begins to address that gap. I will present the results of an in-depth user study of user interactions with QAC in web search. While study participants completed web search tasks, we recorded their interactions using eye-tracking and client-side logging. This allows us to provide a first look at how users interact with QAC. We specifically focus on the effects of QAC ranking, by controlling the quality of the ranking in a within-subject design.

We identify a strong position bias that is consistent across ranking conditions. Due to this strong position bias, ranking quality affects QAC usage. We also find an effect on task completion, in particular on the number of result pages visited. We show how these effects can be explained by a combination of searchers’ behaviour patterns, namely monitoring or ignoring QAC, and searching for spelling support or complete queries to express a search intent. I will conclude with a discussion of the important implications of our findings for QAC evaluation.


Dr. Katja Hofmann is a postdoctoral researcher in the Machine Learning and Perception group at Microsoft Research Cambridge. Her research focuses on online evaluation and online learning, with the goal of developing interactive systems that learn directly from their users. This work is highly interdisciplinary, and brings together and expands insights from information retrieval, reinforcement learning, and human-computer interaction.

Duncan Brumby, UCL Interaction Centre – 1st October 2014


Improving the everyday interactions with your phone, and maybe medical devices too


Smartphones are a pretty big deal. Many of us now begin our day with our phone’s alarm clock. On the way to work we read email while listening to music. We use our phone to navigate novel cities. At the end of the day, we relax by queuing up content on our phone to watch on a connected television. All of this is done on a small computer, which weighs the same as 12 coins, and has a tiny 4-inch screen. Smartphones are a pretty big deal. In this talk, I will describe our recent work that has investigated how low-level design decisions influence the way that people use and interact with their phone. First, I will consider how the auto-locking feature on a phone can dissuade users from regularly interleaving attention between other ongoing activities (Brumby & Seyedi, MobileHCI 2012). Second, I will consider how current generation smartphones handle incoming calls, and explore alternatives to the dominant full-screen notification model, which forcibly interrupts whatever activity the user was already engaged in (Böhmer et al., CHI 2014). Finally, I will discuss our recent work investigating how people search for content on a display (Brumby et al., CHI 2014).

About the speaker:

Duncan Brumby is a Senior Lecturer at University College London working in the UCL Interaction Centre. He received his doctorate in Psychology from Cardiff University in 2005, after which he was a post-doc in Computer Science at Drexel University until joining UCL in 2007. Dr. Brumby’s research has been published in leading HCI and Cognitive Science outlets. His work on multitasking has received best paper nominations at CHI (2014, 2012, 2007), and his work on interactive search is one of the most-cited articles from the Human-Computer Interaction journal 2008-2010. To support his work, Dr. Brumby has attracted funding from the EPSRC. He is Associate Editor of the International Journal of Human-Computer Studies, and an Associate Chair for the ACM CHI conference (2012-2015) and the ACM MobileHCI conference (2012-2013).

Page last modified on 24 Jun 13 10:27 by Harry J Griffin