UCL Psychology and Language Sciences


London Judgement and Decision Making seminars


The LJDM seminar series is supported by

University College London

City University London

Originally established at UCL in the early 1970s as a weekly Cognition and Reasoning seminar, it later became an intercollegiate seminar on Language and Cognition in the early 1980s.

The name LJDM was coined in 1990, and the group has run seminars under this name ever since. Lecturers and researchers in and around the UK meet regularly to discuss judgment and decision making, judgments of likelihood, reasoning, thinking, problem solving, forecasting, risk perception and communication, and other related topics.

If you would like to present your research to the group or to suggest a speaker, please contact the organizers:

- Lara Kirfel (lara-christina.kirfel.15@ucl.ac.uk),
- Sabine Topf (sabine.topf.14@ucl.ac.uk), and
- Tamara Shengelia (tamara.shengelia.15@ucl.ac.uk)

Unless specified otherwise, all seminars take place on Wednesdays at 5pm, in Room 313 at the Psychology Department, University College London (on the corner of Bedford Way, Gordon Square and Torrington Place, London WC1H 0AP).

To get updates on the current schedule and weekly reminders of the seminars, please subscribe to the Risk and Decision mailing list.

All are welcome to attend.

Term 3 Seminar Schedule

May – June 2019


Merim Bilalic  
University of Northumbria

Why Good Thoughts Block Better Ones

The Einstellung (mental set) effect occurs when the first idea that comes to mind, triggered by familiar features of the situation at hand, prevents a better solution from being found. Here I first show that it works by influencing the mechanisms that determine what information is attended to. I present a series of experiments where, having found one solution, expert chess players reported that they were looking for a better one. Their eye movements, however, showed that they continued to look at features of the problem related to the solution they had already thought of. The result is that alternatives to the first idea are ignored. I then demonstrate that the Einstellung mechanism, in which the intake of new information is biased by a focus on elements related to the initial idea, shares similarities with a range of seemingly unrelated phenomena. These range from cognitive biases to phenomena in problem solving and reasoning, to perceptual errors and failures in memory. I propose that the interplay between memory, attention and perception at the core of the Einstellung mechanism is a building block of human cognition. Paradoxically, the very same mechanism that makes people highly efficient is also the source of their biases, in both everyday and expert thought.


Eoin Travers   
UCL Institute of Cognitive Neuroscience

Evidence, Uncertainty, and Voluntary Action

Actions come in many varieties. We can respond reflexively to stimuli, or decide to act after evaluating some evidence, or even initiate actions of our own volition, in the absence of a stimulus. The standard neurological view is that there are two separate pathways for action: a lateral pathway for stimulus-triggered actions, and a medial one for self-triggered “voluntary” actions. Meanwhile, in cognitive science, actions are the final output of a process that includes inference, uncertainty, value, and decision making.

In this talk, I present experimental, theoretical, and computational work seeking to integrate these two approaches to action. In particular, I focus on the Readiness Potential, an EEG component traditionally thought to reflect preparatory activation of the voluntary action pathway. Recent work suggests it may actually reflect an evidence accumulation process. This insight provides an opportunity to bridge the gap between voluntary action and the broader fields of decision making, inference, exploration, and exploitation.

Yasmina Okan
University of Leeds

Communicating health risks with graphs: Effects on understanding and decision making

Effective risk communication is vital for improving public understanding of threats and for promoting informed decision making about potential risk-reduction actions. Graphical displays can help to overcome widespread difficulties in risk understanding, reduce common judgment biases, and promote risk-avoidant behaviors. Accordingly, graphs are increasingly used and recommended to communicate risks. The impact of graphs, however, is largely determined by specific design features, and inadequately designed graphs can be misleading. I will present findings of a series of experiments where we examined the impact of different graphical design features on health risk understanding, risk perceptions, and decision making. I will discuss implications for refining theories of graphical risk communication and outline prescriptive implications for graph-based decision support. I will also discuss ongoing work examining how risk information is presented in public communications about cervical cancer screening, and whether such communications can be improved using graphs.

Anya Skatova
University of Bristol

The role of individual differences and emotions in cooperation

Sustained cooperative social interactions are key to successful outcomes in many real-world contexts (e.g., climate change and energy conservation). I will report results from a series of lab studies exploring: (1) the self-regulatory roles of emotions, specifically anger and guilt, in cooperation; and (2) the individual differences in social preferences and personality traits that are associated with cooperation in social dilemmas.


Daniel Effron

London Business School

The Moral Psychology of Misinformation

According to some pundits, we live in a post-truth world, surrounded by "fake news," "alternative facts," conspiracy theories, and falsehoods spread by political leaders. One risk of such misinformation is that people will believe it. This talk examines a different risk: that people will sometimes find misinformation morally permissible even though they do not believe it. These moral judgments matter: When people judge misinformation as morally acceptable, they should be less inclined to take action to stop it, less likely to hold its purveyors accountable, and more likely to spread it themselves. The talk presents several experiments suggesting that people judge blatantly false misinformation as less morally problematic to spread when they imagine how it could have been true, how it might become true, or when they have simply encountered the misinformation before. Implications for stopping the spread of misinformation will be discussed.


Eleonore Batteux    
University of Nottingham

Risk preferences in surrogate decision making

There is an extensive literature investigating how we make decisions for ourselves, but how we do so for others is less well understood. Several theories have suggested that decisions for others differ in degree from decisions for the self, leading us to make more optimal decisions for others. However, these theories and their associated methodologies are limited in their ability to account for more life-changing decisions, where a range of factors come into play. I suggest that a more sophisticated theoretical and methodological approach is required to understand the complexity of surrogate decision making. Throughout this talk, I will discuss evidence from the financial and medical domains to address self-other differences in risk taking. I will then explore the process of making end-of-life decisions for others in further depth, to illustrate how surrogate decision making manifests itself in the real world.