UCL Psychology and Language Sciences

London Judgment and Decision Making seminars

The series was originally established at UCL in the early 1970s as a weekly Cognition and Reasoning seminar, and became an intercollegiate seminar on Language and Cognition in the early 1980s. The name LJDM was coined in 1990, and the group has run seminars under this name ever since, with lecturers and researchers in and around the UK meeting regularly to discuss judgment and decision making, judgments of likelihood, reasoning, thinking, problem solving, forecasting, risk perception and communication, and other related topics.

Unless specified otherwise, all seminars take place on Wednesdays at 5pm in Room 313 of the Psychology Department, University College London (on the corner of Bedford Way, Gordon Square and Torrington Place, London WC1H 0AP).

To get updates on the current schedule and weekly reminders of the seminars, please subscribe to the Risk and Decision mailing list. All are welcome to attend.

The LJDM seminar series is supported by

University College London
City, University of London

If you would like to present your research to the group or to suggest a speaker, please contact the organisers:
- Sabine Topf (sabine.topf.14@ucl.ac.uk)
- Ayse Ozsari-Sahin (ayse.sahin.18@ucl.ac.uk)
- Cristina Leone (cristina.leone.19@ucl.ac.uk)


Academic Year 2020/21

Seminar Schedule Term 1

 

Please note that this term all seminars will be held online. To receive the joining link for each seminar, please subscribe to the mailing list or check back on this page on the day.

 


07 Oct 2020 | Magda Osman | Queen Mary University of London

Guilting groups through nudge tactics (social comparisons) to behave cooperatively. Does it work?

Economic cooperation to tackle ongoing problems such as climate change requires social cooperation. This means going beyond spending effort for one’s own gain to spending effort for a collective goal (e.g. reducing carbon emissions by sourcing and implementing alternatives to fossil fuels). Psychology tells us that people look to others as a way to regulate how much effort to put into doing something, so much so that simply knowing what others are doing can influence one’s own effort exertion. For example, UK-listed companies are required to disclose their greenhouse gas emissions and account publicly for their contributions to climate change (2018). So, based on this, if social mechanisms are exploited (e.g., a social comparison manipulation) in an adapted public goods game, can they reliably increase cooperation? In our version, people spend effort (squeezing a handgrip device) for money that goes into either a public pot or their personal pot. The experiment is conducted over two phases: participants return to repeat the main effort task after discussing with their group the feedback they have all received (either intended levels of cooperation [the distribution of trials to the group pot] or actual cooperation [trials that entered the group pot]). This project is funded by the Social Macroeconomic Hub of the Economic and Social Research Council’s (ESRC) Rebuilding Macroeconomics network.
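For readers unfamiliar with the paradigm, the sketch below shows the payoff logic of a standard linear public goods game. The endowment, multiplier, and group size are illustrative assumptions, not parameters taken from the study described above.

```python
# Minimal sketch of a standard linear public goods game payoff rule.
# All parameter values here are illustrative assumptions.

def public_goods_payoffs(contributions, endowment=10.0, multiplier=1.6):
    """Each player keeps what they do not contribute; the group pot is
    multiplied and shared equally among all n players."""
    n = len(contributions)
    share = multiplier * sum(contributions) / n  # equal share of the group pot
    return [endowment - c + share for c in contributions]

# Example: three full cooperators and one free-rider.
print(public_goods_payoffs([10, 10, 10, 0]))
# -> [12.0, 12.0, 12.0, 22.0]; the free-rider earns most, which is why
# social-comparison feedback may be needed to sustain cooperation.
```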

 


14 Oct 2020 | Irene Scopelliti | City, University of London

Can Training Improve Decision Making?

Biases in judgment and decision making affect experts and novices alike, yet there is considerable variation in individual decision-making ability. To the extent that this variance reflects malleable differences, training could be an effective way to debias and improve human reasoning. I discuss the results of a program of research that includes laboratory, longitudinal, and field experiments in which one-shot debiasing training interventions are shown to substantively improve decision making. In these studies, different training interventions, including simple scripts, instructional videos, observation, and serious games, produced significant effects on measures of cognitive bias critical to intelligence analysis (i.e., anchoring, bias blind spot, confirmation bias, correspondence bias, representativeness, and social projection), on confirmatory hypothesis testing in a complex business decision, and on the frequency and quality of advice taking. The debiasing effects of training transferred across problems in different contexts and formats. These results provide new encouraging evidence that training can be an effective and scalable debiasing intervention to improve decision making.

 


21 Oct 2020 | Ido Erev | Technion - Israel Institute of Technology | Click this link to register for the online seminar

Six Contradicting Deviations from Rational Choice, and the Impact of Experience

To help predict choice behavior, behavioral economics research tries to identify robust deviations from rational choice. Our analysis questions the value of this convention and proposes an alternative. First, we demonstrate that six well-known deviations from rationality are not robust; they can be reversed by small changes in the incentive structure. For example, behavior consistent with loss aversion becomes behavior consistent with overconfidence (the opposite bias) when the correlation between the outcomes is changed. We present six pairs of contradicting deviations. Then, we highlight the potential of focusing on the impact of experience: a simple model assuming reliance on small samples of experiences captures all 12 contradicting deviations and provides useful ex-ante predictions. These predictable effects of experience on behavior can facilitate the design of public policies.
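The abstract does not spell the model out, so the following is only a minimal sketch of the "reliance on small samples" idea: before each choice, the agent recalls a few random past outcomes per option and picks the option with the better sampled mean. The sample size and payoff distributions below are assumptions for illustration.

```python
import random

def choose(history, sample_size=4):
    """Pick the option whose small random sample of past outcomes
    has the highest mean (a toy 'reliance on small samples' rule)."""
    def sampled_mean(outcomes):
        sample = random.choices(outcomes, k=sample_size)
        return sum(sample) / sample_size
    return max(history, key=lambda option: sampled_mean(history[option]))

# A rare large loss is often missing from a small sample, so the risky
# option can look attractive despite its lower expected value.
history = {
    "safe":  [0.0] * 100,                # always 0
    "risky": [1.0] * 90 + [-20.0] * 10,  # expected value = -1.1
}
picks = [choose(history) for _ in range(1000)]
print(picks.count("risky") / 1000)  # typically around 0.65 with samples of 4
```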

 


28 Oct 2020 | No seminar

 


04 Nov 2020 | Simon McNair | The Behavioural Insights Team

The Behavioural Insights Team: Applying behavioural science to real-world policy issues

Over the past 10 years, the UK's Behavioural Insights Team (BIT) has pioneered the application of behavioural science to the design of public policies, practices, and procedures. Originating as the UK government's "Nudge Unit", BIT's purpose was to design, test, and embed human-centred policy making in government: policies that are cognizant of, and that work with - not against - what psychology, economics, and behavioural science research tells us about how people make everyday choices and decisions. This talk will briefly introduce BIT, including who we are and what we do, and present findings from major field trials and experimental lab work that BIT has produced on topics ranging from increasing pension contributions and reducing medication prescription errors to increasing credit card repayments and encouraging safer gambling.

 


11 Nov 2020 | READING WEEK - No Seminar

 


18 Nov 2020 | Lukasz Walasek | University of Warwick | Click this link to register for the online seminar

What can a coin flip tell us about loss aversion?

Imagine a coin toss lottery in which you can obtain £10 if the coin lands on heads but lose £10 if it lands on tails. Existing research suggests that most people do not want to participate in this lottery because they are loss averse. By varying the amounts at stake, it is possible to find a bet for which a person is indifferent - they are equally happy to accept the bet or reject it. This method is widely used in the decision sciences to estimate a person's level of loss aversion. In this talk, I will summarize several lines of research showing that a) people's responses on the accept-reject task are highly context-sensitive, b) loss aversion cannot be reliably estimated using this particular method, and c) unwillingness to accept 50:50 lotteries is partly driven by a pre-decisional bias towards rejection. These results carry consequences for how we define loss aversion and what conclusions we can draw from studies of individual differences in asymmetric preferences for gains and losses.
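As background, the estimation logic behind the accept-reject task usually assumes a piecewise-linear value function. The sketch below states that common specification; it is an assumption for illustration, not necessarily the model the speaker discusses.

```latex
% A 50:50 gamble "win G, lose L" is valued under v(x) = x for gains and
% v(x) = -\lambda |x| for losses; indifference identifies \lambda.
\[
V(G, -L) = \tfrac{1}{2} G - \tfrac{1}{2} \lambda L,
\qquad
V = 0 \;\Rightarrow\; \lambda = \frac{G}{L}.
\]
% E.g., someone indifferent between accepting and rejecting "win GBP 20,
% lose GBP 10" would be assigned \lambda = 2.
```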

A recording of this talk can be found here. The video will be available for a week after the talk.

 


25 Nov 2020 | Mark Noort | Leiden University | Click this link to register for the online seminar

Speaking-up to prevent harm: decision-making in times of danger

Safety silence is the act of withholding safety concerns instead of raising them. Speaking up (or 'safety voice') is essential for effective decision-making in times of danger (e.g., for identifying leading indicators of incidents and increasing performance), and where people have remained silent this has led to major organisational failures (e.g., the Challenger space shuttle, Deepwater Horizon, the Mid-Staffordshire hospital trust). Research on safety silence in high-reliability industries (e.g., aviation, healthcare) has proposed at least 32 variables to reduce safety silence - for instance, inclusive leadership practices, flat hierarchies, and favourable policies. However, despite contributing important lessons, the vast majority of studies have utilised methods and data that provide few insights on the extent to which people decide to raise safety concerns and prevent harm. In particular, whilst it has been assumed that speaking up can mitigate the momentum of hazards towards harmful outcomes, evidence on the nature of the behaviour and the mechanisms that explain it during simulated and real accidents remains limited. Without addressing this, the scope for improving decision-making on safety remains limited. To address this gap, across several experimental studies (n > 1200) and an analysis of 'black box' data from 172 aviation accidents, we tackled the methodological challenges of investigating the nature of safety silence and safety voice, and developed the Threat Mitigation Model of safety voice and silence. This model has important implications for policy: i) safety voice is a distinct concept that is highly ecological and situated, ii) the phenomenon is important for understanding how ineffective decision-making can contribute to accidents, and iii) the model underscores that safety concerns, safety voice, and safety listening all contribute to effective decision-making in times of danger.

 


02 Dec 2020 | Michael Yeomans | Imperial College London | Click this link to register for the online seminar

Conversational Receptiveness: Improving engagement with opposing views

We examine “conversational receptiveness” – the use of language to communicate one’s willingness to thoughtfully engage with opposing views. We develop an interpretable machine learning algorithm to identify the linguistic profile of receptiveness (Studies 1A-B). We then show that in contentious policy discussions, government executives who were rated as more receptive - according to our algorithm and their partners, but not their own self-evaluations - were considered better teammates, advisors, and workplace representatives (Study 2). We also show that conversational receptiveness is reciprocated in kind, and we extend this result to two field settings where conflict is endemic to productivity. In discussion forums for online courses on political topics, receptive posts receive more receptive replies (Study 3A). Furthermore, Wikipedia editors who are more receptive are less prone to receiving personal attacks (Study 3B). We also find that a "receptiveness recipe" intervention, based on our algorithm, can improve writers' trust and persuasiveness (Study 4). Overall, we find that conversational receptiveness is reliably measurable, has meaningful relational consequences, and can be misunderstood by people in conflict.
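The talk's own algorithm and feature set are not described here, so the sketch below shows one generic interpretable approach of the kind alluded to: logistic regression over word and phrase counts, whose coefficients can be read off as a "linguistic profile". The texts and labels are made-up toy data, not the study's materials.

```python
# Hedged sketch of an interpretable text classifier (not the authors' model).
# Requires scikit-learn >= 1.0 for get_feature_names_out().
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "I see your point, and I agree that costs matter",
    "You are completely wrong about this policy",
    "That's a fair concern; could you say more?",
    "No, that makes no sense at all",
]
labels = [1, 0, 1, 0]  # 1 = rated receptive, 0 = not (toy labels)

vectorizer = CountVectorizer(ngram_range=(1, 2))  # words and two-word phrases
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# Coefficients indicate which phrases push a prediction toward "receptive",
# which is what makes this style of model interpretable.
weights = sorted(zip(model.coef_[0], vectorizer.get_feature_names_out()),
                 reverse=True)
print(weights[:5])
```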

 

09 Dec 2020 | Lara Kirfel | UCL | Click this link to register for the online seminar

Mind, Matter and Morals - What shapes our causal judgements about human agents

Our ability to identify causal relationships in the world is one of our most central cognitive abilities, enabling us to adapt to the world. Research has uncovered a variety of cues that people use to infer causal structure from statistical data or to assess the quantitative strength of causal relationships. Human agents often act as causes, but the factors that are relevant to their causal assessment are complex. In this talk, I will focus on how we attribute causality to human agents. In particular, I will highlight the role of mental states - what an agent thinks, believes, or knows - in shaping the way we attribute causality to an agent's action. In a series of experiments, I show that people attribute increased causality to "abnormal" or atypical actions, and that this causal preference is mediated by inferences about the agents' epistemic states. Drawing on the relevance of epistemic states for causal attributions, I will present four experiments testing people's causal judgments about agents who are ignorant of the consequences of their actions. Finally, I will present two studies investigating how our inferences from causal explanations are shaped by communication and the knowledge states of the speaker. I discuss the relevance of these findings for current frameworks of causation.

 


16 Dec 2020 | Kathryn Francis | University of Bradford | Click this link to register for the online seminar

Does mode of presentation influence moral decision-making? Investigating moral responses in virtual reality, audio-visual, and text-based dilemmas

Moral psychologists have investigated moral decision-making using hypothetical vignettes adopted from philosophy. Typically, these trolley-type problems are presented via text, and participants are asked whether the action described in the scenario is morally appropriate. To examine what individuals might actually do in these up-close and personal moral dilemmas, we have incorporated Virtual Reality (VR) simulations of trolley-type problems and examined the influence of audio-visual and haptic features on moral responses. Across several studies, we find that utilitarian decision-making (sacrificing one person in order to save many more) is higher in VR moral dilemmas than in text-based dilemmas (e.g., Francis et al., 2016; 2017; Patil et al., 2017). To develop a clearer picture of how these modes of presentation influence moral decision-making, we examine responses to trolley-type problems presented in different formats. We find that moral responses in text-based dilemmas do not differ from decisions in simple visual dilemmas (Experiment 1), complex visual dilemmas with audio (Experiment 2), or 2D video sequences (Experiment 3). These findings might suggest that features specific to VR prompt differences in moral responses, or that VR enables us to measure the construct of moral action as opposed to moral judgment.