UCL Psychology and Language Sciences


London Judgement and Decision Making seminars

The LJDM seminar series is supported by

University College London

City University London

Originally established at UCL in the early 1970s as a weekly Cognition and Reasoning seminar, the group became an intercollegiate seminar on Language and Cognition in the early 1980s.

The name LJDM was finally coined in 1990, and the group has been running seminars under this name ever since, with lecturers and researchers in and around the UK meeting on a regular basis to discuss judgment and decision making, judgments of likelihood, reasoning, thinking, problem solving, forecasting, risk perception and communication, and other related topics.

If you would like to present your research to the group or to suggest a speaker, please contact the organizers:

- Lara Kirfel (lara-christina.kirfel.15@ucl.ac.uk) and

- Tamara Shengelia (tamara.shengelia.15@ucl.ac.uk)

Unless specified otherwise, all seminars take place on Wednesdays at 5pm, in Room 313 at the Psychology Department, University College London (on the corner of Bedford Way, Gordon Square and Torrington Place, London WC1H 0AP).

To get updates on the current schedule and weekly reminders of the seminars, please subscribe to the Risk and Decision mailing list.

All are welcome to attend.

Term 3 Seminar Schedule

May – June 2017


Sharma-Mittal and the mathematics of uncertainty

Jonathan Nelson

University of Surrey

Notions of entropy and uncertainty are fundamental to many domains, ranging from the philosophy of science to physics. One important application is to quantify the expected usefulness of possible experiments (or questions or tests). Many different entropy models could be used, and different models do not in general lead to the same conclusions about which tests (or experiments) are most valuable. It is often unclear whether such disagreements reflect different theoretical and practical goals or are merely due to historical accident. We introduce a unified two-parameter family of entropy models that incorporates many existing entropy measures as special cases. This family of models offers insight into heretofore perplexing psychological results, and generates predictions for future research.
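As a rough illustration of the idea of a two-parameter entropy family, the standard Sharma-Mittal formulation can be sketched as follows. The parameter names ("order" r and "degree" t) and the limiting cases follow the usual mathematical presentation, not necessarily the notation used in the talk.

```python
import math

def sharma_mittal(probs, r, t, eps=1e-9):
    """Sharma-Mittal entropy of a discrete distribution, with order r and degree t.

    Well-known special cases:
      t -> 1        : Renyi entropy of order r
      t = r         : Tsallis entropy
      r, t -> 1     : Shannon entropy
    """
    h_shannon = -sum(p * math.log(p) for p in probs if p > 0)
    if abs(r - 1) < eps and abs(t - 1) < eps:
        return h_shannon                                   # Shannon limit
    if abs(t - 1) < eps:                                   # Renyi limit
        return math.log(sum(p ** r for p in probs)) / (1 - r)
    if abs(r - 1) < eps:                                   # r -> 1 limit
        return (math.exp((1 - t) * h_shannon) - 1) / (1 - t)
    s = sum(p ** r for p in probs)                         # general case
    return (s ** ((1 - t) / (1 - r)) - 1) / (1 - t)
```

Varying r and t moves smoothly between these familiar entropies, which is what lets one family subsume many apparently distinct models of uncertainty.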


Counterfactual comparisons and the sense of agency

Eugenia Kulakova


Our sense of agency and responsibility for an action is closely associated with the counterfactual notion that we could have done otherwise. While the effects of such counterfactual evaluations can be studied in contexts that make action and outcome alternatives relatively explicit, it is also possible to manipulate the availability of counterfactual action alternatives on a more implicit level of motor preparation. Using a motor priming paradigm, we investigated the hypothesis that the extent of motor availability of a counterfactual (i.e. not performed) action influences how its (counterfactual) outcome affects subsequent judgement and behaviour.


Transformative Decisions

Kevin Reuter

University of Bern

Some of the most fundamental decisions we make in our lives – like becoming a parent – are transformative. Crucially, these decisions involve experiences whose character we cannot know in advance. According to Laurie Paul (2014), transformative decisions pose a major problem for us because they fall outside the realm of rationality. Her argument in favor of that conclusion rests on the premise that the expected experiential value, i.e. the value of what it is like to experience a certain outcome, plays the central role in transformative decisions. In this talk, I challenge that premise and hence the overall conclusion that transformative decisions are usually not rational. To do so, I present a series of empirical studies on transformative decisions and discuss a model for determining the likelihood that an agent will make a rational transformative decision. The results of the experiments paint a clear picture: people have a good chance of making rational transformative choices.


Better the devil you don’t know: preference for known or uncertain probabilities and the risk of failure

Peter Ayton

City, University of London

Co-authors:  Eugenio Alberdi, Lorenzo Strigini, & David Wright (Centre for Software Reliability)

Imagine being obliged to play Russian roulette – twice (if you are lucky enough to survive the first game). Each time you must spin the chambers of a six-chambered revolver before pulling the trigger. However, you do have one choice: you can either (a) use a revolver which contains only 2 bullets or (b) blindly pick one of two other revolvers: one contains 3 bullets, the other just 1 bullet. Whichever gun you pick, you must use it every time you play. Surprisingly, option (b) offers a better chance of survival. We discuss a general theorem implying, with some specified caveats, that a system's probability of surviving repeated 'demands' improves as uncertainty concerning the probability of surviving one demand increases. Nonetheless, our behavioural experiments confirm the counterintuitive nature of the Russian roulette and other kindred problems: most subjects prefer option (a). We discuss how uncertain probabilities reduce risks for repeated exposure, why people intuitively eschew them, and some policy implications for safety regulation.
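The arithmetic behind the roulette example can be checked directly. Both options give the same expected chance of surviving a single game (4/6), but because the same gun is used twice, the uncertain option benefits from the convexity of repeated survival (a Jensen's-inequality effect). A minimal check with exact fractions:

```python
from fractions import Fraction

def survive_twice(bullets):
    # Probability of surviving two independent spins of a
    # six-chambered revolver containing this many bullets.
    p = Fraction(6 - bullets, 6)
    return p * p

# Option (a): the known 2-bullet revolver, used for both games.
p_a = survive_twice(2)                     # (4/6)^2 = 4/9

# Option (b): blindly pick the 3-bullet or the 1-bullet revolver
# (equal chances), then use that same revolver for both games.
p_b = Fraction(1, 2) * survive_twice(3) + Fraction(1, 2) * survive_twice(1)

print(p_a, p_b)                            # 4/9 vs 17/36: option (b) wins
```

Since 17/36 > 16/36, the gamble over which probability you face improves the chance of surviving both games, even though the single-game survival probability is identical in the two options.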


Deductive reasoning from uncertain premises

Nicole Cruz

Birkbeck, University of London

There has been a paradigm shift in the psychology of deductive reasoning, from a binary approach focussed on the truth or falsity of a statement, given the truth of some other statements, to a probabilistic approach focussed on one's degree of belief in a statement, given one's degrees of belief in some other statements. In the older, binary approach, the criterion for when an inference is correct was given by the rules of classical logic. In the probabilistic approach, the correctness of an inference is determined by the rules of probability theory. The probabilistic approach has sometimes carried over from the binary approach a contrast between "logic" on the one side and "belief" or "probability" on the other, and probabilistic accounts have sometimes been viewed as theories of belief-based reasoning that is not logical. In this talk it is argued that the contrast between logic and belief is not necessary, because logic can itself be probabilistic. The two central logical concepts of consistency and validity can be generalised to coherence and probabilistic validity, respectively, making it possible for reasoning from uncertain premises to be deductive. A second section of the talk reviews experimental evidence on people's sensitivity to coherence and probabilistic validity in reasoning tasks.
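One standard way to make probabilistic validity concrete (following Adams's uncertainty-sum rule, assumed here as background rather than taken from the talk itself) is that for a probabilistically valid inference, coherence requires the conclusion's uncertainty not to exceed the sum of the premises' uncertainties, where uncertainty is one minus probability. A small sketch:

```python
def p_valid_lower_bound(premise_probs):
    # For a probabilistically valid inference, coherence requires
    # P(conclusion) >= 1 - sum of premise uncertainties,
    # where uncertainty(A) = 1 - P(A). The bound is clipped at 0.
    total_uncertainty = sum(1 - p for p in premise_probs)
    return max(0.0, 1.0 - total_uncertainty)

# Modus ponens: given P(if p then q) = 0.9 and P(p) = 0.8,
# coherence requires P(q) to be at least 0.7.
bound = p_valid_lower_bound([0.9, 0.8])
```

On this view, degrees of belief in the premises constrain, rather than determine, the degree of belief one may coherently assign to the conclusion, which is how deduction survives the move to uncertain premises.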


Fast and accurate learning when making discrete numerical estimates

Adam Sanborn

University of Warwick

Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates.
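The two decision functions the abstract contrasts can be illustrated with a toy model (the prior, noise model, and numbers below are assumptions for illustration, not the study's actual stimuli): a discrete bimodal prior over counts, a noisy observation, and estimates produced either by taking the maximum of the posterior or by drawing a single sample from it.

```python
import math
import random

# Assumed toy bimodal prior over discrete counts (not the study's stimuli).
PRIOR = {4: 0.25, 5: 0.25, 11: 0.25, 12: 0.25}

def likelihood(obs, n, sigma=2.0):
    # Assumed Gaussian perceptual noise around the true count n.
    return math.exp(-((obs - n) ** 2) / (2 * sigma ** 2))

def posterior(obs):
    # Bayes' rule: P(n | obs) proportional to P(obs | n) * P(n).
    unnorm = {n: likelihood(obs, n) * p for n, p in PRIOR.items()}
    z = sum(unnorm.values())
    return {n: v / z for n, v in unnorm.items()}

def map_estimate(post):
    # Decision function 1: take the maximum of the posterior.
    return max(post, key=post.get)

def sample_estimate(post, rng=random.Random(0)):
    # Decision function 2: draw a single sample from the posterior.
    return rng.choices(list(post), weights=list(post.values()))[0]
```

Participants falling "between" these two functions means their responses were more variable than a pure posterior maximum but less variable than pure posterior sampling.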