UCL Psychology and Language Sciences
London Judgement and Decision Making seminars

The LJDM seminar series is supported by:

- University College London
- City University London

Originally established at UCL in the early 1970s as a weekly Cognition and Reasoning seminar, the series became an intercollegiate seminar on Language and Cognition in the early 1980s.

The name LJDM was coined in 1990, and the group has been running seminars under this name ever since. Lecturers and researchers in and around the UK meet regularly to discuss judgment and decision making, judgments of likelihood, reasoning, thinking, problem solving, forecasting, risk perception and communication, and other related topics.

If you would like to present your research to the group or to suggest a speaker, please contact the organizers:

- Eric Schulz (eric.schulz.13@ucl.ac.uk)
- Lara Kirfel (lara-christina.kirfel.15@ucl.ac.uk)
- Tamara Shengelia (tamara.shengelia.15@ucl.ac.uk)

Unless specified otherwise, all seminars take place on Wednesdays at 5pm, in Room 313 at the Psychology Department, University College London (on the corner of Bedford Way, Gordon Square and Torrington Place, London WC1H 0AP).

To get updates on the current schedule and weekly reminders of the seminars, please subscribe to the Risk and Decision mailing list.

All are welcome to attend.

Term 2 Seminar Schedule

January – April 2017

18.01.2017

Incorporating conflicting descriptions into decisions from experience

Leonardo Weiss-Cohen

UCL, UK

Most decision-making research to date has looked exclusively at either decisions from experience or decisions from description, or has compared the behavioural results from the two paradigms performed separately. Very little research has examined decisions that rely on descriptions and experience simultaneously, even though much of everyday decision making draws on both sources at once. A series of experiments has shown that while both sources of information are taken into account, more weight is given to experience: descriptions are discounted, but not completely ignored. Other influential factors are the plausibility of the descriptions, with plausible descriptions receiving higher weights than implausible ones, and the complexity of the task, with descriptions having more impact in medium-complexity tasks. This research is important for understanding the effectiveness of warning labels and signs, which can be seen as descriptions applied to previously acquired experiences.
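
The abstract does not spell out a formal model, but the reported pattern (experience weighted more heavily than description, with plausibility modulating the discount) can be illustrated with a toy weighting scheme. The function and the numbers below are invented for illustration only and are not the authors' model.

```python
# Purely illustrative weighting scheme (not the authors' model): a single
# parameter captures how much a description is discounted relative to
# experience, with less weight given to implausible descriptions.
def combined_estimate(experienced_mean, described_value, description_weight=0.3):
    """Blend an experienced average outcome with a described value;
    description_weight < 0.5 mimics the reported discounting of
    descriptions relative to experience."""
    w = description_weight
    return (1 - w) * experienced_mean + w * described_value

# Hypothetical numbers: experienced draws averaging 2.5 points, and a
# warning label describing the option as worth 1.0 point.
print(combined_estimate(2.5, 1.0))        # plausible description, default discount
print(combined_estimate(2.5, 1.0, 0.1))   # implausible description, heavier discount
```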

25.01.2017

Trials-with-fewer-errors: Feature-based learning and exploration

Hrvoje Stojic

UCL/UPF Barcelona, UK/Spain

Reinforcement learning algorithms have provided useful insights into human and animal decision making and learning. However, they perform poorly when faced with real-world situations characterized by multi-featured alternatives and contextual cues. In this paper, we propose an approximate Bayesian optimization framework for tackling such problems. The framework relies on similarity-based learning of functional relationships between features and rewards, and on choice rules that use uncertainty to balance the exploration-exploitation trade-off. We tested our framework in a series of novel multi-armed bandit experiments in which the rewards of the alternatives are noisy functions of two features. The exploration behaviour of some participants showed signatures of Bayesian optimization: it was guided by prior expectations, involved function learning, and took uncertainty into account. However, a sizeable proportion of participants ignored the feature information, and barely any performed nearly as well as optimal Bayesian inference. We illustrate the fecundity of the paradigm and highlight several lines of future research.
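
As a rough illustration of the ingredients named in the abstract, similarity-based learning of the feature-reward function plus an uncertainty-guided choice rule, here is a minimal Python sketch. The squared-exponential kernel, the upper-confidence-bound rule, the number of arms and the synthetic reward function are all assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, length_scale=0.5):
    """Squared-exponential similarity between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

def gp_posterior(X_obs, y_obs, X_new, noise=0.1):
    """Posterior mean and standard deviation of a Gaussian-process
    regression (similarity-based function learning) at candidate arms."""
    K = rbf_kernel(X_obs, X_obs) + noise ** 2 * np.eye(len(X_obs))
    K_s = rbf_kernel(X_obs, X_new)
    K_inv = np.linalg.inv(K)
    mean = K_s.T @ K_inv @ y_obs
    var = 1.0 - np.einsum("ij,ik,kj->j", K_s, K_inv, K_s)
    return mean, np.sqrt(np.clip(var, 1e-9, None))

# Invented task: 20 arms, each described by two features; an arm's reward
# is a noisy function of its features, unknown to the learner.
features = rng.uniform(-1, 1, size=(20, 2))
def true_reward(x):
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

X_obs, y_obs = [], []
for t in range(50):
    if t < 2:                                        # two random picks to start
        choice = rng.integers(len(features))
    else:
        mean, std = gp_posterior(np.array(X_obs), np.array(y_obs), features)
        choice = int(np.argmax(mean + 2.0 * std))    # UCB: uncertainty drives exploration
    x = features[choice]
    X_obs.append(x)
    y_obs.append(true_reward(x[None, :])[0] + rng.normal(0, 0.1))
```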

01.02.2017            

The hippocampus as a predictive map

Kimberly Stachenfeld

Google DeepMind, UK

A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity, and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation provides a useful basis for estimating expected future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensional basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.
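
For readers who want a concrete handle on the "predictive representation" discussed here, it is usually formalised as the successor representation, M = (I - γT)^(-1), in which each state is represented by the discounted expected future occupancy of every other state. The sketch below is a minimal NumPy illustration for a hypothetical 1-D track under a random-walk policy; the environment and parameter values are assumptions made for the sketch, not code from the talk.

```python
import numpy as np

n_states = 20                 # positions on a hypothetical 1-D track
gamma = 0.95                  # discount factor

# Random-walk transition matrix with reflecting ends.
T = np.zeros((n_states, n_states))
for s in range(n_states):
    for s_next in (max(s - 1, 0), min(s + 1, n_states - 1)):
        T[s, s_next] += 0.5

# Successor representation: M[s, s'] is the expected discounted number of
# future visits to s' when starting from s under this policy.
M = np.linalg.inv(np.eye(n_states) - gamma * T)

# A "predictive place field" for state s' is the s'-th column of M.
place_field = M[:, n_states // 2]

# A low-dimensional basis for M (grid-cell-like in this account) comes from
# its leading eigenvectors; M is symmetric here because the walk is symmetric.
eigvals, eigvecs = np.linalg.eigh(M)
basis = eigvecs[:, -5:]       # five components with the largest eigenvalues
```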

08.02.2017

What the Success of Brain Imaging Implies about the Neural Code

Olivia Guest

UCL, UK

The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI's limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level, such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI's successes and its limitations in measuring neural activity. Deep neural network approaches, which have been put forward as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and the ventral stream, as well as for what can be successfully investigated with fMRI.
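
One way to make "functional smoothness" concrete is to ask how well similarity in input space predicts similarity in a layer's activation space. The sketch below computes such a proxy for a hypothetical untrained feedforward network with random weights; the network, the stimuli and the correlation measure are assumptions made for illustration and say nothing about the outcome of the talk's actual proofs and simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stimuli and a small untrained feedforward network.
stimuli = rng.normal(size=(30, 100))                 # 30 stimuli, 100 input dims
weights = [rng.normal(scale=0.1, size=(100, 100)) for _ in range(4)]

def pairwise_dist(X):
    """Condensed vector of pairwise Euclidean distances between rows."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)
    return d[iu]

input_dist = pairwise_dist(stimuli)
acts = stimuli
for layer, W in enumerate(weights, start=1):
    acts = np.maximum(acts @ W, 0)                   # ReLU layer
    layer_dist = pairwise_dist(acts)
    # Smoothness proxy: how well input-space geometry predicts the
    # geometry of this layer's representation.
    r = np.corrcoef(input_dist, layer_dist)[0, 1]
    print(f"layer {layer}: correlation with input geometry = {r:.2f}")
```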

22.02.2017

Distinct effects of interpersonal and outcome uncertainty on prosocial decision making

Andreas Kappes

University of Oxford, UK

Uncertainty about how our choices will affect others infuses social life, but how uncertainty affects prosocial behavior is not well understood. Past work suggests uncertainty can both increase and decrease prosocial behavior. Here, we distinguish between two types of uncertainty that we predicted would have opposing effects on prosociality: uncertainty about whether decisions will lead to harmful outcomes (outcome uncertainty), and uncertainty about how much others will suffer as a result of those harmful outcomes (interpersonal uncertainty). We independently manipulated outcome and interpersonal uncertainty and found that outcome uncertainty decreased, while interpersonal uncertainty increased, prosocial behavior. We observed these opposing effects in both incentivized economic decisions to share money and hypothetical health decisions involving infectious disease risk. Perceptions of social norms echoed the patterns of choices, with outcome uncertainty decreasing and interpersonal uncertainty increasing the perceived social appropriateness of prosociality. Finally, we found evidence that the effect of interpersonal uncertainty on prosociality is driven by increased attention to others’ welfare. Our results suggest that drawing attention to others’ subjective experiences can increase prosocial behavior, and thereby offer policy implications for the effective communication of risk.

01.03.2017

Hypothesis Testing in Intelligence Analysis

Mandeep Dhami

Middlesex University, UK 

Intelligence analysis involves collating and processing relevant data, and interpreting the outputs in order to arrive at a judgment about a current or future situation or event. Often this requires analysts to assess evidence in order to test alternative accounts or hypotheses. Critics have argued that analysts suffer from confirmation bias. In an effort to overcome confirmation bias, Heuer (1999) developed a ‘structured analytic’ technique called the Analysis of Competing Hypotheses (ACH), which focuses on evidence inconsistent with a hypothesis and ignores evidence consistent with it. The intelligence community trains analysts to use ACH in the belief that it will improve analysis. Problematically, there is a dearth of research on the effectiveness of ACH. I present two studies examining how people perform hypothesis-testing tasks, including how they deal with evidence that is consistent or inconsistent with a hypothesis, and with evidence that is more or less credible. Specifically, I compare people’s intuitive hypothesis-testing strategies against ACH. The findings have implications for our understanding of how people evaluate hypotheses generally, as well as for how the intelligence community can better train its analysts to test alternative hypotheses.
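
For readers unfamiliar with ACH, its core scoring step can be sketched as a matrix of evidence-by-hypothesis ratings in which only inconsistent evidence counts against a hypothesis, optionally weighted by the credibility of each item. The evidence items, ratings and weights below are invented for illustration and are not taken from the studies presented.

```python
# Toy sketch of the ACH scoring step as it is commonly described: each piece
# of evidence is rated Consistent ("C"), Inconsistent ("I") or Neutral ("N")
# against each hypothesis, and only inconsistencies count against a hypothesis.
ratings = {                     # evidence -> {hypothesis: rating}; invented data
    "intercepted message": {"H1": "I", "H2": "C"},
    "satellite imagery":   {"H1": "C", "H2": "I"},
    "informant report":    {"H1": "I", "H2": "N"},
}
credibility = {                 # analyst-judged credibility of each item; invented
    "intercepted message": 0.9,
    "satellite imagery":   0.6,
    "informant report":    0.4,
}

def ach_scores(ratings, credibility):
    """Sum credibility-weighted inconsistencies per hypothesis; consistent
    evidence is deliberately ignored, and the hypothesis with the lowest
    score is the one ACH retains."""
    hypotheses = {h for row in ratings.values() for h in row}
    scores = {h: 0.0 for h in hypotheses}
    for evidence, row in ratings.items():
        for h, mark in row.items():
            if mark == "I":
                scores[h] += credibility[evidence]
    return scores

print(ach_scores(ratings, credibility))   # lower score = fewer weighted inconsistencies
```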

08.03.2017

Knowing that we know: Constructing a sense of confidence

Stephen Fleming

Wellcome Trust Centre for Neuroimaging, UCL, UK

Decisions are often made in the face of uncertainty and in the absence of immediate feedback. Accompanying our decisions is a sense of confidence in having made the right choice. Consider a doctor making a diagnosis. He or she may feel sure about making the right call and confidently prescribe a particular drug, or may be unsure, and instead opt to run further tests. With the passage of time confidence may even drop low enough to warrant a change of mind about the initial diagnosis. Despite near-universal agreement that decision confidence is a useful quantity to guide subsequent behaviour, there is currently little consensus on how it is constructed at a psychological, computational or neural level. Here I will highlight new results from our lab that illuminate how confidence is constructed in the human brain from both trait- and trial-level components, and identify a role for confidence in guiding future changes of mind. Together these findings reveal dissociable nodes in prefrontal cortex that may support metacognitive monitoring and control of simple decisions.

15.03.2017

Exploring the relationship between mathematical ability and domain general reasoning abilities in children and adults

Caren Frosch

University of Leicester, UK

A theoretical link between reasoning and mathematical ability has been supported by some recent empirical evidence. We argue that some of this evidence is indirect and that measure selection may have influenced the relationship. We report three studies in which mathematical ability was measured using standardised fluency and calculation measures, and reasoning ability was measured using an extended cognitive reflection test. In Study 1, children aged 9-11 years completed the measures. In Study 2, reasoning was also measured using a belief-bias conditional reasoning task in a group of undergraduate students, and in Study 3, we included Raven’s Progressive Matrices and a maths anxiety measure with a further sample of undergraduate students. Results from the three studies suggest that mathematical ability is predicted by performance on the cognitive reflection test, but not by conditional reasoning or Raven’s Matrices performance, once mathematical fluency and maths anxiety are taken into consideration. We discuss the implications of these findings for research on the link between mathematical ability and reasoning skills, and consider the adequacy of the extended CRT as a measure of reasoning.

22.03.2017

Will AI revolutionise the decision sciences? A medical perspective.

John Fox

University of Oxford, UK

Headlines about medical errors and health service failures appear more and more frequently in the media. The general public and health professionals are now aware that medical practice is facing enormous challenges that affect the quality and safety of our clinical services. This has generated public and political pressure to look for solutions; the press often wants to know who to blame, while the politically minded see the need for greater privatisation, organisational change or bigger budgets. This is a global problem, not just one for the NHS.

As a cognitive scientist, I see many of the underlying problems as arising from human cognitive limitations and from how we bring our knowledge to bear in our reasoning, decision making, planning and so on. Decision making, for example, is pivotal to everything we do individually, in groups and in organisations. Uncertainty and risk make decision making difficult, and because uncertainty and risk pervade medicine, medicine is a fascinating model for cognitive research. Indeed, there is now much talk about how “cognitive computing” and artificial intelligence will “revolutionise medicine”.

This talk will briefly review some of my research on human expertise and on the use of AI in medicine. I will argue that AI systems based on an understanding of human cognition and decision making can improve the quality, safety and efficiency of patient care, perhaps more than political and managerial interventions can. AI and its subfields, such as knowledge engineering and machine learning, are showing us how machines can perform human-level tasks well: cope with uncertainty, manage risk better, make more objective, evidence-based choices, and plan and act more effectively in complex and rapidly evolving situations, while allowing people to retain control.

Will AI revolutionise medicine? We don’t know yet. I am also interested, however, in whether AI and its subsidiary fields, like knowledge representation and autonomous systems, offer significant new insights into human reasoning and decision making. The possibility that the decision sciences might themselves be revolutionised by the new wave of AI seems ripe for discussion.