UCL Psychology and Language Sciences


Our project so far: Measuring and Evaluating Impact Thresholds (2023/24)

Background & Aim 

Last year, we examined how weather scientists determine the overall severity of a weather event from numerical impact information and found that they use a compensatory strategy. However, we also found considerable heterogeneity between them, both in the weights they assigned to the importance of six impacts for overall severity and in their thresholds for assigning particular impact categories. This year’s project therefore aimed to reduce that heterogeneity.
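The compensatory strategy mentioned above can be illustrated with a minimal sketch: overall severity is computed as a weighted average of the impact scores, so a high score on one impact can offset a low score on another. The scores and weights below are purely illustrative, not the study’s data.

```python
import numpy as np

# Illustrative 1-5 scores on six impacts (hypothetical values).
impacts = np.array([3.0, 1.0, 4.0, 2.0, 5.0, 2.0])

# One hypothetical forecaster's importance weights (sum to 1).
weights = np.array([0.3, 0.1, 0.2, 0.1, 0.2, 0.1])

# Compensatory strategy: a weighted average, so a high score on one
# impact can compensate for a low score on another.
overall = float(impacts @ weights)
print(overall)  # → 3.2
```

Heterogeneity arises because different forecasters hold different `weights` vectors, so the same impact profile can yield different overall severities.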


  • A between-participant design was used to examine the effect of Label (providing standard threshold classification labels next to numerical impact values) on reducing heterogeneity: half of the weather scientists received label information and half did not.

  • Label information was derived from nine online focus discussions conducted with forecasters and disaster management officers from four countries. 

Online focus discussions

Key findings

  • Analyses of more than 220 weather forecasters from four Southeast Asian countries replicated last year’s finding that a compensatory strategy is used.
  • Heterogeneity in weather scientists’ weighting of the importance of the six impacts was significantly reduced among participants who received labels.


Labels provide an effective way to reduce inter-forecaster heterogeneity. Operational forecasters should engage in collaborative discussions to ensure the consistency of warnings issued across time. 

[Table: standard deviations of random-effect terms in the No-label and Label conditions]
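The comparison the table summarises can be sketched in a few lines: if heterogeneity is captured by the between-forecaster standard deviation of importance weights, the Label condition should show the smaller SD. The data below are simulated for illustration only; the group sizes and spreads are assumptions, not the study’s values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-forecaster importance weights for one impact
# (simulated; not the study's data).
no_label = rng.normal(loc=0.5, scale=0.15, size=110)  # wider spread assumed
label = rng.normal(loc=0.5, scale=0.05, size=110)     # narrower spread assumed

# Between-forecaster SD in each condition:
# a smaller SD indicates less heterogeneity.
sd_no_label = no_label.std(ddof=1)
sd_label = label.std(ddof=1)
print(f"No-label SD: {sd_no_label:.3f}, Label SD: {sd_label:.3f}")
```

In the actual analyses these quantities correspond to the standard deviations of random-effect terms from a mixed-effects model, rather than raw sample SDs.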
