
Institute for Mathematical and Statistical Sciences


IMSS Annual Lecture Workshop: Evaluating Forecasts and Training Forecast Models

09 June 2025, 2:00 pm–5:00 pm


Prof Thordis Thorarinsdottir (University of Oslo) gives an afternoon workshop prior to the second UCL IMSS Annual Lecture.

Event Information

Open to

All

Availability

Yes

Cost

£25.00

Organiser

UCL IMSS

Location

Ground floor room G01
66-72 Gower St
London
WC1E 6EA

This workshop will be led by Prof Thordis Thorarinsdottir (University of Oslo), with full details below.

In forecast evaluation, a scoring rule provides an evaluation metric for probabilistic predictions or forecasts: it assigns a numerical score to a forecast by comparing the forecast with the realised observation. A scoring rule is called proper if the score is optimised in expectation when the true data distribution is issued as the forecast. This property is considered a necessary condition for decision-theoretically principled forecast evaluation.

The class of proper scoring rules is large and diverse, and different proper scoring rules may evaluate different aspects of a forecast. For example, it has been argued that the aim of probabilistic forecasting should be to maximise the sharpness of the forecast subject to calibration: the predictive distribution should be statistically compatible with the observations while, at the same time, providing as much information about the observation as possible. Different proper scoring rules assess these properties to different degrees.

Conversely, if we want a forecast to possess certain properties that are well measured by a particular proper scoring rule, it is natural to use that same scoring rule as a loss function when estimating the forecast. This line of thinking is increasingly used in machine learning, in particular in applications such as meteorology, where there is a long tradition of forecast evaluation with proper scoring rules.

The workshop will cover the foundations of forecast evaluation with a focus on proper scoring rules and related evaluation metrics, discuss some recent developments and, in particular, explore connections to the training of machine learning algorithms.
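As an illustration (a minimal sketch, not part of the workshop materials; the practical itself uses R), two widely used proper scoring rules for a Gaussian predictive distribution, the logarithmic score and the continuous ranked probability score (CRPS), can be written in Python. Both are negatively oriented, so lower scores are better; the closed-form CRPS expression for the normal distribution is standard.

```python
import math

def log_score(mu, sigma, y):
    """Logarithmic score: negative log density of observation y
    under the Gaussian forecast N(mu, sigma^2). Lower is better."""
    z = (y - mu) / sigma
    return 0.5 * z**2 + math.log(sigma) + 0.5 * math.log(2 * math.pi)

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of observation y under the Gaussian forecast
    N(mu, sigma^2). Lower is better."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z**2) / math.sqrt(2 * math.pi)   # standard normal density at z
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))           # standard normal CDF at z
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# A sharp, well-calibrated forecast scores better than a needlessly wide one
# when the observation lands at the forecast mean:
print(crps_normal(0.0, 1.0, 0.0))  # smaller than crps_normal(0.0, 2.0, 0.0)
```

Because both rules are proper, averaging either score over many forecast cases rewards issuing the true predictive distribution in expectation, and the same functions can serve directly as loss functions when fitting a forecast model.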

Tentative schedule (14:00-17:00)

14:00-14:45: A first lecture on scoring rules and forecast evaluation
14:45-15:00: Break 
15:00-16:00: A one-hour practical session in which attendees try out examples of using scoring rules; a laptop with R is required (attendees may bring their own data and predictions if they wish)
16:00-16:15: Break  
16:15-17:00: A second lecture on new developments and connections to training of machine learning algorithms

Ticketing & Pricing

Please book your place via the UCL Online Store using the 'Book Now' button on this page, as places on the workshop are limited.

The workshop is free for UCL staff/students or £25.00 for external attendees.

Concessions

Free admission for UCL staff/students