
UCL Centre for Artificial Intelligence


AI Centre Seminar Series: Bridging Human and AI Beliefs

15 November 2023, 10:00 am–11:00 am


Towards Bridging the Gap Between Human and AI Beliefs: Gaussian Process Probes and Pre-trained Gaussian Processes

This event is free.

Event Information

Open to

All

Availability

Yes

Cost

Free

Organiser

Olive Ma

Location

Function Space
First Floor
90 High Holborn
London
WC1V 6LJ
United Kingdom

Artificial intelligence (AI) models and humans operate differently. Humans tend to think in abstract terms, while AI models rely on numerical computations. Despite this disparity, both humans and AI models can express rational beliefs in the form of uncertainty judgments, known as Bayesian beliefs. For the responsible and effective use of AI, it is crucial to understand and improve the alignment between the quantitative Bayesian beliefs of AI models and the abstract beliefs of humans.
 
This talk will explore two avenues of research aimed at bridging this gap: 

  • Gaussian Process Probes (GPP): We introduce GPP, a new interpretability tool that lets us probe and measure beliefs about concepts represented by AI models. GPP accurately quantifies both epistemic uncertainty (how confident the probe is) and aleatory uncertainty (how fuzzy the concepts are to the model), and it can detect out-of-distribution data using those uncertainty measures (https://arxiv.org/abs/2305.18213). A toy sketch of the idea follows this list.
  • Pre-trained Gaussian Processes: We propose transfer learning methods to pre-train Gaussian processes so that they align more closely with expert beliefs about functions. We demonstrate that pre-trained Gaussian processes improve the accuracy of posterior predictions and the performance of Bayesian optimization methods (https://arxiv.org/abs/2109.08215, https://arxiv.org/abs/2309.16597). A second sketch below illustrates the pre-training idea.
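
The following Python sketch is a loose, hypothetical illustration of the probing idea, not the GPP method from the paper: it treats scikit-learn's GaussianProcessRegressor as a probabilistic probe over made-up "activations", reading the posterior standard deviation as epistemic uncertainty, a predictive mean near 0.5 as concept fuzziness, and elevated posterior standard deviation on far-away inputs as an out-of-distribution signal. All data, dimensions, and hyperparameters are invented.

```python
# Toy sketch of a GP-based concept probe (NOT the GPP method from the paper;
# a synthetic illustration only). We pretend a frozen model's activations are
# random features and probe a synthetic binary "concept".
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical activations: 200 examples, 16-dimensional representation.
acts = rng.normal(size=(200, 16))
# Synthetic concept label: a noisy linear function of the activations.
w = rng.normal(size=16)
labels = (acts @ w + 0.5 * rng.normal(size=200) > 0).astype(float)

# A GP regressor from activations to labels acts as a probabilistic probe.
probe = GaussianProcessRegressor(kernel=RBF(length_scale=4.0), alpha=1e-2)
probe.fit(acts[:150], labels[:150])

mean, std = probe.predict(acts[150:], return_std=True)
# Posterior std ~ epistemic uncertainty (how confident the probe is);
# a predictive mean near 0.5 ~ aleatory fuzziness of the concept.
print("epistemic (posterior std):", std.mean().round(3))
print("fuzziness (small |mean - 0.5| => fuzzy):", np.abs(mean - 0.5).mean().round(3))

# Out-of-distribution inputs sit far from the training activations, so the
# posterior std rises: the uncertainty measure doubles as an OOD score.
ood = rng.normal(loc=6.0, size=(50, 16))
_, ood_std = probe.predict(ood, return_std=True)
print("OOD posterior std:", ood_std.mean().round(3))
```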
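
A minimal, similarly hypothetical stand-in for the pre-training idea (not the implementation from the papers): fit the kernel's length scale by marginal likelihood on a few related tasks, average it, then freeze the resulting kernel for a new task so the posterior reflects beliefs transferred from those tasks. The related_task function and all constants here are assumptions for illustration.

```python
# Toy sketch of "pre-training" a GP on related tasks, then reusing the
# learned hyperparameters on a new task (a crude stand-in for the papers'
# transfer learning methods, not their actual algorithms).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def related_task(x, shift):
    # A family of related functions sharing one characteristic length scale.
    return np.sin(3.0 * x + shift).ravel()

# "Pre-training": fit the RBF length scale by marginal likelihood on a few
# related tasks and average the learned values.
length_scales = []
for shift in (0.0, 0.7, 1.4):
    x = rng.uniform(0, 5, size=(40, 1))
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    gp.fit(x, related_task(x, shift))
    length_scales.append(gp.kernel_.length_scale)
pretrained = RBF(length_scale=float(np.mean(length_scales)))

# Target task with few observations: freeze the pre-trained kernel
# (optimizer=None), so the posterior encodes beliefs from the related tasks.
x_new = rng.uniform(0, 5, size=(5, 1))
gp_new = GaussianProcessRegressor(kernel=pretrained, optimizer=None)
gp_new.fit(x_new, related_task(x_new, 2.1))
mean, std = gp_new.predict(np.linspace(0, 5, 7).reshape(-1, 1), return_std=True)
print("posterior mean:", mean.round(2))
print("posterior std:", std.round(2))
```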

By understanding and aligning AI beliefs with human beliefs, we can pave the way for more responsible and effective use of AI models as they become increasingly prevalent in the real world.

About the Speaker

Zi Wang

Senior Research Scientist at Google DeepMind

Zi Wang is a senior research scientist at Google DeepMind in Cambridge, Massachusetts. Her research is motivated by intelligent decision making and currently focuses on understanding and improving the alignment between subjective human judgements and the Bayesian beliefs embedded in AI systems. Zi has also worked extensively on Bayesian optimization, active learning, Gaussian processes, and robot learning and planning. She completed her Ph.D. in Computer Science at MIT, advised by Leslie Kaelbling and Tomás Lozano-Pérez, and was recognized as a Rising Star in EECS, an RSS Pioneer, and an MIT Graduate Woman of Excellence.