
Computational Statistics Reading Group

A reading group within UCL Statistical Science, available to all interested staff and research students. **Meetings currently taking place online until further notice due to coronavirus.**

  • Meets every two weeks on Thursdays, 1-2pm, in room 102, 1-19 Torrington Place
  • A forum to learn and share ideas, get feedback on work in progress, and practise presenting in an informal environment.
  • Content can range from general areas of interest to specific research projects under development; both slides and chalk talks are welcome.
  • We target a minimum of 33% of talks being given by PhD students, and aim for this share to exceed 50%.
  • If you are interested in giving a talk please get in touch with any of the organisers!
23 Jan 2020: Intro to Bayesian variable selection (Xitong Liang & Jim Griffin)

An introduction to the variable selection problem and Bayesian approaches to it. The session will cover model set-up, some typical prior choices, and commonly used MCMC methods for computing posterior quantities of interest.
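As a minimal illustration of the kind of model set-up the session covers, the sketch below enumerates all models in a toy spike-and-slab linear regression and computes posterior model probabilities and marginal inclusion probabilities exactly. All the specifics here (known noise variance, Gaussian slab with variance `tau2`, uniform model prior, the simulated data) are illustrative assumptions, not the session's notation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, tau2, sigma2 = 50, 3, 4.0, 1.0        # illustrative choices
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 0.0, -1.5])      # second predictor is inactive
y = X @ beta_true + rng.standard_normal(n)

def log_marginal(gamma):
    """log m(y | gamma): after integrating out the slab coefficients
    beta_g ~ N(0, tau2*I), we have y ~ N(0, sigma2*I + tau2 * X_g X_g')."""
    Xg = X[:, list(gamma)]
    C = sigma2 * np.eye(n) + tau2 * Xg @ Xg.T
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y) + n * np.log(2 * np.pi))

# enumerate all 2^p models under a uniform model prior
models = [tuple(i for i in range(p) if (m >> i) & 1) for m in range(2 ** p)]
logm = np.array([log_marginal(g) for g in models])
post = np.exp(logm - logm.max())
post /= post.sum()

# marginal posterior inclusion probability of each predictor
pip = np.array([sum(post[j] for j, g in enumerate(models) if i in g)
                for i in range(p)])
print(dict(zip(models, np.round(post, 3))), np.round(pip, 3))
```

With only 2^p = 8 models, exact enumeration is feasible; the MCMC methods discussed in the session are what replace this enumeration when p is large.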

Hoeting, J. A., Madigan, D., Raftery, A. E., & Volinsky, C. T. (1999). Bayesian model averaging: a tutorial. Statistical Science, 14(4), 382-401.

06 Feb 2020: MCMC for variable selection (Samuel Livingstone)

This talk covers some typical issues that arise when performing Markov chain Monte Carlo for variable selection. It then presents some recently developed approaches to designing informed proposals within Metropolis-Hastings algorithms that are applicable to discrete spaces, with snapshots from the papers of Zanella (2019) and Power & Goldman (2019).
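To give a flavour of the locally balanced proposals of Zanella (2019), the sketch below runs a Metropolis-Hastings sampler on {0,1}^p that weights each single-bit flip by sqrt(pi(g')/pi(g)) (the g(t) = sqrt(t) balancing function). The target here is a toy product-of-Bernoullis distribution chosen only so that the correct answer is known; this is my simplification, not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 5
incl = np.array([0.9, 0.7, 0.5, 0.3, 0.1])   # toy target: independent bits

def log_target(g):
    return float(np.sum(np.where(g == 1, np.log(incl), np.log1p(-incl))))

def flip(g, i):
    h = g.copy(); h[i] ^= 1; return h

def balanced_weights(g):
    # locally balanced: weight flip i by g(t) = sqrt(pi(flip)/pi(g))
    lp = log_target(g)
    return np.exp(0.5 * np.array([log_target(flip(g, i)) - lp
                                  for i in range(p)]))

g = np.zeros(p, dtype=int)
counts = np.zeros(p)
n_iter = 20000
for _ in range(n_iter):
    w = balanced_weights(g)
    probs = w / w.sum()
    i = rng.choice(p, p=probs)               # informed choice of which bit to flip
    g_new = flip(g, i)
    w_new = balanced_weights(g_new)
    # MH correction: pi(g') q(g' -> g) / (pi(g) q(g -> g'))
    log_acc = (log_target(g_new) - log_target(g)
               + np.log(w_new[i] / w_new.sum()) - np.log(probs[i]))
    if np.log(rng.random()) < log_acc:
        g = g_new
    counts += g
print(counts / n_iter)                        # empirical inclusion frequencies
```

Because the proposal looks at the target in a neighbourhood before moving, it concentrates flips on the bits where change is most favourable, which is the basic idea behind informed proposals for variable selection.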

Zanella, G. (2019). Informed proposals for local MCMC in discrete spaces. Journal of the American Statistical Association, 1-27.
Power, S., & Goldman, J. V. (2019). Accelerated Sampling on Discrete Spaces with Non-Reversible Markov Processes. arXiv preprint arXiv:1912.04681.

20 Feb 2020: No meeting (reading week)

n/a

05 Mar 2020: Variable selection - beyond the linear model (Jim Griffin)

Some interesting extensions of the classical Bayesian variable selection problem.

02 Apr 2020: Quasi-Monte Carlo and Stein points (Francois-Xavier Briol)

A tutorial introduction to both the general idea of quasi-Monte Carlo and two recent contributions based on using Stein's method to devise elegant QMC methodology for dealing with intractable integrals.

Chen, W. Y., Mackey, L., Gorham, J., Briol, F. X., & Oates, C. (2018, July). Stein Points. In International Conference on Machine Learning (pp. 844-853).
Chen, W. Y., Barp, A., Briol, F. X., Gorham, J., Girolami, M., Mackey, L., & Oates, C. (2019, May). Stein Point Markov Chain Monte Carlo. In International Conference on Machine Learning (pp. 1011-1021).
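The basic quasi-Monte Carlo idea, before any Stein machinery, is to replace random samples with a deterministic low-discrepancy point set so that the integration error shrinks faster than the Monte Carlo rate. A minimal sketch using the base-2 van der Corput sequence on a toy integrand (my choice of example, not from the talk):

```python
import numpy as np

def van_der_corput(k, base=2):
    """k-th point of the base-b van der Corput low-discrepancy sequence."""
    q, bk = 0.0, 1.0 / base
    while k > 0:
        q += (k % base) * bk
        k //= base
        bk /= base
    return q

f = lambda x: x ** 2            # toy integrand on [0,1]; true integral is 1/3
N = 1024

qmc_points = np.array([van_der_corput(k) for k in range(N)])
qmc_est = f(qmc_points).mean()

rng = np.random.default_rng(4)
mc_est = f(rng.random(N)).mean()   # plain Monte Carlo for comparison

print(qmc_est, mc_est)
```

Stein points take this idea to distributions known only up to normalisation: instead of minimising discrepancy against the uniform measure, the points are chosen greedily to minimise a kernel Stein discrepancy against the target, which needs only the score function and so handles intractable normalising constants.
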

16 Apr 2020: An Introduction to Bayesian Additive Regression Trees (Alberto Caron)

A gentle introduction to BART - Bayesian additive regression trees.

Chipman, H. A., George, E. I., & McCulloch, R. E. (2010). BART: Bayesian additive regression trees. The Annals of Applied Statistics, 4(1), 266-298.
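BART models the regression function as a sum of many shallow trees, each kept deliberately weak by a regularising prior, with inference by backfitting MCMC. The sketch below illustrates only the sum-of-weak-trees representation, using greedy stagewise least-squares on residuals with shrinkage; it is emphatically not BART's Bayesian backfitting sampler, just a picture of why many shrunken stumps can fit a smooth function:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.sort(rng.uniform(-3, 3, n))
y = np.sin(x) + 0.1 * rng.standard_normal(n)

def fit_stump(x, r):
    """Best single-split (depth-1) regression tree for the residuals r."""
    best = None
    for s in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left = x <= s
        if left.sum() in (0, len(x)):
            continue
        pred = np.where(left, r[left].mean(), r[~left].mean())
        sse = ((r - pred) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, s, r[left].mean(), r[~left].mean())
    _, s, ml, mr = best
    return lambda z: np.where(z <= s, ml, mr)

m, shrink = 50, 0.3        # number of trees; shrinkage keeps each tree weak
fit = np.zeros(n)
for _ in range(m):
    tree = fit_stump(x, y - fit)   # each tree explains the current residual
    fit += shrink * tree(x)

rmse = np.sqrt(np.mean((fit - np.sin(x)) ** 2))
print(rmse)
```

In BART proper, each tree's structure and leaf values are resampled from their conditional posterior given the other trees' residual fit, so the output is a full posterior over the sum-of-trees function rather than a single greedy fit.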

30 Apr 2020: Bayesian Nonparametrics for Causal Inference (Alberto Caron)

Please join the online Blackboard Collaborate session at:
https://eu.bbcollab.com/guest/ADE59E7299D8075E31D0CED2ACB433D0

14 May 2020: Bayesian probabilistic numerical integration with tree-based models (Harrison Zhu, Imperial)

Bayesian quadrature (BQ) is a method for solving the numerical integration problems found at the heart of most machine learning and statistical methodologies. The standard approach to BQ is based on a Gaussian process (GP) approximation of the integrand. As a result, the standard BQ approach is inherently limited to cases where GP approximations can be computed efficiently, often ruling out high-dimensional or non-smooth target functions. We will introduce a new Bayesian numerical integration algorithm based on Bayesian Additive Regression Trees (BART) priors, which we call BART-Int. BART priors are easy to tune and automatically handle a mixture of discrete and continuous variables. We demonstrate that they also lend themselves naturally to a sequential design setting and that explicit convergence rates can be obtained in a variety of settings. The advantages and disadvantages of this new methodology are highlighted on different sets of benchmark test functions and on a survey design problem.

25 Jun 2020: Overview of Approximate Bayesian Computation (Jeremie Coullon)

With reference to this chapter: https://arxiv.org/abs/1802.09720
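The simplest ABC algorithm, rejection ABC, keeps prior draws whose simulated data lie close to the observed data in some summary statistic. The sketch below applies it to a conjugate Gaussian toy problem (my choice, so the exact posterior is available for comparison); the prior, tolerance, and summary statistic are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs, theta_true = 50, 1.5
y_obs = rng.normal(theta_true, 1.0, n_obs)
s_obs = y_obs.mean()                     # summary statistic: sample mean

# rejection ABC: keep prior draws whose simulated summary lands near s_obs
N, eps = 200_000, 0.05
theta = rng.normal(0.0, np.sqrt(10.0), N)            # prior N(0, 10)
# the sample mean of n_obs N(theta, 1) draws is N(theta, 1/n_obs),
# so we can simulate the summary directly instead of full datasets
s_sim = rng.normal(theta, 1.0 / np.sqrt(n_obs))
accepted = theta[np.abs(s_sim - s_obs) < eps]

# exact conjugate posterior N(m, v) for comparison
v = 1.0 / (1.0 / 10.0 + n_obs)
m = v * n_obs * s_obs
print(len(accepted), accepted.mean(), m)
```

As the tolerance eps shrinks (and the summary is sufficient, as here), the accepted draws approach the true posterior; in realistic ABC problems the likelihood cannot be evaluated at all and only forward simulation is available, which is the setting the chapter surveys.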

Please join the online Blackboard Collaborate session at:
https://eu.bbcollab.com/guest/80e49e33e28c472087031282d5a6820d

You can join up to 15 minutes before the session.