Statistical Science Seminars

Usual time: Thursdays 1400 - 1500 

Location: Room 102, Department of Statistical Science, 1-19 Torrington Place (1st floor).

Some seminars are held at different locations and times. Please click on the abstract for further details.

31 August 2017 (Galton Lecture Theatre, 1-19 Torrington Place): Dr. Bob Durrant (University of Waikato)

Random Projections for Dimensionality Reduction

Linear dimensionality reduction is a key tool in the statistician's toolbox, used variously to make models simpler and more interpretable, to deal with cases when n < p (e.g. to enable model identifiability), or to reduce compute time or memory requirements for large-scale (high-dimensional, large-p) problems. In recent years, random projection ('RP'), that is, projecting a dataset onto a k-dimensional subspace ('k-flat') chosen uniformly at random from all such k-flats, has become a workhorse approach in the machine learning and data-mining fields, but it is still relatively unknown in other circles. In this talk I will review an elementary proof of the Johnson-Lindenstrauss lemma which, perhaps rather surprisingly, shows that (with high probability) RP approximately preserves the Euclidean geometry of projected data. This result has provided some theoretical grounds for using RP in a range of applications. I will also give a simple but novel extension which shows that, for data satisfying a mild regularity condition, simply sampling the features does nearly as well as RP at geometry preservation, while at the same time bringing a substantial speed-up in execution. Finally, I will briefly discuss some refinements of this final approach and present some preliminary experimental findings combining this with a pre-trained "deep" neural network on ImageNet data.
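
As a rough illustration of the idea (not code from the talk), the Python sketch below uses the common Gaussian construction of RP, strictly a scaled random linear map rather than an exactly uniform k-flat, and checks how well pairwise Euclidean distances survive both RP and the feature-sampling variant mentioned above; all sizes and data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 100, 10_000, 1_000        # n points, ambient dim p, target dim k
X = rng.standard_normal((n, p))     # synthetic high-dimensional data

# Gaussian random projection; the 1/sqrt(k) scaling makes squared
# distances approximately unbiased after projection.
R = rng.standard_normal((k, p)) / np.sqrt(k)
Y_rp = X @ R.T

# Feature sampling: keep k coordinates at random and rescale by sqrt(p/k).
idx = rng.choice(p, size=k, replace=False)
Y_fs = X[:, idx] * np.sqrt(p / k)

def pairwise_sq_dists(A):
    sq = (A ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2 * A @ A.T

mask = ~np.eye(n, dtype=bool)
D = pairwise_sq_dists(X)[mask]
for name, Y in [("random projection", Y_rp), ("feature sampling", Y_fs)]:
    ratio = pairwise_sq_dists(Y)[mask] / D
    print(f"{name}: distance ratios in [{ratio.min():.3f}, {ratio.max():.3f}]")

# The JL lemma guarantees that k = O(log(n) / eps^2) suffices for all
# ratios to lie in (1 - eps, 1 + eps) with high probability.
```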

21 September 2017: Dr. Andrew Titman (Lancaster University)

TBC

5 October 2017: Prof. Aapo Hyvarinen (University College London)

Nonlinear ICA using temporal structure: a principled framework for unsupervised deep learning

Unsupervised learning, in particular learning general nonlinear representations, is one of the deepest problems in machine learning. Estimating latent quantities in a generative model provides a principled framework, and has been successfully used in the linear case, e.g. with independent component analysis (ICA) and sparse coding. However, extending ICA to the nonlinear case has proven to be extremely difficult: a straightforward extension is unidentifiable, i.e. it is not possible to recover those latent components that actually generated the data. Here, we show that this problem can be solved by using temporal structure. We formulate two generative models in which the data is an arbitrary but invertible nonlinear transformation of time series (components) which are statistically independent of each other. Drawing from the theory of linear ICA, we formulate two distinct classes of temporal structure of the components which enable identification, i.e. recovery of the original independent components. We show that in both cases, the actual learning can be performed by ordinary neural network training where only the input is defined in an unconventional manner, making software implementations trivial. We can rigorously prove that after such training, the units in the last hidden layer will give the original independent components. [With Hiroshi Morioka; published at NIPS 2016 and AISTATS 2017.]
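
As a heavily simplified illustration of one of the two model classes (time-contrastive learning, the NIPS 2016 method), the Python sketch below trains an ordinary classifier to predict which time segment each observation came from and then reads off the last hidden layer; the segment structure, mixing nonlinearity, and network size are all invented for this example:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Simulate nonstationary sources: independent Gaussians whose variance
# changes from segment to segment.
n_seg, seg_len, n_src = 20, 200, 2
scales = rng.uniform(0.5, 3.0, size=(n_seg, n_src))
s = np.concatenate([sc * rng.standard_normal((seg_len, n_src)) for sc in scales])

# Observe an invertible nonlinear mixture of the sources (toy choice).
A = rng.standard_normal((n_src, n_src))
x = np.tanh(s @ A.T)

# Ordinary supervised training, with the "unconventionally defined" input:
# the label is simply the index of the time segment each point came from.
labels = np.repeat(np.arange(n_seg), seg_len)
clf = MLPClassifier(hidden_layer_sizes=(16, n_src), max_iter=1000, random_state=0)
clf.fit(x, labels)

def last_hidden(clf, X):
    # Recompute the hidden activations (ReLU is MLPClassifier's default).
    h = X
    for W, b in zip(clf.coefs_[:-1], clf.intercepts_[:-1]):
        h = np.maximum(h @ W + b, 0)
    return h

est = last_hidden(clf, x)
# TCL theory says these units recover componentwise nonlinear functions of
# the sources up to a linear transformation; a final linear ICA step is
# typically applied to undo that transformation.
```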

12 October 2017: Prof. Stein-Erik Fleten (Norwegian University of Science and Technology)

Structural Estimation of Switching Costs for Peaking Power Plants

We use structural estimation to determine the one-time costs associated with shutting down, restarting, and abandoning peaking power plants in the United States. The sample period covers 2001-2009. Switching costs are difficult to determine in practice and can vary substantially from plant to plant. The approach combines a nonparametric regression for capturing transitions in the exogenous state variable with a one-step nonlinear optimization for structural estimation. The data are well-suited to test the new method because the state variable is not described by any known stochastic process. From our estimates of switching (and maintenance) costs we can infer the costs that would be avoided if a peaking plant were taken out of service for a year. This so-called avoidable cost plays an important role in electricity capacity markets such as the Reliability Pricing Model in PJM. Our avoidable cost estimates are less than the default Avoidable Cost Rate (ACR) in PJM.
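
The toy Python sketch below only gestures at the two-step recipe described in the abstract and is not the authors' estimator: a Nadaraya-Watson regression stands in for the nonparametric transition estimate, and the switching costs are then found by maximizing the likelihood of a myopic logit choice model (the paper estimates a dynamic model); all data and functional forms are invented:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T = 500
z = 0.3 * np.cumsum(rng.standard_normal(T))        # placeholder exogenous state
a = (z + rng.standard_normal(T) > 0).astype(int)   # placeholder on/off status

# Step 1: nonparametric (Nadaraya-Watson) estimate of E[z_{t+1} | z_t],
# standing in for the estimated transition dynamics of the state variable.
def transition(z_now, z_obs=z[:-1], z_next=z[1:], h=0.3):
    w = np.exp(-0.5 * ((z_obs - z_now) / h) ** 2)
    return (w * z_next).sum() / w.sum()

print("E[z_{t+1} | z_t = 0] ~", transition(0.0))

# Step 2: one-step nonlinear optimization over theta = (shutdown cost,
# restart cost) in a myopic logit choice model of the operate/idle decision.
def neg_loglik(theta):
    c_shut, c_start = theta
    v_on = z[1:] - (a[:-1] == 0) * c_start   # operate (pay restart cost if off)
    v_off = -(a[:-1] == 1) * c_shut          # stay idle (pay shutdown cost if on)
    p_on = 1.0 / (1.0 + np.exp(np.clip(v_off - v_on, -30, 30)))
    p = np.where(a[1:] == 1, p_on, 1.0 - p_on)
    return -np.log(np.clip(p, 1e-12, 1.0)).sum()

res = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
print("estimated (shutdown, restart) costs:", res.x)
```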

19 October 2017: Prof. Bianca De Stavola (University College London)

TBC

23 November 2017: Dr. Mark Brewer (Biomathematics & Statistics Scotland)

TBC


Affiliated Seminars