
Gatsby Computational Neuroscience Unit


News and Events

See the latest news and events from the Gatsby Unit.

LATEST NEWS

Gatsby papers accepted at NeurIPS 2020

7th October 2020

UCL joins ELLIS Network

14th September 2020

EVENTS

**Please note we are currently not holding any external seminars in person. Our external seminar series will resume online in October 2020.**

Events are subject to change, so for information about external seminars, including speaker updates, please contact Barry Fong (b.fong@ucl.ac.uk).

External seminars
25th November 2020
Le Song and Xinshi Chen
(Via Zoom)
Understanding Deep Architectures with Reasoning Layer
11th November 2020

Josh McDermott
(Via Zoom)
Successes and Failures of Neural Network Models of the Auditory System

4th November 2020

Alessandro Rudi
(Via Zoom)
Non-parametric Models for Non-negative Functions

14th October 2020 (3pm)

John Murray
(Via Zoom)
Modeling large-scale dynamics of human cortex

16th March 2020 (12 - 1pm)
Robert Guangyu Yang
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Understanding brains by building artificial networks

12th March 2020 (4pm)
Emtiyaz Khan
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
TBC

9th March 2020 (12 - 1pm)
Eric Nalisnick
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Building and Critiquing Models for Probabilistic Deep Learning

5th March 2020 (2pm)
Ahmed El Alaoui
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Reconstruction problems on amenable graphs

4th March 2020 (4pm)
Arnak Dalalyan
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
All-In-One Robust Estimator of the Gaussian Mean (arXiv:2002.01432)

26th February 2020 (4pm) **Cancelled**
Mihaela van der Schaar
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
TBC

24th February 2020 (12 - 1pm)
Yixin Wang
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
The Blessings of Multiple Causes

21st February 2020 (12 - 1pm)
Yingzhen Li
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Deep probabilistic modelling for reliable machine learning systems

22nd January 2020 (4pm)
Michael Bronstein
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Deep learning on graphs and manifolds: going beyond Euclidean data

15th January 2020 (Talk 2, **4pm**)
Wei Ji Ma
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Human planning in large state spaces

15th January 2020 (Talk 1, **1pm**)
Wei Ji Ma
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Growing up in Science: Wei Ji Ma's unofficial story

8th January 2020 (4pm)
Chand Chandrasekaran
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Single Trial Neural Circuit Dynamics Underlying Perceptual Decision-Making

18th December 2019 (4pm)
Irina Gaynanova
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Simultaneous Non-Gaussian Component Analysis (SING) for Data Integration in Neuroimaging

11th December 2019 (4pm)
David Freedman
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Neural Circuits of Cognition in Artificial and Biological Neural Networks

27th November 2019 (4pm)
Matthew Chalk
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Inferring the function performed by a recurrent neural network

20th November 2019 (4pm)
Benjamin Guedj
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
A walkthrough of advanced PAC-Bayes results

13th November 2019 (4pm)
Omri Barak
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Structure in the randomness of trained recurrent neural networks

23rd October 2019 (4pm)
Rajen Shah
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Low-priced lunch in conditional independence testing

2nd October 2019 (4pm)
Christian Walder
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
New Tricks for Estimating Gradients of Expectations

24th September 2019 (Tuesday, 4pm)
Gergely Neu
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
A unified view of entropy-regularized Markov decision processes

19th September 2019 (Thursday, 4pm)
Emtiyaz Khan
Ground Floor Seminar Room, 25 Howland Street, London, W1T 4JG
Learning-Algorithms from Bayesian Principles

Upcoming workshops/events

There are no upcoming events.

Past workshops/events

Recent developments on kernel methods

26th - 27th September 2019

Gatsby 21st Birthday

11th-13th July 2019

External Seminar Talk Details

25th November 2020 Le Song and Xinshi Chen

Title:
Understanding Deep Architectures with Reasoning Layer

Abstract:
Recently, there has been a surge of interest in combining deep learning models with reasoning in order to handle more sophisticated learning tasks. In many cases, a reasoning task can be solved by an iterative algorithm. This algorithm is often unrolled, and used as a specialized layer in the deep architecture, which can be trained end-to-end with other neural components. Although such hybrid deep architectures have led to many empirical successes, the theoretical foundation of such architectures, especially the interplay between algorithm layers and other neural layers, remains largely unexplored. In this paper, we take an initial step towards an understanding of such hybrid deep architectures by showing that properties of the algorithm layers, such as convergence, stability, and sensitivity, are intimately related to the approximation and generalization abilities of the end-to-end model. Furthermore, our analysis matches closely our experimental observations under various conditions, suggesting that our theory can provide useful guidelines for designing deep architectures with reasoning layers.
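
To make the idea of an algorithm layer concrete, here is a minimal sketch (illustrative only, not the authors' implementation), assuming a simple quadratic energy: k steps of gradient descent are unrolled into a differentiable PyTorch module that can be trained end-to-end alongside ordinary neural layers.

```python
# Illustrative sketch, not the authors' code: an iterative algorithm
# (k steps of gradient descent on an assumed quadratic energy) unrolled
# as a differentiable "reasoning layer" inside a deep architecture.
import torch
import torch.nn as nn

class UnrolledGDLayer(nn.Module):
    """Unrolls k gradient-descent steps on E(y) = 0.5 * ||Q y - x||^2.

    Every step is differentiable, so the surrounding network can be
    trained end-to-end through the algorithm.
    """
    def __init__(self, dim, k=10, step_size=0.1):
        super().__init__()
        self.Q = nn.Parameter(torch.eye(dim) + 0.01 * torch.randn(dim, dim))
        self.k = k
        self.step_size = step_size

    def forward(self, x):
        y = torch.zeros_like(x)
        for _ in range(self.k):
            residual = y @ self.Q.T - x                   # Q y - x, row-wise
            y = y - self.step_size * (residual @ self.Q)  # gradient step on E
        return y

# A hybrid architecture: neural encoder -> algorithm layer -> readout.
dim = 8
model = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                      UnrolledGDLayer(dim), nn.Linear(dim, 1))
out = model(torch.randn(4, dim))  # gradients flow through the unrolled steps
```

Properties of the unrolled algorithm, such as its convergence rate and stability to perturbations, are exactly the quantities the abstract relates to the approximation and generalization behavior of the end-to-end model.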

Bio:
Le Song is an Associate Professor in the College of Computing and an Associate Director of the Center for Machine Learning at the Georgia Institute of Technology. His principal research direction is machine learning, especially nonlinear models such as kernel methods and deep learning, and probabilistic graphical models. Before joining the Georgia Institute of Technology in 2011, he was a postdoc in the Department of Machine Learning at Carnegie Mellon University and a research scientist at Google. He is the recipient of an NSF CAREER Award (2014) and many best paper awards, including the NIPS'17 Materials Science Workshop Best Paper Award, the RecSys'16 Deep Learning Workshop Best Paper Award, the AISTATS'16 Best Student Paper Award, the IPDPS'15 Best Paper Award, the NIPS'13 Outstanding Paper Award, and the ICML'10 Best Paper Award. He has served as an area chair or senior program committee member for many leading machine learning and AI conferences, such as ICML, NIPS, AISTATS, AAAI, and IJCAI, and as an action editor for JMLR and IEEE TPAMI.

Xinshi Chen is a fourth-year PhD student in Machine Learning at Georgia Tech. She received her bachelor's and MPhil degrees in Mathematics from the Chinese University of Hong Kong. Her current research focuses on bridging deep learning and traditional algorithms. She also works on graphical models, structured prediction, and applications in computational biology and recommender systems. Xinshi has spent time at Oak Ridge National Laboratory, Ant Financial, and Facebook AI as a research intern. Her research is generously supported by a Google PhD Fellowship in Machine Learning. Her homepage: http://xinshi-chen.com/

11th November 2020 Josh McDermott

Title:
Successes and Failures of Neural Network Models of the Auditory System

Abstract:
Humans derive an enormous amount of information about the world from sound. This talk will describe our recent efforts to leverage contemporary neural networks to build models of these abilities and their instantiation in the brain. Such models have enabled a qualitative step forward in our ability to account for real-world auditory behavior and illuminate function within auditory cortex. But they also exhibit substantial discrepancies with human perceptual systems that we are currently trying to understand and eliminate.
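
As a rough illustration of the modeling approach (a toy sketch under assumed sizes and task, not the lab's actual models): a deep network is trained on a time-frequency representation of sound to perform an auditory task, and its behavior is then compared against human listeners.

```python
# Toy sketch only: a task-optimised network on a cochleagram-like input.
# Input sizes, architecture, and the 10-way task are assumptions here.
import torch
import torch.nn as nn

n_freq, n_time, n_classes = 64, 100, 10

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * (n_freq // 4) * (n_time // 4), n_classes),
)

# Random tensors stand in for real cochleagrams and task labels.
cochleagrams = torch.randn(8, 1, n_freq, n_time)
labels = torch.randint(0, n_classes, (8,))
loss = nn.functional.cross_entropy(model(cochleagrams), labels)
loss.backward()  # ordinary task-optimised training step (optimiser omitted)
```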

Bio:
Josh McDermott studies sound and hearing in the Department of Brain and Cognitive Sciences at MIT, where he is an Associate Professor and heads the Laboratory for Computational Audition. His research addresses human and machine audition using tools from experimental psychology, engineering, and neuroscience. McDermott obtained a BA in Brain and Cognitive Science from Harvard, an MPhil in Computational Neuroscience from University College London, a PhD in Brain and Cognitive Science from MIT, and postdoctoral training in psychoacoustics at the University of Minnesota and in computational neuroscience at NYU. He is the recipient of a James S. McDonnell Foundation Scholar Award, an NSF CAREER Award, a Troland Research Award, and the BCS Award For Excellence in Undergraduate Advising.

4th November 2020 Alessandro Rudi


Title:
Non-parametric Models for Non-negative Functions

Abstract:
Linear models have shown great effectiveness and flexibility in many fields such as machine learning, signal processing and statistics. They can represent rich spaces of functions while preserving the convexity of the optimization problems where they are used, and are simple to evaluate, differentiate and integrate. However, for modeling non-negative functions, which are crucial for unsupervised learning, density estimation, or non-parametric Bayesian methods, linear models are not applicable directly. Moreover, current state-of-the-art models like generalized linear models either lead to non-convex optimization problems, or cannot be easily integrated. In this paper we provide the first model for non-negative functions which benefits from the same good properties of linear models. In particular, we prove that it admits a representer theorem and provide an efficient dual formulation for convex problems. We study its representation power, showing that the resulting space of functions is strictly richer than that of generalized linear models. Finally we extend the model and the theoretical results to functions with outputs in convex cones. The paper is complemented by an experimental evaluation of the model showing its effectiveness in terms of formulation, algorithmic derivation and practical results on the problems of density estimation, regression with heteroscedastic errors, and multiple quantile regression.
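
A minimal sketch of a model in this spirit (a reading of the abstract, not the paper's code): write f(x) = phi(x)^T A phi(x), where phi is a kernel feature map and A is a positive semidefinite matrix. Then f(x) >= 0 for every x by construction, yet f is linear in the parameter A, so losses convex in f remain convex in A.

```python
# Minimal illustrative sketch with an assumed Gaussian kernel and anchor
# points: f(x) = phi(x)^T A phi(x) with A = B B^T (hence PSD) is
# non-negative everywhere while remaining linear in A.
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    """Gram matrix k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

rng = np.random.default_rng(0)
anchors = rng.uniform(-3, 3, size=(20, 1))  # points defining phi(x) = k(x, anchors)
B = rng.normal(size=(20, 5))
A = B @ B.T                                  # any B B^T is positive semidefinite

def f(x):
    """Evaluate f(x) = phi(x)^T A phi(x), guaranteed >= 0."""
    phi = gaussian_kernel(np.atleast_2d(x), anchors)
    return np.einsum('ij,jk,ik->i', phi, A, phi)

xs = np.linspace(-3, 3, 7)[:, None]
print(f(xs))  # every entry is non-negative by construction
```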

Bio:
Alessandro Rudi has been a researcher at INRIA and École Normale Supérieure, Paris, since 2017. He received his PhD in 2014 from the University of Genova, after being a visiting student at the Center for Biological and Computational Learning at the Massachusetts Institute of Technology.
In 2020 he was awarded an ERC Starting Grant with the goal of making machine learning a reliable and effective tool for science and engineering.
https://www.di.ens.fr/~rudi/

14th October 2020 (3pm) John Murray

Title:
Modeling large-scale dynamics of human cortex

Abstract:
The spatiotemporal dynamics of cortical networks are shaped by long-range interactions and local circuit physiology. In this talk I will present a framework for how computational modeling can integrate multiple modalities of brain imaging data, including noninvasive neuroimaging, anatomy, and transcriptomics, to inform the large-scale organization of human cortex. A key theme that has emerged is the importance of gradients of local circuit properties, e.g. reflecting microcircuit specialization along the cortical hierarchy, and their interplay with long-range connectivity in shaping functional activity.
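
The interplay of local gradients and long-range connectivity can be illustrated with a deliberately simple toy model (assumed equations and parameters, not the speaker's model): linear rate dynamics on a network whose local recurrent strength increases along a gradient across areas, coupled through a long-range connectivity matrix. Areas higher on the gradient decay more slowly, i.e. acquire longer intrinsic timescales.

```python
# Toy sketch only: dx/dt = -x + diag(w) x + g * C x, where the local
# recurrent weights w follow a gradient across areas (a stand-in for
# microcircuit specialization along the hierarchy) and C is long-range
# connectivity. All values are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_areas = 30
w = np.linspace(0.2, 0.9, n_areas)         # gradient of local recurrence
C = rng.random((n_areas, n_areas))
C /= C.sum(axis=1, keepdims=True)          # row-normalised long-range weights
np.fill_diagonal(C, 0.0)

g = 0.05                                   # global coupling strength
J = -np.eye(n_areas) + np.diag(w) + g * C  # effective connectivity (Jacobian)

# Local decay rate of area i is -(1 - w[i]): larger w means slower decay,
# i.e. a longer intrinsic timescale, qualitatively matching hierarchies
# of timescales reported across cortex.
x = rng.normal(size=n_areas)
dt = 0.01
for _ in range(2000):                      # Euler integration of dx/dt = J x
    x = x + dt * (J @ x)
print(np.linalg.eigvals(J).real.max())     # negative here, so dynamics are stable
```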

Bio:
John D. Murray is an Assistant Professor of Psychiatry, Neuroscience, and Physics at Yale University. He completed his undergraduate and graduate education at Yale (B.S. in Physics and Mathematics; Ph.D. in Physics), with thesis research in computational neuroscience advised by Xiao-Jing Wang. Following a postdoctoral appointment at New York University, he joined the faculty at Yale in 2015. He directs a computational neuroscience research group with primary interests in neural circuit dynamics and cognitive computations.
https://medicine.yale.edu/lab/murray/