Brain Seminar

— 1:00pm

Location:
Virtual Presentation (ET) - Remote Access via Zoom

Speaker:
MANEESH SAHANI, Director and Professor of Theoretical Neuroscience and Machine Learning, Gatsby Computational Neuroscience Unit, University College London
https://www.gatsby.ucl.ac.uk/~maneesh/

Recognition-parametrised learning: identifying latent probabilistic structure without generation.

Animals must learn how to recognise structure in the world from noisy and ambiguous sensory input without detailed supervision, but they have little need to generate samples replicating that sensory input. Similar needs arise in many applications of machine learning. However, the dominant approach to probabilistic unsupervised learning is based on explicitly generative models. I will argue that this represents a considerable inefficiency, which can be overcome by using a new framework of distributional families we call recognition-parametrised models (RPMs). I will describe the RPM and give examples of its use, including its application to causal analysis with latent confounders. Finally, I will speculate about its relevance to biological learning.

Joint work with William Walker, Hugo Soulat and Marcel Nonnenmacher.

Maneesh Sahani is Director and Professor of Theoretical Neuroscience and Machine Learning at the Gatsby Computational Neuroscience Unit at University College London (UCL). After graduating with a B.S. in physics from Caltech, he stayed to earn his Ph.D. in the Computation and Neural Systems program, supervised by Richard Andersen and John Hopfield. Following postdoctoral work at the Gatsby Unit and the University of California, San Francisco, he returned to the faculty at Gatsby in 2004 and was elected to a personal chair at UCL in 2013. His work spans the interface of machine learning and neuroscience, with particular emphasis on the types of computation achieved within the sensory and motor cortical systems. He has helped to pioneer analytic methods that seek to characterize and visualize the dynamical computational processes underlying the measured joint activity of populations of neurons. He has also worked on the link between the statistics of the environment and neural computation, machine-learning-based signal processing, and neural implementations of Bayesian and approximate inference.

Additional Information

Zoom Participation. See announcement.
