Computer Science Speaking Skills Talk

Tuesday, August 2, 2016 - 1:00pm to 2:00pm

Traffic21 Classroom 6501 Gates & Hillman Centers
A wide range of machine learning problems, including astronomical inference about galaxy clusters, natural image scene classification, parametric statistical inference, and prediction of public opinion, can be successfully tackled by learning a function on (samples from) distributions. One of the most effective classes of techniques for doing so is kernel methods. Applying these methods to large datasets, though, can be computationally challenging: typical methods take between quadratic and cubic time and storage in the number of input samples.

In this talk, we present methods for approximate embeddings into Euclidean spaces that approximate various classes of kernels between distributions, allowing learning with time and space linear in the number of inputs. We first present an improved analysis of the workhorse tool in this area, random Fourier features à la Rahimi and Recht: we show that of the two variants of this approach in common usage, one is strictly better. Then, after showing how to use these methods for learning on distributions via the maximum mean discrepancy, we give a new class of distribution embeddings that allows machine learning on large datasets using the total variation, Jensen-Shannon, and Hellinger distances, expanding the set of modeling options available in such problems. Theoretical and empirical results will be given throughout.

Based on joint work with Junier Oliva, Barnabas Poczos, and Jeff Schneider.

Presented in partial fulfillment of the CSD Speaking Skills Requirement.
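The two ingredients the abstract combines, random Fourier features and mean embeddings for the maximum mean discrepancy, can be sketched as follows. This is a minimal illustration for the Gaussian kernel only; the function names and all parameter values are illustrative assumptions, not details from the talk.

```python
import numpy as np

def random_fourier_features(X, n_features=512, gamma=1.0, rng=None):
    """Map samples X of shape (n, d) to features Z of shape (n, n_features)
    so that Z @ Z.T approximates the Gaussian kernel
    k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density,
    # which for this Gaussian kernel is N(0, 2 * gamma * I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    # Single-cosine variant: z(x) = sqrt(2/D) * cos(W^T x + b).
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def mmd_rff(X, Y, n_features=512, gamma=1.0, rng=0):
    """Approximate MMD between the sample sets X and Y as the distance
    between their mean feature embeddings. Passing the same integer seed
    for both calls reuses the same random frequencies, which is required."""
    Zx = random_fourier_features(X, n_features, gamma, rng)
    Zy = random_fourier_features(Y, n_features, gamma, rng)
    return np.linalg.norm(Zx.mean(axis=0) - Zy.mean(axis=0))
```

The single-cosine map used here is one of the two common variants the abstract alludes to; the other stacks paired cos and sin features without the random phase b. Once each distribution is summarized by the mean of its sample features, any linear-time learner on those fixed-length vectors amounts to kernel learning on distributions.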

For More Information, Contact:

Speaking Skills