Provable Bounds for Machine Learning
Monday, December 2, 2013, 1:30 PM to 2:30 PM PST
Many tasks in machine learning (especially unsupervised learning) are provably intractable: NP-complete or worse. Researchers have developed heuristic algorithms to try to solve these tasks in practice. In most cases, these algorithms are heuristics with no provable guarantees on their running time or on the quality of the solutions they return. Can we change this state of affairs? This talk will suggest that the answer is yes, and describe some of our recent work.

(a) A new algorithm for learning topic models that provably works under some reasonable assumptions and in practice is up to 50 times faster than existing software like Mallet. (ICML '13)

(b) A new algorithm with provable guarantees that learns a class of deep nets. We rely on the generative view of deep nets implicit in the work of Hinton and others. Our generative model is an n-node multilayer neural net that has degree at most n^γ for some γ < 1, and each edge has a random edge weight in [-1, 1]. Our algorithm learns almost all networks in this class in polynomial time. We also show that each layer of our randomly generated neural net is a denoising autoencoder (a central object in deep learning).

Sponsoring Department: Computer Science and Engineering (CSE), www.cse.umich.edu
Lecture Series: CSE Distinguished Lecture Series
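To make the generative model in (b) concrete, here is a minimal sketch of sampling one such random network: each node connects to at most n^γ nodes in the layer below, with weights drawn uniformly from [-1, 1]. The function name, the choice of uniform-at-random neighbor selection, and the illustrative sizes are assumptions for illustration, not details from the talk.

```python
import numpy as np

def random_sparse_layer(n, gamma, rng):
    """One layer of an n-node random net: each output node connects to
    int(n**gamma) input nodes chosen at random (an assumption for this
    sketch), with edge weights drawn uniformly from [-1, 1]."""
    degree = max(1, int(n ** gamma))
    W = np.zeros((n, n))
    for i in range(n):
        # pick this node's neighbors in the layer below, without repeats
        neighbors = rng.choice(n, size=degree, replace=False)
        W[i, neighbors] = rng.uniform(-1.0, 1.0, size=degree)
    return W

rng = np.random.default_rng(0)
n, gamma, depth = 100, 0.5, 3   # illustrative sizes, not from the talk
layers = [random_sparse_layer(n, gamma, rng) for _ in range(depth)]
```

With n = 100 and γ = 0.5, every node here has degree exactly 10 = n^γ; the abstract's "degree at most n^γ" admits sparser nodes as well.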
Professor Sanjeev Arora
Sanjeev Arora is the Charles C. Fitzmorris Professor of Computer Science at Princeton University.
If you've never used Adobe Connect, get a quick overview: http://www.adobe.com/products/adobeconnect.html