Why and When Can Deep – but Not Shallow – Networks Avoid the Curse of Dimensionality: Theoretical Results
Tomaso Poggio - MIT, USA
Session chair: Michael Lees

Abstract: In recent years, by exploiting machine learning, in which computers learn to perform tasks from sets of training examples, artificial-intelligence researchers have built impressive systems. Two of my former postdocs, Demis Hassabis and Amnon Shashua, are behind the two main success stories of AI so far: AlphaGo beating the best human players at Go, and Mobileye leading the whole automotive industry towards vision-based autonomous driving. There is, however, little theory explaining why deep networks work so well. In this talk I will review an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. I will discuss the implications of a few key theorems, together with open problems and conjectures. I will also sketch the vision of the NSF-funded, MIT-based Center for Brains, Minds and Machines, which strives to make progress on the science of intelligence by combining machine learning and computer science with neuroscience and cognitive science.
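To give the flavor of the exponential gap mentioned in the abstract, here is an informal sketch of the kind of bound involved (precise function classes and constants are in the speaker's review article of the same title). Approximating a generic function of $d$ variables with smoothness $m$ to accuracy $\epsilon$ requires a shallow (one-hidden-layer) network of size

    $N_{\text{shallow}} = O\!\left(\epsilon^{-d/m}\right)$,

which is exponential in $d$: the curse of dimensionality. If instead the target function is compositional, for example a binary tree of two-variable constituents such as

    $f(x_1,\dots,x_4) = h_2\big(h_{11}(x_1,x_2),\, h_{12}(x_3,x_4)\big)$,

then a deep network whose architecture mirrors that tree needs only

    $N_{\text{deep}} = O\!\left((d-1)\,\epsilon^{-2/m}\right)$

units, i.e. linear rather than exponential in $d$; this is the sense in which depth, not weight sharing, yields the exponential advantage.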

http://cbcl.mit.edu/people/poggio/poggio-new.htm