
Department Seminar Series: Richard Sutton, Ph.D., Mind and Data: Learning to Predict Long-term Consequences Efficiently

Tuesday, January 21, 2014
12:00 AM
3725 BBB (Bob and Betty Beyster Building)

Abstract: For some time now I have been exploring the idea that Artificial Intelligence can be viewed as a Big Data problem in the sense that it involves continually processing large amounts of sensorimotor data in real time, and that what is learned from the data is usefully characterized as predictions about future data. This perspective is appealing because it reduces the abstract ideas of knowledge and truth to the clearer ideas of prediction and predictive accuracy, and because it enables learning from data without human intervention. Nevertheless, it is a radical idea, and it is not immediately clear how to make progress pursuing it.

A good example of simple predictive knowledge is that people and other animals continually make and learn many predictions about their sensory input stream, a phenomenon called “nexting” and “Pavlovian conditioning” by psychologists. In my laboratory we have recently built a robot capable of nexting: every tenth of a second it makes and learns 6000 long-term predictions about its sensors, each a function of 6000 sensory features. To do this is computationally challenging and taxes the abilities of modern laptop computers. I argue that it also strongly constrains the learning algorithms: linear computational complexity is critical for scaling to large numbers of features, and temporal-difference learning is critical to handling long-term predictions efficiently. This, then, is the strategy we pursue for making progress on the Big Data view of AI: we focus on the search for the few special algorithms that can meet the demanding computational constraints of learning long-term predictions efficiently.
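To make the linear-complexity, temporal-difference idea concrete, the following Python fragment is a minimal sketch of one long-term ("nexting") prediction learned by TD(λ) with linear function approximation, updated in time proportional to the number of features. The feature count echoes the abstract, but the discount, trace decay, step size, and function names are illustrative assumptions, not the settings or algorithm used on the robot described in the talk.

```python
import numpy as np

# Minimal sketch: one long-term prediction learned by linear TD(lambda).
# All numeric settings below are assumptions for illustration only.

n_features = 6000          # number of sensory features, as in the abstract
gamma = 0.9875             # discount giving a multi-second horizon at 10 Hz (assumed)
lam = 0.9                  # eligibility-trace decay (assumed)
alpha = 0.01               # step size (assumed)

w = np.zeros(n_features)   # learned weights; prediction = w . x
z = np.zeros(n_features)   # eligibility trace

def td_step(x, signal, x_next):
    """One TD(lambda) update for a prediction of a sensory signal.

    x, x_next : feature vectors for the current and next time step
    signal    : the sensory value being predicted at the next step
    Cost is O(n_features), so thousands of such predictions can be
    updated every tenth of a second.
    """
    global w, z
    delta = signal + gamma * np.dot(w, x_next) - np.dot(w, x)  # TD error
    z = gamma * lam * z + x                                    # accumulate trace
    w = w + alpha * delta * z                                  # linear-time update
    return np.dot(w, x_next)                                   # current prediction
```

In this sketch each prediction keeps its own weight and trace vectors, so running 6000 of them is 6000 independent linear-time updates per time step, which is what makes the per-feature (linear) complexity of TD methods essential at this scale.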

Speaker: