Center for Nonlinear Studies
Wednesday, September 18, 2019
2:00 PM - 3:00 PM
CNLS Conference Room (TA-3, Bldg 1690)

Seminar

Dynamic Programming with Sparse Codes: Investigating a New Computational Role for Sparse Representations of Natural Image Sequences

Peter Loxley
University of New England in Australia

Abstract: Dynamic programming (DP) is a general algorithmic approach used in optimal control and Markov decision processes that balances the desire for low present costs against the undesirability of high future costs when choosing a sequence of controls to apply over time. Interest in this field has grown since Google DeepMind's algorithms beat humans and world-champion programs at Atari games, and at games such as chess, shogi, and Go. But why do these algorithms work so well? In many image-based tasks, the early-layer weights of trained deep neural networks often resemble the neural receptive-field profiles found in the mammalian visual system. From modelling efforts in the neuroscience and signal-processing communities, we know that this architecture generates efficient (low bit-rate) representations of natural images called sparse codes. In this work, I investigate the computational role of sparse codes by applying DP to solve the optimal control problem of tracking an object (a dragonfly) over a sequence of natural images. By comparing the speed of learning, memory capacity, interference, generalization, and fault tolerance of different codes, I will show that sparse codes offer some distinct computational advantages.
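The trade-off the abstract describes — weighing low present costs against high future costs — is what the Bellman backup in DP computes. The sketch below illustrates it with value iteration on a toy 1-D tracking problem (a point pursuing a fixed target on a short track); the track size, cost function, and discount factor are illustrative assumptions, not the speaker's actual formulation.

```python
import numpy as np

# Illustrative value-iteration sketch (toy setup, not the talk's model).
# A point sits on a 1-D track of N cells; each control moves it left,
# stays, or moves right. Stage cost = distance to target + small control cost.
N = 10
target = 7
gamma = 0.9              # discount applied to future costs
actions = [-1, 0, 1]     # move left, stay, move right
V = np.zeros(N)          # value function: minimal discounted future cost


def step(s, a):
    """Next state and stage cost for taking action a in state s."""
    s2 = min(max(s + a, 0), N - 1)           # clamp to the track
    cost = abs(s2 - target) + 0.1 * abs(a)   # tracking error + control cost
    return s2, cost


for _ in range(200):     # value-iteration sweeps until (near) convergence
    V_new = np.empty(N)
    for s in range(N):
        # Bellman backup: present stage cost + discounted future cost
        V_new[s] = min(c + gamma * V[s2] for s2, c in map(lambda a: step(s, a), actions))
    V = V_new

# Greedy policy: pick the control minimizing the same trade-off.
policy = [min(actions, key=lambda a: step(s, a)[1] + gamma * V[step(s, a)[0]])
          for s in range(N)]
```

Under this cost, the resulting policy moves toward the target from either side and stays put once it arrives, which is the qualitative behavior one would expect from the tracking task described in the abstract.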


Biography: Peter Loxley works in statistical physics and nonlinear dynamics (he was originally a theoretical condensed-matter physicist). His current interests include algorithms, neural coding and information theory, and probabilistic models in machine learning. He completed postdoctoral positions at the University of Sydney and at the CNLS. He is now at the University of New England (Australia), where he teaches mathematics and computer science. He is currently on sabbatical at UC Berkeley and the University of Manchester (UK).



Host: Information Science and Technology Institute (ISTI)