Center for Nonlinear Studies
Thursday, March 18, 2021
11:00 AM - 12:00 PM
WebEx

Seminar

OPTML Seminar: Invertible Generalized Synchronization: A General Principle for Dynamical Learning in Neural Networks

Zhixin Lu
University of Pennsylvania

The human brain is a complex, nonlinear dynamical system that can swiftly learn various dynamical tasks from exemplary sensory input. Similarly, a reservoir computer (RC), a type of recurrent neural network, can be trained on historical data from a dynamical system to predict its future, even without knowing the underlying model. Could the human brain and the RC share a common learning mechanism? Can we build artificial learning systems that emulate human learning ability? To shed light on these questions, I propose a universal, biologically plausible learning principle: invertible generalized synchronization (IGS). With IGS, neural networks such as RCs can learn complex dynamics in a model-free manner through attractor embedding. IGS also supports many other human-like learning functions. For example, I show that a single neural network can simultaneously learn multiple dynamical attractors: periodic, quasi-periodic, and chaotic. In a manner reminiscent of human cognitive functions, the post-learning neural network can switch between learned tasks either autonomously or when induced by external cues. By leveraging IGS, I also demonstrate that a neural network can infer the values of unmeasured dynamical variables, perform dynamical source separation, and even infer unseen dynamical bifurcations. IGS is general enough to apply to many physical devices beyond traditional neural networks; for example, I demonstrate that a tank of water can use IGS to learn to separate chaotic signals from different sources. Together, these results provide a powerful mechanism by which dynamical learning systems can acquire many human-like learning abilities, allowing for the principled study and precise design of dynamical-system-based AI.
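For readers unfamiliar with reservoir computing, the sketch below illustrates the model-free prediction setup the abstract refers to: a fixed random recurrent network is driven by data from a dynamical system, a linear readout is fit by ridge regression, and the trained network is then run in closed loop to forecast autonomously. This is a generic echo-state-network demonstration, not the speaker's implementation; the Lorenz-63 example and all parameter values are assumptions chosen for illustration.

# A minimal reservoir-computer (echo state network) sketch in NumPy.
# Illustrative only: the Lorenz example and parameter values are
# assumptions, not details from the talk.
import numpy as np

rng = np.random.default_rng(0)

# --- Training data: a Lorenz-63 trajectory (simple Euler steps) ---
def lorenz_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

T = 5000
u = np.empty((T, 3))
u[0] = [1.0, 1.0, 1.0]
for t in range(T - 1):
    u[t + 1] = lorenz_step(u[t])

# --- Reservoir: fixed random recurrent and input weights ---
N = 500                                   # reservoir size
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9
Win = rng.uniform(-0.5, 0.5, size=(N, 3))

# --- Listening phase: drive the reservoir with the signal ---
r = np.zeros((T, N))
for t in range(T - 1):
    r[t + 1] = np.tanh(W @ r[t] + Win @ u[t])

# --- Train only the linear readout, by ridge regression ---
# r[t] was computed from u[t-1], so its target output is u[t].
washout, lam = 200, 1e-6
R, U = r[washout:], u[washout:]
Wout = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ U).T

# --- Closed-loop (autonomous) prediction: feed outputs back in ---
r_now, u_now = r[-1], u[-1]
pred = []
for _ in range(500):
    r_now = np.tanh(W @ r_now + Win @ u_now)
    u_now = Wout @ r_now
    pred.append(u_now)
print(np.array(pred)[:5])   # first few predicted Lorenz states

Roughly speaking, the driven reservoir synchronizes to the input system (generalized synchronization), and training the readout so that the reservoir can reproduce its own input in closed loop is what makes that synchronization invertible, embedding the learned attractor in the network's state space.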

Host: Anatoly Zlotnik