Center for Nonlinear Studies
Wednesday, March 28, 2018
3:00 PM - 4:00 PM
CNLS Conference Room (TA-3, Bldg 1690)

Seminar

Effective Learning and Control in Biologically Realistic Spiking Neural Network Simulations

Dr. Patrick Taylor
Missouri University of Science and Technology

Many varieties of neural networks excel in AI and machine learning applications, yet some classes of problems remain difficult for them, particularly rapidly time-varying stimuli such as speech or sensory processing for control. Learning rules are well developed for feed-forward, deep, convolutional, and simple recurrent networks, which perform well on relatively static or step-wise problems such as image recognition or the game of Go. Yet, despite much desire to relate deep convolutional networks to brain function, they are almost entirely biologically unrealistic and unrepresentative of brain activity and processing. On the other hand, learning rules for naturally recurrent spiking neural networks have proven exceptionally difficult to develop, even though these models are much more representative of biological function in brains. Though a few unsupervised learning rules for spiking networks have shown success on tasks such as clustering or dimensionality reduction, supervised and reinforcement learning rules remain grossly underdeveloped and largely unsuccessful. Interestingly, if input rates vary rapidly (around the time scale of the learning window), spike-timing methods can, at least in theory, be distinctly more computationally powerful than rate-based or sigmoidal neurons. We take a hybrid approach, synthesizing unsupervised learning rules that rely only on cell-local information with global feedback, to produce a form of general reinforcement learning with embedded clustering. Combining a global reinforcement signal with spike-timing-dependent plasticity better mimics biological processes, and may outperform existing learning systems on some types of temporally rich processing or control tasks, or in the ability to generalize across problem domains.
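As a rough illustration of the asymmetric learning window that spike-timing methods rely on, the sketch below implements the standard pair-based STDP window: pre-before-post spike pairs potentiate a synapse, post-before-pre pairs depress it, each with exponential decay. All amplitudes and time constants here are illustrative assumptions, not values from the talk.

```python
import numpy as np

def stdp_window(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Asymmetric Hebbian STDP window.

    dt = t_post - t_pre, in ms. dt > 0 (pre fires before post) gives
    potentiation (LTP); dt < 0 gives depression (LTD). Parameter values
    are illustrative placeholders, not taken from the seminar.
    """
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))
```

Because learning depends only on relative spike times at a single synapse, the rule is cell-local, which is what lets a separate global signal modulate it.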
Rather than using explicit state or action learning, a basic spike-timing-dependent plasticity (STDP) rule, which implements a kind of asymmetric Hebbian "fire together, wire together" principle, can be modified to perform similarly to temporal-difference (TD) learning, implicitly, via an associative rule with feedback. Notably, these neurons use only cell-local information together with uniform global feedback. They are among the most biologically realistic models of cell-level learning, and they mimic biological data on long-term potentiation (LTP) and depression (LTD) at synapses, bulk neurotransmission via dopamine, and the ionic eligibility traces necessary to implement such reinforcement learning.
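A minimal sketch of this combination, under assumed constants and spike statistics (none of which come from the talk): STDP coincidences accumulate in a decaying eligibility trace rather than changing the weight directly, and a sparse global scalar reward, standing in for bulk dopamine, gates the actual weight update.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rstdp(n_steps=200, tau_e=50.0, lr=0.5, dt=1.0):
    """Toy reward-modulated STDP on a single synapse.

    de/dt = -e/tau_e + STDP coincidences,   dw/dt = lr * reward(t) * e(t).
    Spike probabilities, time constants, and the reward schedule are
    illustrative assumptions for demonstration only.
    """
    w, e = 0.5, 0.0
    pre_trace, post_trace = 0.0, 0.0        # low-pass filtered spike trains
    tau_pre = tau_post = 20.0
    for t in range(n_steps):
        pre = rng.random() < 0.1                   # Poisson-like pre spike
        post = rng.random() < (0.05 + 0.4 * pre)   # post correlated with pre
        pre_trace += -pre_trace * dt / tau_pre + pre
        post_trace += -post_trace * dt / tau_post + post
        # coincidences feed the eligibility trace, not the weight itself
        e += -e * dt / tau_e + (pre_trace * post - post_trace * pre)
        reward = 1.0 if t % 50 == 49 else 0.0      # sparse global feedback
        w += lr * reward * e                       # dopamine-gated update
    return float(w)
```

The key point the abstract makes is visible in the loop: every quantity the synapse uses is local to it except the single scalar `reward`, broadcast uniformly to all synapses.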