Tuesday, July 23, 2019
10:00 AM - 12:00 PM
TA-3-1698-A103 MSL Auditorium

Seminar

Interactive Tutorial: The PyTorch Library for Deep Learning

Arvind Mohan (CCS-2) & Nicholas Lubbers (CCS-3)

Have you been interested in training and using Deep Neural Networks (DNNs)? In this two-part tutorial we will explain what PyTorch is and how to use it, with a focus on live examples in Jupyter Notebooks. Attendees are encouraged, though not required, to bring their laptops and follow along. For the more advanced topics, we give an overview and provide links so attendees can learn more on their own.

Session I – PyTorch: The Fundamentals (50 min)
1) What is PyTorch? (15 min)
- Origins of PyTorch: when, who, and why
- Comparison with TensorFlow, Keras, and other deep learning frameworks
- Tensors and data types
- Similarities and differences with NumPy
- Understanding the documentation and source code
- GPU computing in Python with PyTorch (a short code sketch follows this list)
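
A minimal sketch of the tensor basics above (illustrative only; the array values and sizes are made up): tensors mirror NumPy arrays, interoperate with them, and can be moved to a GPU.

    import numpy as np
    import torch

    # Tensors look and behave much like NumPy arrays.
    x = torch.ones(3, 4, dtype=torch.float32)
    y = torch.from_numpy(np.arange(12.0).reshape(3, 4))  # shares the NumPy buffer
    z = x + y.float()                                    # broadcasting works as in NumPy

    # Unlike NumPy, a tensor carries a device; .to() moves it to the GPU if one exists.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    z = z.to(device)
    print(z.device, z.sum().item())                      # .item() returns a Python scalar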

2) Neural Networks in PyTorch: Architecture and Implementation (15 min)
- Basic structure of a NN in PyTorch
- Classes in PyTorch: Modules, parameters, and optimizers, oh my!
- Structure of tensors and other common conventions in PyTorch
- Backpropagation basics
- Saving and loading models (a short code sketch follows this list)
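
A minimal sketch of the module/optimizer/saving workflow above (the TinyNet model and the random batch are hypothetical, not from the tutorial materials):

    import torch
    import torch.nn as nn

    class TinyNet(nn.Module):                      # a toy two-layer network
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

        def forward(self, x):
            return self.net(x)

    model = TinyNet()
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    x, target = torch.randn(8, 4), torch.randn(8, 1)   # dummy batch

    loss = nn.functional.mse_loss(model(x), target)
    opt.zero_grad()
    loss.backward()                                # backpropagation fills each parameter's .grad
    opt.step()                                     # one gradient-descent update

    torch.save(model.state_dict(), "tinynet.pt")   # save the weights (state dict)
    model.load_state_dict(torch.load("tinynet.pt"))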

3) Convolutional Neural Networks (CNNs) (15 min)
- Structure and basics of CNNs
- Types and use cases of convolutions
- Example of a CNN and its training (a short code sketch follows this list)
- Caveats and tips for practical use
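
A minimal sketch of a small CNN (the layer sizes and the 28x28 single-channel input are assumptions, e.g. MNIST-sized images):

    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1),   # keeps 28x28 spatial size
                nn.ReLU(),
                nn.MaxPool2d(2),                             # -> 8 x 14 x 14
                nn.Conv2d(8, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                             # -> 16 x 7 x 7
            )
            self.classifier = nn.Linear(16 * 7 * 7, n_classes)

        def forward(self, x):                                # x: (batch, 1, 28, 28)
            return self.classifier(self.features(x).flatten(1))

    logits = TinyCNN()(torch.randn(4, 1, 28, 28))            # -> shape (4, 10)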

Questions & Discussion (5 min)

== 10-15 min Coffee break ==

Session II – PyTorch: Diving In and Finer Points (50 min)
4) Recurrent Neural Networks (RNNs) (15 min)
- Structure and basics of RNNs
- PyTorch implementation: the various RNN implementations and their use cases
- Importance of the hidden state and ways to initialize it (manual and automatic)
- Example of an RNN/LSTM and its training (a short code sketch follows this list)
- Caveats and tips for practical use
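
A minimal sketch of hidden-state handling with nn.LSTM (the sizes are made up for illustration):

    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=8, hidden_size=32, num_layers=1, batch_first=True)
    x = torch.randn(4, 20, 8)                 # (batch, sequence length, features)

    # Manual initialization: (h0, c0), each of shape (num_layers, batch, hidden_size).
    h0 = torch.zeros(1, 4, 32)
    c0 = torch.zeros(1, 4, 32)
    out, (hn, cn) = lstm(x, (h0, c0))         # out: (4, 20, 32), one output per time step

    # Automatic initialization: omit the state and PyTorch supplies zeros.
    out, (hn, cn) = lstm(x)

    # To process a long sequence in chunks, feed (hn, cn) into the next call;
    # detach them first if gradients should not flow across chunk boundaries.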

5) Backpropagation/Automatic Differentiation in PyTorch (15 min)
- What is Autograd? Why is it important for NNs?
- Forward- and reverse-mode autograd
- PyTorch's implementation of AD and how it differs from other implementations
- Examples and use cases
- Verifying the accuracy and consistency of AD
- When and how to extend Autograd with custom autograd Functions (a short code sketch follows this list)
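
A minimal sketch of reverse-mode autograd, a custom autograd Function, and a gradcheck verification (the cube function is a made-up example):

    import torch

    x = torch.tensor(2.0, requires_grad=True)
    y = x ** 3
    y.backward()                               # reverse mode: d(x^3)/dx = 3x^2
    print(x.grad)                              # tensor(12.)

    class Cube(torch.autograd.Function):       # the same op, written by hand
        @staticmethod
        def forward(ctx, inp):
            ctx.save_for_backward(inp)
            return inp ** 3

        @staticmethod
        def backward(ctx, grad_out):
            (inp,) = ctx.saved_tensors
            return grad_out * 3 * inp ** 2

    # gradcheck compares the analytic backward to finite differences (use float64).
    xs = torch.randn(5, dtype=torch.double, requires_grad=True)
    print(torch.autograd.gradcheck(Cube.apply, (xs,)))   # True if consistent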

6) Best Practices and Advanced Topics: Overview (15 min)
- Loading a CUDA-trained model on the CPU for inference
- Using DataLoaders for efficiency: the lag between CPU load time and GPU compute time, and how to choose the right number of workers for loading data
- Writing custom DataLoaders for your datasets
- CPU/GPU device-agnostic code (a short code sketch follows this list)
- torch.nn vs. torch.nn.functional: differences and use cases
- Dynamic graph adjustments
- Moving tensors between devices
- Parallel and distributed training: useful resources to get started
- Mixed-precision/half-precision training: memory savings and issues to be aware of
- Developing custom NN architectures
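
A minimal sketch pulling several of these practices together (the RandomDataset and the "checkpoint.pt" file name are hypothetical):

    import torch
    from torch.utils.data import Dataset, DataLoader

    class RandomDataset(Dataset):              # stand-in for a real dataset
        def __len__(self):
            return 1024

        def __getitem__(self, idx):
            return torch.randn(4), torch.randn(1)

    # num_workers > 0 overlaps CPU data loading with GPU compute; tune it so
    # the GPU never sits idle waiting for the next batch.
    loader = DataLoader(RandomDataset(), batch_size=32, num_workers=2, pin_memory=True)

    # Device-agnostic code: write .to(device) once instead of hard-coding .cuda().
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = torch.nn.Linear(4, 1).to(device)

    # map_location remaps CUDA-saved tensors so a GPU checkpoint loads on CPU.
    state = torch.load("checkpoint.pt", map_location=device)
    model.load_state_dict(state)

    for x, y in loader:
        x, y = x.to(device), y.to(device)      # the same loop runs on CPU or GPU
        loss = torch.nn.functional.mse_loss(model(x), y)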

Questions & Discussion (5 min)

Hosts: Arvind Mohan (CCS-2) & Nicholas Lubbers (CCS-3)