A conventional and efficient way to solve inference problems in dynamical systems (broadly, learning some "unknowns" of a dynamical system given a set of data) is to adopt the adjoint state method. The adjoint states contain the gradients of the cost function with respect to the unknowns (referred to as the "sensitivities"), and so can be used in local optimization procedures. However, in complex models, computing the adjoint states can be numerically inefficient. Modern machine-learning (ML) procedures, on the other hand, compute the sensitivities by back-propagation, which entails generating computational graphs (also known as expression trees) and applying automatic differentiation. Can back-propagation help us in these classical inference problems?

In this talk, I aim to share the preliminary results of our effort to generalize a modern ML platform (TensorFlow) to classical inference problems. By revising the transfer functions of recurrent neural networks (RNNs), we enable the architecture to carry out the temporal integration of dynamical systems. In the models we tested, back-propagation delivered accurate estimates of the sensitivities for the subsequent optimization procedures, bypassing the need to compute the adjoint states. Because the transfer function encodes the "physics" of the dynamical system, the end product of our analysis is physically interpretable.

I will present our results through three proof-of-concept inference problems for deterministic dynamical systems, which exemplify three classes of inference problems: (1) given the terminal configuration and the evolution of the system, inferring the initial condition; (2) given the configuration of the system measured at (possibly sparse) discrete times, inferring the model parameters that minimize the mismatch between the model prediction and the data; and (3) optimal control problems.

We aim to share these preliminary results with the LANL physics-informed ML community as soon as possible and to ask for feedback. As such, the talk will be delivered less formally, and we welcome discussion during the talk.

Host: David Métivier
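As a rough illustration of the approach described above, the following is a minimal TensorFlow sketch, not the speakers' actual implementation: it replaces an RNN cell's transfer function with one explicit-Euler step of a dynamical system, so that back-propagating through the unrolled graph returns the sensitivity of a terminal-time cost with respect to the initial condition, as in problem class (1). The dynamics matrix, step size, and target state are illustrative assumptions.

```python
import tensorflow as tf

# Illustrative dynamics dx/dt = A x (a damped oscillator); not from the talk.
A = tf.constant([[0.0, 1.0],
                 [-1.0, -0.1]])

def transfer(x, dt):
    """One explicit-Euler step, playing the role of the RNN transfer function."""
    return x + dt * tf.linalg.matvec(A, x)

def integrate(x0, dt=0.01, n_steps=500):
    """Unrolled recurrence: the computational graph records every step."""
    x = x0
    for _ in range(n_steps):
        x = transfer(x, dt)
    return x

x0 = tf.Variable([1.0, 0.0])        # initial condition to be inferred
x_target = tf.constant([0.0, 0.5])  # hypothetical terminal-time data

with tf.GradientTape() as tape:
    loss = tf.reduce_sum((integrate(x0) - x_target) ** 2)

# Back-propagation through the unrolled integration yields dloss/dx0,
# the same sensitivity an adjoint computation would otherwise provide.
grad_x0 = tape.gradient(loss, x0)
```

The resulting gradient can feed any standard optimizer. Problem classes (2) and (3) would follow the same pattern, with the trainable quantity being the model parameters or the control inputs instead of the initial condition.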