High-dimensional, non-convex optimization remains a challenging and important problem across a wide range of scientific disciplines, including machine learning, data assimilation, and partial differential equation (PDE) constrained optimization. After discussing motivation and background on optimization, I will introduce nonlinear splitting for gradient-based optimization, a framework designed to improve stability and efficiency by allowing a gradient descent iteration to be semi-implicit in the optimization parameters while retaining the flexibility to account for nonlinear coupling between the parameters. The framework is also compatible with acceleration techniques and gradient-based optimizers such as NAG, Adam, L-BFGS, and Anderson acceleration. I will focus the talk on unconstrained optimization (although the framework also applies to constrained problems) and discuss various theoretical and numerical results, concluding with preliminary work on stochastic nonlinear splitting with application to training neural networks.

Host: Jeremy Lilly
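To make the semi-implicit idea concrete, below is a minimal sketch of one common form of such a splitting, assuming the gradient decomposes into a stiff linear part (treated implicitly) and a smooth nonlinear part (treated explicitly). The function name, step size, and toy objective are illustrative assumptions, not details of the method presented in the talk.

```python
# Sketch: semi-implicit (IMEX-style) gradient descent for
#   f(x) = 0.5 x^T L x + n(x),   grad f(x) = L x + grad_n(x).
# The step treats the linear part implicitly and the nonlinear part
# explicitly:
#   x_{k+1} = x_k - eta * (L @ x_{k+1} + grad_n(x_k)),
# which for linear L can be solved exactly for x_{k+1}.
import numpy as np

def semi_implicit_gd(x0, L, grad_n, eta=0.1, iters=200):
    """Iterate (I + eta*L) x_{k+1} = x_k - eta * grad_n(x_k)."""
    I = np.eye(len(x0))
    A = I + eta * L                 # implicit (linear) part of the split
    x = x0.copy()
    for _ in range(iters):
        # Explicit evaluation of the nonlinear part, implicit linear solve.
        x = np.linalg.solve(A, x - eta * grad_n(x))
    return x

# Toy objective (assumed for illustration):
#   f(x) = 0.5 x^T L x + 0.1 * sum(cos(x_i))
L = np.diag([100.0, 1.0])            # stiff quadratic part
grad_n = lambda x: -0.1 * np.sin(x)  # smooth nonlinear part
print(semi_implicit_gd(np.array([2.0, -3.0]), L, grad_n))
```

At a fixed point the iteration satisfies L x + grad_n(x) = 0, i.e. grad f(x) = 0. Note that fully explicit gradient descent on this toy problem would be unstable at eta = 0.1 (the stiff direction requires eta < 0.02), while the semi-implicit step converges; this is the kind of stability gain the abstract alludes to.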