The calculation of ergodic static and dynamic averages using Markov chain Monte Carlo, Hamiltonian dynamics, or Langevin dynamics in complex systems is important in fields ranging from network science to Bayesian inference to condensed matter physics. However, these calculations often suffer from long correlations and long-lived biases that depend on initial conditions. This is especially true when the ergodic distribution is strongly multimodal, as when a protein cycles through multiple functional states, a solid adopts different surface structures, or an inference is divided between multiple reasonable explanations for the data.

Metadynamics is a widely used method that enhances sampling across modes in such calculations by adaptively learning and biasing the free energy (log-probability) surface in a reduced set of collective coordinates, thereby ameliorating the sampling barriers that prevent efficient exploration of the model's state space. However, despite over a decade of wide use and success, no proof of convergence existed and there was little formal understanding of the design criteria for the method.

This talk will show why metadynamics works, describe design criteria that emerge from our proof of convergence, and discuss new methods inspired by these criteria. The proof is general, with conditions that hold in almost all practical systems to which metadynamics has been applied and that explain the known pathological cases. Design insight from the proof has inspired a new method for statics calculations that consistently and significantly improves upon the state of the art in initial tests, as well as a variety of further variants for accurate statics and dynamics calculations that we have yet to test. Metadynamics is exceptionally simple to implement and to use, and we hope this new foundation for understanding its previous successes and future possibilities will inspire wider application of the method and bolder innovations upon it.

Host: Gregory A. Voth
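For readers unfamiliar with the mechanics sketched in the abstract, the toy script below illustrates the basic metadynamics loop: overdamped Langevin dynamics on a one-dimensional double-well potential, with Gaussian hills deposited along the collective variable (here simply the coordinate itself) so that the accumulated bias gradually flattens the sampled free-energy surface. This is a minimal illustrative sketch, not the speaker's implementation; the function names and parameter values (w, sigma, deposit_every, kT) are assumptions chosen for this example.

    import numpy as np

    # Toy 1D metadynamics: overdamped Langevin dynamics on a double-well
    # potential, with Gaussian hills deposited along the collective variable
    # s(x) = x.  All parameter values here are illustrative choices.

    rng = np.random.default_rng(0)

    def potential_grad(x):
        # U(x) = (x^2 - 1)^2 has minima at x = -1, +1 and a barrier at x = 0
        return 4.0 * x * (x**2 - 1.0)

    hill_centers = []        # positions s_k of deposited hills
    w, sigma = 0.05, 0.2     # hill height and width (assumed values)
    deposit_every = 500      # steps between hill depositions

    def bias_grad(s):
        # dV/ds for the bias V(s) = sum_k w * exp(-(s - s_k)^2 / (2 sigma^2))
        if not hill_centers:
            return 0.0
        diff = s - np.asarray(hill_centers)
        return np.sum(-w * diff / sigma**2 * np.exp(-diff**2 / (2.0 * sigma**2)))

    x, dt, kT = -1.0, 1e-3, 0.3
    visits_right_well = 0
    n_steps = 100_000
    for step in range(n_steps):
        if step % deposit_every == 0:
            hill_centers.append(x)             # grow the bias at the current CV value
        force = -(potential_grad(x) + bias_grad(x))
        x += force * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
        visits_right_well += x > 0.0

    # As hills accumulate, the bias flattens the effective free-energy surface,
    # so the walker crosses the barrier far more often than it would unbiased;
    # -V(s) (up to a constant) then estimates the free energy along s.
    print("fraction of steps in the right well:", visits_right_well / n_steps)

In this sketch the negative of the accumulated bias serves as an estimate of the free energy along the collective variable; the convergence questions addressed in the talk concern precisely when and how such estimates become exact.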