
Accelerating Convergence of Stochastic Gradient MCMC: algorithm and theory


ISU PADS Seminar Speaker: Qi Feng (USC)

See website for Zoom link: https://sites.google.com/site/ruoyuwu90/pads-seminar

Abstract: Stochastic Gradient Langevin Dynamics (SGLD) has advantages in multi-modal sampling and non-convex optimization, with broad applications in machine learning, e.g., uncertainty quantification for AI safety problems. The core issues in this field are the acceleration of the SGLD algorithm and the convergence rate of the continuous-time Langevin diffusion process to its invariant distribution. In this talk, I will present the general idea of entropy dissipation and convergence rate analysis. In the first part, I will present stochastic gradient MCMC algorithms based on replica exchange Langevin dynamics, together with empirical studies of our algorithm on optimization and uncertainty estimation for synthetic experiments and image data. In the second part, I will discuss the convergence rate analysis of reversible/non-reversible degenerate Langevin dynamics (i.e., variable-coefficient underdamped Langevin dynamics). The talk is based on a series of joint works with W. Deng, L. Gao, G. Karagiannis, W. Li, F. Liang, and G. Lin.
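For reference, a minimal sketch of the two ingredients named in the abstract, written in standard textbook notation rather than necessarily the exact formulation used in the talk (the step size \epsilon_k, temperature \tau, and stochastic gradient estimate \widehat{\nabla U} are standard notation, not taken from the announcement): the SGLD update, and the Metropolis-type swap probability for replica exchange between chains at temperatures \tau_1 < \tau_2.

    % SGLD update: noisy gradient step plus injected Gaussian noise at temperature \tau
    \theta_{k+1} = \theta_k - \epsilon_k\, \widehat{\nabla U}(\theta_k) + \sqrt{2\epsilon_k \tau}\, \xi_k,
    \qquad \xi_k \sim \mathcal{N}(0, I_d)

    % Replica exchange: chains at temperatures \tau_1 < \tau_2 swap states with probability
    p_{\mathrm{swap}} = \min\left\{ 1,\ \exp\!\left[ \left(\tfrac{1}{\tau_1} - \tfrac{1}{\tau_2}\right)
    \left( U\!\left(\theta_k^{(1)}\right) - U\!\left(\theta_k^{(2)}\right) \right) \right] \right\}

The algorithms in the first part of the talk build on this swap mechanism; when only stochastic estimates of U are available, the swap criterion must be corrected for the resulting bias, which is part of what the presented work addresses.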