Machine Learning Lab Ph.D. Student Forum Series - On the continuous-time dynamics of optimization methods: theory and insights

2021-06-09, 15:10-16:10, First-Floor Conference Room, Jingyuan Courtyard 6, Peking University & Tencent Meeting 498 2865 6467


For most machine learning problems, minimizing an objective function is a key step, and a wide variety of optimization algorithms have been designed for this purpose.
Recently, a growing number of works have found interesting connections between the minimization trajectory of an optimization algorithm and its continuous-time dynamics. Once these connections are established rigorously, a natural idea is to use tools and techniques from dynamical systems, stochastic calculus, and optimal control theory (such as Lyapunov functions, the Euler-Lagrange equation, and the Girsanov transformation) to analyze the optimization algorithm. Along these lines, one can obtain sharper convergence rates for some optimization methods, as well as give principled explanations for curious phenomena observed in practice. The continuous-time perspective also provides heuristic insights for developing new optimization schemes that are faster, more accurate, and more robust across varying optimization problems. In this talk, we will present three applications of the continuous-time perspective in optimization: deterministic optimization, stochastic optimization, and implicit regularization. For each part, we will investigate several representative papers.
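To make the algorithm-dynamics correspondence concrete, here is a minimal sketch (not from the talk itself) of the best-known instance: gradient descent with step size eta is the forward-Euler discretization of the gradient flow dx/dt = -grad f(x), so after k steps the iterate approximates the flow at continuous time t = k * eta. The quadratic objective and all variable names below are illustrative choices, not drawn from the announcement.

```python
import math

# Illustrative objective: f(x) = 0.5 * x^2, so grad f(x) = x.
# Its gradient flow dx/dt = -x has the exact solution x(t) = x0 * exp(-t).
def grad(x):
    return x

x0 = 1.0
eta = 0.01        # step size; iterate k approximates the flow at t = k * eta
steps = 1000

x = x0
for _ in range(steps):
    x -= eta * grad(x)   # one forward-Euler step of the gradient flow

t = steps * eta           # elapsed "continuous time" t = 10
exact = x0 * math.exp(-t)
print(x, exact)           # the discrete iterate closely tracks the ODE solution
```

Shrinking eta (while holding t = steps * eta fixed) drives the discrete trajectory toward the continuous one, which is exactly the limit in which the continuous-time analysis applies.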