Venue: Room 665 (Academic Activity Room), Xingjian Building
Host: Associate Professor Xingju Cai
Abstract:
In this talk, we use monotone operator theory to derive and analyze a wide range of classical and modern convex optimization algorithms, including stochastic (randomized), parallel, distributed, and decentralized methods that are well-suited for large-scale and big data problems. The algorithms covered include proximal-gradient, the (proximal) method of multipliers, alternating minimization, PDHG, Chambolle-Pock, (standard, proximal, and linearized) ADMM, and PD3O. Finite-sum and block-coordinate-friendly structures are exploited to develop parallel and asynchronous methods. These methods are presented in a unified and streamlined manner using only a few mathematical concepts from monotone operator theory.
Co-author: Ernest K. Ryu.
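
As a concrete illustration of the monotone-operator viewpoint (a standard derivation, not excerpted from the talk): minimizing $f(x) + g(x)$, with $f$ convex and $L$-smooth and $g$ convex, is equivalent to the monotone inclusion $0 \in \nabla f(x) + \partial g(x)$. Applying forward-backward splitting, with $\nabla f$ as the forward (explicit) operator and $\partial g$ as the backward (resolvent) operator, yields
\[
  x^{k+1} = (I + \alpha \partial g)^{-1}\bigl(x^k - \alpha \nabla f(x^k)\bigr)
          = \operatorname{prox}_{\alpha g}\bigl(x^k - \alpha \nabla f(x^k)\bigr),
\]
which is precisely the proximal-gradient method, convergent for step sizes $\alpha \in (0, 2/L)$. The other algorithms in the list arise similarly from different operator splittings (e.g., ADMM from Douglas-Rachford splitting applied to the dual problem).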