Venue: Tencent Meeting 363-702-020
Abstract: The exploration of inaccurate information in optimization is becoming indispensable for tackling some recent challenges arising from machine learning and artificial intelligence. This urges us to renew the theory of some classical optimization methods. We investigate the behavior of trust region and line search methods assuming the objective function is smooth yet the gradient information available is inaccurate. In particular, we find the largest set of admissible inaccurate gradients that can ensure the global convergence of trust region methods, study the surprisingly simple geometry of this set, and point out the duality between its metric and that of the trust region. A similar set exists in line search methods. We also demonstrate the observability of this admissible set in numerical computation.
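As a rough illustration of the setting described above (a sketch for intuition, not the speaker's actual algorithm or admissible set), the hypothetical Python snippet below runs a basic Cauchy-point trust-region iteration in which the true gradient is replaced by a perturbed one with bounded relative error. When the relative error is below 1, the perturbed vector is still a descent direction, which is the flavor of "admissible inaccurate gradient" the abstract refers to. All names and parameter choices here are assumptions for illustration.

```python
import numpy as np

def trust_region_inexact(f, grad, x0, delta0=1.0, tol=1e-6,
                         max_iter=500, rel_err=0.2, rng=None):
    """Toy trust-region method driven by an inexact gradient.

    Hypothetical sketch: the true gradient is perturbed by a random
    vector of relative size `rel_err` (< 1), and the Cauchy step of a
    linear model is used as the trial step.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Inexact gradient: bounded relative perturbation of the true one.
        noise = rng.standard_normal(g.shape)
        g_hat = g + rel_err * np.linalg.norm(g) * noise / np.linalg.norm(noise)
        # Cauchy (steepest-descent) step, scaled to the trust-region radius.
        s = -delta * g_hat / np.linalg.norm(g_hat)
        pred = -g_hat @ s                  # predicted decrease of the linear model
        ared = f(x) - f(x + s)             # actual decrease of the objective
        rho = ared / pred if pred > 0 else -1.0
        if rho > 0.1:                      # accept the trial step
            x = x + s
        # Standard radius update: expand on very good steps, shrink on bad ones.
        delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.1 else delta)
    return x

# Usage: minimize a smooth quadratic despite ~20% relative gradient error.
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x_star = trust_region_inexact(f, grad, np.array([3.0, -4.0]))
```

Even with every gradient off by up to 20% in norm, the iterates approach the minimizer at the origin; with a relative error above 1 the perturbed vector could point uphill and the iteration may stall, which is one way to see why such an admissibility threshold matters.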
Speaker bio: Dr. Zaikun Zhang received his bachelor's degree from Jilin University in 2007 and his Ph.D. from the Chinese Academy of Sciences in 2012. He is currently an assistant professor in the Department of Applied Mathematics at The Hong Kong Polytechnic University. His research focuses on derivative-free optimization methods, methods based on inexact information, and randomized methods. He is the principal investigator of a Hong Kong-France PROCORE research project, as well as one ECS project and two GRF projects funded by the Hong Kong Research Grants Council. His work has appeared in journals including Mathematical Programming, SIAM Journal on Optimization, and SIAM Journal on Scientific Computing.