Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
Speaker: Prof. Shiqian Ma (CUHK)
Time: 2015-11-25, 19:00-20:00
Venue: Classroom Quan 9, Beijing International Center for Mathematical Research
In this talk, we discuss stochastic quasi-Newton methods for nonconvex stochastic optimization. We assume that only stochastic information about the gradient of the objective function is available via a stochastic first-order oracle (SFO). First, we propose a general framework of stochastic quasi-Newton methods for solving such problems; these methods extend the classical quasi-Newton methods for deterministic optimization to the stochastic setting by using stochastic gradient information. Second, we propose a general framework for a class of randomized stochastic quasi-Newton methods in which the number of iterations conducted by the algorithm is a random variable, and we analyze the worst-case SFO-call complexities of these methods. Third, we propose a specific algorithm that falls into this framework: a stochastic damped L-BFGS method. Finally, we report preliminary numerical results that demonstrate the efficiency of the proposed algorithms.
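To give a rough flavor of the stochastic damped L-BFGS idea, here is a minimal Python sketch; it is not the speaker's implementation. It assumes a hypothetical noisy-gradient oracle `sfo` for the nonconvex toy objective f(x) = Σ xᵢ²/(1 + xᵢ²), uses the standard L-BFGS two-loop recursion, and applies Powell-style damping (with B₀ = I and the common 0.25/0.75 constants) so that the curvature pairs remain well defined despite gradient noise.

```python
import numpy as np

def sfo(x, rng):
    """Hypothetical stochastic first-order oracle (SFO): a noisy gradient
    of the nonconvex toy objective f(x) = sum(x_i^2 / (1 + x_i^2))."""
    return 2 * x / (1 + x**2) ** 2 + 0.1 * rng.standard_normal(x.shape)

def two_loop(g, s_hist, y_hist):
    """Standard L-BFGS two-loop recursion: computes H_k @ g implicitly
    from the stored (damped) curvature pairs."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if y_hist:                                   # gamma_k * I as initial Hessian
        s, y = s_hist[-1], y_hist[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_hist, y_hist), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

def stochastic_damped_lbfgs(x0, n_iter=200, mem=5, lr=0.5, seed=0):
    """Sketch of a stochastic damped L-BFGS loop with a diminishing step."""
    rng = np.random.default_rng(seed)
    x, s_hist, y_hist = x0.copy(), [], []
    g = sfo(x, rng)
    for k in range(n_iter):
        d = two_loop(g, s_hist, y_hist)          # approximates H_k g_k
        x_new = x - lr / np.sqrt(k + 1) * d      # diminishing step size
        g_new = sfo(x_new, rng)
        s, y = x_new - x, g_new - g
        ss, sy = s @ s, s @ y
        if ss > 1e-12:
            # Powell-style damping with B0 = I: guarantees
            # s^T y_bar >= 0.25 * s^T s, so the pairs stay usable
            # even when gradient noise makes s^T y negative.
            theta = 1.0 if sy >= 0.25 * ss else 0.75 * ss / (ss - sy)
            s_hist.append(s)
            y_hist.append(theta * y + (1 - theta) * s)
            if len(s_hist) > mem:
                s_hist.pop(0)
                y_hist.pop(0)
        x, g = x_new, g_new
    return x

x_star = stochastic_damped_lbfgs(np.ones(10) * 2.0)
print(x_star)  # iterates should drift toward the stationary point at the origin
```

The damping step is the key difference from classical L-BFGS: in the deterministic case a line search keeps sᵀy > 0, but with noisy gradients this can fail, so the damped pair ȳ = θy + (1 − θ)s is used instead to keep the two-loop recursion well posed.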