Topic: Stochastic Methods for Convex Optimization with Difficult Constraints
Speaker: Mengdi Wang (Assistant Professor of Operations Research, Princeton University)
Time: 2014-07-04 10:30-11:30
Venue: Room 1114, Science Bldg 1 (Host: 姚远)
Convex optimization problems involving large-scale data or expected values are challenging, especially when these difficulties appear in the constraint set. We propose random algorithms for such problems, focusing on structures that lend themselves to sampling: for example, the constraint may be the intersection of many simpler sets, involve a system of inequalities, or involve expected values, and/or the objective function may itself be an expected value. We propose a class of new methods that combine elements of successive projection, stochastic gradient descent, and the proximal point algorithm; this class also contains many known algorithms as special cases. We use a unified analytic framework to prove almost sure convergence and to establish rates of convergence. Our framework allows the random algorithms to be applied under various sampling schemes (e.g., i.i.d., Markovian, or sequential), making them suitable for applications involving distributed implementation, large data sets, equilibrium computation, or statistical learning.
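To illustrate the flavor of the methods described above, here is a minimal, hypothetical sketch (not the speaker's actual algorithm) of one such scheme for a constraint set that is the intersection of many halfspaces: each iteration takes a stochastic gradient step and then projects onto just one randomly sampled constraint, rather than onto the full intersection. The problem instance, step-size rule, and function names are illustrative assumptions.

```python
import random

def project_halfspace(x, a, b):
    """Exact Euclidean projection of x onto the halfspace {y : a.y <= b}."""
    viol = a[0] * x[0] + a[1] * x[1] - b
    if viol <= 0.0:
        return x  # already feasible for this constraint
    nrm2 = a[0] ** 2 + a[1] ** 2
    return (x[0] - viol * a[0] / nrm2, x[1] - viol * a[1] / nrm2)

def random_projection_sgd(c, halfspaces, iters=200_000, noise=0.05, seed=0):
    """Minimize f(x) = 0.5*||x - c||^2 over an intersection of halfspaces,
    using only a noisy gradient and ONE randomly sampled constraint per step.
    (Illustrative sketch; problem data and step sizes are assumptions.)"""
    rng = random.Random(seed)
    x = (0.0, 0.0)
    avg = [0.0, 0.0]
    for k in range(iters):
        alpha = 2.0 / (k + 2)  # diminishing step size
        # Noisy (stochastic) gradient of f at x.
        g = (x[0] - c[0] + rng.gauss(0.0, noise),
             x[1] - c[1] + rng.gauss(0.0, noise))
        x = (x[0] - alpha * g[0], x[1] - alpha * g[1])
        # Sample one constraint i.i.d. and project onto it alone.
        a, b = rng.choice(halfspaces)
        x = project_halfspace(x, a, b)
        # Running average of the iterates.
        avg[0] += (x[0] - avg[0]) / (k + 1)
        avg[1] += (x[1] - avg[1]) / (k + 1)
    return tuple(avg)
```

For instance, with c = (2, 2) and the constraints x1 <= 1, x2 <= 1, x1 + x2 <= 1.5, the constrained minimizer is (0.75, 0.75), and the averaged iterate approaches it even though no iteration ever projects onto the full feasible set.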