Using machine learning models in black-box and derivative free optimization
Speaker: Prof. Katya Scheinberg (Lehigh University)
Time: 2014-07-10 11:10-12:10
Venue: Room 29 at Quan Zhai, BICMR (Host: Zaiwen Wen)
All derivative-free methods rely on sampling the objective function at one or more points at each iteration. Constructing and maintaining these sample sets has been one of the most essential issues in derivative-free optimization (DFO). The majority of existing results rely on deterministic sampling techniques.
We will discuss new developments in using randomized sample sets within the DFO framework. Randomized sample sets have many advantages over deterministic ones. In particular, it is often easier to enforce "good" properties of the models with high probability than in the worst case. In addition, randomized sample sets can help automatically discover a good local low-dimensional approximation to a high-dimensional objective function. We will demonstrate how compressed sensing results can be used to show that reduced-size random sample sets can provide full second-order information under the assumption that the Hessian is sparse.
We will discuss convergence theory developed for randomized models, where we can show, for instance, that as long as the models are "good" with probability greater than 1/2, the trust-region framework is globally convergent with probability 1 under standard assumptions. Some new convergence rate results and extensions to the use of a broader class of machine learning models will also be discussed.
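The trust-region mechanism referred to above can be sketched as follows. This is a minimal, self-contained illustration with assumed parameter names and a simple randomized linear model; it is not the speaker's algorithm, and in particular it omits the model-quality ("probabilistically fully linear") checks that the convergence theory relies on:

```python
import numpy as np

def dfo_trust_region(f, x0, delta0=1.0, eta=0.1, tol=1e-6, max_iter=200, seed=0):
    """Toy derivative-free trust-region loop with randomized models.
    Illustrative sketch only: fits a linear model to random samples,
    takes a Cauchy-like step, and updates the radius by the usual
    accept/expand vs. reject/shrink rule."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    delta, fx, n = delta0, f(np.asarray(x0, dtype=float)), len(x0)
    for _ in range(max_iter):
        # Random sample set inside the trust region; least-squares linear model.
        S = delta * rng.standard_normal((2 * n, n))
        y = np.array([f(x + s) for s in S]) - fx
        g, *_ = np.linalg.lstsq(S, y, rcond=None)
        if np.linalg.norm(g) < tol:
            break
        s = -delta * g / np.linalg.norm(g)   # step to the trust-region boundary
        pred = -g @ s                        # decrease predicted by the model
        fnew = f(x + s)
        if fx - fnew >= eta * pred:          # successful step: accept and expand
            x, fx, delta = x + s, fnew, 2.0 * delta
        else:                                # unsuccessful step: shrink the region
            delta *= 0.5
    return x, fx
```

The theory summarized in the abstract says, roughly, that even if each random model is "good" only with probability greater than 1/2, this kind of loop still drives the iterates to stationarity with probability 1.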