Speaker: Prof. Jan Hesthaven (EPFL-SB-MATH-MCSS)
Time: 2019-05-08, 09:30-10:30
Venue: Room 1513, Sciences Building No. 1
Abstract: While the success of modern simulation techniques is undisputed, many of the most successful schemes rely on the tuning of parameters, often in a nonlinear and spatio-temporal fashion, to ensure optimal algorithm performance. Classic examples include the detection of bad cells when solving conservation laws with high-order methods, the adaptive selection of stencils in ENO/WENO methods, and general adaptive methods. Numerous techniques exist to address such challenges, and they often rely on hand-crafted rules to drive the dynamic process. This is frequently made more complex and problem-dependent by a reliance on one or several parameters that require tuning. Such elements all play a central role in an otherwise successful algorithm.
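To make the kind of tuning referred to above concrete, the sketch below shows a classical TVB minmod troubled-cell indicator for a one-dimensional discontinuous Galerkin scheme; the threshold M is exactly the sort of hand-tuned, problem-dependent parameter the abstract describes. The function names, interface, and default value of M are illustrative assumptions and are not taken from the talk.

import numpy as np

def minmod(a, b, c):
    # Return the smallest-magnitude argument if all three share a sign,
    # and zero otherwise.
    s = np.sign([a, b, c])
    if s[0] == s[1] == s[2] != 0:
        return s[0] * min(abs(a), abs(b), abs(c))
    return 0.0

def tvb_minmod(a, b, c, M, dx):
    # Modified (TVB) minmod: leave the first argument untouched when it is
    # O(dx^2), so that smooth extrema are not flagged.
    if abs(a) <= M * dx**2:
        return a
    return minmod(a, b, c)

def is_troubled_cell(u_face_left, u_face_right, u_mean,
                     u_mean_left, u_mean_right, dx, M=50.0):
    # Flag the cell if the TVB minmod limiter would modify either face trace.
    # M is the tuning knob: too small and smooth extrema are flagged,
    # too large and genuine discontinuities slip through.
    d_right = u_face_right - u_mean      # deviation of the right face trace
    d_left = u_mean - u_face_left        # deviation of the left face trace
    fwd = u_mean_right - u_mean          # forward difference of cell means
    bwd = u_mean - u_mean_left           # backward difference of cell means
    return (tvb_minmod(d_right, fwd, bwd, M, dx) != d_right or
            tvb_minmod(d_left, fwd, bwd, M, dx) != d_left)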
In this talk we discuss the potential of using ideas from machine learning to overcome such bottlenecks and to augment the algorithm, allowing the choice of optimal local parameters and eliminating critical performance bottlenecks, resulting in vastly improved performance of the augmented algorithm. The central approach is to carefully identify such bottlenecks and explore data-driven solutions, while otherwise maintaining the integrity of a well-tested method.
After a brief introduction to machine learning techniques and, in particular, neural networks, we demonstrate the success of this general idea through a number of specific examples, primarily motivated by challenges associated with the numerical solution of conservation laws.
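As a rough illustration of what such an augmentation might look like in the troubled-cell setting, the sketch below replaces the hand-tuned threshold with a small neural-network classifier acting on local, rescaled solution features. The architecture, feature set, and names (TroubledCellNet, flag_cells) are assumptions made for illustration only and do not describe the speaker's actual networks.

import torch
import torch.nn as nn

class TroubledCellNet(nn.Module):
    # Small MLP mapping local solution features of a cell to a
    # troubled / not-troubled probability, replacing the hand-tuned
    # threshold of a classical indicator.
    def __init__(self, n_features=5, n_hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def flag_cells(model, features, threshold=0.5):
    # features: (n_cells, n_features) tensor holding, e.g., the cell mean,
    # the two face traces and the neighbouring cell means, rescaled to be
    # dimensionless so that the classifier, trained offline, carries no
    # problem-dependent tuning parameter at run time.
    with torch.no_grad():
        prob = model(features).squeeze(-1)
    return prob > threshold

Once such a network is trained offline on labelled solution data, it is simply queried once per cell and time step, so the surrounding well-tested solver remains untouched.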
While illustrated through specific examples, the overall philosophy is general, and we conclude the talk with a broader discussion of the potential of augmenting algorithms with machine learning, resulting in a class of methods that can perhaps be referred to as precision algorithms.
This work has been done in collaboration with N. Discacciati, D. Ray and Q. Wang.