Deep neural networks for parametric PDEs

Abstract: With remarkable successes in many fields, deep neural networks (DNNs) have shown great capacity for approximating high-dimensional nonlinear maps. We use DNNs as a tool to solve parameterized PDEs by representing the map from the PDE coefficient to the solution. To construct compressed DNN architectures for nonlinear pseudo-differential operators, we extend multiscale and multiresolution methods from numerical linear algebra (for example, hierarchical matrices, the fast multipole method, and the nonstandard wavelet form) to DNNs. These new architectures take full advantage of the data-sparsity structure of the underlying Green's functions; compared with fully connected and convolutional neural networks, they have fewer parameters, are easier to train, and require less training data. Applications to classical PDEs demonstrate the efficiency of these architectures in approximating nonlinear maps that arise in computational physics and computational chemistry.
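To make the "data sparsity of Green's functions" concrete, here is a minimal numpy sketch (an illustration of the general principle, not code from the work): for the 1D Poisson problem, the sampled Green's function matrix is dense and full rank, yet its well-separated off-diagonal blocks are numerically low rank. This is exactly the structure that hierarchical matrices, and DNN architectures modeled on them, compress.

```python
import numpy as np

# 1D Poisson problem -u'' = f on [0, 1] with u(0) = u(1) = 0.
# Its Green's function is G(x, y) = x(1 - y) for x <= y, and y(1 - x) otherwise.
n = 64
x = (np.arange(n) + 0.5) / n                       # midpoint grid in (0, 1)
X, Y = np.meshgrid(x, x, indexing="ij")
G = np.where(X <= Y, X * (1.0 - Y), Y * (1.0 - X))

# The full kernel matrix is dense and has full numerical rank ...
full_rank = np.linalg.matrix_rank(G)

# ... but a well-separated off-diagonal block (sources far from targets)
# is numerically low rank: here G(x, y) = x(1 - y) throughout the block,
# which is a rank-1 outer product.
block = G[: n // 4, n // 2 :]
s = np.linalg.svd(block, compute_uv=False)
block_rank = int((s / s[0] > 1e-10).sum())

print("full matrix rank:", full_rank)          # 64
print("off-diagonal block rank:", block_rank)  # 1
```

A hierarchical-matrix-style architecture replaces each such low-rank block with a narrow linear (or nonlinear) bottleneck, which is why the resulting networks need far fewer parameters than a dense fully connected map of the same size.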


Brief introduction: Yuwei Fan is a postdoctoral researcher in the Department of Mathematics at Stanford University, working with Lexing Ying. He obtained his Ph.D. at Peking University, China, under the supervision of Ruo Li.