
Yiping Lu.

Ph.D. student

Institute for Computational and Mathematical Engineering
School of Engineering
Stanford University

Email: yplu [at] stanford [dot] edu



Continuous-Depth Neural Networks


Overview.

To understand and improve the success of deep Residual Networks, my research takes the depth of a Residual Network to infinity, which yields an ODE model.

  • Deep learning is formulated as a discrete-time optimal control problem; our research bridges optimal control and the optimization of deep neural networks.
  • Using knowledge from numerical analysis, one can design a neural network that satisfies a desired property (see the sketch after this list). Examples include:
    • Enforcing stability to obtain a robust model.
    • Enforcing the model to follow an optimal transport path.
    • Enforcing physical constraints.
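
To make the ResNet-to-ODE correspondence concrete, here is a minimal PyTorch sketch that reads a residual block as one forward-Euler step of dx/dt = f(x); the class name, layer choices, and step size are illustrative assumptions, not code from any of the papers below.

```python
# Minimal sketch: a residual block as one forward-Euler step of dx/dt = f(x).
import torch.nn as nn

class EulerResBlock(nn.Module):
    def __init__(self, dim, h=0.1):
        super().__init__()
        self.h = h  # step size; N blocks with h = T/N cover depth-time [0, T]
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, x):
        # x_{n+1} = x_n + h * f(x_n); letting h -> 0 recovers the ODE limit.
        return x + self.h * self.f(x)
```

Swapping forward Euler for a different numerical scheme is exactly what produces new architectures; see the DL Models section below.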

Optimization


How Continuous-Depth Models Help Us Understand the Optimization of Deep Networks.

The ODE model lets us exploit the structure of the neural network when analyzing its optimization. This line of work is based on our mean-field ResNet paper (ICML 2020) and our YOPO paper (NeurIPS 2019).

Theory

Yiping Lu*, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying. "A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth." Thirty-seventh International Conference on Machine Learning (ICML), 2020.

Short version presented at ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations. (Oral)

[paper] [arXiv] [slide] [video]

Combining the ODE model with the mean-field analysis of two-layer neural networks, we provide a convergence proof for training ResNets beyond the lazy-training regime. This is the first landscape result for deep neural networks in the mean-field regime.
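
In notation close to (but simplified from) the paper, the continuous-depth mean-field model averages the residual map over a parameter distribution that evolves with depth:

```latex
% Mean-field limit of a deep ResNet (simplified notation; see the paper for
% the precise setting): rho(theta, t) is the distribution of the residual
% blocks' parameters at depth-time t.
\[
  \frac{dX(t)}{dt} = \int f\bigl(X(t), \theta\bigr)\, \rho(\theta, t)\, d\theta,
  \qquad X(0) = x_{\mathrm{in}}.
\]
```

Training then corresponds to a gradient flow of the loss over the distribution rho in Wasserstein space.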

The analysis of this Wasserstein gradient flow is still an open problem.

Algorithm Design

Dinghuai Zhang*, Tianyuan Zhang*, Yiping Lu*, Zhanxing Zhu, Bin Dong. "You Only Propagate Once: Painless Adversarial Training Using Maximal Principle." (*equal contribution) 33rd Annual Conference on Neural Information Processing Systems (NeurIPS 2019).

[paper] [arXiv] [slide] [code] [poster]

ODEs can help accelerate adversarial training: it does not have to be computationally expensive! We fully exploit the structure of deep neural networks by recasting adversarial training as a differential game, and we propose a novel strategy that decouples the adversary update from gradient back-propagation.

4-5 times faster!
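
A hedged sketch of the decoupling idea follows. The split into `first_layer` and `rest`, the stop-gradient variable `p`, and all hyperparameters are assumptions for illustration, not the released YOPO code.

```python
import torch
import torch.nn.functional as F

def yopo_attack(first_layer, rest, x, y, eps=8/255, alpha=2/255, m=5, n=3):
    """Build an adversarial example with m full backprops instead of m * n."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(m):
        # One full forward/backward pass gives p = dLoss/d(first-layer output).
        out0 = first_layer(x + delta)
        loss = F.cross_entropy(rest(out0), y)
        p = torch.autograd.grad(loss, out0)[0].detach()
        for _ in range(n):
            # Adversary updates reuse p, so only the first layer is re-evaluated;
            # back-propagation through `rest` is skipped ("only propagate once").
            g = torch.autograd.grad((p * first_layer(x + delta)).sum(), delta)[0]
            delta = (delta + alpha * g.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_(True)
    return (x + delta).detach()
```

The inner loop touches only the first layer, which is where the claimed speedup comes from.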

Related Works

Here are the related papers by other groups.

Principled DL Model Design


Our Findings:

1. Network Structure = Numerical Schemes

Yiping Lu, Aoxiao Zhong, Quanzheng Li, Bin Dong. "Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations." Thirty-fifth International Conference on Machine Learning (ICML), 2018.

[paper] [arXiv] [project page] [slide] [bibtex] [poster]
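
As one concrete instance, a two-step (linear multi-step) scheme suggests a new residual block, x_{n+1} = (1 - k) x_n + k x_{n-1} + f(x_n) with a learnable coefficient k. Below is a hedged PyTorch sketch in that spirit; names and layer choices are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

class LMResBlock(nn.Module):
    """Sketch of a linear multi-step residual block (illustrative names)."""
    def __init__(self, channels):
        super().__init__()
        self.k = nn.Parameter(torch.zeros(1))  # learnable multi-step coefficient
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x_n, x_prev):
        # x_{n+1} = (1 - k) x_n + k x_{n-1} + f(x_n); k = 0 gives a ResNet block.
        x_next = (1 - self.k) * x_n + self.k * x_prev + self.f(x_n)
        return x_next, x_n  # pass (x_{n+1}, x_n) on to the next block
```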

Yiping Lu*, Zhuohan Li*, Di He, Zhiqing Sun, Bin Dong, Tao Qin, Liwei Wang, Tie-Yan Liu. "Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View." (*equal contribution) Submitted. arXiv preprint arXiv:1906.02762.

[paper] [arXiv] [slide] [code]

2. Networks Should Adapt to Task Physics

Using inverse problems as our application, we aim to recover data from different levels of degradation. We proposed an approach that solves the regularization path, which leads to a time-varying ODE whose discretization should be a depth-varying network.

Related papers:

Xiaoshuai Zhang*, Yiping Lu*, Jiaying Liu, Bin Dong. "Dynamically Unfolding Recurrent Restorer: A Moving Endpoint Control Method for Image Restoration." (*equal contribution) Seventh International Conference on Learning Representations (ICLR), 2019.

[paper] [arXiv] [code] [slide] [project page] [OpenReview]

In this paper, we propose a new control framework, called moving endpoint control, to restore images corrupted by different degradation levels with one model. The control problem contains restoration dynamics modeled by an RNN; the moving endpoint, which is essentially the terminal time of the associated dynamics, is determined by a policy network. Numerical experiments show that DURR generalizes well to images with degradation levels higher than those included in the training stage.
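
A minimal sketch of the moving-endpoint idea at inference time: `restorer`, `policy`, and the stop convention below are assumptions for illustration, not the DURR code.

```python
import torch

@torch.no_grad()
def restore(restorer, policy, x, max_steps=20):
    """Run the restoration dynamics until the policy network says stop."""
    for _ in range(max_steps):
        # The policy chooses the terminal time; class 1 = "stop" (assumed,
        # single-image batch for simplicity).
        if policy(x).argmax().item() == 1:
            break
        x = restorer(x)  # one step of the RNN restoration dynamics
    return x
```

Because the terminal time is chosen per image, more heavily degraded inputs simply run the dynamics for longer.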

Related Works

Here are the related papers by other groups.

Physics Applications


We made an initial attempt to learn evolution PDEs from data via neural networks.

Inspired by the latest developments in neural network design, we propose a new feed-forward deep network, called PDE-Net, that fulfills two objectives at the same time: accurately predicting the dynamics of complex systems and uncovering the underlying hidden PDE models. The basic idea of PDE-Net is to learn differential operators by learning convolution kernels (filters), and to apply neural networks or other machine learning methods to approximate the unknown nonlinear responses.
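
A hedged PyTorch sketch of this idea: the filter sizes, the response network, and the forward-Euler step are illustrative, and the actual PDE-Net additionally constrains filter moments so that each filter approximates a specific derivative.

```python
import torch
import torch.nn as nn

class PDENetStep(nn.Module):
    """One learned time step of u_t = F(u, D1 u, D2 u) (illustrative sketch)."""
    def __init__(self, dt=0.01):
        super().__init__()
        self.dt = dt
        # Convolution kernels meant to learn derivative-like stencils.
        self.D1 = nn.Conv2d(1, 1, 5, padding=2, bias=False)
        self.D2 = nn.Conv2d(1, 1, 5, padding=2, bias=False)
        # Pointwise network approximating the unknown nonlinear response F.
        self.F = nn.Sequential(nn.Conv2d(3, 16, 1), nn.ReLU(), nn.Conv2d(16, 1, 1))

    def forward(self, u):
        feats = torch.cat([u, self.D1(u), self.D2(u)], dim=1)
        return u + self.dt * self.F(feats)  # forward-Euler prediction step
```

Rolling this step forward predicts the dynamics; inspecting the learned filters and response uncovers the hidden PDE.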

Zichao Long*, Yiping Lu*, Xianzhong Ma*, Bin Dong. "PDE-Net: Learning PDEs from Data." (*equal contribution) Thirty-fifth International Conference on Machine Learning (ICML), 2018.

[paper] [arXiv] [code] [supplementary materials] [bibtex]

Zichao Long, Yiping Lu, Bin Dong. "PDE-Net 2.0: Learning PDEs from Data with A Numeric-Symbolic Hybrid Deep Network." Journal of Computational Physics, 399, 108925, 2019. (arXiv preprint arXiv:1812.04426)

[paper] [arXiv] [code] [slide] [proceeding]

Related Works

Contact Me


Stanford, CA, US

Phone: +86 18001847803

Email: yplu@stanford.edu


Let's get in touch. Send me a message!