

The Symbiosis of Deep Learning and Differential Equations (DLDE) - II

NeurIPS 2022 Workshop

December 9th, 7:00 AM to 2:00 PM EST, Virtual Workshop

Submit your article for review [here]
Submit your questions to the speakers [here]


Introduction

In recent years, there has been a rapid increase in machine learning applications in the computational sciences, with some of the most impressive results at the interface of deep learning (DL) and differential equations (DEs). DL techniques have been used in a variety of ways to dramatically enhance the effectiveness of DE solvers and computer simulations. These successes have widespread implications, as DEs are among the most well-understood tools for the mathematical analysis of scientific knowledge, and they are fundamental building blocks for mathematical models in engineering, finance, and the natural sciences. Conversely, DEs have been used successfully as the basis of new DL models, such as neural differential equations and continuous-time diffusion models. Moreover, theoretical tools from DE analysis have been used to glean insights into the expressivity and training dynamics of mainstream deep learning algorithms.
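To make the latter direction concrete, below is a minimal neural ODE sketch in PyTorch, assuming the torchdiffeq package; the dimensions, architecture, and integration times are arbitrary illustrative choices, not a prescription for any particular model.

    import torch
    from torchdiffeq import odeint  # pip install torchdiffeq

    # A neural ODE parameterizes the vector field f(t, y) of dy/dt = f(t, y)
    # with a neural network; solving the ODE is the model's forward pass.
    class VectorField(torch.nn.Module):
        def __init__(self, dim=2, hidden=32):
            super().__init__()
            self.net = torch.nn.Sequential(
                torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
                torch.nn.Linear(hidden, dim),
            )

        def forward(self, t, y):
            return self.net(y)  # autonomous field: no explicit t-dependence

    f = VectorField()
    y0 = torch.randn(16, 2)            # batch of initial states
    t = torch.linspace(0.0, 1.0, 10)   # times at which to return the solution
    ys = odeint(f, y0, t)              # shape (10, 16, 2)

Because the solver output is differentiable with respect to the network parameters, such a model can be trained end to end by ordinary gradient descent.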

This workshop aims to bring together researchers with backgrounds in computational science and deep learning to encourage intellectual exchange, cultivate relationships, and accelerate research in this area. The scope of the workshop spans topics at the intersection of DL and DEs, including the theory of DL and DEs, neural differential equations, solving DEs with neural networks, and more.


Important Dates

Submission Deadline: October 1st, 2022, Anywhere on Earth (AoE)
Final Decisions: October 17th, 2022, AoE
Workshop Date: December 9th, 2022, AoE

Call for Extended Abstracts

We invite high-quality extended abstract submissions at the intersection of DEs and DL. Some examples (non-exhaustive list):

  • Using differential equation models to understand and improve deep learning algorithms:
    • Incorporating DEs into existing DL models (neural differential equations, diffusion models, ...)
    • Analysis of numerical methods for implementing DEs in DL models (trade-offs, benchmarks, ...)
    • Modeling training dynamics using DEs to generate theoretical insights and novel algorithms.
  • Using deep learning algorithms to create or solve differential equation models:
    • DL methods for solving high-dimensional, highly parameterized, or otherwise challenging DE models.
    • Learning-augmented numerical methods for DEs (hypersolvers, hybrid solvers ...)
    • Specialized DL architectures for solving DEs (neural operators, PINNs, ...; see the sketch after this list).
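To illustrate the last topic, here is a minimal physics-informed neural network (PINN) sketch in PyTorch: a network u(x) is trained so that its derivative satisfies a toy ODE at random collocation points, plus a boundary-condition penalty. The equation, architecture, and hyperparameters are illustrative assumptions only.

    import torch

    # Minimal PINN: fit u(x) so that u'(x) = -u(x) on [0, 2] with u(0) = 1.
    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        x = 2.0 * torch.rand(128, 1)   # random collocation points in [0, 2]
        x.requires_grad_(True)
        u = net(x)
        du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
        residual = du + u                     # penalize violations of u' = -u
        bc = net(torch.zeros(1, 1)) - 1.0     # penalize violations of u(0) = 1
        loss = (residual ** 2).mean() + (bc ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

The exact solution here is u(x) = exp(-x); the same residual-plus-boundary loss pattern extends to PDEs by differentiating the network with respect to each input coordinate.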

Submission:
Accepted submissions will be presented during joint poster sessions and will be made publicly available as non-archival reports, allowing later submission to archival conferences or journals. Exceptional submissions will be selected for one of four 15-minute contributed talks or one of eight 5-minute spotlight oral presentations.

Submissions should be up to 4 pages excluding references, acknowledgements, and supplementary material, and should use the DLDE-NeurIPS format and be anonymized. Long appendices are permitted but strongly discouraged, and reviewers are not required to read them. The review process is double-blind.

We also welcome submissions of recently published work that falls strongly within the scope of the workshop (with proper formatting). We encourage the authors of such submissions to focus on accessibility to the wider NeurIPS community while distilling their work into an extended abstract. Submissions of this type will be eligible for poster sessions after a lighter review process.

Authors may be asked to review other workshop submissions.

Please submit your extended abstract to this address.

If you have any questions, send an email to [dlde.workshop@gmail.com].

If you have any questions for the Invited Speakers, send them through this form.


Schedule

(Pacific Time)

  • 04:00 : Introduction and opening remarks
  • 04:10 : Provable Active Learning of Neural Networks for Parametric PDEs (Spotlight)
  • 04:25 : PIXEL: Physics-Informed Cell Representations for Fast and Accurate PDE Solvers (Spotlight)
  • 04:40 : Bridging the Gap Between Coulomb GAN and Gradient-regularized WGAN (Spotlight)
  • 04:55 : How PINNs cheat: Predicting chaotic motion of a double pendulum (Spotlight)
  • 05:10 : Poster Session 1
  • 06:05 : Yang Song (Keynote Talk)
  • 06:50 : Blind Drifting: Diffusion models with a linear SDE drift term for blind image restoration tasks (Spotlight)
  • 07:05 : Break
  • 08:05 : Rose Yu (Keynote Talk)
  • 08:50 : A Universal Abstraction for Hierarchical Hopfield Networks (Spotlight)
  • 09:05 : Poster Session 2
  • 10:00 : Christopher Rackauckas (Keynote Talk)
  • 10:45 : Closing remarks

Invited Speakers


Chris Rackauckas (confirmed) is the Co-PI of the Julia Lab at the Computer Science and Artificial Intelligence Laboratory (CSAIL) of the Massachusetts Institute of Technology (MIT), as well as the Director of Modeling and Simulation at Julia Computing and Director of Scientific Research at Pumas-AI. His research spans topics in numerical methods for differential equations, scientific machine learning, and high-performance computing. He is the lead developer of the DifferentialEquations.jl solver suite, along with over a hundred other Julia packages, earning him the inaugural Julia Community Prize, an outstanding paper award at the IEEE-HPEC conference for the computational derivation of efficient stochastic differential equation solvers, and front-page features on many tech community sites. [Webpage]


Rose Yu (confirmed) is an assistant professor in the Department of Computer Science and Engineering and the Halıcıoğlu Data Science Institute at UC San Diego. She is a primary faculty member of the AI Group and is affiliated with the Contextual Robotics Institute, Bioinformatics and Systems Biology, and the Center for Machine-Integrated Computing and Security. Her research interests lie primarily in machine learning, especially for large-scale spatiotemporal data. She is generally interested in deep learning, optimization, and spatiotemporal reasoning, and is particularly excited about the interplay between physics and machine learning. Her work has been applied to learning dynamical systems in sustainability, health, and the physical sciences. [Webpage]


Yang Song (confirmed) is a final-year Ph.D. student in Computer Science at Stanford University. His work focuses on the foundations of deep generative models, their applications to AI safety and inverse problems, and their impact on the broader field of machine learning. He has made several key contributions to the mathematical formulation and empirical performance of continuous-time score-based diffusion models, currently state-of-the-art methods in deep generative modeling. [Webpage]


Recordings

The workshop will be broadcast via Zoom, and the poster sessions will be held on Gather.Town. The recordings will be uploaded to YouTube.

Accepted Papers

[Spotlight Papers]
[Poster Papers]

Organizers

Animesh Garg
University of Toronto, Vector Institute, NVIDIA
David Duvenaud
University of Toronto, Vector Institute
Winnie Xu
University of Toronto, Google Brain
Archis Joglekar
University of Michigan, Syntensor
Michael Poli
Stanford University
Patrick Kidger
Google X
Martin Magill
Borealis AI
Stefano Massaroli
University of Tokyo, RIKEN
Thor Jonsson
University of Guelph
Maryam Hosseini
Université de Sherbrooke
Kelly Buchanan
Columbia University
Qiyao Wei
University of Cambridge
Ermal Rrapaj
University of California, Berkeley
Luca Herranz-Celotti
Université de Sherbrooke


Acknowledgments

Thanks to visualdialog.org for the webpage format. Thanks to whatsonmyblackboard for the cozy blackboard photo.
