DEEP LEARNING ALGORITHMS FOR TIME-DEPENDENT PARTIAL DIFFERENTIAL EQUATIONS
Date
2024
Authors
Long, Jie
Publisher
Middle Tennessee State University
Abstract
Deep learning algorithms have demonstrated encouraging results in solving partial differential equations (PDEs). The advent of physics-informed neural networks (PINNs) has greatly improved the accuracy and efficiency of deep learning approaches for solving PDEs. The basic idea of such algorithms is to constrain the output of a neural network to satisfy the governing physical laws and prescribed conditions by incorporating those laws and the boundary conditions directly into the loss function used to train the network. Using this technique, we propose a variant of the physics-informed neural network to identify time-varying parameters of the Susceptible-Infectious-Recovered-Deceased (SIRD) model for COVID-19 by fitting daily reported cases. The learned parameters are verified with an ordinary differential equation solver, and the effective reproduction number is computed. Additionally, a Long Short-Term Memory (LSTM) network predicts the time-varying parameters for future weeks, demonstrating the accuracy and effectiveness of combining the two models.
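The parameter-identification idea above can be sketched as a composite loss: a physics residual that measures how well candidate trajectories and time-varying parameters satisfy the SIRD equations, plus a data-misfit term for the reported cases. The following is a minimal finite-difference sketch (the dissertation's PINN would use automatic differentiation through the network); all function and variable names here are illustrative, not taken from the dissertation.

```python
import numpy as np

def sird_pinn_loss(t, S, I, R, D, beta, gamma, mu, N, I_obs):
    """Composite loss for SIRD parameter identification (sketch).

    SIRD model with time-varying parameters:
        dS/dt = -beta(t) S I / N
        dI/dt =  beta(t) S I / N - gamma(t) I - mu(t) I
        dR/dt =  gamma(t) I
        dD/dt =  mu(t) I
    Derivatives are approximated with finite differences here;
    a PINN would differentiate the network output exactly.
    """
    dS, dI = np.gradient(S, t), np.gradient(I, t)
    dR, dD = np.gradient(R, t), np.gradient(D, t)
    r1 = dS + beta * S * I / N
    r2 = dI - beta * S * I / N + (gamma + mu) * I
    r3 = dR - gamma * I
    r4 = dD - mu * I
    physics = np.mean(r1**2 + r2**2 + r3**2 + r4**2)  # ODE residual
    data = np.mean((I - I_obs) ** 2)                  # fit reported cases
    return physics + data
```

Once the parameters are learned, the effective reproduction number follows as R_e(t) = beta(t) S(t) / ((gamma(t) + mu(t)) N), consistent with the SIRD structure above.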
Next, we explore methods for solving partial differential equations from sparse data. We combine a neural network with a numerical method to solve time-dependent PDEs using only initial conditions and limited observed data. A Gated Recurrent Units (GRU) network estimates the time-iteration scheme, integrating prior knowledge of the governing equations. An implicit numerical method is then applied to compute a second time-iteration scheme, and the loss function penalizes the difference between the two. Building on this, we propose a novel physics-informed encoder-decoder gated recurrent neural network that solves time-dependent PDEs without any observed data. The encoder approximates the underlying patterns and structure of the solution over the entire spatiotemporal domain. The approximated solution is then processed by the decoder, a GRU layer that uses the initial condition as its initial state to retain critical information in the hidden states. Boundary conditions are enforced on the final prediction to improve model performance. The effectiveness of both methods is validated on several problems.
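The scheme-consistency idea can be illustrated on a concrete equation: a network proposes the next time level, an implicit update computes it from the current level, and the loss penalizes the gap between the two. Below is a minimal sketch assuming a 1D heat equation u_t = nu * u_xx with a backward-Euler step as the implicit method; the specific equation, discretization, and function names are illustrative assumptions, not the dissertation's exact setup.

```python
import numpy as np

def implicit_step_heat(u_n, dt, dx, nu):
    """Backward-Euler step for u_t = nu * u_xx on a uniform grid,
    assuming zero values just outside the domain (homogeneous
    Dirichlet ghost points). Solves (I - dt*nu*L) u_{n+1} = u_n."""
    m = len(u_n)
    L = (np.diag(np.full(m - 1, 1.0), -1)
         - 2.0 * np.eye(m)
         + np.diag(np.full(m - 1, 1.0), 1)) / dx**2
    A = np.eye(m) - dt * nu * L
    return np.linalg.solve(A, u_n)

def scheme_consistency_loss(u_pred_next, u_n, dt, dx, nu):
    """Penalize the gap between the network's proposed next state
    (e.g., a GRU output) and the implicit numerical update."""
    return float(np.mean((u_pred_next - implicit_step_heat(u_n, dt, dx, nu)) ** 2))
```

In training, `u_pred_next` would come from the GRU, so minimizing this loss drives the learned time iteration toward consistency with the implicit scheme, without requiring observed solution data beyond the initial condition.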
Additionally, we observe that the traditional physics-informed neural network often fails to converge due to imbalances among the components of the multi-term loss function in the back-propagated gradients during training. The standard remedy is to assign appropriate weights to each component of the loss function, but determining the correct weights is challenging. We therefore introduce the Self-Learning Physics-Informed Neural Network for solving nonlinear partial differential equations. In this method, the weights are learned by separate neural networks, eliminating the need for hyper-parameter fine-tuning. The effectiveness of our method is demonstrated on the Burgers' and Burgers-Fisher equations.
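The weighting problem can be sketched with a simple learnable parameterization: each loss component gets a weight exp(-s_i) with a +s_i penalty so the weights cannot all collapse to zero. This uncertainty-style form is only an illustrative stand-in; in the dissertation the weights are produced by separate neural networks rather than free scalar parameters.

```python
import numpy as np

def self_weighted_loss(losses, s):
    """Combine PINN loss components (e.g., PDE residual, initial
    condition, boundary condition) with learnable weights.

    Each component L_i is scaled by exp(-s_i); the additive +s_i
    term penalizes driving all weights to zero. (Sketch only: the
    Self-Learning PINN learns these weights with separate networks.)
    """
    losses = np.asarray(losses, dtype=float)
    s = np.asarray(s, dtype=float)
    return float(np.sum(np.exp(-s) * losses + s))
```

During training, the s_i (or the networks producing them) are optimized jointly with the solution network, so components with large gradients are automatically down-weighted instead of being tuned by hand.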
Keywords
Data Driven,
Deep Learning,
Partial Differential Equation,
Physics-informed,
Recurrent Neural Network,
Self-Learning,
Mathematics