October 19, 2024

Introduction to Differentiable Physics

The central goal of these methods is to use existing numerical solvers, and equip them with functionality to compute gradients with respect to their inputs. Once this is realized for all operators of a simulation, we can leverage the autodiff functionality of DL frameworks with backpropagation to let gradient information flow from a simulator into an NN and vice versa. This has numerous advantages such as improved learning feedback and generalization, as we’ll outline below.

In other words, the idea is to re-implement the simulation (i.e., the partial differential equation solver) using a deep learning framework such as TensorFlow or PyTorch.
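To make this concrete, here is a minimal sketch of my own (not code from the book): a single explicit 1D diffusion step written with plain PyTorch tensor ops, plus a small hypothetical correction network. Because every operation in the step is differentiable, the loss gradient flows back through the solver into the network's weights.

```python
# A minimal sketch (illustrative only): a 1D diffusion step built from PyTorch
# ops, so autodiff can push gradients from a simulation loss into a small NN.
import torch
import torch.nn as nn

def diffusion_step(u, dt=0.01, dx=1.0, nu=0.1):
    """One explicit finite-difference step of u_t = nu * u_xx (periodic BCs)."""
    u_xx = (torch.roll(u, -1) - 2.0 * u + torch.roll(u, 1)) / dx**2
    return u + dt * nu * u_xx

# Hypothetical NN that learns a per-cell correction to the coarse solver step.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

u0 = torch.sin(torch.linspace(0.0, 6.28, 64))      # initial state
u_target = diffusion_step(diffusion_step(u0))       # stand-in reference solution

u = u0
for _ in range(2):                                   # unroll two solver steps
    u = diffusion_step(u) + net(u.unsqueeze(-1)).squeeze(-1)

loss = ((u - u_target) ** 2).mean()
loss.backward()    # gradients flow through the solver steps into the NN
opt.step()
```

Here the "solver" is just a couple of tensor operations, which is exactly the situation the next question is about.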

But this leads us to a question…

https://physicsbaseddeeplearning.org/diffphys.html

Why don’t we just use these operators to realize our physics solver?

Most physics solvers can be broken down into a sequence of vector and matrix operations. All state-of-the-art DL frameworks support these, so why don’t we just use these operators to realize our physics solver?

Their answer:

It’s true that this would theoretically be possible. The problem here is that each of the vector and matrix operations in tensorflow and pytorch is computed individually, and internally needs to store the current state of the forward evaluation for backpropagation. For a typical simulation, however, we’re not overly interested in every single intermediate result our solver produces. Typically, we’re more concerned with significant updates such as the step from u(t) to u(t+dt).

In other words, when backpropagating through a PDE solver we don't care about every intermediate tensor, only about the significant updates such as the step from u(t) to u(t+dt). I am not sure this is the whole story, though; there may also be downsides for optimization. For example, backpropagating through many unrolled solver steps can cause gradient explosion and make convergence difficult.
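As an illustration of the quoted point, here is a minimal sketch (my own construction, assuming a simple linear diffusion step; the names and constants are hypothetical) of wrapping an entire solver step in a custom torch.autograd.Function with an analytic backward pass. The framework then only sees one step-level operation instead of storing buffers for every elementary tensor op inside it.

```python
# A minimal sketch (illustrative only): a whole solver step with a hand-written
# adjoint, so PyTorch does not need to save every elementary op's intermediates.
import torch

DT, DX, NU = 0.01, 1.0, 0.1

def laplacian(v):
    # Second-order central difference with periodic boundaries.
    return (torch.roll(v, -1) - 2.0 * v + torch.roll(v, 1)) / DX**2

class DiffusionStep(torch.autograd.Function):
    @staticmethod
    def forward(ctx, u):
        # The update is linear in u, so nothing needs to be saved in ctx.
        return u + DT * NU * laplacian(u)

    @staticmethod
    def backward(ctx, grad_out):
        # Adjoint of the linear update: the discrete Laplacian is symmetric,
        # so the backward pass reuses the same stencil on the incoming gradient.
        return grad_out + DT * NU * laplacian(grad_out)

u = torch.sin(torch.linspace(0.0, 6.28, 64)).requires_grad_()
u_next = DiffusionStep.apply(u)
u_next.sum().backward()
print(u.grad.shape)    # gradient with respect to the state before the step
```

This kind of per-step custom gradient is one way to keep the memory footprint at the level of u(t) and u(t+dt); whether it also helps with the optimization issues I worried about above (e.g., exploding gradients over long unrolls) is a separate question.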