Efficient and scalable physics-informed deep learning and scientific machine learning, built on top of TensorFlow for multi-worker distributed computing
Use TensorDiffEq if you require:
What makes TensorDiffEq different?
- Completely open source
- Self-adaptive solvers for forward and inverse problems, leading to increased accuracy and stability in training and less overall training time
- Multi-GPU distributed training for large or fine-grained spatio-temporal domains
- Built on top of TensorFlow 2.0 for support of new functionality exclusive to recent TF releases, such as XLA support, AutoGraph for efficient graph building, and Grappler support for graph optimization* - with no risk of the underlying source code being sunset in a future TensorFlow release
- Intuitive interface - defining domains, BCs, ICs, and strong-form PDEs in "plain English" (see the sketch below)

*In development
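As a rough illustration of that interface, the sketch below sets up a forward solve of the 1D viscous Burgers' equation in the style of the project's documented examples. The module paths, class names (`DomainND`, `IC`, `dirichletBC`, `CollocationSolverND`), and keyword arguments are assumptions based on the published TensorDiffEq documentation and may differ between releases; self-adaptive training and multi-GPU distribution are enabled through additional `compile()`/`fit()` options described in the docs.

```python
# Sketch only: names and signatures follow the TensorDiffEq docs at the time of
# writing and may differ in your release -- check the documentation.
import math
import numpy as np
import tensorflow as tf
from tensordiffeq.domains import DomainND
from tensordiffeq.boundaries import IC, dirichletBC
from tensordiffeq.models import CollocationSolverND

# Spatio-temporal domain: x in [-1, 1], t in [0, 1]
Domain = DomainND(["x", "t"], time_var="t")
Domain.add("x", [-1.0, 1.0], 512)
Domain.add("t", [0.0, 1.0], 201)
Domain.generate_collocation_points(10000)

# Initial condition u(x, 0) = -sin(pi * x)
def func_ic(x):
    return -np.sin(x * math.pi)

init = IC(Domain, [func_ic], var=[["x"]])

# Homogeneous Dirichlet BCs at x = -1 and x = 1
upper_x = dirichletBC(Domain, val=0.0, var="x", target="upper")
lower_x = dirichletBC(Domain, val=0.0, var="x", target="lower")
BCs = [init, upper_x, lower_x]

# Strong-form PDE residual for viscous Burgers: u_t + u*u_x - (0.01/pi)*u_xx = 0
def f_model(u_model, x, t):
    u = u_model(tf.concat([x, t], 1))
    u_x = tf.gradients(u, x)
    u_xx = tf.gradients(u_x, x)
    u_t = tf.gradients(u, t)
    return u_t + u * u_x - (0.01 / tf.constant(math.pi)) * u_xx

# Fully connected network: 2 inputs (x, t), four hidden layers, 1 output (u)
model = CollocationSolverND()
model.compile([2, 128, 128, 128, 128, 1], f_model, Domain, BCs)
# Self-adaptive weighting and multi-GPU distribution are toggled via extra
# compile()/fit() arguments in the documented API (see the TensorDiffEq docs).
model.fit(tf_iter=10000, newton_iter=10000)
```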
If you use TensorDiffEq in your work, please cite it via:
@article{mcclenny2021tensordiffeq,
  title={TensorDiffEq: Scalable Multi-GPU Forward and Inverse Solvers for Physics Informed Neural Networks},
  author={McClenny, Levi D and Haile, Mulugeta A and Braga-Neto, Ulisses M},
  journal={arXiv preprint arXiv:2103.16034},
  year={2021}
}
Additional contributors: @marcelodallaqua, @ragusa, @emiliocoutinho