ΦFlow is an open-source simulation toolkit built for optimization and machine learning applications. It is written mostly in Python and can be used with NumPy, PyTorch, Jax or TensorFlow. The close integration with these machine learning frameworks allows it to leverage their automatic differentiation functionality, making it easy to build end-to-end differentiable functions involving both learning models and physics simulations.
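As a brief illustration of what end-to-end differentiability looks like in practice, the following sketch differentiates a single advection step with respect to the initial velocity using the PyTorch backend. It is only a sketch: the gradient helper and its keyword arguments have changed between releases (e.g. `functional_gradient` in ΦFlow 2.x vs. `gradient` in later versions), so check the optimization documentation for the version you install.

```python
# Hedged sketch: differentiate a simulation step w.r.t. its input state.
# Function names follow the ΦFlow 2.x API and may differ in other versions.
from phi.torch.flow import *  # PyTorch backend provides automatic differentiation

def simulate_and_loss(velocity):
    velocity = advect.semi_lagrangian(velocity, velocity, dt=1.0)  # one advection step
    return field.l2_loss(velocity)  # scalar loss over the advected field

velocity0 = StaggeredGrid(Noise(), extrapolation.ZERO, x=32, y=32, bounds=Box(x=100, y=100))
# Name/signature of the gradient transform may vary by version; see the optimization docs.
grad_fn = field.functional_gradient(simulate_and_loss, wrt='velocity', get_output=False)
d_velocity = grad_fn(velocity0)  # gradient field with the same structure as velocity0
```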
Example gallery: Backward facing step • Heat flow • Mesh construction • Wake flow • SPH • FLIP • Streamlines • Terrain • Gravity • Billiards • Ropes • Gradient Descent • Optimize throw • Learning to throw • PIV • Close packing • Learning Φ(x,y) • Differentiable pressure
Installation with pip on Python 3.6 and above:
$ pip install phiflow
Install PyTorch, TensorFlow or Jax in addition to ΦFlow to enable machine learning capabilities and GPU execution. To enable the web UI, also install Dash. For optimal GPU performance, you may compile the custom CUDA operators; see the detailed installation instructions.
You can verify your installation by running
$ python3 -c "import phi; phi.verify()"
This will check for compatible PyTorch, Jax and TensorFlow installations as well.
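After installing one of these frameworks, the backend is selected by which submodule you import from:

```python
from phi.flow import *          # NumPy backend, no deep-learning framework required
# from phi.torch.flow import *  # PyTorch backend
# from phi.tf.flow import *     # TensorFlow backend
# from phi.jax.flow import *    # Jax backend
```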
- Tight integration with PyTorch, Jax and TensorFlow for straightforward neural network training with fully differentiable simulations that can run on the GPU.
- Built-in PDE operations with a focus on fluid phenomena, allowing for concise formulation of simulations (see the sketch after this list).
- Flexible, easy-to-use web interface featuring live visualizations and interactive controls that can affect simulations or network training on the fly.
- Object-oriented, vectorized design for expressive code, ease of use, flexibility and extensibility.
- Reusable simulation code, independent of backend and dimensionality, i.e. the exact same code can run a 2D fluid sim using NumPy and a 3D fluid sim on the GPU using TensorFlow or PyTorch.
- High-level linear equation solver with automated sparse matrix generation.
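The sketch below illustrates these points with a minimal smoke simulation adapted from the demos; exact signatures may differ slightly between ΦFlow versions. The same step function runs with any backend, and the call to fluid.make_incompressible internally builds and solves a sparse linear system for the pressure.

```python
from phi.flow import *  # swap for phi.torch.flow / phi.tf.flow / phi.jax.flow to run on the GPU

# 2D setup; a 3D run only needs z=... on the grids and a third buoyancy component.
smoke = CenteredGrid(0, extrapolation.BOUNDARY, x=64, y=64, bounds=Box(x=100, y=100))
velocity = StaggeredGrid(0, extrapolation.ZERO, x=64, y=64, bounds=Box(x=100, y=100))

def step(velocity, smoke, dt=1.0):
    smoke = advect.semi_lagrangian(smoke, velocity, dt)
    buoyancy = resample(smoke * (0, 0.1), to=velocity)        # upward buoyancy force
    velocity = advect.semi_lagrangian(velocity, velocity, dt) + dt * buoyancy
    velocity, pressure = fluid.make_incompressible(velocity)  # sparse linear solve for the pressure
    return velocity, smoke

for _ in range(100):
    velocity, smoke = step(velocity, smoke)
```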
Documentation Overview • ▶ YouTube Tutorials • API • Demos • Playground
ΦFlow builds on the tensor functionality from ΦML. To understand how ΦFlow works, read about named and typed dimensions first.
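As a brief sketch of what that looks like (using the phi.math API re-exported from ΦML), every dimension has a name and a type, and operations match dimensions by name rather than by position:

```python
from phi.flow import *

# A tensor with batch, spatial and channel dimensions.
data = math.random_normal(batch(examples=8), spatial(x=64, y=64), channel(vector='x,y'))
print(data.shape)  # named, typed dims, e.g. (examplesᵇ=8, xˢ=64, yˢ=64, vectorᶜ=x,y)

# Reductions select dimensions by name or type; no reshaping or transposing needed.
spatial_mean = math.mean(data, spatial)
```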
- ΦFlow to Blender
- What to Avoid: How to keep your code compatible with PyTorch, TensorFlow and Jax
- Legacy visualization & Dash & Console
- Legacy physics overview
Please use the following citation:
@inproceedings{holl2024phiflow,
title={${\Phi}_{\text{Flow}}$ ({PhiFlow}): Differentiable Simulations for PyTorch, TensorFlow and Jax},
author={Holl, Philipp and Thuerey, Nils},
booktitle={International Conference on Machine Learning},
year={2024},
organization={PMLR}
}
The following publications describe ΦFlow and research built on it:
- Learning to Control PDEs with Differentiable Physics, Philipp Holl, Vladlen Koltun, Nils Thuerey, ICLR 2020.
- Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers, Kiwon Um, Raymond Fei, Philipp Holl, Robert Brand, Nils Thuerey, NeurIPS 2020.
- ΦFlow: A Differentiable PDE Solving Framework for Deep Learning via Physical Simulations, Nils Thuerey, Kiwon Um, Philipp Holl, DiffCVGP workshop at NeurIPS 2020.
- Physics-based Deep Learning (book), Nils Thuerey, Philipp Holl, Maximilian Mueller, Patrick Schnell, Felix Trost, Kiwon Um.
- Half-Inverse Gradients for Physical Deep Learning, Patrick Schnell, Philipp Holl, Nils Thuerey, ICLR 2022.
- Scale-invariant Learning by Physics Inversion, Philipp Holl, Vladlen Koltun, Nils Thuerey, NeurIPS 2022.
ΦFlow has been used in the creation of various public data sets, such as PDEBench and PDEarena.
See more packages that use ΦFlow
The version history lists all major changes since release. Releases are also listed on PyPI.
Contributions are welcome! Check out this document for guidelines.
This work is supported by the ERC Starting Grant realFlow (StG-2015-637014) and the Intel Intelligent Systems Lab.