
machine-learning-applied-to-cfd

Important note: this repository was archived on Aug 26, 2022, and the code it contains is mostly outdated. To find actively maintained and updated code examples covering a large range of applications, refer to the ml-cfd-lecture repository.

Outline

  1. Introduction
  2. Dependencies
    1. Dependencies for Jupyter notebooks
    2. Running notebooks locally
    3. Running notebooks with Colaboratory
    4. Dependencies for OpenFOAM cases and apps
  3. Examples
    1. Supervised learning
    2. Unsupervised learning
    3. Reinforcement learning
    4. Application to CFD
  4. How to reference
  5. Useful links
  6. Other repositories with related content
  7. Contributors

Introduction

This repository contains examples of how to use machine learning (ML) algorithms in the field of computational fluid dynamics (CFD). ML algorithms may be applied in different steps during a CFD-based study:

  • pre-processing, e.g., for geometry or mesh generation
  • run-time, e.g., as a dynamic boundary condition or as a subgrid-scale model
  • post-processing, e.g., to create substitute models or to analyze results

Another possible categorization distinguishes the type of machine learning algorithm, e.g.:

  • supervised learning: the algorithm creates a mapping between given features and labels, e.g., between the shape of a truck and the drag force acting on it
  • unsupervised learning: the algorithm finds labels in the data, e.g., if two particles p1 and p2 are represented by some points on their surface (there is only a list of points, but it is not known to which particle they belong), the algorithm will figure out for each point whether it belongs to p1 or p2 (both the supervised and the unsupervised case are sketched in the example after this list)
  • reinforcement learning: an agent acting in an environment tries to maximize a (cumulative) reward, e.g., an agent setting the solution control of a simulation tries to finish the simulation as quickly as possible, thereby learning to find optimized solution controls for a given set-up (agent: some program modifying the solver settings; environment: the solver reacting on the changes in the settings; reward: the inverse of the time required to complete one iteration)
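
As an illustration of the first two categories, consider the following Python sketch. It is not taken from this repository; the data is purely synthetic and all variable names are made up for illustration. It uses Scikit-Learn, which is one of the packages listed under dependencies.

# hedged sketch with synthetic data, not part of the notebooks in this repository
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

np.random.seed(0)

# supervised learning: map a given feature (a characteristic length)
# to a given label (a drag value) using noisy synthetic data
length = np.random.uniform(1.0, 5.0, size=(100, 1))
drag = 2.0 * length[:, 0] + np.random.normal(0.0, 0.1, size=100)
model = LinearRegression().fit(length, drag)
print("predicted drag for length 3.0:", model.predict([[3.0]])[0])

# unsupervised learning: assign surface points to one of two particles p1 and p2
# without knowing beforehand to which particle each point belongs
points_p1 = np.random.normal(loc=[0.0, 0.0], scale=0.1, size=(50, 2))
points_p2 = np.random.normal(loc=[1.0, 1.0], scale=0.1, size=(50, 2))
points = np.vstack((points_p1, points_p2))
cluster_of_point = KMeans(n_clusters=2, n_init=10).fit_predict(points)
print("cluster of the first five points:", cluster_of_point[:5])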

Dependencies

Dependencies for Jupyter notebooks

Currently, there are two supported ways to execute the Jupyter notebooks contained in the notebooks folder:

  1. via a local installation of Anaconda
  2. via Google Colab (cloud-based)

Both approaches allow you to run the notebooks interactively and to save results.

Running notebooks locally

The notebooks use the following Python packages, which can all be installed via pip or conda:

  • Anaconda, Python 3.x version
  • NumPy v1.16, Pandas v0.24.2, Matplotlib v2.2.2, PyTorch v1.0.0, Scikit-Learn 0.19.1 or later versions

To install all packages using pip, run

pip3 install numpy matplotlib pandas scikit-learn

or using the conda installer, run

conda install numpy matplotlib pandas scikit-learn
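
A quick way to verify that the packages are available in the active Python environment is to import them and print their versions (a minimal check, not part of this repository):

# print the versions of the installed packages
import numpy, pandas, matplotlib, sklearn
for package in (numpy, pandas, matplotlib, sklearn):
    print(package.__name__, package.__version__)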

For PyTorch, it is best to use the graphical selection tool on the official website. Example install commands might be

# using pip
pip3 install torch torchvision
# using conda
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch

for systems with Cuda support, or

# using pip
pip3 install https://download.pytorch.org/whl/cpu/torch-1.1.0-cp36-cp36m-linux_x86_64.whl
pip3 install https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp36-cp36m-linux_x86_64.whl
# using conda
conda install pytorch-cpu torchvision-cpu -c pytorch

for systems without GPU acceleration.
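
To check which PyTorch version was installed and whether GPU (Cuda) acceleration is available, the following snippet may be used (a minimal check, not part of this repository):

# check the installed PyTorch version and Cuda availability
import torch
print("PyTorch version:", torch.__version__)
print("Cuda available:", torch.cuda.is_available())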

Running notebooks with Colaboratory

Running notebooks in Colab requires a Google account (the same account used for Gmail, Google Drive, etc.). Note that it is also possible to view the notebooks without an account (but without interactivity). After logging in to Colab, notebooks can be imported directly from Github (from this repository):

  • File -> Open notebook...
  • Select the GITHUB tab
  • Search for AndreWeiner
  • Select the notebook you want to import
  • Click on COPY TO DRIVE

Without the last step, you will still be able to run and modify most of the cells in the notebooks, but you will not be able to run cells which store intermediate results, e.g., model weights. The import window should look similar to the following:

[Screenshot of the Colab notebook import dialog]

Dependencies for OpenFOAM cases and apps

Running and compiling OpenFOAM+PyTorch applications is enabled via a special Docker image. The Dockerfile to build the image is also available on Github. First, install the latest version of Docker (Ubuntu, CentOS). The image is hosted on Dockerhub and can be downloaded by running

docker pull andreweiner/of_pytorch:of1906-py1.1-cpu

Currently, there is only a version with CPU support. To create and run a new container, go to the OpenFOAM folder and execute the runContainer.sh script:

cd OpenFOAM
./runContainer.sh

To compile or run applications, execute the scripts provided in the respective folders from within the container.

Examples

Supervised learning

Unsupervised learning

Reinforcement learning

Application to CFD

How to reference

If you find the examples in this repository useful, consider citing the following article:

@article{doi:10.1002/ceat.201900044,
author = {Weiner, Andre and Hillenbrand, Dennis and Marschall, Holger and Bothe, Dieter},
title = {Data-driven subgrid-scale modeling for convection-dominated concentration boundary layers},
journal = {Chemical Engineering \& Technology},
}

Useful links

Other repositories with related content

Contributors