Tutorials for gradient-based history matching on high-fidelity models and on their reduced representations (with PCA and autoencoders).
The dataset used in this demo repository is the digit-MNIST images X, with a linear operator G as the forward model and the resulting simulated responses denoted Y. The physical system is thus represented as Y = G(X), or equivalently D = G(M) with models M and data D. A more detailed description of the dataset is available here (in the readme). Here you will find demos for dimensionality reduction of the digit-MNIST images using autoencoders and PCA.
We are interested in learning the inverse mapping M = G⁻¹(D), which is not trivial when M is non-Gaussian (as is the case with the digit-MNIST images) and G is nonlinear (in this demo we assume a linear operator). Such a complex mapping (the task is also known as history matching) may yield solutions that are non-unique, with features that are not consistent with the training models. In gradient-based history matching, the objective function that we want to minimize is the following, where d_obs is the field observation, m is our variable of interest, and we assume that the forward operator G sufficiently represents the underlying linear or nonlinear physical system (e.g. multi-phase fluid flow, heat diffusion).
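In a standard least-squares form (a reconstruction in the notation above; the notebooks may add covariance weighting or regularization):

$$ J(m) = \frac{1}{2} \left\lVert G(m) - d_{obs} \right\rVert_2^2 $$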
The simple closed-form solution:
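For a linear operator G this is the ordinary least-squares estimate from the normal equations, assuming $G^{\top}G$ is invertible:

$$ \hat{m} = \left( G^{\top} G \right)^{-1} G^{\top} d_{obs} $$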
As shown in this notebook, the inversion solution can reproduce d_obs, but it shows no realism with respect to the set of training models.
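A minimal NumPy sketch of this closed-form inversion (the operator, sizes, and names here are illustrative stand-ins, not the notebook's actual setup):

```python
import numpy as np

def closed_form_inversion(G, d_obs):
    """Ordinary least-squares solution m_hat for G m = d_obs."""
    # lstsq is more robust than explicitly forming (G^T G)^{-1} G^T
    m_hat, *_ = np.linalg.lstsq(G, d_obs, rcond=None)
    return m_hat

# Illustrative example with a random linear operator
rng = np.random.default_rng(0)
n_obs, n_pixels = 64, 28 * 28
G = rng.standard_normal((n_obs, n_pixels))
m_true = rng.random(n_pixels)            # stand-in for a flattened digit image
d_obs = G @ m_true
m_hat = closed_form_inversion(G, d_obs)
print(np.allclose(G @ m_hat, d_obs))     # True: d_obs is reproduced,
                                         # but m_hat is not a realistic digit
```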
The gradient for the loss function:
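For the least-squares objective above with linear G, the standard form is:

$$ \nabla_m J(m) = G^{\top} \left( G m - d_{obs} \right) $$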
The update equation:
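Gradient descent with step size $\alpha$ gives:

$$ m_{k+1} = m_k - \alpha \, G^{\top} \left( G m_k - d_{obs} \right) $$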
Run the optimization process as shown in this notebook.
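A minimal sketch of that optimization loop (the starting model, step size, and iteration count are illustrative, not the notebook's values):

```python
import numpy as np

def gradient_descent_inversion(G, d_obs, n_iter=500, alpha=1e-3):
    """Iterate m_{k+1} = m_k - alpha * G^T (G m_k - d_obs)."""
    m = np.zeros(G.shape[1])             # start from a zero model
    for _ in range(n_iter):
        residual = G @ m - d_obs         # data mismatch
        m -= alpha * G.T @ residual      # steepest-descent step
    return m
```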
This inversion solution is also unsatisfactory, although it can still reproduce d_obs.
The poor inversion solutions we have seen above are caused by the non-Gaussian features of the digit-MNIST dataset. In this notebook, we instead represent the images as PCA coefficients; refer to this tutorial for how that is done.
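One way to build this representation with scikit-learn (sketched here with the small `load_digits` set as a stand-in for digit-MNIST; the tutorial may differ in implementation and number of components):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X = load_digits().data                   # stand-in images, shape (1797, 64)

pca = PCA(n_components=30)               # illustrative number of components
Z = pca.fit_transform(X)                 # PCA coefficients z for each image
Phi = pca.components_.T                  # basis matrix, shape (64, 30)
m_bar = pca.mean_                        # training mean
X_rec = Z @ Phi.T + m_bar                # reconstruction m = Phi z + m_bar
```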
Then, by the chain rule, the update equation simply becomes:
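Writing the PCA reconstruction as $m = \Phi z + \bar{m}$ (with $\Phi$ the matrix of principal components and $\bar{m}$ the training mean, in the notation introduced here), the chain rule gives $\partial m / \partial z = \Phi$ and hence:

$$ z_{k+1} = z_k - \alpha \, \Phi^{\top} G^{\top} \left( G \left( \Phi z_k + \bar{m} \right) - d_{obs} \right) $$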
The minimization process:
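A sketch of that process in the PCA latent space, reusing `Phi` and `m_bar` from the PCA snippet above (hyperparameters are illustrative):

```python
import numpy as np

def pca_space_inversion(G, d_obs, Phi, m_bar, n_iter=500, alpha=1e-3):
    """Gradient descent on the PCA coefficients z instead of the pixels m."""
    z = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        m = Phi @ z + m_bar              # decode latent to image space
        grad_m = G.T @ (G @ m - d_obs)   # gradient w.r.t. m
        z -= alpha * Phi.T @ grad_m      # chain rule: dm/dz = Phi
    return Phi @ z + m_bar
```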
Here we see that the realism of the inversion solution is better preserved, while it can still reproduce d_obs.
We can also use autoencoders for dimensionality reduction, as we did here.
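As one possible setup, a minimal dense autoencoder in Keras (the architecture and sizes are illustrative and not necessarily those of the linked notebook):

```python
from tensorflow import keras

latent_dim = 20                                  # illustrative latent size

encoder = keras.Sequential([
    keras.layers.Input(shape=(784,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(latent_dim),
])
decoder = keras.Sequential([
    keras.layers.Input(shape=(latent_dim,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(X_train, X_train, epochs=20, batch_size=256)
```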
As with PCA, the update equation then becomes:
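Writing the decoder as $m = f_{dec}(z_m)$ (notation introduced here for illustration):

$$ z_{k+1} = z_k - \alpha \left( \frac{\partial m}{\partial z_m} \right)^{\top} G^{\top} \left( G \, f_{dec}(z_k) - d_{obs} \right) $$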
where the Jacobian (dm/dz_m) is obtained from the decoder. See the pending issues.
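One way to obtain that Jacobian with TensorFlow's autodiff, assuming a Keras decoder like the sketch above (for larger models, a vector-Jacobian product via `tape.gradient` avoids forming the full matrix):

```python
import tensorflow as tf

def decoder_jacobian(decoder, z):
    """Jacobian dm/dz_m of the decoder output w.r.t. a 1-D latent vector z."""
    z = tf.convert_to_tensor(z[None, :], dtype=tf.float32)  # add batch dim
    with tf.GradientTape() as tape:
        tape.watch(z)
        m = decoder(z)                   # decoded image, shape (1, n_pixels)
    J = tape.jacobian(m, z)              # shape (1, n_pixels, 1, latent_dim)
    return tf.reshape(J, (m.shape[1], z.shape[1]))
```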