This repository contains code, datasets, and results from the paper:
Kamila Zdybał, James C. Sutherland, Alessandro Parente - Optimizing progress variables for ammonia/hydrogen combustion using encoding-decoding networks, 2024.
Data and results files will be shared separately via Google Drive, as they take up over 5 GB of space.
- Script for loading data: ammonia-Stagni-load-data.py
We used Python==3.10.13 and the following library versions:
pip install numpy==1.26.2
pip install pandas==2.1.3
pip install scipy==1.11.4
pip install scikit-learn==1.3.2
pip install tensorflow==2.15.0
pip install keras==2.15.0
You will also need our library PCAfold==2.2.0.
Other requirements are:
pip install matplotlib
pip install plotly
pip install cmcrameri
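If you want to verify that your environment matches the pinned versions above, a small helper like the following can report any mismatches. This script is not part of the repository; it is a minimal sketch using only the standard library's `importlib.metadata`.

```python
# Hypothetical helper (not part of this repository): compares installed
# package versions against the versions pinned in this README.
from importlib.metadata import version, PackageNotFoundError

PINNED = {
    "numpy": "1.26.2",
    "pandas": "2.1.3",
    "scipy": "1.11.4",
    "scikit-learn": "1.3.2",
    "tensorflow": "2.15.0",
    "keras": "2.15.0",
    "PCAfold": "2.2.0",
}

def check_versions(pinned=PINNED):
    """Return {package: (installed_version, matches_pin)}; version is None if not installed."""
    report = {}
    for package, wanted in pinned.items():
        try:
            installed = version(package)
        except PackageNotFoundError:
            installed = None
        report[package] = (installed, installed == wanted)
    return report

if __name__ == "__main__":
    for package, (installed, ok) in check_versions().items():
        status = "OK" if ok else f"expected {PINNED[package]}"
        print(f"{package}: {installed} ({status})")
```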
First, run the PV optimization with RUN-PV-optimization.py, using the desired parameters.
Once you have the results files, you can run the quantitative assessment of PVs with RUN-VarianceData.py.
Both of those scripts load the appropriate data under the hood using ammonia-Stagni-load-data.py.
You have a lot of flexibility in setting different ANN hyper-parameters in those two scripts using the argparse Python library. If you're new to argparse, check out my short video tutorials.
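To illustrate how hyper-parameters like these are typically exposed through argparse, here is a minimal sketch. The flag names below are taken from the example commands in this README; the authoritative, full set of arguments and their defaults is defined in RUN-PV-optimization.py itself, so treat this only as an illustration of the mechanism.

```python
# Minimal argparse sketch mirroring the flags shown in this README.
# The actual argument definitions live in RUN-PV-optimization.py.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="PV optimization (sketch)")
    parser.add_argument("--parameterization", choices=["f-PV", "f-PV-h"], default="f-PV")
    parser.add_argument("--data_type", type=str, default="SLF")
    parser.add_argument("--data_tag", type=str, default="NH3-H2-air-25perc")
    parser.add_argument("--random_seeds_tuple", type=int, nargs=2, default=[0, 20])
    parser.add_argument("--target_variables_indices", type=int, nargs="+",
                        default=[0, 1, 3, 5, 6, 9])
    parser.add_argument("--initializer", type=str, default="GlorotUniform")
    parser.add_argument("--init_lr", type=float, default=0.001)
    # Boolean flag pair: --pure_streams / --no-pure_streams (Python 3.9+)
    parser.add_argument("--pure_streams", action=argparse.BooleanOptionalAction,
                        default=True)
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```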
- Master script for running PV optimization: RUN-PV-optimization.py
The above script uses one of the following under the hood:
- QoI-aware encoder-decoder for the $(f, PV)$ optimization: QoI-aware-ED-f-PV.py
- QoI-aware encoder-decoder for the $(f, PV, \gamma)$ optimization: QoI-aware-ED-f-PV-h.py
depending on which --parameterization you selected.
- Master script for running quantitative PV assessment: RUN-VarianceData.py
The above script uses one of the following under the hood:
- Assessment of $(f, PV)$ parameterizations: VarianceData-f-PV.py
- Assessment of $(f, PV, \gamma)$ parameterizations: VarianceData-f-PV-h.py
depending on which --parameterization you selected.
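The dispatch described above can be summarized as a simple lookup from the --parameterization value to the worker script. This mapping is only an illustration of the behavior the README describes; the actual mechanism lives inside the two RUN-*.py scripts.

```python
# Illustrative sketch of the dispatch described above: each master script
# selects its worker script based on the --parameterization value.
OPTIMIZATION_SCRIPTS = {
    "f-PV": "QoI-aware-ED-f-PV.py",
    "f-PV-h": "QoI-aware-ED-f-PV-h.py",
}
ASSESSMENT_SCRIPTS = {
    "f-PV": "VarianceData-f-PV.py",
    "f-PV-h": "VarianceData-f-PV-h.py",
}

def select_script(task, parameterization):
    """Map (task, parameterization) to the worker script named in this README."""
    table = {"optimization": OPTIMIZATION_SCRIPTS, "assessment": ASSESSMENT_SCRIPTS}
    return table[task][parameterization]

print(select_script("optimization", "f-PV"))
```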
This is a minimal example of running a Python script with all hyper-parameters set as per §2.2 in the paper:
python RUN-PV-optimization.py --parameterization 'f-PV' --data_type 'SLF' --data_tag 'NH3-H2-air-25perc' --random_seeds_tuple 0 20 --target_variables_indices 0 1 3 5 6 9
Alternatively, you can change various parameters (kernel initializer, learning rate, etc.) using the appropriate argument:
python RUN-PV-optimization.py --initializer 'GlorotUniform' --init_lr 0.001 --parameterization 'f-PV' --data_type 'SLF' --data_tag 'NH3-H2-air-25perc' --random_seeds_tuple 0 20 --target_variables_indices 0 1 3 5 6 9
If you'd like to remove pure stream components from the PV definition (non-trainable pure streams preprocessing, as discussed in §3.4 in the paper), add the flag --no-pure_streams as an extra argument.
To run the $(f, PV)$ optimization, use --parameterization 'f-PV'. To run the $(f, PV, \gamma)$ optimization, use --parameterization 'f-PV-h'.
Note: Logging with Weights & Biases is also possible in the scripts above.
All results are post-processed and visualized in dedicated Jupyter notebooks. You can access the appropriate notebook below:
→ This Jupyter notebook can be used to reproduce Figs. 2-3.
→ This Jupyter notebook can be used to reproduce Fig. 4 and Fig. 10.
→ This Jupyter notebook can be used to reproduce supplementary Figs. S37-S38.