This repository contains all code and data (including data for figures) for the following paper:
Hamilton, A.L., Characklis, G.W., & Reed, P.M. (2021). From Stream Flows to Cash Flows: Leveraging Evolutionary Multi-Objective Direct Policy Search to Manage Hydrologic Financial Risks (in review).
Licensed under the GNU General Public License v3.0. This is a fork of hamilton-2020-managing-financial-risk-tradeoffs-for-hydropower, the code and data repository for "Managing financial risk tradeoffs for hydropower generation using snowpack-based index contracts" (2020), a previous publication by the same authors in Water Resources Research. The current repository and associated manuscript build on the previous work by introducing a dynamic risk management formulation solved with Evolutionary Multi-Objective Direct Policy Search (EMODPS), as well as a framework combining information-theoretic sensitivity analysis and visualization. In building the EMODPS component of this code base, I borrowed and built upon sections of Julianne Quinn's Lake Problem Direct Policy Search code.
- `code/` - directory with all code used to replicate the paper
  - `synthetic_data_and_moea_plots/` - Python and bash scripts needed for (1) generating all synthetic time series, (2) plotting output from multi-objective optimization (MOO), (3) running and plotting entropic sensitivity analysis (ESA)
  - `optimization/` - C++ and bash scripts needed for MOO
  - `misc/` - directory for storing third-party software
    - `HypervolumeEval.class` - Class for calculating hypervolume with MOEAFramework, written by Dave Hadka, created following instructions here
    - `boostutil.h` - Utility functions for boost matrices/vectors, taken from Lake Problem DPS by Julianne Quinn
- `data/` - directory with all data
  - `downloaded_inputs/` - original data (see Hamilton et al., 2020, for sources)
    - `ice_electric-*final.xls`, `NP15Hub.xls` - Historical electricity price data at NP15 hub in northern California
    - `SeriesReport-20190311141838_d27dd7.xlsx` - Historical consumer price index data
    - `SFPUC_Combined_Public.xlsx` - Historical hydropower generation and sales for SFPUC
    - `SFPUC_genMonthly.csv` - Historical hydropower generation for SFPUC (after manually aggregating across sources to monthly time step)
    - `swe_dana_meadows.csv` - Historical snow water equivalent depth at Dana Meadows snow station
  - `generated_inputs/` - data generated by user
    - `param_LHC_sample_withLamPremShift.txt` - financial parameter file taken from Hamilton et al., 2020, repository
    - Other files created by the model itself, as described below
  - `optimization_output/` - outputs from MOO needed for further analysis
  - `policy_simulation/` - outputs from simulating example policies and from ESA
- `figures/` - directory for storing figures
- Clone the model from this GitHub repository
- Install Python dependencies in a virtual environment (see the environment sketch after this list). All synthetic data generation, data analysis, and figure production are set up to run on my Windows laptop, using a Linux bash shell (WSL, Ubuntu 18.04 LTS) and Python 3.6.9. You will need to install all packages listed at the top of the Python files in `code/synthetic_data_and_moea_plots`.
- The MOO and ESA are set up to run on THECUBE, a cluster housed at Cornell University. THECUBE uses the slurm scheduler. Submission scripts and makefiles may need to be altered to accommodate different setups.
- Obtain additional software (the sketch after this list also shows the expected layout of `code/misc/` once these steps are done)
  - Download the Borg MOEA source code
    - Create a new directory called `code/misc/borg/` and place the source code here.
  - Download the "Compiled Binaries" from the MOEAFramework website.
    - Copy the `moeaframework.c` & `moeaframework.h` files (from the `MOEAFramework-*/examples` directory of the package) to `code/misc/borg`.
  - Download the "Demo Application" from the MOEAFramework website.
    - Move `MOEAFramework-*-Demo.jar` to `code/misc`.
  - Download `pareto.py` from GitHub.
    - Move to `code/misc`.
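A minimal sketch of the local setup, assuming `python3` and the `venv` module are available inside WSL. The package names below are placeholders only; the authoritative list is the imports at the top of the scripts in `code/synthetic_data_and_moea_plots/`. The trailing comments show roughly what `code/misc/` should contain after the downloads above.

```bash
# Create and activate a virtual environment, then install dependencies.
# Package names here are illustrative; install whatever the scripts import.
python3 -m venv .venv
source .venv/bin/activate
pip install numpy pandas matplotlib

# Expected contents of code/misc/ after the downloads above (approximate):
#   code/misc/borg/                    <- Borg MOEA source + moeaframework.c/.h
#   code/misc/MOEAFramework-*-Demo.jar
#   code/misc/pareto.py
#   code/misc/HypervolumeEval.class    (already in the repo)
#   code/misc/boostutil.h              (already in the repo)
ls code/misc
```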
- Run `make_synthetic_data_plots.py` from the `code/synthetic_data_and_moea_plots/` directory, either in an IDE or in a bash shell (see the sketch after this list).
- Outputs
  - `data/generated_inputs/synthetic_data.txt` - Synthetic time series of hydropower revenue, CFD net payout, and power price index. Needed for MOO.
  - `data/generated_inputs/example_data.txt` - 3x20 year samples from the synthetic record, one very wet, one average, one very dry. Each sample reports SWE index, CFD net payout, hydropower generation, weighted average power price, power price index, and hydropower revenue, at an annual time scale.
  - Figures of power price index correlation (Figure S2 from Supporting Information) and hedging contract structure (Figure S3 from Supporting Information), in the `figures` directory
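A minimal sketch of the bash-shell route, assuming the virtual environment from the setup step is already activated and paths are relative to the repository root:

```bash
cd code/synthetic_data_and_moea_plots/
python make_synthetic_data_plots.py
# The generated inputs should now appear in data/generated_inputs/
ls ../../data/generated_inputs/
```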
- Note: If you don't want to repeat the MOO, you can skip to the next section and analyze my MOO output stored in `data/optimization_output`
- Transfer new files from `data/generated_inputs/` to the cluster
- Navigate to `code/optimization/cluster_run` and run the following commands in order (summarized in the sketch after this list). You will need to wait for each set of runs to finish before proceeding to the next step. All outputs from these steps are stored in `data/optimization_output`.
  - `sh run_rbf_experiment.sh` - This will run an experiment to see how many radial basis functions (RBFs) should be used for this problem, using the 4-objective dynamic formulation. The script will create new directories for (1, 2, 3, 4, 8, 12) RBFs, change the relevant parameters, recompile, and dispatch the Borg MOEA using 10 random seeds each on the slurm scheduler.
  - `sh run_rbf_postprocess.sh` - This will postprocess the RBF experiment to find reference sets and runtime metrics and re-evaluate on a new synthetic dataset.
  - `sh run_more_seeds.sh` - Using the results from the RBF experiment (as analyzed in the "Create Figures" section below), we find that 2 RBFs is a good choice. This step will run 20 more random seeds with 2 RBFs.
  - `sh run_more_seeds_postprocess.sh` - Combine the original 10 seeds into the new directory and postprocess.
  - `sh run_formulation_experiment.sh` - Run 3 more formulations, for 30 seeds each: (1) 2-objective dynamic, (2) 2-objective static, (3) 4-objective static.
  - `sh run_formulation_postprocess.sh` - Postprocess results from the alternative formulations.
  - `sh run_refSets_subproblem.sh` - Get subsets of the 4-objective dynamic reference set, non-dominated with respect to alternative lower-dimensional problems.
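For reference, the full sequence run from `code/optimization/cluster_run` on the cluster is sketched below. The `squeue` check is just one convenient way to confirm that the slurm jobs from each step have finished before launching the next; use whatever monitoring you prefer.

```bash
cd code/optimization/cluster_run

sh run_rbf_experiment.sh           # RBF experiment: (1,2,3,4,8,12) RBFs x 10 seeds each
squeue -u $USER                    # wait until these jobs finish before continuing
sh run_rbf_postprocess.sh          # reference sets, runtime metrics, re-evaluation
sh run_more_seeds.sh               # 20 more seeds with 2 RBFs
sh run_more_seeds_postprocess.sh   # combine with the original 10 seeds, postprocess
sh run_formulation_experiment.sh   # 2obj dynamic, 2obj static, 4obj static (30 seeds each)
sh run_formulation_postprocess.sh
sh run_refSets_subproblem.sh       # lower-dimensional subsets of the 4obj dynamic reference set
```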
- Note: If you don't want to repeat the ESA, you can skip to the next section and analyze my ESA output stored in `data/policy_simulation`
- Transfer new files from `data/generated_inputs/` to the cluster
- Navigate to `code/synthetic_data_and_moea_plots` (still on the cluster) and run the following commands in order (see the sketch after this list). You will need to wait for each set of runs to finish before proceeding to the next step. All outputs from these steps are stored in `data/policy_simulation`.
  - `sh run_SA_cube.sh` - This script will run ESA for each policy in the 4-objective dynamic and 2-objective dynamic solution sets, and store the result as a separate file for each.
  - `python consolidate_SA_output.py` - Consolidate the ESA output from all policies into a single csv file.
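As with the MOO runs, a minimal sketch of the sequence on the cluster (the `squeue` check is again just one way to confirm the slurm jobs have finished):

```bash
cd code/synthetic_data_and_moea_plots
sh run_SA_cube.sh                 # ESA for each policy in the 4obj and 2obj dynamic solution sets
squeue -u $USER                   # wait for all slurm jobs to finish
python consolidate_SA_output.py   # merge per-policy output into a single csv
```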
- Transfer the important MOO & ESA outputs to the appropriate directories on your laptop for plotting (skip this step if performing all of the analysis on a single machine; a hypothetical transfer sketch follows this list):
  - `data/optimization_output/4obj_rbf_overall/DPS_4obj_rbf_overall_borg.hypervolume`
  - `data/optimization_output/4obj_*rbf/metrics/DPS_param150_seedS1_seedB@.metrics`, for * in (1, 2, 3, 4, 8, 12), @ in 1-10
  - `data/optimization_output/*obj_2dv/DPS_*obj_2dv_borg_retest.resultfile`, for * in (2, 4)
  - `data/optimization_output/2obj_2rbf/DPS_2obj_2rbf_borg_retest.resultfile`
  - `data/optimization_output/4obj_2rbf_moreSeeds/DPS_4obj_2rbf_moreSeeds_borg_retest.resultfile`
  - `data/optimization_output/4obj_2rbf_moreSeeds/DPS_4obj_2rbf_*.linefile`, for * in (12, 13, 14, 23, 24, 34, 123, 124, 134, 234)
  - `data/policy_simulation/*obj/mi_combined.csv`, for * in (2, 4)
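One way to pull these files back to the laptop is `rsync` over ssh, sketched below; the cluster hostname and remote repository path are placeholders that you would replace with your own, and the two commands shown are representative of the file patterns listed above.

```bash
# Placeholders: replace CLUSTER and REMOTE with your login node and the
# repository path on the cluster.
CLUSTER=user@cluster.example.edu
REMOTE=/path/to/repo

# Representative transfers; repeat for each file pattern listed above.
rsync -av "$CLUSTER:$REMOTE/data/optimization_output/4obj_rbf_overall/DPS_4obj_rbf_overall_borg.hypervolume" \
    data/optimization_output/4obj_rbf_overall/
rsync -av "$CLUSTER:$REMOTE/data/policy_simulation/4obj/mi_combined.csv" \
    data/policy_simulation/4obj/
```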
- Create plots/tables related to the MOO results.
- Navigate to `code/synthetic_data_and_moea_plots` and run the following commands in order (see the sketch after this list). All figures/tables will be saved to the `figures` directory.
  - `python make_moea_output_plots.py` - Creates Figures 4-8 in the main text and Figures S4-S5 in the Supporting Information. For convenience, the output format is "jpg", but this can be changed with the "fig_format" variable in the script. For the paper, I used "eps" format and combined/cleaned up figures in Adobe Illustrator.
  - `python make_mutual_info_plots.py` - Creates Table 2 and Figure 9 in the main text and Figure S6 in the Supporting Information. For convenience, the output format is "jpg", but this can be changed with the "fig_format" variable in the script. For the paper, I used "eps" format and combined/cleaned up figures in Adobe Illustrator.
  - Run `policy_parallel_coord.R` in R. Creates Figure 10 in the main text and Figure S7 in the Supporting Information. Defaults to "eps" format.
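A minimal sketch of the three plotting steps from a bash shell; using `Rscript` for the R script is an assumption, and running it interactively in R works equally well:

```bash
cd code/synthetic_data_and_moea_plots
python make_moea_output_plots.py    # Figures 4-8, S4-S5
python make_mutual_info_plots.py    # Table 2, Figure 9, Figure S6
Rscript policy_parallel_coord.R     # Figure 10, Figure S7 (eps by default)
```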