diff --git a/doc/sphinx/pod_summary.rst b/doc/sphinx/pod_summary.rst deleted file mode 100644 index 1c6657393..000000000 --- a/doc/sphinx/pod_summary.rst +++ /dev/null @@ -1,185 +0,0 @@ -Summary of MDTF process-oriented diagnostics -============================================ - -The MDTF diagnostics package is a portable framework for running process-oriented diagnostics (PODs) on climate model data. Each POD script targets a specific physical process or emergent behavior, with the goals of determining how accurately the model represents that process, ensuring that models produce the right answers for the right reasons, and identifying gaps in the understanding of phenomena. - -The scientific motivation and content behind the framework was described in E. D. Maloney et al. (2019): Process-Oriented Evaluation of Climate and Weather Forecasting Models. *BAMS*, **100** (9), 1665–1686, `doi:10.1175/BAMS-D-18-0042.1 `__. - -Convective Transition Diagnostics ---------------------------------- - -*J. David Neelin (UCLA)* -neelin@atmos.ucla.edu - -This POD computes statistics that relate precipitation to measures of tropospheric temperature and moisture, as an evaluation of the interaction of parameterized convective processes with the large-scale environment. Here the basic statistics include the conditional average and probability of precipitation, PDF of column water vapor (CWV) for all events and precipitating events, evaluated over tropical oceans. The critical values at which the conditionally averaged precipitation sharply increases as CWV exceeds the critical threshold are also computed (provided the model exhibits such an increase). - -================== ================== -Variables Frequency -================== ================== -Precipitation rate 6-hourly or higher -Column water vapor 6-hourly or higher -================== ================== - -References: - -- Kuo, Y.-H., K. A. Schiro, and J. D. 
Neelin (2018): Convective transition statistics - over tropical oceans for climate model diagnostics: Observational baseline. *J. Atmos. Sci.*, **75**, 1553-1570, https://doi.org/10.1175/JAS-D-17-0287.1. - - -Extratropical Variance (EOF 500hPa Height) ------------------------------------------- - -*CESM/AMWG (NCAR)* -bundy@ucar.edu - -This POD computes the climatological anomalies of 500 hPa geopotential height and calculates the EOFs over the North Atlantic and the North Pacific. - -=================== ================== -Variables Frequency -=================== ================== -Surface pressure Monthly -Geopotential hegiht Monthly -=================== ================== - - -MJO Propagation and Amplitude ------------------------------ - -*Xianan Jiang (UCLA)* -xianan@ucla.ecu - -This POD calculates the model skill scores of MJO eastward propagation versus winter mean low-level moisture pattern over Indo-Pacific, and compares the simulated amplitude of MJO over the Indian Ocean versus moisture convective adjustment time-scale. - -================== ================== -Variables Frequency -================== ================== -Precipitation rate Daily or higher -Specific humidity Daily or higher -================== ================== - -References: - -- Jiang, X. (2017): Key processes for the eastward propagation of the Madden‐Julian - Oscillation based on multimodel simulations, *JGR‐Atmos*, **122**, 755–770, https://doi.org/10.1002/2016JD025955. - -- Gonzalez, A. O., and X. Jiang (2017): Winter mean lower tropospheric moisture over - the Maritime Continent as a climate model diagnostic metric for the propagation of the Madden‐Julian oscillation, *Geophys. Res. Lett.*, **44**, 2588–2596, https://doi.org/10.1002/2016GL072430. - -- Jiang, X., M. Zhao, E. D. Maloney, and D. E. Waliser, (2016): Convective moisture - adjustment time scale as a key factor in regulating model amplitude of the Madden‐Julian Oscillation. *Geophys. Res. 
Lett.*, **43**, 10412‐10419, https://doi.org/10.1002/2016GL070898. - - -MJO Spectra and Phasing ------------------------ - -*CESM/AMWG (NCAR)* -bundy@ucar.edu - -This PDO computes many of the diagnostics described by the WGNE MJO Task Force and developed by Dennis Shea for observational data. Using daily precipitation, outgoing longwave radiation, zonal wind at 850 and 200 hPa and meridional wind at 200 hPa, the module computes anomalies, bandpass-filters for the 20-100 day period, calculates the MJO Index as defined as the running variance over the bandpass filtered data, performs an EOF analysis, and calculates lag cross-correlations, wave-number frequency spectra and composite life cycles of MJO events. - -================== ================== -Variables Frequency -================== ================== -Precipitation rate Daily -OLR Daily -U850 Daily -U200 Daily -V200 Daily -================== ================== - -References: - -- Waliser et al. (2009): MJO simulation diagnostics. *J. Climate*, **22**, 3006–3030, - https://doi.org/10.1175/2008JCLI2731.1. - - -MJO Teleconnections -------------------- - -*Eric Maloney (CSU)* -eric.maloney@colosate.edu - -The POD first compares MJO phase (1-8) composites of anomalous 250 hPa geopotential height and precipitation with observations (ERA-Interim/GPCP) and several CMIP5 models (BCC-CSM1.1, CNRM-CM5, GFDL-CM3, MIROC5, MRI-CGCM3, and NorESM1-M). Then, average teleconnection performance across all MJO phases defined using a pattern correlation of geopotential height anomalies is assessed relative to MJO simulation skill and biases in the North Pacific jet zonal winds to determine reasons for possible poor teleconnections. Performance of the candidate model is assessed relative to a cloud of observations and CMIP5 simulations. 
- -================== ================== -Variables Frequency -================== ================== -Precipitation rate Daily -OLR Daily -U850 Daily -U250 Daily -Z250 Daily -================== ================== - -References: - -- Henderson, S. A., Maloney, E. D., & Son, S. W. (2017): Madden–Julian oscillation - Pacific teleconnections: The impact of the basic state and MJO representation in general circulation models. *Journal of Climate*, **30** (12), 4567-4587 https://doi.org/10.1175/JCLI-D-16-0789.1. - - -Diurnal Cycle of Precipitation ------------------------------- - -*Rich Neale (NCAR)* -bundy@ucar.edu - -The POD generates a simple representation of the phase (in local time) and amplitude (in mm/day) of total precipitation, comparing a lat-lon model output of total precipitation with observed precipitation derived from the Tropical Rainfall Measuring Mission. - -================== ================== -Variables Frequency -================== ================== -Precipitation rate 3-hourly or higher -================== ================== - -References: - -- Gervais, M., J. R. Gyakum, E. Atallah, L. B. Tremblay, and R. B. Neale (2014): How - Well Are the Distribution and Extreme Values of Daily Precipitation over North America Represented in the Community Climate System Model? A Comparison to Reanalysis, Satellite, and Gridded Station Data. *Journal of Climate*, **27**, 5219–5239, https://doi.org/10.1175/JCLI-D-13-00320.1. - -- Gettelman, A., P. Callaghan, V. E. Larson, C. M. Zarzycki, J. T. Bacmeister, P. H. - Lauritzen, P. A. Bogenschutz, and R. B. Neale, (2018): Regional Climate Simulations With the Community Earth System Model. *Journal of Advances in Modeling Earth Systems*, **10**, 1245–1265, https://doi.org/10.1002/2017MS001227. - - -Coupling between Soil Moisture and Evapotranspiration ------------------------------------------------------ -*Alexis M. 
Berg (Princeton)* -ab5@princeton.edu - -This POD evaluates the relationship between soil moisture and evapotranspiration. It computes the correlation between surface (0~10 cm) soil moisture and evapotranspiration during summertime. It then associates the coupling strength with the simulated precipitation. - -================== ================== -Variables Frequency -================== ================== -Soil moisture Monthly -Evapotranspiration Monthly -Precipitation rate Monthly -================== ================== - -References: - -- Berg, A and J. Sheffield. (2018): Soil Moisture–Evapotranspiration Coupling in - CMIP5 Models: Relationship with Simulated Climate and Projections, *J. Climate*, **31** (12), 4865-4878, https://doi.org/10.1175/JCLI-D-17-0757.1. - - -Wavenumber-Frequency Spectra ----------------------------- -*CESM/AMWG (NCAR)* -bundy@ucar.edu - -This POD performs wavenumber frequency spectra analysis (Wheeler and Kiladis) on OLR, Precipitation, 500hPa Omega, 200hPa wind and 850hPa wind. - -================== ================== -Variables Frequency -================== ================== -Precipitation rate Daily -OLR Daily -U850 Daily -U200 Daily -W250 Daily -================== ================== - -References: - -- Wheeler, M. and G. N. Kiladis (1999): Convectively Coupled Equatorial Waves: Analysis - of Clouds and Temperature in the Wavenumber–Frequency Domain. *J. Atmos. Sci.*, **56**, 3, 374–99. `https://doi.org/10.1175/1520-0469(1999)056<0374:CCEWAO>2.0.CO;2 2.0.CO;2>`__. - diff --git a/doc/sphinx/ref_cli.rst b/doc/sphinx/ref_cli.rst index 1e5347f83..7277f1e63 100644 --- a/doc/sphinx/ref_cli.rst +++ b/doc/sphinx/ref_cli.rst @@ -8,9 +8,16 @@ Command-line options Running the package ------------------- -If you followed the :ref:`recommended installation method` for installing the framework with the `conda `__ package manager, the installation process will have created a driver script named ``mdtf`` in the top level of the code directory. 
This script should always be used as the entry point for running the package. +If you followed the :ref:`recommended installation method` for installing the framework +with the `conda `__ package manager, the installation process will have created +a driver script named ``mdtf`` in the top level of the code directory. +This script should always be used as the entry point for running the package. -This script is minimal and shouldn't conflict with customized shell environments: it only sets the conda environment for the framework and calls `mdtf_framework.py `__, the python script which should be used as the entry point if a different installation method was used. In all cases the command-line options are as described here. +This script is minimal and shouldn't conflict with customized shell environments: +it only sets the conda environment for the framework and calls +`mdtf_framework.py `__, +the python script which should be used as the entry point if a different installation method was used. In all cases +the command-line options are as described here. Usage ----- @@ -20,9 +27,13 @@ Usage mdtf [options] [CASE_ROOT_DIR] mdtf info [TOPIC] -The first form of the command runs the package's diagnostics on model data files in the directory ``CASE_ROOT_DIR``. The options, described below, can be set on the command line or in an input file specified with the ``-f``/``--input-file`` flag. An example of such an input file is provided at `src/default_tests.jsonc `__. +The first form of the command runs the package's diagnostics on model data files in the directory ``CASE_ROOT_DIR``. +The options, described below, can be set on the command line or in an input file specified with the +``-f``/``--input-file`` flag. An example of such an input file is provided at +`src/default_tests.jsonc `__. -The second form of the command prints information about the installed diagnostics. To get a list of topics recognized by the command, run :console:`% mdtf info`. 
+The second form of the command prints information about the installed diagnostics. +To get a list of topics recognized by the command, run :console:`% mdtf info`. .. _ref-cli-options: @@ -30,9 +41,14 @@ The second form of the command prints information about the installed diagnostic Command-line options -------------------- -For long command line flags, words may be separated with hyphens (GNU standard) or with underscores (python variable name convention). For example, ``--file-transfer-timeout`` and ``--file_transfer_timeout`` are both recognized by the package as synonyms for the same setting. +For long command line flags, words may be separated with hyphens (GNU standard) or with underscores +(python variable name convention). For example, ``--file-transfer-timeout`` and ``--file_transfer_timeout`` +are both recognized by the package as synonyms for the same setting. -If you're using site-specific functionality (via the ``--site`` flag, described below), additional options may be available beyond what is listed here: see the :doc:`site-specific documentation` for your site. In addition, your choice of site may set default values for these options; the default values and the location of the configuration file defining them are listed as part of running :console:`% mdtf --site --help`. +If you're using site-specific functionality (via the ``--site`` flag, described below), +additional options may be available beyond what is listed here: see the :doc:`site-specific documentation` +for your site. In addition, your choice of site may set default values for these options; the default values and the +location of the configuration file defining them are listed as part of running :console:`% mdtf --site --help`. 
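The hyphen/underscore flag aliasing described above can be sketched with Python's standard ``argparse`` module. This is a minimal illustration of the behavior, not the package's actual implementation, and the timeout value is made up:

```python
import argparse

# Sketch: accept both the GNU-style and python-style spellings of one long
# flag by registering them as option strings for the same destination.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--file-transfer-timeout", "--file_transfer_timeout",
    dest="file_transfer_timeout", type=int, default=0,
)

# Both spellings set the same value:
print(parser.parse_args(["--file-transfer-timeout", "300"]).file_transfer_timeout)
print(parser.parse_args(["--file_transfer_timeout", "300"]).file_transfer_timeout)
```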
General options +++++++++++++++ diff --git a/doc/sphinx/ref_data_sources.rst b/doc/sphinx/ref_data_sources.rst index 5be465503..11091cc9d 100644 --- a/doc/sphinx/ref_data_sources.rst +++ b/doc/sphinx/ref_data_sources.rst @@ -3,7 +3,9 @@ Model data sources ================== -This section details how to select the input model data for the package to analyze. The command-line option for this functionality is the ``--data-manager`` flag, which selects a "data source": a code plug-in that implements all functionality needed to obtain model data needed by the PODs, based on user input: +This section details how to select the input model data for the package to analyze. +The command-line option for this functionality is the ``--data-manager`` flag, which selects a "data source": +a code plug-in that implements all functionality needed to obtain model data needed by the PODs, based on user input: * An interface to query the remote store of data for the variables requested by the PODs, whether in the form of a file naming convention or an organized data catalog/database; * (Optional) heuristics for refining the query results in order to guarantee that all data selected came from the same model run; @@ -11,9 +13,17 @@ This section details how to select the input model data for the package to analy Each data source may define its own specific command-line options, which are documented here. -The choice of data source determines where and how the data needed by the diagnostics is obtained, but doesn't specify anything about the data's contents. For that purpose we allow the user to specify a "variable naming :ref:`convention`" with the ``--convention`` flag. Also consult the :doc:`requirements` that input model data must satisfy in terms of file formats. +The choice of data source determines where and how the data needed by the diagnostics is obtained, +but doesn't specify anything about the data's contents. 
For that purpose we allow the user to specify a +"variable naming :ref:`convention`" with the ``--convention`` flag. +Also consult the :doc:`requirements` that input model data must satisfy in terms of file formats. -There are currently three data sources implemented in the package, described below. If you're using site-specific functionality (via the ``--site`` flag), additional options may be available; see the :doc:`site-specific documentation` for your site. If you would like the package to support obtaining data from a source that hasn't currently been implemented, please make a request in the appropriate GitHub `discussion thread `__. +There are currently three data sources implemented in the package, described below. +If you're using site-specific functionality (via the ``--site`` flag), additional options may be available; +see the :doc:`site-specific documentation` for your site. +If you would like the package to support obtaining data from a source that hasn't currently been implemented, +please make a request in the appropriate GitHub +`discussion thread `__. .. _ref-data-source-localfile: @@ -22,7 +32,10 @@ Sample model data source Selected via ``--data-manager="LocalFile"``. This is the default value for <*data-manager*>. -This data source lets the package run on the sample model data provided with the package and installed by the user at <*MODEL_DATA_ROOT*>. Any additional data added by the user to this location (either by copying files, or through symlinks) will also be recognized, provided that it takes the form of one netCDF file per variable and that it follows the following file and subdirectory naming convention : +This data source lets the package run on the sample model data provided with the package and installed by the user +at <*MODEL_DATA_ROOT*>. 
Any additional data added by the user to this location +(either by copying files, or through symlinks) will also be recognized, provided that it takes the form of one netCDF +file per variable and that it follows this file and subdirectory naming convention: <*MODEL_DATA_ROOT*>/<*dataset_name*>/<*frequency*>/<*dataset_name*>.<*variable_name*>.<*frequency*>.nc, diff --git a/doc/sphinx/start_config.rst index 58bfba28e..70d5747ae 100644 --- a/doc/sphinx/start_config.rst +++ b/doc/sphinx/start_config.rst @@ -5,41 +5,75 @@ Running the package on your data ================================ -In this section we describe how to proceed beyond running the simple test case described in the :doc:`previous section `, in particular how to run the framework on your own model data. +In this section we describe how to proceed beyond running the simple test case described in the +:doc:`previous section `, in particular how to run the framework on your own model data. Preparing your data for use by the package ------------------------------------------ -You have multiple options for organizing or setting up access to your model's data in a way that the framework can recognize. This task is performed by a "data source," a code plug-in that handles obtaining model data from a remote location for analysis by the PODs. +You have multiple options for organizing or setting up access to your model's data in a way that the framework +can recognize. This task is performed by a "data source," a code plug-in that handles obtaining model data from +a remote location for analysis by the PODs. -For a list of the available data sources, what types of model data they provide and how to configure them, see the :doc:`data source reference`. In the rest of this section, we describe the steps required to add your own model data for use with the :ref:`LocalFile` data source, since it's currently the most general-purpose option. 
+For a list of the available data sources, what types of model data they provide and how to configure them, +see the :doc:`data source reference`. In the rest of this section, +we describe the steps required to add your own model data for use with the +:ref:`LocalFile` data source, since it's currently the most general-purpose option. Selecting and formatting the model data +++++++++++++++++++++++++++++++++++++++ -Consult the :doc:`POD summary` to identify which diagnostics you want to run and what variables are required as input for each. In general, if the data source can't find data that's required by a POD, an error message will be logged in place of that POD's output that should help you diagnose the problem. +Consult the `list of available PODs `__ +to identify which diagnostics you want to run and what variables are required as input for each. In general, +if the data source can't find data that's required by a POD, +an error message will be logged in place of that POD's output that should help you diagnose the problem. -The LocalFile data source works with model data structured with each variable stored in a separate netCDF file. Some additional conditions on the metadata are required: any model output compliant with the `CF conventions `__ is acceptable, but only a small subset of those conventions are required by this data source. See the :doc:`data format reference` for a complete description of what's required. +The LocalFile data source works with model data structured with each variable stored in a separate netCDF file. +Some additional conditions on the metadata are required: any model output compliant with the +`CF conventions `__ is acceptable, but only a small subset of those conventions are required +by this data source. See the :doc:`data format reference` for a complete description of what's required. 
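The one-netCDF-file-per-variable layout expected by the LocalFile data source, using the <*MODEL_DATA_ROOT*>/<*dataset_name*>/<*frequency*>/<*dataset_name*>.<*variable_name*>.<*frequency*>.nc pattern described earlier, can be sketched as follows. The dataset name ``my_model`` and the variable/frequency names are hypothetical, and empty placeholder files stand in for real netCDF data:

```python
import pathlib
import tempfile

# Stand-in for MODEL_DATA_ROOT; a real setup would use the installed location.
root = pathlib.Path(tempfile.mkdtemp())

# Hypothetical dataset "my_model" with daily precipitation ("pr") and
# monthly surface temperature ("tas"); real files would hold CF-compliant data.
for freq, var in [("day", "pr"), ("mon", "tas")]:
    d = root / "my_model" / freq
    d.mkdir(parents=True, exist_ok=True)
    (d / f"my_model.{var}.{freq}.nc").touch()

for p in sorted(root.rglob("*.nc")):
    print(p.relative_to(root).as_posix())
# prints:
# my_model/day/my_model.pr.day.nc
# my_model/mon/my_model.tas.mon.nc
```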
Naming variables according to a convention ++++++++++++++++++++++++++++++++++++++++++ -The LocalFile data source is intended to deal with output produced by different models, which poses a problem because different models use different variable names for the same physical quantity. For example, in NCAR's `CESM2 `__ the name for total precipitation is ``PRECT``, while the name for the same quantity in GFDL's `CM4 `__ is ``precip``. - -In order to identify what variable names correspond to the physical quantities requested by each POD, the LocalFile data source requires that model data follow one of several recognized variable naming conventions defined by the package. The currently recognized conventions are: - -* ``CMIP``: Variable names and units as used in the `CMIP6 `__ `data request `__. There is a `web interface `__ to the request. Data from any model that has been published as part of CMIP6 (e.g., made available via `ESGF `__) should follow this convention. - -* ``NCAR``: Variable names and units used in the default output of models developed at the `National Center for Atmospheric Research `__, such as `CAM `__ (all versions) and `CESM2 `__. - -* ``GFDL``: Variable names and units used in the default output of models developed at the `Geophysical Fluid Dynamics Laboratory `__, such as `AM4 `__, `CM4 `__ and `SPEAR `__. - -The names and units for the variables in the model data you're adding need to conform to one of the above conventions in order to be recognized by the LocalFile data source. For models that aren't currently supported, the workaround we recommend is to generate ``CMIP``-compliant data by postprocessing model output with the `CMOR `__ tool. We hope to offer support for the naming conventions of a wider range of models in the future. +The LocalFile data source is intended to deal with output produced by different models, +which poses a problem because different models use different variable names for the same physical quantity. 
+For example, in NCAR's `CESM2 `__ the name for total precipitation is +``PRECT``, while the name for the same quantity in GFDL's +`CM4 `__ is ``precip``. + +In order to identify what variable names correspond to the physical quantities requested by each POD, the LocalFile +data source requires that model data follow one of several recognized variable naming conventions defined by +the package. The currently recognized conventions are: + +* ``CMIP``: Variable names and units as used in the +`CMIP6 `__ `data request `__. +There is a `web interface `__ to the request. +Data from any model that has been published as part of CMIP6 +(e.g., made available via `ESGF `__) should follow this convention. + +* ``NCAR``: Variable names and units used in the default output of models developed at the +`National Center for Atmospheric Research `__, such as +`CAM `__ (all versions) and +`CESM2 `__. + +* ``GFDL``: Variable names and units used in the default output of models developed at the +`Geophysical Fluid Dynamics Laboratory `__, such as +`AM4 `__, `CM4 `__ and +`SPEAR `__. + +The names and units for the variables in the model data you're adding need to conform to one of the above conventions +in order to be recognized by the LocalFile data source. For models that aren't currently supported, the workaround we +recommend is to generate ``CMIP``-compliant data by postprocessing model output with the +`CMOR `__ tool. +We hope to offer support for the naming conventions of a wider range of models in the future. Adding your model data files ++++++++++++++++++++++++++++ -The LocalFile data source reads files from a local directory that follow the filename convention used for the sample model data. Specifically, the files should be placed in a subdirectory in <*MODEL_DATA_ROOT*> and named following the pattern +The LocalFile data source reads files from a local directory that follow the filename convention used +for the sample model data. 
Specifically, the files should be placed in a subdirectory in <*MODEL_DATA_ROOT*> +and named following the pattern <*MODEL_DATA_ROOT*>/<*dataset_name*>/<*frequency*>/<*dataset_name*>.<*variable_name*>.<*frequency*>.nc, @@ -112,12 +146,20 @@ Options controlling the analysis The configuration options required to specify what analysis the package should do are: * ``--CASENAME`` <*name*>: Identifier used to label this run of the package. Can be set to any string. -* ``--experiment`` <*dataset_name*>: The name (subdirectory) you assigned to the data files in the previous section. If this option isn't given, its value is set from <*CASENAME*>. -* ``--convention`` <*convention name*>: The naming convention used to assign the <*variable_name*>s, from the previous section. +* ``--experiment`` <*dataset_name*>: The name (subdirectory) you assigned to the data files in the previous section. +If this option isn't given, its value is set from <*CASENAME*>. +* ``--convention`` <*convention name*>: The naming convention used to assign the <*variable_name*>s, +from the previous section. * ``--FIRSTYR`` <*YYYY*>: The starting year of the analysis period. -* ``--LASTYR`` <*YYYY*>: The end year of the analysis period. The analysis period includes all data that falls between the start of 1 Jan on <*FIRSTYR*> and the end of 31 Dec on <*LASTYR*>. An error will be raised if the data provided for any requested variable doesn't span this date range. +* ``--LASTYR`` <*YYYY*>: The end year of the analysis period. The analysis period includes all data that falls +between the start of 1 Jan on <*FIRSTYR*> and the end of 31 Dec on <*LASTYR*>. +An error will be raised if the data provided for any requested variable doesn't span this date range. -If specifying these in a configuration file, these options should given as entry in a list titled ``case_list`` (following the example in `src/default_tests.jsonc `__). 
Using the package to compare the results of a list of different experiments is a major feature planned for an upcoming release. +If specifying these in a configuration file, these options should be given as an entry in a list titled ``case_list`` +(following the example in +`src/default_tests.jsonc `__). +Using the package to compare the results of a list of different experiments is a major feature planned for an upcoming +release. You will also need to specify the list of diagnostics to run. This can be given as a list of POD names (as given in the `diagnostics/ `__ directory), or ``all`` to run all PODs. This list can be given by the ``--pods`` command-line flag, or by a ``pod_list`` attribute in the ``case_list`` entry. @@ -126,9 +168,12 @@ Other options Some of the most relevant options which control the package's output are: -* ``--save-ps``: Set this flag to have PODs save copies of all plots as postscript files (vector graphics) in addition to the bitmaps used in the HTML output pages. -* ``--save-nc``: Set this flag to have PODs retain netCDF files of any intermediate calculations, which may be useful if you want to do further analyses with your own tools. -* ``--make-variab-tar``: Set this flag to save the collection of files (HTML pages and bitmap graphics) output by the package as a single .tar file, which can be useful for archival purposes. +* ``--save-ps``: Set this flag to have PODs save copies of all plots as postscript files (vector graphics) +in addition to the bitmaps used in the HTML output pages. +* ``--save-nc``: Set this flag to have PODs retain netCDF files of any intermediate calculations, +which may be useful if you want to do further analyses with your own tools. +* ``--make-variab-tar``: Set this flag to save the collection of files (HTML pages and bitmap graphics) +output by the package as a single .tar file, which can be useful for archival purposes. The full list of configuration options is given at :doc:`ref_cli`. 
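A minimal sketch of a ``case_list`` entry in a configuration file, using the options documented above. All values here are illustrative; consult ``src/default_tests.jsonc`` in the repository for the authoritative format:

```jsonc
// Illustrative only: case name, years, and POD name are made up.
"case_list": [
  {
    "CASENAME": "my_model_run",
    "convention": "CMIP",
    "FIRSTYR": 1980,
    "LASTYR": 1999,
    // Replace with directory names from diagnostics/, or use "all".
    "pod_list": ["example_pod_name"]
  }
]
```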
@@ -144,9 +189,14 @@ From this point, the instructions for running the package are the same as for :r The first few lines of console output will echo the values you've provided for <*CASENAME*>, etc., as confirmation. -The output of the package will be saved as a series of web pages in a directory named MDTF\_<*CASENAME*>\_<*FIRSTYR*>\_<*LASTYR*> within <*OUTPUT_DIR*>. If you run the package multiple times with the same configuration values, it's not necessary to change the <*CASENAME*>: by default, the suffixes ".v1", ".v2", etc. will be added to duplicate output directory names so that results aren't accidentally overwritten. +The output of the package will be saved as a series of web pages in a directory named +MDTF\_<*CASENAME*>\_<*FIRSTYR*>\_<*LASTYR*> within <*OUTPUT_DIR*>. +If you run the package multiple times with the same configuration values, +it's not necessary to change the <*CASENAME*>: by default, the suffixes ".v1", ".v2", etc. will be added to duplicate +output directory names so that results aren't accidentally overwritten. -The results of the diagnostics are presented as a series of web pages, with the top-level page named index.html. To view the results in a web browser, run (e.g.,) +The results of the diagnostics are presented as a series of web pages, with the top-level page named index.html. +To view the results in a web browser, run (e.g.,) .. code-block:: console diff --git a/doc/sphinx/start_install.rst b/doc/sphinx/start_install.rst index 41510933b..fa3d071ab 100644 --- a/doc/sphinx/start_install.rst +++ b/doc/sphinx/start_install.rst @@ -5,26 +5,37 @@ Installation instructions ========================= -This section provides basic directions for downloading, installing and running a test of the Model Diagnostics Task Force (MDTF) package using sample model data. The package has been tested on Linux, Mac OS, and the Windows Subsystem for Linux. 
+This section provides basic directions for downloading, installing and running a test of the +Model Diagnostics Task Force (MDTF) package using sample model data. The package has been tested on Linux, +Mac OS, and the Windows Subsystem for Linux. -You will need to download the source code, digested observational data, and sample model data (:numref:`ref-download`). Afterwards, we describe how to install software dependencies using the `conda `__ package manager (:numref:`ref-conda-install`) and run the framework on sample model data (:numref:`ref-configure` and :numref:`ref-execute`). +You will need to download the source code, digested observational data, and sample model data (:numref:`ref-download`). +Afterwards, we describe how to install software dependencies using the `conda `__ +package manager (:numref:`ref-conda-install`) and run the framework on sample model data (:numref:`ref-configure` and +:numref:`ref-execute`). -Throughout this document, :console:`%` indicates the shell prompt and is followed by commands to be executed in a terminal in ``fixed-width font``. Variable values are denoted by angle brackets, e.g. <*HOME*> is the path to your home directory returned by running :console:`% echo $HOME`. +Throughout this document, :console:`%` indicates the shell prompt and is followed by commands to be executed in a +terminal in ``fixed-width font``. Variable values are denoted by angle brackets, e.g. <*HOME*> is the path to your +home directory returned by running :console:`% echo $HOME`. .. _ref-download: Obtaining the code ------------------ -The official repo for the package's code is hosted at the NOAA-GFDL `GitHub account `__. To simplify updating the code, we recommend that all users obtain the code using git. For more in-depth instructions on how to use git, see :doc:`dev_git_intro`. +The official repo for the package's code is hosted at the NOAA-GFDL +`GitHub account `__. 
+To simplify updating the code, we recommend that all users obtain the code using git. +For more in-depth instructions on how to use git, see :doc:`dev_git_intro`. -To install the MDTF package on a local machine, open a terminal and create a directory named `mdtf`. Instructions for end-users and new developers are then as follows: +To install the MDTF package on a local machine, open a terminal and create a directory named `mdtf`. +Instructions for end-users and new developers are then as follows: - For end users: 1. | :console:`% cd mdtf`, then clone your fork of the MDTF repo on your machine: | :console:`% git clone https://github.com//MDTF-diagnostics`. - 2. Verify that you are on the main branch: :console:`% git branch`. This is the default, but it never hurts to get in the habit of running git branch before you start working. + 2. Verify that you are on the main branch: :console:`% git branch`. 3. | Check out the `latest official release `__: | :console:`% git checkout tags/v3.0`. 4. Proceed with the installation process described below. @@ -42,13 +53,14 @@ To install the MDTF package on a local machine, open a terminal and create a dir 2. Check out the ``main`` branch: :console:`% git checkout main`. 3. Proceed with the installation process described below. 4. | Check out a new branch for your POD: - | :console:`% git checkout -b feature/`. + | :console:`% git checkout -b `. 5. | Edit existing files/create new files, then commit the changes: | :console:`% git commit -m \"description of your changes\"`. 6. | Push the changes on your branch to your remote fork: - | :console:`% git push -u origin feature/`. + | :console:`% git push -u origin `. -The path to the code directory (``.../mdtf/MDTF-diagnostics``) is referred to as <*CODE_ROOT*> in what follows. It contains the following subdirectories: +The path to the code directory (``.../mdtf/MDTF-diagnostics``) is referred to as <*CODE_ROOT*>. 
+It contains the following subdirectories:
- ``diagnostics/``: directory containing source code and documentation of individual PODs.
- ``doc/``: source code for the documentation website.
@@ -57,7 +69,9 @@ The path to the code directory (``.../mdtf/MDTF-diagnostics``) is referred to as
- ``src/``: source code of the framework itself.
- ``tests/``: general tests for the framework.
-For advanced users interested in keeping more up-to-date on project development and contributing feedback, the ``main`` branch of the GitHub repo contains features that haven’t yet been incorporated into an official release, which are less stable or thoroughly tested.
+For advanced users interested in keeping more up-to-date on project development and contributing feedback,
+the ``main`` branch of the GitHub repo contains features that haven’t yet been incorporated into an official release,
+which may be less stable or less thoroughly tested.

.. _ref-supporting-data:

@@ -70,7 +84,10 @@ Supporting observational data and sample model data are available via anonymous
- NCAR-CESM-CAM sample data (12.3 Gb): `model.QBOi.EXP1.AMIP.001.tar `__.
- NOAA-GFDL-CM4 sample data (4.8 Gb): `model.GFDL.CM4.c96L32.am4g10r8.tar `__.
-The default test case uses the ``QBOi.EXP1.AMIP.001`` sample dataset, and the ``GFDL.CM4.c96L32.am4g10r8`` sample dataset is only for testing the `MJO Propagation and Amplitude POD <../sphinx_pods/MJO_prop_amp.html>`__. Note that the above paths are symlinks to the most recent versions of the data, and will be reported as having a size of zero bytes in an FTP client.
+The default test case uses the ``QBOi.EXP1.AMIP.001`` sample dataset, and the ``GFDL.CM4.c96L32.am4g10r8`` sample
+dataset is only for testing the `MJO Propagation and Amplitude POD <../sphinx_pods/MJO_prop_amp.html>`__.
+Note that the above paths are symlinks to the most recent versions of the data, and will be reported as having
+a size of zero bytes in an FTP client.
Download these files and extract the contents in the following directory hierarchy under the ``mdtf`` directory: @@ -101,7 +118,8 @@ Download these files and extract the contents in the following directory hierarc Note that ``mdtf`` now contains both the ``MDTF-diagnostics`` and ``inputdata`` directories. -You can put the observational data and model output in different locations, e.g. for space reasons, by changing the paths given in ``OBS_DATA_ROOT`` and ``MODEL_DATA_ROOT`` as described below in :numref:`ref-configure`. +You can put the observational data and model output in different locations, e.g. for space reasons, by changing +the paths given in ``OBS_DATA_ROOT`` and ``MODEL_DATA_ROOT`` as described below in :numref:`ref-configure`. .. _ref-conda-install: @@ -111,45 +129,83 @@ Installing dependencies Installing XQuartz on MacOS ^^^^^^^^^^^^^^^^^^^^^^^^^^^ -If you're installing on a MacOS system, you will need to install `XQuartz `__. If the XQuartz executable isn't present in ``/Applications/Utilities``, you will need to download and run the installer from the previous link. +If you're installing on a MacOS system, you will need to install `XQuartz `__. +If the XQuartz executable isn't present in ``/Applications/Utilities``, you will need to download and run the installer +from the previous link. -The reason for this requirement is that the X11 libraries are `required dependencies `__ for the NCL scripting language, even when it's run non-interactively. Because the required libraries cannot be installed through conda (next section), this installation needs to be done as a manual step. +The reason for this requirement is that the X11 libraries are +`required dependencies `__ +for the NCL scripting language, even when it's run non-interactively. +Because the required libraries cannot be installed through conda (next section), +this installation needs to be done as a manual step. 
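The download-and-extract step above can be sketched in the shell as follows. Since the real multi-gigabyte tarballs aren't assumed to be present, a tiny stand-in archive is created on the spot; the directory and file names mirror those in the docs, but the archive contents are illustrative only.

```shell
# Sketch of the extract step using the directory names from the docs.
# A tiny stand-in tarball is created here, since the real
# model.QBOi.EXP1.AMIP.001.tar download isn't assumed to be present.
set -e
mkdir -p mdtf/inputdata/model mdtf/inputdata/obs_data

# Create the stand-in archive (in practice, use the real downloaded tarball).
mkdir -p QBOi.EXP1.AMIP.001/day
touch QBOi.EXP1.AMIP.001/day/placeholder.nc
tar -cf model.QBOi.EXP1.AMIP.001.tar QBOi.EXP1.AMIP.001
rm -r QBOi.EXP1.AMIP.001

# Extract into the hierarchy the framework expects: inputdata/model/<CASENAME>/...
tar -xf model.QBOi.EXP1.AMIP.001.tar -C mdtf/inputdata/model
ls mdtf/inputdata/model/QBOi.EXP1.AMIP.001
```

The same `tar -xf ... -C mdtf/inputdata/obs_data` pattern applies to the digested observational data archive.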
Managing dependencies with the conda package manager ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -The MDTF framework code is written in Python 3.7, but supports running PODs written in a variety of scripting languages and combinations of libraries. To ensure that the correct versions of these dependencies are installed and available, we use `conda `__, a free, open-source package manager. Conda is one component of the `Miniconda `__ and `Anaconda `__ python distributions, so having Miniconda/Anaconda is sufficient but not necessary. - -For maximum portability and ease of installation, we recommend that all users manage dependencies through conda using the steps below, even if they have independent installations of the required languages. A complete installation of all dependencies will take roughly 5 Gb, less if you've already installed some of the dependencies through conda. The location of this installation can be changed with the ``--conda_root`` and ``--env_dir`` flags described below. Users may install their own copies of Anaconda/Miniconda on their machine, or use a centrally-installed version managed by their institution. Note that installing your own copy of Anaconda/Miniconda will re-define the default locations of the conda executable and environment directory defined in your `.bashrc` or `.cshrc` file if you have previously used a version of conda managed by your institution, so you will have to re-create any environments made using central conda installations. - -If these space requirements are prohibitive, we provide an alternate method of installation which makes no use of conda and instead assumes the user has installed the required external dependencies, at the expense of portability. This is documented in a :doc:`separate section `. +The MDTF framework code is written in Python 3.10, +but supports running PODs written in a variety of scripting languages and combinations of libraries. 
+To ensure that the correct versions of these dependencies are installed and available, +we use `conda `__, a free, open-source package manager. +Conda is one component of the `Miniconda `__ and +`Anaconda `__ python distributions, so having Miniconda/Anaconda is sufficient but not necessary. + +For maximum portability and ease of installation, we recommend that all users manage dependencies through conda using +the steps below, even if they have independent installations of the required languages. +A complete installation of all dependencies will take roughly 5 Gb, less if you've already installed some of the +dependencies through conda. The location of this installation can be changed with the ``--conda_root`` and ``--env_dir`` +flags described below. + +Users may install their own copies of Anaconda/Miniconda on their machine, or use a +centrally-installed version managed by their institution. Note that installing your own copy of Anaconda/Miniconda +will re-define the default locations of the conda executable and environment directory defined in your `.bashrc` or +`.cshrc` file if you have previously used a version of conda managed by your institution, +so you will have to re-create any environments made using central conda installations. + +If these space requirements are prohibitive, we provide an alternate method of installation which makes +no use of conda and instead assumes the user has installed the required external dependencies, +at the expense of portability. This is documented in a :doc:`separate section `. Installing the conda package manager ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ In this section, we install the conda package manager if it's not already present on your system. -- To determine if conda is installed, run :console:`% conda info` as the user who will be using the package. The package has been tested against versions of conda >= 4.7.5. 
If a pre-existing conda installation is present, continue to the following section to install the package's environments. These environments will co-exist with any existing installation.
+- To determine if conda is installed, run :console:`% conda info` as the user who will be using the package.
+  The package has been tested against versions of conda >= 4.11.0. If a pre-existing conda installation is present,
+  continue to the following section to install the package's environments.
+  These environments will co-exist with any existing installation.

.. note::
- **Do not** reinstall Miniconda/Anaconda if it's already installed for the user who will be running the package: the installer will break the existing installation (if it's not managed with, e.g., environment modules.)
+   **Do not** reinstall Miniconda/Anaconda if it's already installed for the user who will be running the package:
+   the installer will break the existing installation (if it's not managed with, e.g., environment modules.)

-- If :console:`% conda info` doesn't return anything, you will need to install conda. We recommend doing so using the Miniconda installer (available `here `__) for the most recent version of python 3, although any version of Miniconda or Anaconda released after June 2019, using python 2 or 3, will work.
+- If :console:`% conda info` doesn't return anything, you will need to install conda.
+  We recommend doing so using the Miniconda installer (available `here `__) for the most recent version of python 3, although any version of Miniconda or Anaconda released after June 2019, using python 2 or 3, will work.
-- Follow the conda `installation instructions `__ appropriate to your system.
+- Follow the conda `installation instructions `__
+  appropriate to your system.
-- Toward the end of the installation process, enter “yes” at “Do you wish the installer to initialize Miniconda3 by running conda init?” (or similar) prompt.
This will allow the installer to add the conda path to the user's shell login script (e.g., ``~/.bashrc`` or ``~/.cshrc``). It's necessary to modify your login script due to the way conda is implemented.
+- Toward the end of the installation process, enter “yes” at the “Do you wish the installer to initialize Miniconda3 by
+  running conda init?” (or similar) prompt. This will allow the installer to add the conda path to the user's shell login
+  script (e.g., ``~/.bashrc`` or ``~/.cshrc``). It's necessary to modify your login script due to the way conda is
+  implemented.

- Start a new shell to reload the updated shell login script.

Installing the package's conda environments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-In this section we use conda to install the versions of the language interpreters and third-party libraries required by the package's diagnostics.
+In this section we use conda to install the versions of the language interpreters and third-party libraries required
+by the package's diagnostics.
-- First, determine the location of your conda installation by running :console:`% conda info --base` as the user who will be using the package. This path will be referred to as <*CONDA_ROOT*> below.
+- First, determine the location of your conda installation by running :console:`% conda info --base` as the user
+  who will be using the package. This path will be referred to as <*CONDA_ROOT*> below.
-- If you don't have write access to <*CONDA_ROOT*> (for example, if conda has been installed for all users of a multi-user system), you will need to tell conda to install its files in a different, writable location. You can also choose to do this out of convenience, e.g. to keep all files and programs used by the MDTF package together in the ``mdtf`` directory for organizational purposes. This location will be referred to as <*CONDA_ENV_DIR*> below.
+- If you don't have write access to <*CONDA_ROOT*>
+  (for example, if conda has been installed for all users of a multi-user system),
+  you will need to tell conda to install its files in a different, writable location.
+  You can also choose to do this out of convenience, e.g. to keep all files and programs used by the MDTF package together
+  in the ``mdtf`` directory for organizational purposes. This location will be referred to as <*CONDA_ENV_DIR*> below.

- Install all the package's conda environments by running

@@ -166,15 +222,20 @@ In this section we use conda to install the versions of the language interpreter

.. note::
- After installing the framework-specific conda environments, you shouldn't alter them manually (i.e., never run ``conda update`` on them). To update the environments after an update to a new release of the framework code, re-run the above commands.
+   After installing the framework-specific conda environments, you shouldn't alter them manually
+   (i.e., never run ``conda update`` on them). To update the environments after an update to a new release
+   of the framework code, re-run the above commands.
- These environments can be uninstalled by deleting their corresponding directories under <*CONDA_ENV_DIR*> (or <*CONDA_ROOT*>/envs/).
+   These environments can be uninstalled by deleting their corresponding directories under <*CONDA_ENV_DIR*>
+   (or <*CONDA_ROOT*>/envs/).

Location of the installed executable
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The script used to install the conda environments in the previous section creates a script named ``mdtf`` in the MDTF-diagnostics directory. This script is the executable you'll use to run the package and its diagnostics. To test the installation, run
+The script used to install the conda environments in the previous section creates a script named ``mdtf`` in
+the MDTF-diagnostics directory. This script is the executable you'll use to run the package and its diagnostics.
+To test the installation, run

..
code-block:: console

@@ -194,34 +255,63 @@ The output should be

Configuring framework paths
---------------------------
-In order to run the diagnostics in the package, it needs to be provided with paths to the data and code dependencies installed above. In general, there are two equivalent ways to configure any setting for the package:
+In order to run the diagnostics in the package, it needs to be provided with paths to the data and code dependencies
+installed above. In general, there are two equivalent ways to configure any setting for the package:
+
+- All settings are configured with command-line flags. The full documentation for the command line interface is at
+  :doc:`ref_cli`.
+
+- Long lists of command-line options are cumbersome, and many of the settings
+  (such as the paths to data that we set here) don't change between different runs of the package.
+  For this purpose, any command-line setting can also be provided in an input configuration file.
-- All settings are configured with command-line flags. The full documentation for the command line interface is at :doc:`ref_cli`.
-- Long lists of command-line options are cumbersome, and many of the settings (such as the paths to data that we set here) don't change between different runs of the package. For this purpose, any command-line setting can also be provided in an input configuration file.
-- The two methods of setting options can be freely combined. Any values set explicitly on the command line will override those given in the configuration file.
+- The two methods of setting options can be freely combined. Any values set explicitly on the command line will
+  override those given in the configuration file.
-For the remainder of this section, we describe how to edit and use configuration files, since the paths to data, etc., we need to set won't change.
+For the remainder of this section, we describe how to edit and use configuration files,
+since the paths to data, etc., we need to set won't change.
-An example of the configuration file format is provided at `src/default_tests.jsonc `__. This is meant to be a template you can customize according to your purposes: save a copy of the file at <*config_file_path*> and open it in a text editor. The following paths need to be configured before running the framework:
+An example of the configuration file format is provided at
+`src/default_tests.jsonc `__.
+This is meant to be a template you can customize according to your purposes: save a copy of the file at
+<*config_file_path*> and open it in a text editor.
+The following paths need to be configured before running the framework:
-- ``OBS_DATA_ROOT`` should be set to the location of the supporting data that you downloaded in :numref:`ref-supporting-data`. If you used the directory structure described in that section, the default value provided in the configuration file (``../inputdata/obs_data/``) will be correct. If you put the data in a different location, this value should be changed accordingly. Note that relative paths can be used in the configuration file, and are always resolved relative to the location of the MDTF-diagnostics directory (<*CODE_ROOT*>).
+- ``OBS_DATA_ROOT`` should be set to the location of the supporting data that you downloaded in
+  :numref:`ref-supporting-data`. If you used the directory structure described in that section,
+  the default value provided in the configuration file (``../inputdata/obs_data/``) will be correct.
+  If you put the data in a different location, this value should be changed accordingly.
+  Note that relative paths can be used in the configuration file, and are always resolved relative to the location of
+  the MDTF-diagnostics directory (<*CODE_ROOT*>).
-- Likewise, ``MODEL_DATA_ROOT`` should be updated to the location of the NCAR-CESM-CAM sample data (``model.QBOi.EXP1.AMIP.001.tar``)downloaded in :numref:`ref-supporting-data`. This data is required to run the test in the next section.
If you used the directory structure described in :numref:`ref-supporting-data`, the default value provided in the configuration file (``../inputdata/model/``) will be correct.
+- Likewise, ``MODEL_DATA_ROOT`` should be updated to the location of the NCAR-CESM-CAM sample data
+  (``model.QBOi.EXP1.AMIP.001.tar``) downloaded in :numref:`ref-supporting-data`.
+  This data is required to run the test in the next section. If you used the directory structure described
+  in :numref:`ref-supporting-data`, the default value provided in the configuration file (``../inputdata/model/``)
+  will be correct.
-- ``conda_root`` should be set to the location of your conda installation: the value of <*CONDA_ROOT*> that was used in :numref:`ref-conda-install`.
+- ``conda_root`` should be set to the location of your conda installation: the value of <*CONDA_ROOT*>
+  that was used in :numref:`ref-conda-install`.
-- Likewise, if you installed the package's conda environments in a non-default location by using the ``--env_dir`` flag in :numref:`ref-conda-install`, the option ``conda_env_root`` should be set to this path (<*CONDA_ENV_DIR*>).
+- Likewise, if you installed the package's conda environments in a non-default location by using the ``--env_dir``
+  flag in :numref:`ref-conda-install`, the option ``conda_env_root`` should be set to this path (<*CONDA_ENV_DIR*>).
-- Finally, ``OUTPUT_DIR`` should be set to the location you want the output files to be written to (default: ``mdtf/wkdir/``; will be created by the framework). The output of each run of the framework will be saved in a different subdirectory in this location.
+- Finally, ``OUTPUT_DIR`` should be set to the location you want the output files to be written to
+  (default: ``mdtf/wkdir/``; will be created by the framework).
+  The output of each run of the framework will be saved in a different subdirectory in this location.
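Pulling the path settings above together, a customized copy of the template might contain an excerpt like the following. This is a hypothetical sketch: the key names come from the bullets above, but the conda paths use a `<username>` placeholder and should be replaced with your actual <*CONDA_ROOT*> and <*CONDA_ENV_DIR*> values.

```jsonc
// Hypothetical excerpt of a customized copy of src/default_tests.jsonc,
// matching the directory layout described above. Relative paths resolve
// against the MDTF-diagnostics directory (<CODE_ROOT>).
{
  "OBS_DATA_ROOT": "../inputdata/obs_data",
  "MODEL_DATA_ROOT": "../inputdata/model",
  "conda_root": "/home/<username>/miniconda3",            // value of <CONDA_ROOT>
  "conda_env_root": "/home/<username>/mdtf/conda_envs",   // only if --env_dir was used
  "OUTPUT_DIR": "../wkdir"
}
```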
-In :doc:`start_config`, we describe more of the most important configuration options for the package, and in particular how you can configure the package to run on different data. A complete description of the configuration options is at :doc:`ref_cli`, or can be obtained by running :console:`% ./mdtf --help`.
+In :doc:`start_config`, we describe more of the most important configuration options for the package,
+and in particular how you can configure the package to run on different data.
+A complete description of the configuration options is at :doc:`ref_cli`, or can be obtained by running
+:console:`% ./mdtf --help`.

.. _ref-execute:

Running the package on sample model data
----------------------------------------
-You are now ready to run the package's diagnostics on the sample data from NCAR's CESM-CAM model. We assume you've edited a copy of `src/default_tests.jsonc `__, which is saved at <*config_file_path*>, as described in the previous section.
+You are now ready to run the package's diagnostics on the sample data from NCAR's CESM-CAM model.
+We assume you've edited a copy of `src/default_tests.jsonc `__,
+which is saved at <*config_file_path*>, as described in the previous section.

.. code-block:: console

@@ -252,12 +342,18 @@ Run time may be up to 10-20 minutes, depending on your system. The final lines o

All PODs exited cleanly. Output written to /MDTF_QBOi.EXP1.AMIP.001_1977_1981
-This shows that the output of the package has been saved to a directory named ``MDTF_QBOi.EXP1.AMIP.001_1977_1981`` in <*OUTPUT_DIR*>. The results are presented as a series of web pages, with the top-level page named index.html. To view the results in a web browser, run (e.g.,)
+This shows that the output of the package has been saved to a directory named ``MDTF_QBOi.EXP1.AMIP.001_1977_1981``
+in <*OUTPUT_DIR*>. The results are presented as a series of web pages, with the top-level page named index.html.
+To view the results in a web browser (e.g., Google Chrome, Firefox), run

..
code-block:: console

% google-chrome /MDTF_QBOi.EXP1.AMIP.001_1977_1981/index.html &

-Currently the framework only analyzes data from one model run at a time. To run another test for the the `MJO Propagation and Amplitude POD <../sphinx_pods/MJO_prop_amp.html>`__ on the sample data from GFDL's CM4 model, open the configuration file at <*config_file_path*>, delete or comment out the section for ``QBOi.EXP1.AMIP.001`` in the ``caselist`` section of that file, and uncomment the section for ``GFDL.CM4.c96L32.am4g10r8``.
+Currently the framework only analyzes one model dataset at a time.
+To run another test for the `MJO Propagation and Amplitude POD <../sphinx_pods/MJO_prop_amp.html>`__
+on the sample data from GFDL's CM4 model, open the configuration file at <*config_file_path*>,
+delete or comment out the section for ``QBOi.EXP1.AMIP.001`` in the ``caselist`` section of that file,
+and uncomment the section for ``GFDL.CM4.c96L32.am4g10r8``.

In :doc:`start_config`, we describe further options to customize how the package is run.

diff --git a/doc/sphinx/start_multirun.rst b/doc/sphinx/start_multirun.rst
new file mode 100644
index 000000000..4e9180cb8
--- /dev/null
+++ b/doc/sphinx/start_multirun.rst
@@ -0,0 +1,66 @@
+.. role:: console(code)
+   :language: console
+   :class: highlight
+
+Running the MDTF-diagnostics package in "multirun" mode
+=======================================================
+
+Versions 3 and later of the MDTF-diagnostics package provide support for "multirun" diagnostics that analyze output from
+multiple model and/or observational datasets. At this time, the multirun implementation is experimental, and may only be
+run with appropriately-formatted PODs. "Single-run" PODs that analyze one model dataset and/or one observational dataset
+must be run separately because the configuration for single-run and multi-run analyses is different.
Users and developers
+should open issues when they encounter bugs or require additional features to support their PODs, or to run
+existing PODs on new datasets.
+
+The example_multicase POD and configuration
+--------------------------------------------
+
+A multirun test POD, *example_multicase*, is available in ``diagnostics/example_multicase``; it demonstrates
+how to configure "multirun" diagnostics that analyze output from multiple datasets.
+The `multirun_config_template.jsonc file
+`__
+contains separate ``pod_list`` and ``case_list`` blocks. As with the single-run configuration, the ``pod_list`` may contain
+multiple PODs separated by commas. The ``case_list`` contains a block of information for each case that the
+POD(s) in the ``pod_list`` will analyze. The ``CASENAME``, ``convention``, ``FIRSTYR``, and ``LASTYR`` attributes must be
+defined for each case. The ``convention`` must be the same for each case, but ``FIRSTYR`` and ``LASTYR``
+may differ among cases.
+Directions for generating the synthetic data in the configuration file are provided in the file comments, and in the
+quickstart section of the `README file
+`__.
+
+The multirun implementation is triggered by setting ``data_type`` to "multi_run" in the environment settings section
+of the configuration file, or via the command line. ``data_type`` defaults to "single_run" if no value is defined
+in the configuration file.
+
+As with "single_run" mode, ``OBS_DATA_ROOT``, ``MODEL_DATA_ROOT``, and ``WORKING_DIR`` must be defined.
+However, ``OBS_DATA_ROOT`` does not require a subdirectory for the POD unless the POD analyzes an observational
+dataset. The assumption for now is that multirun PODs will only analyze model datasets; settings for observational
+data are retained for backwards compatibility, and the needs of multirun POD developers will inform how
+the data management options are modified moving forward.
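The ``pod_list``/``case_list`` structure described above might look like the following hypothetical sketch. The key names follow the text; the case names and year ranges are illustrative placeholders, not the template's exact contents.

```jsonc
// Hypothetical sketch of a multirun configuration file, following the
// structure described above; case values are illustrative only.
{
  "data_type": "multi_run",            // selects the multirun code path
  "pod_list": ["example_multicase"],
  "case_list": [
    {
      "CASENAME": "CMIP_Synthetic_r1i1p1f1_gr1_19800101-19841231",
      "convention": "CMIP",            // must be the same for every case
      "FIRSTYR": 1980,
      "LASTYR": 1984
    },
    {
      "CASENAME": "CMIP_Synthetic_r1i1p1f1_gr1_19850101-19891231",
      "convention": "CMIP",
      "FIRSTYR": 1985,                 // FIRSTYR/LASTYR may differ per case
      "LASTYR": 1989
    }
  ]
}
```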
+
+All other settings are identical to those described in the :doc:`configuration section <./start_config>`.
+
+POD output
+--------------------------------------------
+
+The framework defines a root directory ``$WORKING_DIR/[POD name]`` for each
+POD in the ``pod_list``. ``$WORKING_DIR/[POD name]`` contains the main framework log files, and subdirectories for each
+case. Temporary copies of processed data for each case are placed in
+``$WORKING_DIR/[pod_name]/[CASENAME]/[data output frequency]``.
+The POD html file is written to ``$OUTPUT_DIR/[POD name]/[POD_name].html`` (``$OUTPUT_DIR`` defaults to ``$WORKING_DIR``
+if it is not defined), and the output figures are placed in
+``$OUTPUT_DIR/[POD name]/model``, depending on how the paths are defined in the
+POD's html template.
+
+Note that an obs directory is created by default, but will be empty unless the POD developer
+opts to use an observational dataset and write observational data figures to this directory.
+Figures that are generated as .eps files before conversion to .png files are written to
+``$WORKING_DIR/[POD name]/model/PS``.
+
+Multirun environment variables
+--------------------------------------------
+
+Multirun PODs obtain the environment variables for the case and variable attributes
+described in the :doc:`configuration section <./start_config>`
+from a yaml file named *case_info.yaml* that the framework generates at runtime. The *case_info.yaml* file is written
+to ``$WORKING_DIR/[POD name]``, and has a corresponding environment variable *case_env_file* that the POD uses to
+locate and parse the file. The *example_multicase.py* script demonstrates how to read the environment variables from
+*case_info.yaml* (located via the *case_env_file* environment variable) into a dictionary,
+then loop through the dictionary to obtain the post-processed data for analysis.
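The read-and-loop pattern described above can be sketched in Python. To keep the sketch dependency-free, it starts from the dictionary that a yaml load (e.g., ``yaml.safe_load`` on the file named by *case_env_file*) would produce; the dictionary shape, key names, and file paths below are illustrative assumptions, not the framework's exact schema.

```python
# Hypothetical shape of the dictionary a POD would get from loading
# case_info.yaml (e.g., yaml.safe_load(open(os.environ["case_env_file"]))).
# Key names and paths below are illustrative, not the framework's exact schema.
case_info = {
    "CASE_LIST": {
        "CMIP_Synthetic_r1i1p1f1_gr1_19800101-19841231": {
            "FIRSTYR": 1980, "LASTYR": 1984,
            "TAS_FILE": "/wkdir/example_multicase/case1/day/tas.nc",
        },
        "CMIP_Synthetic_r1i1p1f1_gr1_19850101-19891231": {
            "FIRSTYR": 1985, "LASTYR": 1989,
            "TAS_FILE": "/wkdir/example_multicase/case2/day/tas.nc",
        },
    },
}

# Loop through the cases and collect the post-processed data paths to analyze.
tas_files = []
for case_name, case_env in case_info["CASE_LIST"].items():
    print(f"{case_name}: {case_env['FIRSTYR']}-{case_env['LASTYR']}")
    tas_files.append(case_env["TAS_FILE"])
```

In a real POD, each collected path would then be opened (e.g., with xarray) and the per-case analysis performed inside the loop.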
An example *case_info.yaml* file +with environment variables defined for the synthetic test data is located in the *example_multicase* directory. \ No newline at end of file diff --git a/doc/sphinx/start_toc.rst b/doc/sphinx/start_toc.rst index c4b29956e..f6c0f5cfd 100644 --- a/doc/sphinx/start_toc.rst +++ b/doc/sphinx/start_toc.rst @@ -8,3 +8,4 @@ Getting started start_overview start_install start_config + start_multirun