[develop]: Update documentation subsequent to PR #6 (#13)
* update links & image file paths

* updates for PR#6

* update container build command

* update jedi-bundle ref and fix xlink

* update data bucket url
gspetro-NOAA authored May 4, 2023
1 parent 19a164d commit 3a25d84
Showing 8 changed files with 100 additions and 54 deletions.
3 changes: 2 additions & 1 deletion docs/requirements.txt
@@ -1,3 +1,4 @@
sphinxcontrib-bibtex
sphinx_rtd_theme
- docutils==0.16
+ docutils==0.16
+ urllib3==1.26.15
11 changes: 6 additions & 5 deletions docs/source/BuildRunLandDA.rst
@@ -70,7 +70,7 @@ Clone the Land DA repository.

.. code-block:: console
- git clone -b release/public-v1.0.0 --recursive https://github.com/NOAA-EPIC/land-offline_workflow.git
+ git clone -b develop --recursive https://github.com/ufs-community/land-DA_workflow.git
Build the Land DA System
***************************
@@ -79,7 +79,7 @@ Build the Land DA System

.. code-block:: console
- cd $LANDDAROOT/land-offline_workflow
+ cd $LANDDAROOT/land-DA_workflow
module use modulefiles
module load landda_<machine>.intel
@@ -103,7 +103,8 @@ Build the Land DA System

.. code-block:: console
- [100%] Built target ufsLandDriver.exe
+ [100%] Completed 'ufs-weather-model'
+ [100%] Built target ufs-weather-model
Additionally, the ``build`` directory will contain several files and a ``bin`` subdirectory with three executables:

@@ -114,7 +115,7 @@ Build the Land DA System
Configure the Experiment
***************************

- #. Navigate back to the ``land-offline_workflow`` directory and check that the account/partition is correct in ``submit_cycle.sh``.
+ #. Navigate back to the ``land-DA_workflow`` directory and check that the account/partition is correct in ``submit_cycle.sh``.

.. code-block:: console
@@ -130,7 +131,7 @@ Configure the Experiment
where ``my_partition`` is the name of the partition on the user's system.
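For illustration, the account/partition settings being checked in ``submit_cycle.sh`` are typically Slurm batch directives of the following form (``my_account`` and ``my_partition`` are placeholders, not values from this commit):

```shell
#SBATCH --account=my_account      # project/allocation to charge the job to
#SBATCH --partition=my_partition  # partition (queue) available on the user's system
```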


- #. Configure other elements of the experiment if desired. The v1.0.0 release includes four scripts with default experiment settings:
+ #. Configure other elements of the experiment if desired. The ``develop`` branch includes four scripts with default experiment settings:

* ``settings_DA_cycle_gdas`` for running the Jan. 1-3, 2016 sample case.
* ``settings_DA_cycle_era5`` for running a Jan. 1-3, 2020 sample case.
42 changes: 26 additions & 16 deletions docs/source/Container.rst
@@ -86,7 +86,7 @@ where ``/path/to/landda`` is the path to this top-level directory (e.g., ``/User
NOAA RDHPCS Systems
----------------------

- On many NOAA :term:`RDHPCS` systems, a container named ``ubuntu20.04-intel-landda.img`` has already been built, and users may access the container at the locations in :numref:`Table %s <PreBuiltContainers>`.
+ On many NOAA :term:`RDHPCS` systems, a container named ``ubuntu20.04-intel-ue-landda.img`` has already been built, and users may access the container at the locations in :numref:`Table %s <PreBuiltContainers>`.

.. _PreBuiltContainers:

@@ -107,18 +107,22 @@ On many NOAA :term:`RDHPCS` systems, a container named ``ubuntu20.04-intel-landd
.. note::
Singularity is not available on Gaea, and therefore, container use is not supported on Gaea.

- Users can simply set an environment variable to point to the container, as described in :numref:`Section %s <SetUpContainerC>`.
+ Users can simply set an environment variable to point to the container:
+
+ .. code-block:: console
+
+    export img=path/to/ubuntu20.04-intel-ue-landda.img
If users prefer, they may copy the container to their local working directory. For example, on Jet:

.. code-block:: console
- cp /mnt/lfs4/HFIP/hfv3gfs/role.epic/containers/ubuntu20.04-intel-landda.img .
+ cp /mnt/lfs4/HFIP/hfv3gfs/role.epic/containers/ubuntu20.04-intel-ue-landda.img .
Other Systems
----------------

- On other systems, users can build the Singularity container from a public Docker :term:`container` image or download the container from the `Land DA Data Bucket <https://noaa-ufs-land-da-pds.s3.amazonaws.com/index.html#current_land_da_release_data/>`__. Downloading may be faster depending on the download speed on the user's system.
+ On other systems, users can build the Singularity container from a public Docker :term:`container` image or download the ``ubuntu20.04-intel-landda.img`` container from the `Land DA Data Bucket <https://registry.opendata.aws/noaa-ufs-land-da/>`__. Downloading may be faster depending on the download speed on the user's system. However, the container in the data bucket is the ``release/v1.0.0`` container rather than the updated ``develop`` branch container.

To download from the data bucket, users can run:

@@ -130,7 +134,7 @@ To build the container from a Docker image, users can run:

.. code-block:: console
- singularity build ubuntu20.04-intel-landda.img docker://noaaepic/ubuntu20.04-intel-landda:release-public-v1.0.0
+ singularity build --force ubuntu20.04-intel-ue-landda.img docker://noaaepic/ubuntu20.04-intel-ue-landda:unified-dev-testmp
This process may take several hours depending on the system.

@@ -145,7 +149,7 @@ Get Data

In order to run the Land DA System, users will need input data in the form of fix files, model forcing files, restart files, and observations for data assimilation. These files are already present on NOAA RDHPCS systems (see :numref:`Section %s <Level1Data>` for details).

- Users on any system may download and untar the data from the `Land DA Data Bucket <https://noaa-ufs-land-da-pds.s3.amazonaws.com>`__ into their ``$LANDDAROOT`` directory.
+ Users on any system may download and untar the data from the `Land DA Data Bucket <https://registry.opendata.aws/noaa-ufs-land-da/>`__ into their ``$LANDDAROOT`` directory.

.. code-block:: console
@@ -174,7 +178,7 @@ Save the location of the container in an environment variable.

.. code-block:: console
- export img=path/to/ubuntu20.04-intel-landda.img
+ export img=path/to/ubuntu20.04-intel-ue-landda.img
Set the ``USE_SINGULARITY`` environment variable to "yes".

@@ -188,28 +192,28 @@ Users may convert a container ``.img`` file to a writable sandbox. This step is

.. code-block:: console
- singularity build --sandbox ubuntu20.04-intel-landda $img
+ singularity build --sandbox ubuntu20.04-intel-ue-landda $img
When making a writable sandbox on NOAA RDHPCS systems, the following warnings commonly appear and can be ignored:

.. code-block:: console
INFO: Starting build...
- INFO:    Verifying bootstrap image ubuntu20.04-intel-landda.img
+ INFO:    Verifying bootstrap image ubuntu20.04-intel-ue-landda.img
WARNING: integrity: signature not found for object group 1
WARNING: Bootstrap image could not be verified, but build will continue.
- From within the ``$LANDDAROOT`` directory, copy the ``land-offline_workflow`` directory out of the container.
+ From within the ``$LANDDAROOT`` directory, copy the ``land-DA_workflow`` directory out of the container.

.. code-block:: console
- singularity exec -H $PWD $img cp -r /opt/land-offline_workflow .
+ singularity exec -H $PWD $img cp -r /opt/land-DA_workflow .
- There should now be a ``land-offline_workflow`` directory in the ``$LANDDAROOT`` directory. Navigate into the ``land-offline_workflow`` directory. If for some reason, this is unsuccessful, users may try a version of the following command instead:
+ There should now be a ``land-DA_workflow`` directory in the ``$LANDDAROOT`` directory. Navigate into the ``land-DA_workflow`` directory. If, for some reason, this is unsuccessful, users may try a version of the following command instead:

.. code-block:: console
- singularity exec -B /<local_base_dir>:/<container_dir> $img cp -r /opt/land-offline_workflow .
+ singularity exec -B /<local_base_dir>:/<container_dir> $img cp -r /opt/land-DA_workflow .
where ``<local_base_dir>`` and ``<container_dir>`` are replaced with a top-level directory on the local system and in the container, respectively. Additional directories can be bound by adding another ``-B /<local_base_dir>:/<container_dir>`` argument before the container location (``$img``).
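As a hypothetical illustration of binding more than one directory (the paths here are placeholders, not values from this commit), the copy command might look like:

```shell
# Bind two local top-level directories to identical paths in the container,
# then copy the workflow directory out of the container image.
singularity exec -B /scratch/user1234:/scratch/user1234 -B /work/data:/work/data \
    $img cp -r /opt/land-DA_workflow .
```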

@@ -221,11 +225,11 @@ where ``<local_base_dir>`` and ``<container_dir>`` are replaced with a top-level

Sometimes binding directories with different names can cause problems. In general, it is recommended that the local base directory and the container directory have the same name. For example, if the host system's top-level directory is ``/user1234``, the user may want to convert the ``.img`` file to a writable sandbox and create a ``user1234`` directory in the sandbox to bind to.

- Navigate to the ``land-offline_workflow`` directory after it has been successfully copied into ``$LANDDAROOT``.
+ Navigate to the ``land-DA_workflow`` directory after it has been successfully copied into ``$LANDDAROOT``.

.. code-block:: console
- cd land-offline_workflow
+ cd land-DA_workflow
When using a Singularity container, Intel compilers and Intel :term:`MPI` (preferably 2020 versions or newer) need to be available on the host system to properly launch MPI jobs. Generally, this is accomplished by loading a module with a recent Intel compiler and then loading the corresponding Intel MPI. For example, users can modify the following commands to load their system's compiler/MPI combination:

@@ -263,13 +267,19 @@ Run the Experiment

The Land DA System uses a script-based workflow that is launched using the ``do_submit_cycle.sh`` script. That script requires an input file that details all the specifics of a given experiment. EPIC has provided four sample ``settings_*`` files as examples: ``settings_DA_cycle_gdas``, ``settings_DA_cycle_era5``, ``settings_DA_cycle_gdas_restart``, and ``settings_DA_cycle_era5_restart``. The ``*restart`` settings files will only work after an experiment with the corresponding non-restart settings file has been run. This is because they are designed to use the restart files created by the first experiment cycle to pick up where it left off. (e.g., ``settings_DA_cycle_gdas`` runs from 2016-01-01 at 18z to 2016-01-03 at 18z. The ``settings_DA_cycle_gdas_restart`` will run from 2016-01-03 at 18z to 2016-01-04 at 18z.)
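The pairing described above can be sketched as follows (a usage sketch based on the settings files named in this paragraph; the restart run only works after the first has completed):

```shell
# Initial experiment: cycles 2016-01-01 18z through 2016-01-03 18z
./do_submit_cycle.sh settings_DA_cycle_gdas

# Restart experiment: picks up from the restart files written above,
# running 2016-01-03 18z to 2016-01-04 18z
./do_submit_cycle.sh settings_DA_cycle_gdas_restart
```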

+ First, update the ``$BASELINE`` environment variable in the selected ``settings_DA_*`` file to say ``singularity.internal`` instead of ``hera.internal``:
+
+ .. code-block:: console
+
+    export BASELINE=singularity.internal
To start the experiment, run:

.. code-block:: console
./do_submit_cycle.sh settings_DA_cycle_gdas
- The ``do_submit_cycle.sh`` script will read the ``settings_DA_cycle_*`` file and the ``release.environment`` file, which contain sensible experiment default values to simplify the process of running the workflow for the first time. Advanced users will wish to modify the parameters in ``do_submit_cycle.sh`` to fit their particular needs. After reading the defaults and other variables from the settings files, ``do_submit_cycle.sh`` creates a working directory (named ``workdir`` by default) and an output directory called ``landda_expts`` in the parent directory of ``land-offline_workflow`` and then submits a job (``submit_cycle.sh``) to the queue that will run through the workflow. If all succeeds, users will see ``log`` and ``err`` files created in ``land-offline_workflow`` along with a ``cycle.log`` file, which will show where the cycle has ended. The ``landda_expts`` directory will also be populated with data in the following directories:
+ The ``do_submit_cycle.sh`` script will read the ``settings_DA_cycle_*`` file and the ``release.environment`` file, which contain sensible experiment default values to simplify the process of running the workflow for the first time. Advanced users will wish to modify the parameters in ``do_submit_cycle.sh`` to fit their particular needs. After reading the defaults and other variables from the settings files, ``do_submit_cycle.sh`` creates a working directory (named ``workdir`` by default) and an output directory called ``landda_expts`` in the parent directory of ``land-DA_workflow`` and then submits a job (``submit_cycle.sh``) to the queue that will run through the workflow. If all succeeds, users will see ``log`` and ``err`` files created in ``land-DA_workflow`` along with a ``cycle.log`` file, which will show where the cycle has ended. The ``landda_expts`` directory will also be populated with data in the following directories:

.. code-block:: console
11 changes: 5 additions & 6 deletions docs/source/DASystem.rst
@@ -4,7 +4,7 @@
Land Data Assimilation System
***************************************************

- This chapter describes the configuration of the offline Land :term:`Data Assimilation` (DA) System, which utilizes the UFS Noah-MP components with the JEDI ``fv3-bundle`` to enable cycled model forecasts. The data assimilation framework applies the Local Ensemble Transform Kalman Filter-Optimal Interpolation (LETKF-OI) algorithm to combine the state-dependent background error derived from an ensemble forecast with the observations and their corresponding uncertainties to produce an analysis ensemble (:cite:t:`HuntEtAl2007`, 2007).
+ This chapter describes the configuration of the offline Land :term:`Data Assimilation` (DA) System, which utilizes the UFS Noah-MP component together with JEDI's ``jedi-bundle`` (Skylab v3.0) to enable cycled model forecasts. The data assimilation framework applies the Local Ensemble Transform Kalman Filter-Optimal Interpolation (LETKF-OI) algorithm to combine the state-dependent background error derived from an ensemble forecast with the observations and their corresponding uncertainties to produce an analysis ensemble (:cite:t:`HuntEtAl2007`, 2007).

Joint Effort for Data Assimilation Integration (JEDI)
********************************************************
@@ -622,7 +622,7 @@ GHCN files for 2016 and 2020 are already provided in IODA format for the v1.0.0

In each experiment, the ``DA_config`` file sets the name of the experiment configuration file. This configuration file is typically named ``settings_DA_test``. Before assimilation, if "GHCN" was specified as the observation type in the ``DA_config`` file, the ``ghcn_snwd_ioda_${YYYY}${MM}${DD}.nc`` file corresponding to the specified cycle date is soft-linked to the JEDI working directory (``${JEDIWORKDIR}``) with a naming-convention change (i.e., ``GHCN_${YYYY}${MM}${DD}${HH}.nc``). Here, the GHCN IODA file is appended with the cycle hour, ``${HH}`` which is extracted from the ``${STARTDATE}`` variable defined in the relevant ``DA_config`` file.

- Prior to ingesting the GHCN IODA files via the LETKF at the DA analysis time, the observations are further quality controlled and checked using ``letkf_land.yaml`` (itself a concatenation of ``GHCN.yaml`` and ``letkfoi_snow.yaml``; see the `GitHub yaml files <https://github.com/NOAA-EPIC/land-DA_update/tree/31191c913a624d7fab479dc429d44ff102cd3809/jedi/fv3-jedi/yaml_files>`__ for more detail). The GHCN-specific observation filters, domain checks, and quality control parameters from ``GHCN.yaml`` ensure that only snow depth observations which meet specific criteria are assimilated (the rest are rejected). The contents of this YAML are listed below:
+ Prior to ingesting the GHCN IODA files via the LETKF at the DA analysis time, the observations are further quality controlled and checked using ``letkf_land.yaml`` (itself a concatenation of ``GHCN.yaml`` and ``letkfoi_snow.yaml``; see the `GitHub yaml files <https://github.com/ufs-community/land-DA/tree/660d64da52bbe6fd5ccf29dad05fe6be3f10e749/jedi/fv3-jedi/yaml_files>`__ for more detail). The GHCN-specific observation filters, domain checks, and quality control parameters from ``GHCN.yaml`` ensure that only snow depth observations which meet specific criteria are assimilated (the rest are rejected). The contents of this YAML are listed below:

.. code-block:: console
@@ -910,7 +910,7 @@ The ``do_submit_cycle.sh`` script sets up the cycling job based on the user's in

.. _DoSubmitCyclePng:

- .. figure:: https://github.com/NOAA-EPIC/land-offline_workflow/wiki/do_submit_cycle.png
+ .. figure:: https://github.com/ufs-community/land-DA_workflow/wiki/do_submit_cycle.png
:alt: The do submit cycle shell script starts by loading configuration files and modules. Then it proceeds to set executables, read in dates, compute forecast run variables, and setup work and output directories for the model. If a restart directory is available in the model output directory, it creates the dates file and submits the submit cycle shell script. If there is no output file, the script will try to copy restart files from an initial conditions directory before creating the dates file and submitting the submit cycle shell script.

*Flowchart of 'do_submit_cycle.sh'*
@@ -925,7 +925,7 @@ The ``submit_cycle.sh`` script first exports the required paths and loads the re

.. _SubmitCyclePng:

- .. figure:: https://github.com/NOAA-EPIC/land-offline_workflow/wiki/submit_cycle.png
+ .. figure:: https://github.com/ufs-community/land-DA_workflow/wiki/submit_cycle.png
:alt: The submit cycle shell script starts by exporting paths and loading required modules. Then it starts a loop for the cycle. For each cycle in the experiment, it gets the data assimilation settings and date/time info; computes the restart frequency, run days, and run hours; and copies the restarts into the work directory. If the user is running a DA experiment, the script updates and submits the vector to tile namelists and submits the snow data assimilation. Then it submits the tile to vector namelists and saves the analysis restart. Regardless of whether DA is being used, the script runs the forecast model, updates the model namelist, and submits the land surface model. It checks existing model output, and then the loop ends. If there are more cycles to run, the script will run through this loop until they are complete.

*Flowchart of 'submit_cycle.sh'*
@@ -1037,7 +1037,7 @@ The ``do_landDA.sh`` script runs the data assimilation job inside the ``submit_cycle.sh``

.. _DoLandDAPng:

- .. figure:: https://github.com/NOAA-EPIC/land-offline_workflow/wiki/do_landDA.png
+ .. figure:: https://github.com/ufs-community/land-DA_workflow/wiki/do_landDA.png
:alt: The do land da shell script starts by reading in the configuration file and setting up directories. Then it formats date strings, stages restart files, and prepares the observation files. It constructs yaml files based on requested JEDI type and then proceeds to create the background ensembles using LETKF-OI. Next, the script runs JEDI and applies the increment to use restarts. Lastly, it performs clean-up operations.

*Flowchart of 'do_landDA.sh'*
@@ -1102,4 +1102,3 @@ Below, users can find an example of a configuration settings file, ``settings_DA

``fv3bundle_vn``
Specifies the date for JEDI ``fv3-bundle`` checkout (used to select correct ``yaml``).
