This document describes how to set up a development environment to modify, build and test MODFLOW 6. Details on how to contribute your code to the repository are found in the separate document CONTRIBUTING.md.
To build and test a parallel version of the program, first read the instructions below and then continue in PARALLEL.md.
- Prerequisites
- Get the MODFLOW 6 repository
- Install the python environment
- Building
- Formatting
- Testing
- Generating makefiles
- Branching model
- Deprecation policy
Before you can build and test MODFLOW 6, you must install and configure the following on your development machine:
- git
- Python 3.9+
- a modern Fortran compiler
Some additional, optional tools are also discussed below.
Install Git and/or the GitHub app (for Mac or Windows). GitHub's Guide to Installing Git is a good source of information.
Optionally, the git blame
tool can be configured to work locally using:
git config blame.ignoreRevsFile .git-blame-ignore-revs
Python 3.9+ is required to run MODFLOW 6 tests and in some cases to build MODFLOW 6. Information on installing the python environment is given in the Installing Python environment section. The MODFLOW 6 python environment should be installed after locally cloning the repository.
GNU Fortran or Intel Fortran compilers can be used to build MODFLOW 6. It may be possible to build MODFLOW 6 with other compilers, but this cannot be guaranteed.
GNU Fortran can be installed on all three major platforms.
Linux
- Fedora-based:
dnf install gcc-gfortran
- Debian-based:
apt install gfortran
macOS
Note: Xcode 15 includes a new linker implementation which breaks GNU Fortran compatibility. A workaround is to set LDFLAGS
to use the classic linker, for instance:
export LDFLAGS="$LDFLAGS -Wl,-ld_classic"
See this ticket on the Meson repository for more information.
Windows
Minimalist GNU for Windows is the recommended way to obtain the GCC toolchain on Windows. Several MinGW distributions are available.
To install with Chocolatey: choco install mingw
To install from SourceForge:
- Download the MinGW installer: https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win32/Personal%20Builds/mingw-builds/installer/mingw-w64-install.exe
- Run the installer. Make sure to change Architecture to x86_64. Leave the other settings on default.
- Find the mingw64/bin directory in the installation and add it to your PATH. To do so, find Edit the system environment variables in your Windows Start Screen, click the Environment Variables button, and double-click the Path variable in the User Variables (the top table). Click the New button and enter the location of the mingw64/bin directory.
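If you prefer a Unix-like shell such as Git Bash, the same effect can be had for the current session only; a minimal sketch, assuming a mingw64 install location (substitute your own path):

```shell
# Assumed MinGW install location; adjust to where mingw64 actually lives.
MINGW_BIN="/c/mingw64/bin"

# Prepend for the current shell session only (not persistent).
export PATH="$MINGW_BIN:$PATH"

# Confirm the directory is first on the path.
echo "$PATH" | tr ':' '\n' | head -n 1
```

Note this does not persist across sessions; use the Environment Variables dialog for a permanent change.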
Binaries may also be downloaded and installed from the releases here.
Note: the MinGW distribution available on conda-forge includes an outdated version of GCC and is not compatible with MODFLOW 6.
Intel Fortran can also be used to compile MODFLOW 6 and associated utilities. The ifort
and ifx
compilers are available in the Intel oneAPI HPC Toolkit.
A number of environment variables must be set before using Intel Fortran. General information can be found here, with specific instructions to configure a shell session for ifort
here.
While the current development version of MODFLOW 6 is broadly compatible with ifort
, ifx
compatibility is still limited on Ubuntu and Windows, and ifx
is not supported on macOS.
On Windows, Visual Studio and a number of libraries must be installed for ifort
and ifx
to work. The required libraries can be installed by ticking the "Desktop Development with C++" checkbox in the Visual Studio Installer's Workloads tab.
Note: Invoking the setvars.bat
scripts from a Powershell session will not put ifort
or ifx
on the path, since batch script environments are local to their process. To relaunch PowerShell with oneAPI variables configured:
cmd.exe "/K" '"C:\Program Files (x86)\Intel\oneAPI\setvars-vcvarsall.bat" && "C:\Program Files (x86)\Intel\oneAPI\compiler\latest\env\vars.bat" && powershell'
The following tables are automatically generated by a CI workflow.
runner | gcc 10 | gcc 11 | gcc 12 | gcc 13 | gcc 7 | gcc 8 | gcc 9 | intel-classic 2021.1 | intel-classic 2021.10 | intel-classic 2021.2 | intel-classic 2021.3 | intel-classic 2021.4 | intel-classic 2021.5 | intel-classic 2021.6 | intel-classic 2021.7 | intel-classic 2021.8 | intel-classic 2021.9 | intel 2021.1 | intel 2021.2 | intel 2021.4 | intel 2022.0 | intel 2022.1 | intel 2022.2.1 | intel 2022.2 | intel 2023.0 | intel 2023.1 | intel 2023.2 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
macos-11 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||
macos-12 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||
ubuntu-20.04 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||
ubuntu-22.04 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||
windows-2019 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||||||||
windows-2022 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
runner | gcc 10 | gcc 11 | gcc 12 | gcc 13 | gcc 7 | gcc 8 | gcc 9 | intel-classic 2021.1 | intel-classic 2021.10 | intel-classic 2021.2 | intel-classic 2021.3 | intel-classic 2021.4 | intel-classic 2021.5 | intel-classic 2021.6 | intel-classic 2021.7 | intel-classic 2021.8 | intel-classic 2021.9 | intel 2021.1 | intel 2021.2 | intel 2021.4 | intel 2022.0 | intel 2022.1 | intel 2022.2.1 | intel 2022.2 | intel 2023.0 | intel 2023.1 | intel 2023.2 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
macos-11 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||||||
macos-12 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||||||||
ubuntu-20.04 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||||||||
ubuntu-22.04 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||||||||
windows-2019 | ✓ | ✓ | ✓ | ||||||||||||||||||||||||
windows-2022 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Some other tools are useful but not required to develop MODFLOW 6.
GNU Make
This repository provides makefiles, generated by mfpymake
, which can be used to build MODFLOW 6 with GNU Make. For further instructions we refer to the GNU Make Manual.
Visual Studio
Visual Studio installers can be downloaded from the official website. MODFLOW 6 solution files can be found in the msvs
folder.
Doxygen & LaTeX
Doxygen is used to generate the MODFLOW 6 source code documentation. Graphviz is used by doxygen to produce source code diagrams. LaTeX is used to generate the MODFLOW 6 release notes and Input/Output documents.
These programs can be installed from various sources, including Conda, MacPorts, or individual distributors such as https://www.tug.org/. Details about USGS LaTeX libraries, along with the Linux install steps, can be found in the docs CI workflow (.github/workflows/ci-docs.yml).
Fork and clone the MODFLOW 6 repository:
- Log in to your GitHub account or create one by following the instructions given here.
- Fork the main MODFLOW 6 repository.
- Clone your fork of the MODFLOW 6 repository and add an upstream remote pointing back to the main repository.
After forking the MODFLOW 6 repository on GitHub:
- Clone your fork of the GitHub repository to your computer.
git clone git@github.com:<github username>/modflow6.git
- Go to the MODFLOW 6 directory.
cd modflow6
- Add the main MODFLOW 6 repository as an upstream remote to your repository.
git remote add upstream https://github.com/MODFLOW-USGS/modflow6.git
Python 3.9+ is required to run MODFLOW 6 tests and in some cases to build MODFLOW 6. Miniforge is the recommended python distribution if you do not have an existing Conda or Mamba based python distribution.
The environment file for MODFLOW 6 includes all of the required python dependencies. Install the modflow6
environment using the Conda environment.yml
file in the repository.
- Open a terminal (command prompt) in the root directory of the repository.
- Use either Mamba or Conda to install the
modflow6
environment.
mamba env create -f environment.yml
conda env create -f environment.yml
Python can also be installed via Pixi, which is currently used to install python on GitHub Actions continuous integration/continuous deployment (CI/CD) virtual machines. In the future, Pixi may become the preferred approach for installing python for MODFLOW 6. Developers are therefore encouraged to also install the Pixi environment, which can coexist with a Mamba/Conda installation and the modflow6 environment.
Pixi installation docs can be found here. After installing pixi
, to set up an environment with all development dependencies, in the root directory of the MODFLOW 6 repository run:
pixi run install
This project depends critically on a few Python packages for building, linting, spell checking, and testing tasks:
meson
codespell
fprettify
ruff
mfpymake
flopy
modflow-devtools
These are each described briefly below. These and a number of other build-, test-, and release-time dependencies are included in the Pixi pixi.toml file as well as the Conda environment.yml file in this repository.
Meson is the recommended build system for MODFLOW 6.
codespell
was designed primarily for checking misspelled words in source code, but can be used with other text files as well. This tool can be used from the command line or integrated with VSCode. See Spell check guidelines for additional information.
fprettify
can be used to format Fortran source code and, in combination with the MODFLOW 6 fprettify configuration, establishes a contribution standard for properly formatted MODFLOW 6 Fortran source. This tool can be used from the command line or integrated with VSCode or a Visual Studio development environment. See Fortran formatting guidelines for additional information.
ruff
can be used to format and lint python code and scripts (for example, autotest scripts) and, in combination with the MODFLOW 6 ruff configuration, establishes a contribution standard for properly formatted python code and scripts. This tool can be used from the command line or integrated with VSCode. See python formatting guidelines and python linting guidelines for additional information.
The mfpymake
package can build MODFLOW 6 and related programs and artifacts (e.g. makefiles), and is used in particular by the distribution/build_makefiles.py
script.
flopy
is used throughout MODFLOW 6 tests to create, run and post-process models.
Like MODFLOW 6, flopy
is modular — for each MODFLOW 6 package there is generally a corresponding flopy
package. Packages are generated dynamically from DFN files stored in this repository under doc/mf6io/mf6ivar/dfn
.
The tests use a set of shared fixtures and utilities provided by the modflow-devtools
package.
Meson is the recommended build tool for MODFLOW 6. Meson must be installed and on your PATH. Creating and activating the provided Pixi or Conda environment should be sufficient for this.
Meson build configuration files are provided for MODFLOW 6, for the ZONEBUDGET and MODFLOW 2005 to 6 converter utility programs, and for Fortran unit tests (see Testing section below).
meson.build
utils/zonebudget/meson.build
utils/mf5to6/meson.build
autotest/meson.build
Building MODFLOW 6 requires two steps:
- configure the build directory
- build the project
To configure the build directory for a debug version:
meson setup --prefix=$(pwd) --libdir=bin builddir -Ddebug=true
Or to configure the build directory for an optimized release version:
meson setup --prefix=$(pwd) --libdir=bin builddir
Or using Pixi to set up the build directory:
pixi run setup builddir
Debug versions can be built with Pixi by adding -Ddebug=true
at the end of the Pixi command. Other Meson options (for example, -Dparallel=true
or --wipe
) added to the Pixi command are passed through to Meson.
On Windows, substitute %CD% for $(pwd) as necessary.
To build MODFLOW 6 and install binaries to <project root>/bin/
:
meson install -C builddir
or using pixi:
pixi run build builddir
Note: If using Visual Studio Code, you can use tasks as described here to automate the above.
Fortran source files, python files, definition files, markdown, and LaTeX files can be checked with codespell. codespell was designed primarily for checking misspelled words in source code, but it can be used with other text files as well. The codespell
package is included in the Conda environment.yml
and the Pixi pixi.toml
files and can be run directly, via Pixi, or via VSCode tasks.
To check whether the repository's Fortran source files, python files, definition files, markdown, and LaTeX files have any spelling errors without making any changes:
pixi run check-spelling
Or, from an environment with codespell
installed, simply
codespell
To fix spelling errors in all files, use -w
(--write-changes
). When run in this way, the tool modifies files in place. If unresolvable errors are encountered, they are written to standard output and must be fixed manually before rerunning the tool.
Note: codespell may make unwanted changes (for example, to a variable name in source code). As a result, you should review the changes it makes. codespell can be forced to leave a particular word unchanged by adding it to the .codespell.ignore
file.
Fortran source code can be formatted with fprettify, specifying the MODFLOW 6 fprettify configuration. The fprettify
package is included in the Conda environment.yml
and the Pixi pixi.toml
files and can be run directly, via Pixi, or via VSCode tasks.
For instance, to format a single file:
fprettify -c .fprettify.yaml ./utils/zonebudget/src/zbud6.f90
When run in this way, the tool will modify the file in place and generate no output if successful. If unresolvable formatting errors are encountered (e.g. for excess line length), these are written to standard output and must be manually fixed before attempting to rerun the tool.
To check whether the repository's source files satisfy formatting requirements without making any changes:
python .github/common/check_format.py
or using pixi:
pixi run check-format
To format all files, add the --write-changes
flag to the end of the python or pixi commands. These commands will exclude the proper files from formatting, including vendored library sources in src/Utilities/Libraries
.
Note: as fprettify
may shift code in unexpected ways, it is a good idea to visually check source files afterwards.
Python code and scripts can be formatted with ruff, specifying the MODFLOW 6 ruff configuration. The ruff
package is included in the Conda environment.yml
and Pixi pixi.toml
files and can be run directly, via Pixi, or via VSCode tasks.
For instance, to format a single file:
ruff format autotest/test_gwe_cnd.py
When run in this way, ruff
will modify the file in place and generate no output if successful. If unresolvable formatting errors are encountered, these are written to standard output and must be manually fixed before attempting to rerun the tool.
To check whether the repository's python code and scripts satisfy formatting requirements without making any changes:
ruff format --check .
or using pixi:
pixi run check-python-format
To format all files, remove the --check
flag from the python command or run the pixi command:
pixi run fix-python-format
Linting is the automated checking of source code for programmatic and stylistic errors. python code and scripts can be linted with ruff, specifying the MODFLOW 6 ruff configuration. The ruff
package is included in the Conda environment.yml
and Pixi pixi.toml
files and can be run directly, via Pixi, or via VSCode tasks.
For instance, to lint a single file:
ruff check --fix autotest/test_gwe_cnd.py
When run in this way, ruff
will modify the file in place and generate no output if successful. If unresolvable formatting errors are encountered, these are written to standard output and must be manually fixed before attempting to rerun the tool.
To check whether the repository's python code and scripts satisfy linting requirements without making any changes:
ruff check .
or using pixi:
pixi run check-python-lint
To fix lint issues in all files, add the --fix
flag to the python or pixi command. Alternatively with pixi run:
pixi run fix-python-lint
MODFLOW 6 unit tests are written in Fortran with test-drive
.
MODFLOW 6 integration tests are written in Python with pytest
. Integration testing dependencies are included in Pixi and Conda environments.
Note: the entire test suite should pass before a pull request is submitted. Tests run in GitHub Actions CI and a PR can only be merged with passing tests. See CONTRIBUTING.md
for more information.
Before running tests, there are a few steps to complete. Most importantly, the local development version of MODFLOW 6 must be built, e.g. with Meson as described above.
The autotest/build_exes.py
script is provided as a shortcut to rebuild local binaries. It can be invoked as a standard Python script or with Pytest. By default, binaries are placed in the bin
directory relative to the project root, as in the Meson commands described above. To change the location of the binaries, use the --path
option.
Unit tests are driven with Meson. A small number of Meson-native tests are defined in the top-level meson.build
file to check that MODFLOW 6 has installed successfully. These require no additional configuration.
Additional Fortran unit tests are defined with test-drive
in the autotest/
folder, with test files named Test*.f90
. If Meson fails to find the test-drive
library via pkg-config
, these will be skipped.
To install test-drive
:
- Clone the test-drive repository.
- Set up and build with Meson, e.g. in a Unix shell from the test-drive project root:
meson setup builddir --prefix=$PWD --libdir=lib
meson install -C builddir
- Add <test-drive project root>/lib/pkgconfig to the PKG_CONFIG_PATH environment variable.
- To confirm that test-drive is detected by pkg-config, run pkg-config --libs test-drive.
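The environment variable step above can be sketched in a Unix shell as follows (the clone location is an assumption; adjust to your actual test-drive project root):

```shell
# Assumed test-drive project root; adjust to your clone location.
TEST_DRIVE_ROOT="$HOME/test-drive"

# Prepend its pkg-config directory so pkg-config can find test-drive.
export PKG_CONFIG_PATH="$TEST_DRIVE_ROOT/lib/pkgconfig:$PKG_CONFIG_PATH"

# Show the first search path entry.
echo "$PKG_CONFIG_PATH" | cut -d ':' -f 1
```

To persist this across sessions, add the export line to your shell profile.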
Meson should now detect the test-drive
library when building MODFLOW 6.
Note: the test-drive
source code is not yet compatible with recent versions of Intel Fortran; building with gfortran
is recommended.
See the Running unit tests section for instructions on running unit tests.
A few more tasks must be completed before integration testing:
- install MODFLOW-related executables
- ensure FloPy packages are up to date
- install MODFLOW 6 example/test models
As mentioned above, binaries live in the bin
subdirectory of the project root. This directory is organized as follows:
- local development binaries in the top-level bin
- binaries rebuilt in development mode from the latest MODFLOW 6 release in bin/rebuilt/
- related programs installed from the executables distribution in bin/downloaded/
Tests require the latest official MODFLOW 6 release to be compiled in develop mode with the same Fortran compiler as the development version. A number of binaries distributed from the executables repo must also be installed. The script autotest/get_exes.py
does both of these things. It can be run from the project root with:
pixi run get-exes
Alternatively, from the autotest/
directory:
pytest get_exes.py
As above, binaries are placed in the bin
subdirectory of the project root, with nested bin/downloaded
and bin/rebuilt
subdirectories containing the downloaded binaries and the rebuilt latest release, respectively.
FloPy packages should be regenerated from DFN files before running tests for the first time or after definition files change. This can be done with the autotest/update_flopy.py
script, which wipes and regenerates package classes for the FloPy installed in the Python environment.
Note: if you've installed an editable local version of FloPy from source, running this script can overwrite files in your repository.
There is a single optional argument, the path to the folder containing definition files. By default DFN files are assumed to live in doc/mf6io/mf6ivar/dfn
, making the following functionally identical:
pixi run update-flopy
which uses the default dfn path. Or the location of the definition files can be explicitly specified using:
pixi run update-flopy doc/mf6io/mf6ivar/dfn
Alternatively, run python update_flopy.py
directly from autotest/
.
Any time a MODFLOW 6 input definition file (DFN) is changed, the internal MODFLOW 6 Fortran definitions should be updated as well. This can be accomplished locally by running utils/idmloader/scripts/dfn2f90.py
and then recompiling. The script updates the appropriate input package Fortran definition files if the DFN change is relevant to input processing. Updated Fortran definition files should accompany related DFN file changes when creating a pull request.
cd utils/idmloader/scripts
python dfn2f90.py
or using pixi:
pixi run update-fortran-definitions
Some autotests load models from external repositories:
MODFLOW-USGS/modflow6-testmodels
MODFLOW-USGS/modflow6-largetestmodels
MODFLOW-USGS/modflow6-examples
See the MODFLOW devtools documentation for instructions to install external model repositories.
MODFLOW 6 has two kinds of tests: Fortran unit tests, driven with Meson, and Python integration tests, driven with Pytest.
Unit tests must be run from the project root. To run unit tests in verbose mode:
meson test -C builddir
or using pixi:
pixi run test builddir
Unit tests can be selected by module name (as listed in autotest/tester.f90
). For instance, to test the ArrayHandlersModule
:
meson test -C builddir --verbose ArrayHandlers
or using pixi:
pixi run test builddir --verbose ArrayHandlers
To run a test module in the gdb
debugger, just add the --gdb
flag to the test command.
Integration tests must be run from the autotest/
folder if invoked with pytest
directly — the Pixi autotest
task can be invoked from the project root.
To run tests in parallel:
cd autotest/
pytest -v -n auto # from autotest/
or using pixi:
pixi run autotest
The Pixi autotest
task includes options to run tests in parallel, show test runtimes, and save failed test results in autotest/.failed/
.
Note: The -n
option accepts an integer argument for the number of parallel processes. If the value auto
is provided, pytest-xdist
will use one worker per available processor.
Markers can be used to select subsets of tests. Markers provided in pytest.ini
include:
- slow: tests that take longer than a few seconds to complete
- repo: tests that require external model repositories
- large: tests using large models (from the modflow6-examples and modflow6-largetestmodels repos)
- regression: tests comparing results from multiple versions
Markers can be used with the -m <marker>
option, and can be applied in boolean combinations with and
, or
and not
. For instance, to run fast tests in parallel, excluding regression tests:
pytest -v -n auto -m "not slow and not regression"
The --smoke
(short -S
) flag, provided by modflow-devtools, is an alias for the above:
pytest -v -n auto -S
or using pixi:
pixi run autotest -S
Smoke testing is a form of integration testing which aims to test a decent fraction of the codebase quickly enough to run often during development.
Tests using models from external repositories can be selected with the repo
marker:
pytest -v -n auto -m "repo"
The large
marker is a subset of the repo
marker. To test models excluded from commit-triggered CI and only run on GitHub Actions nightly:
pytest -v -n auto -m "large"
Tests load external models from fixtures provided by modflow-devtools
. External model tests can be selected by model or simulation name, or by packages used. See the modflow-devtools
documentation for usage examples. Note that filtering options only apply to tests using external models, and will not filter tests defining models in code — for that, the pytest
built-in -k
option may be used.
To add a new unit test:
- Add a file containing a test module, e.g.
TestArithmetic.f90
, to theautotest/
folder.
module TestArithmetic
use testdrive, only : error_type, unittest_type, new_unittest, check, test_failed
implicit none
private
public :: collect_arithmetic
contains
subroutine collect_arithmetic(testsuite)
type(unittest_type), allocatable, intent(out) :: testsuite(:)
testsuite = [new_unittest("add", test_add)]
end subroutine collect_arithmetic
subroutine test_add(error)
type(error_type), allocatable, intent(out) :: error
call check(error, 1 + 1 == 2, "Math works")
if (allocated(error)) then
call test_failed(error, "Math is broken")
return
end if
end subroutine test_add
end module TestArithmetic
- Add the module name to the list of tests in autotest/meson.build, omitting the leading "Test".
tests = [
'Arithmetic',
]
- Add a use statement for the test module in autotest/tester.f90, and add it to the array of testsuites.
use TestArithmetic, only: collect_arithmetic
...
testsuites = [ &
new_testsuite("Arithmetic", collect_arithmetic), &
new_testsuite("something_else", collect_something_else) &
]
- Rebuild with Meson from the project root, e.g. meson install -C builddir. The test should now be picked up when meson test is next invoked.
Integration tests should ideally follow a few conventions for easier maintenance:
- Use temporary directory fixtures. Tests which write to disk should use pytest's built-in tmp_path fixtures or one of the keepable temporary directory fixtures from modflow-devtools. This prevents tests from polluting one another's state.
- Use markers for convenient (de-)selection:
  - @pytest.mark.slow if the test doesn't complete in a few seconds (this preserves the ability to quickly --smoke test)
  - @pytest.mark.repo if the test relies on external model repositories
  - @pytest.mark.regression if the test compares results from different versions
Note: If all three external model repositories are not installed as described above, some tests will be skipped. The full test suite includes >750 cases. All must pass before changes can be merged into this repository.
A framework has been developed to streamline common testing patterns. The TestFramework
class, defined in autotest/framework.py
, is used by most test scripts to configure, run and evaluate one or more MF6 simulations, optionally in comparison with another simulation or model.
Generally, the recommended pattern for a test script is:
import ...
cases = ["a", "b", ...]
variable = [1., 0., ...]
expected = [-1., -1.1, ...]
def build_models(idx, test):
v = variable[idx]
...
def check_output(idx, test):
e = expected[idx]
...
@pytest.mark.parametrize("idx, name", enumerate(cases))
def test_mf6model(idx, name, function_tmpdir, targets):
test = TestFramework(
name=name,
workspace=function_tmpdir,
targets=targets,
build=lambda t: build_models(idx, t),
check=lambda t: check_output(idx, t),
compare=None,
)
test.run()
The framework has two hooks:
- build: construct one or more MF6 simulations and/or non-MF6 models with FloPy
- check: evaluate simulation/model output
A test script conventionally contains one or more test cases, fed to the test function as idx, name
pairs. idx
can be used to index parameter values or expected results for a specific test case. The test case name
is useful for model/subdirectory naming, etc.
The framework will not run an unknown program. The path to any program under test (or used for a comparison) must be registered in the targets
dictionary. Keys are strings. See autotest/conftest.py
for the contents of targets
— naming follows the executables distribution.
The .run()
function
- builds simulations/models
- runs simulations/models
- compares simulation/model outputs
- checks outputs against expectations
A compare
parameter may be provided on initialization, which enables comparison of outputs against another program or the latest official release of MF6. The following values are supported:
- None: disables comparison — the test simply runs/evaluates any registered simulations/models without comparing results
- auto: attempt to detect the comparison type from the contents of the test workspace, otherwise skip the comparison
- mf6_regression: compare results against the latest official release rebuilt in develop mode
- mf6, mf2005, mfnwt, or mflgr: compare with results from the selected program — a corresponding model must be provided in build_models()
After running the reference and comparison models, the framework will try to find correspondingly named output files to compare — comparison logic may need adjustment when writing tests for new packages or models.
Run build_makefiles.py
in the distribution/
directory after adding, removing, or renaming source files. This script uses Pymake to regenerate makefiles. For instance:
cd distribution/
python build_makefiles.py
or using pixi:
pixi run build-makefiles
If the utilities located in the utils
directory (e.g., mf5to6
and zbud6
) are affected by changes to the modflow6 src/
directory (such as new or refactored source files), then the new module source file should also be added to the utility's utils/<util>/pymake/extrafiles.txt
file. This file informs Pymake of source files living outside the main source directory, so they can be included in generated makefiles.
Module dependencies for features still under development should be added to excludefiles.txt
. Source files listed in this file will be excluded from makefiles generated by Pymake. Makefiles should only include the source files needed to build officially released/supported features.
Makefile generation and usage can be tested from the distribution
directory by running the build_makefiles.py
script with Pytest:
pytest -v build_makefiles.py
Note: make
is required to test compiling MODFLOW 6 with makefiles. If make
is not discovered on the system path, compile tests will be skipped.
Makefiles may also be tested manually by changing to the appropriate make
subdirectory (of the project root for MODFLOW 6, or inside the corresponding utils
subdirectory for the zonebudget or converter utilities) and invoking make
(make clean
may first be necessary to remove previously created object files).
On Windows, it is recommended to generate and test makefiles from a Unix-like shell rather than PowerShell or Command Prompt. Make can be installed via Conda or Chocolatey. Alternatively, it is included with MinGW, which is also available from Chocolatey.
To use Conda from Git Bash on Windows, first run the conda.sh
script located in your Conda installation's /etc/profile.d
subdirectory. For instance, with Anaconda3:
. /c/Anaconda3/etc/profile.d/conda.sh
Or Miniconda3:
. /c/ProgramData/miniconda3/etc/profile.d/conda.sh
After this, conda
commands should be available.
This command may be added to a .bashrc
or .bash_profile
file in your home directory to permanently configure Git Bash for Conda.
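For example, assuming the Anaconda3 location shown above (the path is illustrative; substitute your Conda installation's actual profile.d path), the line can be appended once:

```shell
# Illustrative Conda profile script path; adjust to your installation.
CONDA_SH='. /c/Anaconda3/etc/profile.d/conda.sh'

# Append to ~/.bashrc only if not already present, so repeated runs are safe.
grep -qxF "$CONDA_SH" ~/.bashrc 2>/dev/null || echo "$CONDA_SH" >> ~/.bashrc
```

New Git Bash sessions will then source conda.sh automatically.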
This section documents MODFLOW 6 branching strategy and other VCS-related procedures.
This project follows git flow: development occurs on the develop
branch, while master
is reserved for the state of the latest release. Development PRs are typically squashed to develop
to avoid merge commits. At release time, release branches are merged to master
, and then master
is merged back into develop
.
When a feature branch takes a long time to develop, it can easily fall out of sync with the develop branch. Depending on the situation, it may be advisable to periodically squash the commits on the feature branch and rebase the change set onto develop. The following approach for updating a long-lived feature branch has proven robust.
In the example below, the feature branch is assumed to be called feat-xyz
.
Begin by creating a backup copy of the feature branch in case anything goes terribly wrong.
git checkout feat-xyz
git checkout -b feat-xyz-backup
git checkout feat-xyz
Next, consider squashing commits on the feature branch. If there are many commits, it is beneficial to squash them before trying to rebase with develop. There is a nice article on squashing commits into one using git, which has been very useful for consolidating commits on a long-lived modflow6 feature branch.
A quick and dirty way to squash without interactive rebase (as an alternative to the approach described in the article mentioned in the preceding paragraph) is a soft reset followed by an amended commit. First making a backup of the feature branch is strongly recommended before using this approach, as accidentally typing --hard
instead of --soft
will wipe out all your work.
git reset --soft <first new commit on the feature branch>
git commit --amend -m "consolidated commit message"
Once the commits on the feature branch have been consolidated, a force push to origin is recommended. This is not strictly required, but it can serve as an intermediate backup/checkpoint so the squashed branch state can be retrieved if rebasing fails. The following command will push feat-xyz
to origin.
git push origin feat-xyz --force
The --force
flag's short form is -f
.
Now that the commits on feat-xyz
have been consolidated, it is time to rebase onto develop. If feat-xyz
contains multiple commits that make changes, undo them, rename files, and/or move things around, then multiple sets of merge conflicts may need to be resolved as the rebase works through the commit change sets. This is why it is beneficial to squash the feature commits before rebasing onto develop.
To rebase with develop, make sure the feature branch is checked out and then type:
git rebase develop
If anything goes wrong during a rebase, run git rebase --abort to unwind it.
If there are merge conflicts, they will need to be resolved before going forward. Once any conflicts are resolved, it may be worthwhile to rebuild the MODFLOW 6 program and run the smoke tests to ensure nothing is broken.
At this point, you will want to force push the updated feature branch to origin using the same force push command as before.
git push origin feat-xyz --force
Lastly, if you are satisfied with the results and confident the procedure went well, then you can delete the backup that you created at the start.
git branch -d feat-xyz-backup
This process can be repeated periodically to stay in sync with the develop branch and keep a clean commit history.
To deprecate a MODFLOW 6 input/output option in a DFN file:
- Add a new deprecated x.y.z attribute to the appropriate variable in the package DFN file, where x.y.z is the version in which the deprecation is introduced. Mention the deprecation prominently in the release notes.
- If support for the deprecated option is removed (typically after at least 2 minor or major releases or 1 year), add a new removed x.y.z attribute to the variable in the DFN file, where x.y.z is the version in which support for the option was removed. The line containing deprecated x.y.z should not be deleted. Mention the removal prominently in the release notes.
- Deprecated/removed attributes are not removed from DFN files but remain in perpetuity. The doc/mf6io/mf6ivar/deprecations.py script generates a markdown deprecation table, which doc/ReleaseNotes/mk_deprecations.py converts to LaTeX for inclusion in the MODFLOW 6 release notes. Deprecations and removals should still be mentioned separately in the release notes, however.
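For illustration, a deprecated variable in a DFN file might look like the following (the variable attributes and version shown are hypothetical, not taken from an actual package definition):

```text
block options
name auxiliary
type string
shape (naux)
reader urword
optional true
deprecated 6.2.0
```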
To search for deprecations and removals in DFN files on a system with git
and standard Unix commands available:
git grep -E 'deprecated|removed' -- '*.dfn'