CTest: split checksum analysis from test analysis, expose arguments #5456

Status: Open. Wants to merge 21 commits into base: development. Changes from 15 commits.
39 changes: 30 additions & 9 deletions Docs/source/developers/testing.rst
@@ -140,8 +140,8 @@ A new test can be added by adding a corresponding entry in ``CMakeLists.txt`` as
1 # dims
2 # nprocs
inputs_test_1d_laser_acceleration # inputs
analysis.py # analysis
diags/diag1000100 # output (plotfile)
"analysis.py diags/diag1000100" # analysis
"analysis_default_regression.py --path diags/diag1000100 --plotfile" # checksum
OFF # dependency
)

@@ -154,8 +154,8 @@ A new test can be added by adding a corresponding entry in ``CMakeLists.txt`` as
2 # dims
2 # nprocs
inputs_test_2d_laser_acceleration_picmi.py # inputs
analysis.py # analysis
diags/diag1000100 # output (plotfile)
"analysis.py diags/diag1000100" # analysis
"analysis_default_regression.py --path diags/diag1000100 --plotfile" # checksum
OFF # dependency
)

@@ -168,14 +168,14 @@ A new test can be added by adding a corresponding entry in ``CMakeLists.txt`` as
3 # dims
2 # nprocs
inputs_test_3d_laser_acceleration_restart # inputs
analysis_default_restart.py # analysis
diags/diag1000100 # output (plotfile)
"analysis_default_restart.py diags/diag1000100" # analysis
"analysis_default_regression.py --path diags/diag1000100 --plotfile" # checksum
test_3d_laser_acceleration # dependency
)

Note that the restart test has an explicit dependency: it can run only after the original test, from which the restart checkpoint files are read, has completed.

* A more complex example. Add the **PICMI test** ``test_rz_laser_acceleration_picmi``, with custom command-line arguments ``--test`` and ``dir``, and openPMD time series output:
* A more complex example. Add the **PICMI test** ``test_rz_laser_acceleration_picmi``, with custom command-line arguments ``--test`` and ``dir``, openPMD time series output, and custom command-line arguments for the checksum comparison:

.. code-block:: cmake

@@ -184,8 +184,8 @@ A new test can be added by adding a corresponding entry in ``CMakeLists.txt`` as
RZ # dims
2 # nprocs
"inputs_test_rz_laser_acceleration_picmi.py --test --dir 1" # inputs
analysis.py # analysis
diags/diag1/ # output (openPMD time series)
"analysis.py diags/diag1/" # analysis
"analysis_default_regression.py --path diags/diag1/ --openpmd --skip-particles --rtol 1e-7" # checksum
OFF # dependency
)

@@ -196,6 +196,27 @@ The shared input parameters can be collected in a "base" input file that can be

If the new test is added in a new directory that did not exist before, please add the name of that directory with the command ``add_subdirectory`` in `Physics_applications/CMakeLists.txt <https://github.com/ECP-WarpX/WarpX/tree/development/Examples/Physics_applications/CMakeLists.txt>`__ or `Tests/CMakeLists.txt <https://github.com/ECP-WarpX/WarpX/tree/development/Examples/Tests/CMakeLists.txt>`__, depending on where the new test directory is located.

If not already present, the default regression analysis script ``analysis_default_regression.py`` used in the examples above must be linked from `Examples/analysis_default_regression.py <https://github.com/ECP-WarpX/WarpX/blob/development/Examples/analysis_default_regression.py>`__ by executing the following command once from the test directory:

.. code-block:: bash

ln -s ../../analysis_default_regression.py analysis_default_regression.py

Here is the help message of the default regression analysis script, including usage and the list of available options and arguments:

.. code-block:: bash

usage: analysis_default_regression.py [-h] [--path PATH] (--plotfile | --openpmd) [--rtol RTOL] [--skip-fields]
[--skip-particles]
options:
-h, --help show this help message and exit
--path PATH path to output file(s)
--plotfile output format is plotfile
--openpmd output format is openPMD
--rtol RTOL relative tolerance to compare checksums
--skip-fields skip fields when comparing checksums
--skip-particles skip particles when comparing checksums

Member:

Can automatic discovery be used here instead of relying on the arguments? The checksum script could determine whether the output is a plotfile or openPMD. Also, the checksum would presumably always be done on the last diagnostic file, which could be found automatically. Then the path here would only be the top-level diag directory.

Member Author:

Yes, automating some of this was something I was hoping to do as well. Would you have a suggestion on how to automatically distinguish between plotfile and openPMD output?

Member:

One way would be to check for the Header file, which would signify a plotfile diagnostic.

Member Author (@EZoni, Dec 14, 2024):

I pushed something to see if this works:

    # set args.format automatically
    try:
        yt.load(args.path)
    except Exception:
        try:
            OpenPMDTimeSeries(args.path)
        except Exception:
            print("Could not open the file as a plotfile or an openPMD time series")
        else:
            args.format = "openpmd"
    else:
        args.format = "plotfile"

Member:

This looks good.

Naming conventions for automated tests
--------------------------------------

73 changes: 59 additions & 14 deletions Examples/CMakeLists.txt
@@ -21,8 +21,8 @@ endif()
# dims: 1,2,RZ,3
# nprocs: 1 or 2 (maybe refactor later on to just depend on WarpX_MPI)
# inputs: inputs file or PICMI script, WarpX_MPI decides w/ or w/o MPI
# analysis: analysis script, always run without MPI
# output: output file(s) to analyze
# analysis: custom test analysis command, always run without MPI
# checksum: default regression analysis command (checksum benchmark)
# dependency: name of base test that must run first
#
function(add_warpx_test
@@ -31,7 +31,7 @@ function(add_warpx_test
nprocs
inputs
analysis
output
checksum
dependency
)
# cannot run MPI tests w/o MPI build
@@ -72,14 +72,25 @@ function(add_warpx_test
separate_arguments(ANALYSIS_LIST UNIX_COMMAND "${analysis}")
list(GET ANALYSIS_LIST 0 ANALYSIS_FILE)
cmake_path(SET ANALYSIS_FILE "${CMAKE_CURRENT_SOURCE_DIR}/${ANALYSIS_FILE}")
# TODO Enable lines below to handle command-line arguments
#list(LENGTH ANALYSIS_LIST ANALYSIS_LIST_LENGTH)
#if(ANALYSIS_LIST_LENGTH GREATER 1)
# list(SUBLIST ANALYSIS_LIST 1 -1 ANALYSIS_ARGS)
# list(JOIN ANALYSIS_ARGS " " ANALYSIS_ARGS)
#else()
# set(ANALYSIS_ARGS "")
#endif()
list(LENGTH ANALYSIS_LIST ANALYSIS_LIST_LENGTH)
if(ANALYSIS_LIST_LENGTH GREATER 1)
list(SUBLIST ANALYSIS_LIST 1 -1 ANALYSIS_ARGS)
list(JOIN ANALYSIS_ARGS " " ANALYSIS_ARGS)
else()
set(ANALYSIS_ARGS "")
endif()

# get checksum script and optional command-line arguments
separate_arguments(CHECKSUM_LIST UNIX_COMMAND "${checksum}")
list(GET CHECKSUM_LIST 0 CHECKSUM_FILE)
Member:

I'm thinking about ways to make this simpler. Instead of having a soft link in each place where the default analysis script is used, could this check whether CHECKSUM_FILE is equal to analysis_default_regression.py and substitute the appropriate direct path? Perhaps use some keyword like "DEFAULT" instead of the script name?
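
For instance, a rough (untested) sketch of that check, assuming the shared script stays at Examples/analysis_default_regression.py:

    # untested sketch: map the default script name (or a "DEFAULT" keyword)
    # to the shared copy under Examples/, otherwise keep the per-test path
    if(CHECKSUM_FILE STREQUAL "analysis_default_regression.py" OR CHECKSUM_FILE STREQUAL "DEFAULT")
        set(CHECKSUM_FILE "${WarpX_SOURCE_DIR}/Examples/analysis_default_regression.py")
    else()
        cmake_path(SET CHECKSUM_FILE "${CMAKE_CURRENT_SOURCE_DIR}/${CHECKSUM_FILE}")
    endif()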

Member Author:

I guess that would be nice, but when Axel and I set up ctest in the first place we did not manage to make it work with direct paths (there were several attempts, and then we turned to soft links). In particular, I think Axel found issues with running ctest from within an IDE. We could revisit this once he is back, but I would not try it here.

Member:

Thinking further, instead of writing out the full command to do the benchmark check, could the arguments to the analysis script be added as arguments to this function? In most cases, default values could be used, so most CMakeLists.txt files wouldn't need any extra arguments (especially with the automatic discovery suggested above). Then rtol, skip-fields, and skip-particles could be added via variable argument pairs, like this:

    rtol 1.e-5
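
i.e., a test entry could hypothetically look like this (sketch only, not the current interface; values taken from the existing test_2d_background_mcc_picmi entry):

    # hypothetical interface, for illustration only
    add_warpx_test(
        test_2d_background_mcc_picmi            # name
        2                                       # dims
        2                                       # nprocs
        inputs_test_2d_background_mcc_picmi.py  # inputs
        OFF                                     # analysis
        diags/diag1000050                       # checksum output path
        OFF                                     # dependency
        rtol 5e-3                               # optional keyword-style pair
    )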

Member Author (@EZoni, Dec 13, 2024):

I think this was another design choice that we made with Axel initially.

We wanted to keep the number of arguments of add_warpx_test as small as possible, especially given that we cannot use keywords as in Python (e.g., add_warpx_test(..., rtol=1e-6, ...)).

Another issue I see is that we would be treating the arguments to the custom analysis script (which cannot be predicted in advance) differently from the arguments to the default analysis script. It might be confusing to see an rtol argument and not know for sure whether it refers to the custom analysis script or to the default one, given that the interface of the custom analysis script is up to the developer who added it.

If you feel strongly about this, I would recommend that we wait until Axel is back and discuss with him further.

Member Author (@EZoni, Dec 13, 2024):

Another comment is that we wanted to display commands that developers can copy and paste to try things locally whenever needed (e.g., for debugging purposes). This came out of discussions with @aeriforme, and I think it is valuable in general, as opposed to passing arguments to add_warpx_test when they are actually arguments read and used by the analysis scripts.

Member:

I don't feel strongly about this, just brainstorming ideas. I see that having the command to do the benchmark check written out is convenient for debugging.

BTW, you can imitate keyword arguments, though it's a bit kludgy. You can use variable arguments and scan through them. If one has the value of an argument name, then take the next argument as the value.
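
CMake's built-in cmake_parse_arguments can do exactly that scanning; a small, untested sketch (the function name here is made up for illustration):

    # untested sketch: parse trailing keyword-style arguments, e.g. RTOL 1e-5 SKIP_PARTICLES
    function(checksum_keyword_demo)
        # flags: SKIP_FIELDS, SKIP_PARTICLES; single-value keyword: RTOL
        cmake_parse_arguments(CHK "SKIP_FIELDS;SKIP_PARTICLES" "RTOL" "" ${ARGN})
        set(extra_args "")
        if(DEFINED CHK_RTOL)
            string(APPEND extra_args " --rtol ${CHK_RTOL}")
        endif()
        if(CHK_SKIP_FIELDS)
            string(APPEND extra_args " --skip-fields")
        endif()
        if(CHK_SKIP_PARTICLES)
            string(APPEND extra_args " --skip-particles")
        endif()
        message(STATUS "extra checksum args:${extra_args}")
    endfunction()

    checksum_keyword_demo(RTOL 1e-5 SKIP_PARTICLES)  # -> extra checksum args: --rtol 1e-5 --skip-particles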

cmake_path(SET CHECKSUM_FILE "${CMAKE_CURRENT_SOURCE_DIR}/${CHECKSUM_FILE}")
list(LENGTH CHECKSUM_LIST CHECKSUM_LIST_LENGTH)
if(CHECKSUM_LIST_LENGTH GREATER 1)
list(SUBLIST CHECKSUM_LIST 1 -1 CHECKSUM_ARGS)
list(JOIN CHECKSUM_ARGS " " CHECKSUM_ARGS)
else()
set(CHECKSUM_ARGS "")
endif()

# Python test?
set(python OFF)
@@ -175,25 +186,52 @@ function(add_warpx_test

# test analysis
if(analysis)
# for argparse, do not pass command-line arguments as one quoted string
separate_arguments(ANALYSIS_ARGS UNIX_COMMAND "${ANALYSIS_ARGS}")
add_test(
NAME ${name}.analysis
COMMAND
${THIS_Python_SCRIPT_EXE} ${ANALYSIS_FILE}
${output}
${THIS_Python_SCRIPT_EXE}
${ANALYSIS_FILE}
${ANALYSIS_ARGS}
WORKING_DIRECTORY ${THIS_WORKING_DIR}
)
# test analysis depends on test run
set_property(TEST ${name}.analysis APPEND PROPERTY DEPENDS "${name}.run")
# FIXME Use helper function to handle Windows exceptions
set(PYTHONPATH "$ENV{PYTHONPATH}:${CMAKE_PYTHON_OUTPUT_DIRECTORY}")
# add paths for custom Python modules
set(PYTHONPATH "${PYTHONPATH}:${WarpX_SOURCE_DIR}/Regression/Checksum")
set(PYTHONPATH "${PYTHONPATH}:${WarpX_SOURCE_DIR}/Regression/PostProcessingUtils")
set(PYTHONPATH "${PYTHONPATH}:${WarpX_SOURCE_DIR}/Tools/Parser")
set(PYTHONPATH "${PYTHONPATH}:${WarpX_SOURCE_DIR}/Tools/PostProcessing")
set_property(TEST ${name}.analysis APPEND PROPERTY ENVIRONMENT "PYTHONPATH=${PYTHONPATH}")
endif()

# checksum analysis
if(checksum)
# for argparse, do not pass command-line arguments as one quoted string
separate_arguments(CHECKSUM_ARGS UNIX_COMMAND "${CHECKSUM_ARGS}")
add_test(
NAME ${name}.checksum
COMMAND
${THIS_Python_SCRIPT_EXE}
${CHECKSUM_FILE}
${CHECKSUM_ARGS}
WORKING_DIRECTORY ${THIS_WORKING_DIR}
)
# checksum analysis depends on test run
set_property(TEST ${name}.checksum APPEND PROPERTY DEPENDS "${name}.run")
if(analysis)
# checksum analysis depends on test analysis
set_property(TEST ${name}.checksum APPEND PROPERTY DEPENDS "${name}.analysis")
endif()
# FIXME Use helper function to handle Windows exceptions
set(PYTHONPATH "$ENV{PYTHONPATH}:${CMAKE_PYTHON_OUTPUT_DIRECTORY}")
# add paths for custom Python modules
set(PYTHONPATH "${PYTHONPATH}:${WarpX_SOURCE_DIR}/Regression/Checksum")
set_property(TEST ${name}.checksum APPEND PROPERTY ENVIRONMENT "PYTHONPATH=${PYTHONPATH}")
endif()

# CI: remove test directory after run
if(WarpX_TEST_CLEANUP)
add_test(
@@ -206,6 +244,10 @@ function(add_warpx_test
# test cleanup depends on test analysis
set_property(TEST ${name}.cleanup APPEND PROPERTY DEPENDS "${name}.analysis")
endif()
if(checksum)
# test cleanup depends on checksum analysis
set_property(TEST ${name}.cleanup APPEND PROPERTY DEPENDS "${name}.checksum")
endif()
endif()

# Do we depend on another test?
@@ -215,6 +257,9 @@ function(add_warpx_test
if(analysis)
set_property(TEST ${name}.run APPEND PROPERTY DEPENDS "${dependency}.analysis")
endif()
if(checksum)
set_property(TEST ${name}.run APPEND PROPERTY DEPENDS "${dependency}.checksum")
endif()
if(WarpX_TEST_CLEANUP)
# do not clean up dependency test before current test is completed
set_property(TEST ${dependency}.cleanup APPEND PROPERTY DEPENDS "${name}.cleanup")
@@ -6,8 +6,8 @@ add_warpx_test(
3 # dims
2 # nprocs
inputs_test_3d_beam_beam_collision # inputs
analysis_default_openpmd_regression.py # analysis
diags/diag1/ # output
OFF # analysis
"analysis_default_regression.py --path diags/diag1/" # checksum
OFF # dependency
)
label_warpx_test(test_3d_beam_beam_collision slow)

This file was deleted.

24 changes: 12 additions & 12 deletions Examples/Physics_applications/capacitive_discharge/CMakeLists.txt
@@ -6,8 +6,8 @@ add_warpx_test(
1 # dims
2 # nprocs
"inputs_base_1d_picmi.py --test --pythonsolver" # inputs
analysis_1d.py # analysis
diags/diag1000050 # output
"analysis_1d.py" # analysis
"analysis_default_regression.py --path diags/diag1000050" # checksum
OFF # dependency
)

@@ -16,8 +16,8 @@ add_warpx_test(
1 # dims
2 # nprocs
"inputs_base_1d_picmi.py --test --dsmc" # inputs
analysis_dsmc.py # analysis
diags/diag1000050 # output
"analysis_dsmc.py" # analysis
"analysis_default_regression.py --path diags/diag1000050" # checksum
OFF # dependency
)

@@ -26,19 +26,19 @@ add_warpx_test(
2 # dims
2 # nprocs
inputs_test_2d_background_mcc # inputs
analysis_default_regression.py # analysis
diags/diag1000050 # output
OFF # analysis
"analysis_default_regression.py --path diags/diag1000050" # checksum
OFF # dependency
)

# FIXME: can we make this a single precision for now?
# FIXME: can we make this single precision for now?
#add_warpx_test(
# test_2d_background_mcc_dp_psp # name
# 2 # dims
# 2 # nprocs
## inputs_test_2d_background_mcc_dp_psp # inputs
# analysis_default_regression.py # analysis
# diags/diag1000050 # output
# inputs_test_2d_background_mcc_dp_psp # inputs
# OFF # analysis
# "analysis_default_regression.py --path diags/diag1000050" # checksum
# OFF # dependency
#)

@@ -47,7 +47,7 @@ add_warpx_test(
2 # dims
2 # nprocs
inputs_test_2d_background_mcc_picmi.py # inputs
analysis_2d.py # analysis
diags/diag1000050 # output
OFF # analysis
"analysis_default_regression.py --path diags/diag1000050 --rtol 5e-3" # checksum
OFF # dependency
)
12 changes: 0 additions & 12 deletions Examples/Physics_applications/capacitive_discharge/analysis_1d.py
@@ -2,14 +2,8 @@

# Copyright 2022 Modern Electron, David Grote

import os
import sys

import numpy as np

sys.path.insert(1, "../../../../warpx/Regression/Checksum/")
from checksumAPI import evaluate_checksum

# fmt: off
ref_density = np.array([
1.27989677e+14, 2.23601330e+14, 2.55400265e+14, 2.55664972e+14,
@@ -51,9 +45,3 @@
density_data = np.load("ion_density_case_1.npy")
print(repr(density_data))
assert np.allclose(density_data, ref_density)

# compare checksums
evaluate_checksum(
test_name=os.path.split(os.getcwd())[1],
output_file=sys.argv[1],
)
21 changes: 0 additions & 21 deletions Examples/Physics_applications/capacitive_discharge/analysis_2d.py

This file was deleted.

@@ -2,14 +2,8 @@

# 2023 TAE Technologies

import os
import sys

import numpy as np

sys.path.insert(1, "../../../../warpx/Regression/Checksum/")
from checksumAPI import evaluate_checksum

# fmt: off
ref_density = np.array([
1.27942709e+14, 2.23579371e+14, 2.55384387e+14, 2.55660663e+14,
@@ -51,9 +45,3 @@
density_data = np.load("ion_density_case_1.npy")
print(repr(density_data))
assert np.allclose(density_data, ref_density)

# compare checksums
evaluate_checksum(
test_name=os.path.split(os.getcwd())[1],
output_file=sys.argv[1],
)
@@ -6,7 +6,7 @@ add_warpx_test(
1 # dims
2 # nprocs
inputs_test_1d_fel # inputs
analysis_fel.py # analysis
diags/diag_labframe # output
"analysis_fel.py diags/diag_labframe" # analysis
"analysis_default_regression.py --path diags/diag_labframe" # checksum
OFF # dependency
)
11 changes: 0 additions & 11 deletions Examples/Physics_applications/free_electron_laser/analysis_fel.py
@@ -17,16 +17,12 @@
lab-frame diagnostics and boosted-frame diagnostics.
"""

import os
import sys

import numpy as np
from openpmd_viewer import OpenPMDTimeSeries
from scipy.constants import c, e, m_e

sys.path.insert(1, "../../../../warpx/Regression/Checksum/")
from checksumAPI import evaluate_checksum

# Physical parameters of the test
gamma_bunch = 100.6
Bu = 0.5
@@ -136,10 +132,3 @@ def extract_peak_E_boost(iteration):
lambda_radiation_lab = lambda_radiation_boost / (2 * gamma_boost)
lambda_expected = lambda_u / (2 * gamma_boost**2)
assert abs(lambda_radiation_lab - lambda_expected) / lambda_expected < 0.01

# compare checksums
evaluate_checksum(
test_name=os.path.split(os.getcwd())[1],
output_file=sys.argv[1],
output_format="openpmd",
)