
Definitions

Application Programming Interface (API) -- A set of functions that allows the creation of applications that access a library's features.

Adaptive Software Quality Management -- An iterative process where software quality is observed, deficiencies are identified and corrected, and software quality is improved over time.

Continuous Integration (CI) -- A software development practice where code is automatically built and tested continuously throughout the development process.

Component Testing -- Testing performed on individual software components before they are integrated with other components.

Exploratory Testing -- Studying code and simultaneously reasoning about its design and operation while developing test cases to illustrate deficiencies.

Regression Testing -- Testing designed to confirm that the most recent code changes do not adversely impact existing software functionality.

Software Lifecycle -- A phased, repeating cycle in which software is designed, developed, released, operated in production, improved, and eventually retired.

Software Quality -- The degree to which the software meets requirements and is fit for its intended purpose.

Unit Testing -- Fine-grained testing performed at the software unit level. A unit can be thought of as a small, testable piece of source code, usually an individual function.

Introduction

Historically, EPANET quality control has been ad hoc and has occurred in private. Because EPANET development was closed source and performed by one individual, quality assurance standard operating procedures (SOPs) were informal, undocumented, and not communicated. The team developing EPANET is growing to encompass multiple organizations and individuals both inside and outside the Agency. Software development techniques and the development team itself are evolving. It is therefore necessary for EPANET's software testing systems to evolve to meet the new demands placed on the project. The sections that follow describe the major components of the testing system.

Component and Unit Testing

Component and unit tests are source files that are compiled into individual executables that automatically perform many rigorous checks on the EPANET Toolkit and Output libraries. They require the Boost libraries to build and run. The component tests check the functionality of the EPANET Toolkit and Output APIs. There is also a small number of unit tests that target individual software objects; this collection will be expanded going forward. The project is configured to register tests with a test runner as they are built, making them more convenient to execute.

Both the component and unit tests have been written using the Boost Unit Test Framework, and other Boost libraries -- including Boost System, Thread, and File System -- are used to perform various checks. The Boost libraries are high quality, freely available, open source, and fully documented.

Boost Download: https://www.boost.org/users/download/

Boost Documentation: https://www.boost.org/doc/libs/
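
As a minimal sketch, a component test file built with the Boost Unit Test Framework looks roughly like the following. The file name, module name, and assertion are illustrative rather than copied from the repository, and the Toolkit header name is an assumption; EN_getversion is the Toolkit call used here.

// test_version.cpp -- illustrative sketch, not an actual project file
#define BOOST_TEST_MODULE version
#include <boost/test/unit_test.hpp>

#include "epanet2_2.h"   // assumed name of the Toolkit API header

BOOST_AUTO_TEST_CASE(test_getversion)
{
    int version = 0;
    int error = EN_getversion(&version);   // 0 indicates success

    BOOST_REQUIRE(error == 0);
    BOOST_CHECK(version > 0);
}

When built with the test option enabled, a file like this compiles into its own executable and is registered with the test runner automatically.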

Organization

Component and unit tests can be found in the epanet/tests folder along with any data they require to run. As is standard practice, the tests folder mimics the hierarchy used to organize the source code, which eases navigation. The test files are written in C++11 and are prefixed with the word "test" followed by the name of the source file or another designation of the organizational unit under test.
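
For illustration, the layout looks roughly like this (a sketch only; the folder roles are described in detail below):

tests/
    outfile/   component tests for the epanet-output library
    shared/    unit tests for shared software objects
    solver/    component tests for the Toolkit API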

The tests/outfile folder contains the component tests for the epanet-output library. The tests/shared folder contains the unit tests for individual software objects that will eventually be shared between the epanet-output and epanet-solver libraries. The tests/solver folder contains the component tests for the Toolkit API, with one test file for each category of Toolkit API functions. Each file is organized as a test suite containing individual test cases. Because linking tests against the Boost Unit Test Framework is expensive, the build system is configured to combine the Toolkit test suites into one test executable, which minimizes build time. The tests folder also contains test_net_builder.cpp and test_reent.cpp, which test the Toolkit API's model element creation functions and its thread reentrancy feature, respectively.
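
Because the per-category suites are linked into a single executable, only one translation unit defines BOOST_TEST_MODULE (which generates the test main); every other file simply contributes a suite. A rough sketch of one such file, with hypothetical suite and case names:

// test_hydraulics.cpp -- hypothetical example of a per-category suite
#include <boost/test/unit_test.hpp>   // BOOST_TEST_MODULE is defined in one other file

BOOST_AUTO_TEST_SUITE(hydraulics)

BOOST_AUTO_TEST_CASE(solver_runs_cleanly)
{
    // ... open a project, run an analysis, and check error codes here ...
    BOOST_CHECK(true);   // placeholder assertion
}

BOOST_AUTO_TEST_SUITE_END()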

Build and Execution

The CMake build system for the EPANET project has been configured to be modular. Each individual test executable is a target with its own build configuration file, with the exception of the net builder and reentrancy tests, whose build configuration is found in tests/CMakeLists.txt. The build system provides an option for building tests. When it is enabled (-DBUILD_TESTS=ON), the required Boost libraries (v1.67.0) are located, and the test executables are built and registered with the CTest test runner. The default value for the option is off (-DBUILD_TESTS=OFF).
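
For example, the underlying build and test steps look roughly like this -- a sketch assuming an out-of-source build directory named build; the generator, paths, and configuration will vary by environment:

\>cd epanet
\epanet>mkdir build && cd build
\epanet\build>cmake -DBUILD_TESTS=ON ..
\epanet\build>cmake --build . --config Debug
\epanet\build>ctest -C Debug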

The make.cmd helper script has been written to perform these build steps and execute the tests locally using the test runner. Running it requires that CMake, Visual Studio Build Tools, and Boost be installed. Once the development environment has been configured, the following command builds the project's component and unit tests in the Debug configuration and executes them.

\> cd epanet
\epanet>scripts\make.cmd /t

Regression Testing

Sometimes when new features are developed or bugs are fixed, new bugs get inadvertently introduced. When this occurs, it is referred to as a "regression." Regression testing helps alert developers when their current work has affected the existing functionality of an application. Our current approach is to run a suite of EPANET input files with known "good" results and compare the outputs to detect differences.

The EPANET project's regression testing framework requires a Python environment and several custom Python packages, the most important of which is nrtest. nrtest is an open source package for performing regression testing on scientific software and was designed for flexibility; it uses a simple plugin interface for comparison operations, making it easy to adapt for testing different scientific software packages. It is well designed, written, and documented; however, it is no longer actively maintained by its developer. Our project has relied on it for several years and has found it to be dependable. In that time we have been in contact with the developer, and he has added minor features and fixed bugs at our request.

nrtest download: https://pypi.org/project/nrtest/

nrtest documentation: https://nrtest.readthedocs.io/en/latest/?badge=latest

Regression testing requires two additional Python packages -- nrtest-epanet and epanet.output. nrtest-epanet is an nrtest comparison operator extension written for EPANET. It depends on epanet.output, a thin Python wrapper for the epanet-output library, to read values from EPANET's custom binary output file format. nrtest-epanet is essentially an iterator that reads the results section of the output file in the exact order in which results are written during an EPANET simulation. The epanet-output library was written in C to reduce the time associated with file IO operations, a design that makes nrtest-epanet efficient at reading large binary files.

Regression testing currently requires 64-bit Python 3.6. Once Python has been installed, it is easy to configure it for regression testing using the pip package installer and the requirements file found in the epanet/scripts folder.

\>cd epanet
\epanet>pip install -r scripts\requirements-appveyor.txt

Organization

The suite of EPANET input files is stored in a repository separate from the main EPANET project, named epanet-nrtestsuite, to reduce clutter while maintaining configuration management of the test files. Within this repository, "releases" are used to hold benchmark archives containing the known "good" results. Versioning and build metadata are used to keep everything organized.

EPANET builds can potentially occur on multiple platforms -- Windows 32 and 64 bit, Linux, and macOS -- and benchmarks are currently platform specific. Software under active development is constantly changing, so the git commit hash is used to uniquely identify the version of the software being built. Lastly, any particular version of the software can be built many times with different build settings, so a build identifier is also useful. Therefore, three pieces of metadata are needed to uniquely identify a build: 1) the build platform, 2) the commit hash, and 3) a build identifier. The regression testing framework uses these three pieces of build metadata to uniquely identify a benchmark. This data is stored in the manifest file found in each benchmark archive. Release tags are used to access benchmarks, and each time a benchmark is changed or updated a new release should be created. This way, the latest release tag can be used to easily retrieve the latest benchmark.

Local Execution

Running regression tests locally is a three step process. The first step builds runepanet.exe -- the software under test (SUT). The second step prepares the shell environment and stages the test and benchmark files. The third step runs the nrtest execute and compare commands. Helper scripts have been written to perform each of these steps, making it easy for developers to run regression tests locally on Windows.

\> cd epanet
\epanet>scripts\make.cmd
\epanet>scripts\before-nrtests.cmd
\epanet>scripts\run-nrtests.cmd
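
Under the hood, the final script invokes nrtest roughly as follows. This is a sketch only: the app configuration, test configuration, and benchmark paths are hypothetical, and the tolerance values are illustrative.

\epanet>nrtest execute apps\epanet-sut.json tests\examples.json -o benchmark\sut
\epanet>nrtest compare benchmark\sut benchmark\ref --rtol 0.01 --atol 0.0001

The execute command runs the SUT against each test and writes a new benchmark; the compare command then checks it against the reference benchmark within the given tolerances.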

Continuous Integration

Regression testing is even more useful when run under Continuous Integration (CI) linked with the code repository; this way, code can be checked for regressions during pull request review, prior to being merged. Under CI, the build and tests execute automatically on a remote build worker. AppVeyor is a third party CI provider that integrates easily with GitHub. For the EPANET project, AppVeyor has been configured to save a receipt containing the test results when the regression tests pass, or the SUT's benchmark archive when they fail. This is useful for maintaining QA/QC records, inspecting results, debugging, and keeping rolling development benchmarks up to date.

Maintenance SOPs

See Wiki/Regression Testing SOPs