Ideas for testing in FOQUS

Ideas and notes on improving and extending tests in FOQUS

Introduction

Prompted by discussion during and after the unofficial "Intro to testing in FOQUS" session in Dec 2020, this page collects various notes and ideas on possible ways to improve and extend tests in FOQUS.

Everyone is welcome to add, modify, comment on, or otherwise contribute to these.

Ideas

A. Developing test plans from existing workflows

  • Collect and standardize the test workflows that are currently in use (e.g. as part of the testing done for a new release) into test plans.
    • A test plan is a high-level, overarching document describing all the types of tests that can be used to test a codebase: automated or manual, unit, integration, system, or otherwise
    • The core idea is to create test routines that, even though they are not automated, are well-defined and easy to share and refer to within the development team
      • e.g. "Test B3 fails on Windows with the latest release candidate"
    • In addition to being valuable for the testing process itself, a test plan can indirectly help improve the actual test coverage by making it considerably easier to develop unit tests, using the plan as a reference
    • Typically, a test plan includes the following components, which conceptually serve the same function as their counterparts in a unit test (see the sketch after this list):
      • A general description of the functionality being tested (i.e., what the test covers)
      • The setup, i.e. the steps needed to recreate the test environment, including (if applicable) any inputs required
      • The test itself, i.e. the steps needed to perform the functionality being tested
      • The assertions, i.e. a description of the environment after the test, including, if applicable, any outputs produced
      • The teardown, i.e. the steps to perform (if any) to restore the environment to the state it was in before the test (e.g. clean up temporary files, close/restart the GUI)
    • Optionally, it can also include additional information such as the estimated time needed to run the test
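
To make the correspondence concrete, here is a minimal pytest sketch of how the setup, test, assertions, and teardown components could map onto an automated test; the calculation being exercised is a hypothetical stand-in, not an actual FOQUS function.

```python
import pytest


@pytest.fixture
def scratch_dir(tmp_path):
    # Setup: recreate the test environment (a clean working directory with a known input file)
    (tmp_path / "input.txt").write_text("x = 2\n")
    yield tmp_path
    # Teardown: pytest removes tmp_path on its own; any extra cleanup steps would go here


def run_example_calculation(workdir):
    # Hypothetical stand-in for the functionality under test (replace with a real FOQUS call)
    value = int((workdir / "input.txt").read_text().split("=")[1])
    return {"status": "success", "result": value ** 2}


def test_example_calculation(scratch_dir):
    # Test: the steps needed to exercise the functionality being tested
    results = run_example_calculation(scratch_dir)
    # Assertions: the expected state/outputs after the test has run
    assert results["status"] == "success"
    assert results["result"] == 4
```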

B. Identifying critical parts of the codebase

  • Inspect the codebase and identify the portions of the code that are most critical for the core functionality or the validity of the results
    • This is valuable for increasing the test coverage in at least two different ways:
      • Parts of the code classified as highly important can be given priority when developing new unit tests; by "targeting" critical portions of the codebase, test coverage increases more efficiently for a given amount of developer effort
      • Performing a review of existing code can help "unearth" parts of the codebase that are rarely used or have been superseded by newer functionality; if these are removed, the codebase will be smaller and the test coverage will increase
    • If applicable, information from a test coverage report can be used to spot portions of the code that are not covered by unit tests; see the sketch below
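
As a rough sketch, a coverage report could be produced programmatically with coverage.py as shown below (in practice this is more often run from the command line, e.g. through the pytest-cov plugin); the foqus_lib source path is an assumption and should be adjusted to the actual package layout.

```python
import coverage
import pytest

# Measure which lines of the package are executed by the test suite and
# print a report highlighting the lines never covered by any test.
# The "foqus_lib" source path is an assumption, not a prescription.
cov = coverage.Coverage(source=["foqus_lib"])
cov.start()
pytest.main(["-q"])  # run the existing test suite under coverage measurement
cov.stop()
cov.save()
cov.report(show_missing=True)
```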

C. Documenting properties of core models/algorithms

  • Gather information on input and output data that can be used to validate the results of calculations programmatically
    • The main idea is to document the expected behavior of both the algorithms and their implementation
    • This can improve the coverage both directly and indirectly:
      • Directly, by providing a list of known inputs and outputs that can be used in unit tests
      • Indirectly, by extending our tests in a way that complements unit tests, i.e. making it possible to catch errors arising from cases where the code works as intended, but the algorithm doesn't
    • This includes, but is not limited to, the following (see the sketch after this list):
      • Valid ranges of input parameters
      • Statistical properties or invariants of the output
      • Inputs that are invalid or known to result in errors, and a description of the expected behavior
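
As a minimal sketch of how such documented properties could translate into unit tests, consider the following; the scaling routine is a hypothetical stand-in for a documented FOQUS calculation, used only to illustrate range checks, output invariants, and expected-error behavior.

```python
import math

import pytest


def scale_to_unit_interval(values, lower, upper):
    # Hypothetical stand-in for a documented calculation (replace with the real routine)
    if upper <= lower:
        raise ValueError("upper bound must be greater than lower bound")
    return [(v - lower) / (upper - lower) for v in values]


def test_output_invariant_holds_for_valid_inputs():
    # Documented invariant of the output: all scaled values lie in [0, 1],
    # and the extremes of the input map to the endpoints of the interval
    scaled = scale_to_unit_interval([1.0, 2.5, 4.0], lower=1.0, upper=4.0)
    assert all(0.0 <= s <= 1.0 for s in scaled)
    assert math.isclose(min(scaled), 0.0, abs_tol=1e-12)
    assert math.isclose(max(scaled), 1.0)


def test_invalid_input_raises_documented_error():
    # Inputs known to be invalid, together with the documented expected behavior
    with pytest.raises(ValueError):
        scale_to_unit_interval([1.0, 2.0], lower=4.0, upper=1.0)
```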