| Author:  | Edmond La Chance, Vincent Porta-Scarta, Sylvain Hallé |
| Contact: | edmond.la-chance@uqac.ca |
| Version: | 3.1 |
| Date:    | 2021-02-16 |
This project contains a benchmark aimed at comparing multiple combinatorial test generation tools on the same set of problem instances. In a nutshell, combinatorial test case generation consists of:
- a number n of parameters, denoted p1, ..., pn
- for each parameter, a set of predefined values it can take; the problem is called uniform if all parameters have the same number v of possible values
- an interaction strength t
A test case assigns a value to each parameter; a test suite is a set of test cases. The goal of combinatorial test generation is to produce the smallest test suite such that, for any set of t parameters, every combination of their values is present in at least one test case.
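For illustration, consider a uniform problem with n = 3 Boolean parameters (v = 2) and strength t = 2: any two parameters already have v^t = 4 value combinations, so no pairwise-covering suite can contain fewer than 4 tests. The following minimal Java sketch (not part of the benchmark; the class name and the example suite are purely illustrative) checks whether a given suite covers all pairwise interactions, and confirms that the 4-test suite {000, 011, 101, 110} reaches this bound.

    import java.util.*;

    public class PairwiseCoverageCheck {

        // Returns true if, for every pair of parameters, every combination of
        // their values appears in at least one test of the suite.
        static boolean coversAllPairs(int[][] suite, int[] domainSizes) {
            int n = domainSizes.length;
            for (int i = 0; i < n; i++) {
                for (int j = i + 1; j < n; j++) {
                    // Collect the (value_i, value_j) pairs that appear in the suite
                    Set<List<Integer>> seen = new HashSet<>();
                    for (int[] test : suite) {
                        seen.add(Arrays.asList(test[i], test[j]));
                    }
                    if (seen.size() < domainSizes[i] * domainSizes[j]) {
                        return false; // some value combination is missing
                    }
                }
            }
            return true;
        }

        public static void main(String[] args) {
            int[] domains = {2, 2, 2};   // three parameters with two values each
            int[][] suite = {            // four tests: the minimum for v=2, t=2
                {0, 0, 0},
                {0, 1, 1},
                {1, 0, 1},
                {1, 1, 0}
            };
            System.out.println(coversAllPairs(suite, domains)); // prints "true"
        }
    }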
Multiple tools have been developed to generate such test suites, and the aim of this benchmark is to compare two new algorithms, based on graph reductions, against a set of existing, freely available generators. For each test generation problem, two factors are measured:
- the size of the test suite produced by each tool (the smaller the better)
- the time taken to obtain this test suite (again, the smaller the better)
The data generated by this benchmark are part of the experimental results presented in the following publication:
- E. La Chance, S. Hallé. (2020). Extended Combinatorial Test Case Generation by Graph Reductions. Submitted to Software Testing, Verification and Reliability.
An earlier version of these experiments has been the subject of another publication:
- S. Hallé, E. La Chance, S. Gaboury. (2015). Graph Methods for Generating Test Cases with Universal and Existential Constraints. Proc. ICTSS 2015: 55-70. DOI: 10.1007/978-3-319-25945-1_4
This archive contains an instance of LabPal, an environment for running experiments on a computer and collecting their results in a user-friendly way. The author of this archive has prepared a collection of experiments, which typically involve running scripts on input data, processing their results and displaying them in tables and plots. LabPal is a library that wraps around these experiments and displays them in an easy-to-use web interface. The principle behind LabPal is that all the necessary code, libraries and input data should be bundled within a single self-contained JAR file, so that anyone can download it and easily reproduce someone else's experiments. Detailed instructions can be found on the LabPal website: https://liflab.github.io/labpal
The benchmark is designed to compare the following tools:
- DSATUR, a home-grown C++ implementation of a graph coloring algorithm
- hitting-set, a hypergraph vertex cover algorithm
- A forked version of QICT
- A stand-alone version of Tcases 1.3.0
- Jenny (version dated February 5, 2005), compiled with GCC
- AllPairs. A runnable JAR was created out of the hefty archive that contains the program. The program is not dated and has no version number; we retrieved it on January 17th, 2015.
- ACTS version 3.1
- A forked version of CASA 1.1b
- Several of these tools (Jenny, QICT, the main DSATUR program, CASA) are 32-bit Linux executables; you will need to recompile them if you want to use them on other systems.
In order to run LabPal, you need to have Java properly installed. Java can be freely downloaded, and installation instructions are easy to find on the web. If you want to see the plots associated with the experiments, you also need to have GnuPlot installed and available from the command line by typing gnuplot.
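To check that both are available, you can run java -version and gnuplot --version from a terminal; each command should print a version number if the corresponding program is correctly installed.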
This archive should contain a single runnable JAR file; suppose it is called my-lab.jar. To start the lab and use its web interface, type at the command line:
java -jar my-lab.jar
You should see something like this:
LabPal 2.8 - A versatile environment for running experiments
(C) 2014-2017 Laboratoire d'informatique formelle
Université du Québec à Chicoutimi, Canada
Please visit http://localhost:21212/index to run this lab
Hit Ctrl+C in this window to stop
Open your web browser, and type http://localhost:21212/index in the address bar. This should lead you to the main page of LabPal's web control panel. (Note that the machine running LabPal does not need to have a web browser. You can open a browser on another machine, and replace localhost by the IP address of the former.)
The main page should give you more details about the actual experiments that this lab contains. Here is how you typically use the LabPal web interface.
- Go to the Experiments page.
- Select some experiments in the list by clicking on the corresponding checkbox.
- Click on the "Add to assistant" button to queue these experiments
- Go to the Assistant page
- Click on the "Start" button. This will launch the execution of each experiment one after the other.
- At any point, you can look at the results of the experiments that have run so far. You can do so by:
  - Going to the Plots or the Tables page to see the plots and tables created for this lab being updated in real time
  - Going back to the list of experiments and clicking on one of them to get the detailed description and data points that this experiment has generated
- Once the assistant is done, you can export any of the plots and tables to a file, or export the raw data points, using the Export button on the Status page.
Please refer to the LabPal website or to the Help page within the web interface for more information about LabPal's functionalities.
The LabPal library was written by Sylvain Hallé, Associate Professor at Université du Québec à Chicoutimi, Canada. However, the experiments contained in this specific lab instance and the results they produce are the sole responsibility of their author.