
The TIPSY Workflow

TIPSY was designed for those who want to run scalability studies. It requires a pre-configured environment consisting of a Tester and a SUT (System Under Test). TIPSY already offers numerous pipeline implementations on various platforms, and thanks to its modular infrastructure it is easy to implement additional pipelines or to add support for new platforms.

Still, a typical TIPSY session requires only a small amount of user effort: providing a high-level description of the benchmarks, that is, the configuration of the environment, the scaling parameters of the pipeline, and the configuration of the visualisation. The typical TIPSY workflow is the following.

  1. Install TIPSY on the Tester and on the SUT, then configure the network interfaces and passwordless SSH access; a minimal SSH setup is sketched below. All of the following steps are done on the Tester.
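
    For example, key-based SSH access from the Tester to the SUT can be set up with the standard OpenSSH tools. This is only a sketch; "sut" is a placeholder for your own SUT hostname.

    # Run on the Tester; "sut" is a placeholder for the SUT's hostname.
    ssh-keygen -t ed25519    # generate a key pair (accept the defaults)
    ssh-copy-id sut          # install the public key on the SUT
    ssh sut true             # verify that no password prompt appears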
  2. Create a directory for your scalability study. This folder will contain all of the files (configurations, traffic traces, figures) related to your study.
    mkdir my_bng_study
    cd my_bng_study
        
  3. Initialise a new high-level TIPSY configuration or copy your old one.

    TIPSY can generate an initial configuration containing all of the available parameters; in many cases this is more than you will want to specify yourself.

    tipsy init mgw
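
    For illustration, a trimmed-down main.json might look something like the sketch below. Every key name here is a made-up assumption, not TIPSY's documented schema; consult the TIPSY configuration guide for the real parameter names.

    {
        "_comment": "illustrative sketch only, not the real TIPSY schema",
        "benchmark": [
            {
                "id": "mgw_user_scaling",
                "pipeline": { "name": "mgw", "user": [1, 10, 100] }
            }
        ]
    }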
        
  4. Edit the main TIPSY configuration file according to your needs (e.g., adjust pipeline parameters or the SUT config). For details, check the TIPSY configuration guide.

    It is good practice to split your configuration into multiple files. This way you can reuse, for example, the Tester and SUT configurations across scalability studies.

    If you are unsure about the validity of your configuration file, TIPSY can check it for you:

    tipsy validate main.json
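
    If you split the configuration across several files, each fragment can be checked the same way. This is a sketch, assuming tipsy validate accepts each fragment on its own; the file names are illustrative.

    # Validate every configuration fragment before generating test cases.
    for f in main.json tester.json sut.json; do
        tipsy validate "$f" || exit 1
    done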
        
  5. Generate the configurations for the individual test cases that make up the benchmark, that is, a separate test for each setting of the benchmark parameters, with each test-case configuration placed into its own directory, plus a main Makefile that will execute the measurements.
    tipsy config
        

    This call assembles the benchmark configuration from your JSON files, setting each parameter that was not explicitly specified there to a sane default value.

    Optionally, you can force TIPSY to overwrite existing measurement configurations, and even existing results (!), with the following command:

    tipsy config -f
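
    A quick way to inspect what was generated (plain shell, nothing TIPSY-specific; the exact directory and file names depend on your benchmark parameters):

    # List the generated Makefile and the per-test-case configurations.
    find . -maxdepth 2 \( -name Makefile -o -name '*.json' \) | sort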
        
  6. Let TIPSY do the cumbersome parts:
    • Generate sample traffic traces that will be fed to the SUT during the benchmark (this may take a while).
    • Run the benchmarks (this may take an even longer while).
    • Visualize benchmark results.
    make
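
    Since a full run can take hours, it may be worth detaching it from the terminal and keeping a log (generic shell practice, nothing TIPSY-specific):

    # Keep the run alive after logout and capture all output.
    nohup make > benchmark.log 2>&1 &
    tail -f benchmark.log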
        
  7. Finally, clean up the benchmark directory by removing all temporary files (pcaps, logs, etc.).
    tipsy clean