Releases: kmgowda/SBK

Storage Benchmark Kit Version 0.73

19 Apr 04:33
276366e

This release, version 0.73, includes:

  • SBK configuration properties extension implementation.
  • Latency percentile calculation optimized to O(max latency); see the sketch after this list.
  • Bug fixes.
  • SBK API updates.
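
Below is a minimal sketch of the array-histogram idea behind an O(max latency) percentile calculation, assuming latencies are recorded in whole milliseconds; the class and method names are illustrative, not SBK's actual internals.

public final class LatencyHistogram {
    // counts[i] holds how many samples had a latency of i ms, so any percentile
    // is found with a single pass over the array, i.e. O(max latency).
    private final long[] counts;
    private long total;

    public LatencyHistogram(int maxLatencyMs) {
        this.counts = new long[maxLatencyMs + 1];
    }

    public void record(int latencyMs) {
        counts[Math.min(latencyMs, counts.length - 1)]++;
        total++;
    }

    // Smallest latency at or below which the given fraction of samples falls.
    public int percentile(double fraction) {
        long target = (long) Math.ceil(fraction * total);
        long seen = 0;
        for (int latency = 0; latency < counts.length; latency++) {
            seen += counts[latency];
            if (seen >= target) {
                return latency;
            }
        }
        return counts.length - 1;
    }

    public static void main(String[] args) {
        LatencyHistogram h = new LatencyHistogram(1000);
        h.record(5); h.record(6); h.record(11); h.record(71); h.record(402);
        System.out.println("p50 = " + h.percentile(0.50) + " ms"); // 11 ms
        System.out.println("p99 = " + h.percentile(0.99) + " ms"); // 402 ms
    }
}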

How to Use:

  1. Untar the file sbk.tar.
    For example: tar -xvf sbk.tar -C ./

  2. For performance benchmarking, use the extracted binary "<SBK directory>/bin/sbk".
    Below is the help output:

usage: sbk
 -class <arg>        Benchmark Driver Class,
                     Available Drivers [BookKeeper, ConcurrentQ, File,
                     HDFS, Kafka, Pravega, Pulsar, RabbitMQ, RocketMQ]
 -context <arg>      Prometheus Metric context;default context:
                     8080/metrics; 'no' disables the  metrics
 -flush <arg>        Each Writer calls flush after writing <arg> number of
                     of events(records); Not applicable, if both writers
                     and readers are specified
 -help               Help message
 -readers <arg>      Number of readers
 -records <arg>      Number of records(events) if 'time' not specified;
                     otherwise, Maximum records per second by writer(s)
                     and/or Number of records per reader
 -size <arg>         Size of each message (event or record)
 -throughput <arg>   if > 0 , throughput in MB/s
                     if 0 , writes 'records'
                     if -1, get the maximum throughput
 -time <arg>         Number of seconds this SBK runs (24hrs by default)
 -version            Version
 -writers <arg>      Number of writers
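
Driver-specific options are appended to this list and can be inspected by combining -class with -help. A hypothetical writer run against a local Apache Pulsar standalone cluster (parameter values are illustrative, mirroring the version 0.6 examples further below) looks like this:

./bin/sbk -class Pulsar -help
./bin/sbk -class Pulsar -admin http://localhost:8080 -broker tcp://localhost:6650 -topic topic-km-1 -partitions 1 -writers 1 -size 100 -time 60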

Storage Benchmark Kit Version 0.72

11 Apr 06:21
Pre-release

This release, version 0.72, includes:

  • SBK configuration properties implementation.
  • Bug fixes.
  • SBK API updates.

How to Use:

  1. Untar the file sbk.tar.
    For example: tar -xvf sbk.tar -C ./

  2. For performance benchmarking, use the extracted binary "<SBK directory>/bin/sbk".
    Below is the help output:

usage: sbk
 -class <arg>        Benchmark Driver Class,
                     Available Drivers [BookKeeper, ConcurrentQ, File,
                     HDFS, Kafka, Pravega, Pulsar, RabbitMQ, RocketMQ]
 -context <arg>      Prometheus Metric context;default context:
                     8080/metrics; 'no' disables the  metrics
 -flush <arg>        Each Writer calls flush after writing <arg> number of
                     of events(records); Not applicable, if both writers
                     and readers are specified
 -help               Help message
 -readers <arg>      Number of readers
 -records <arg>      Number of records(events) if 'time' not specified;
                     otherwise, Maximum records per second by writer(s)
                     and/or Number of records per reader
 -size <arg>         Size of each message (event or record)
 -throughput <arg>   if > 0 , throughput in MB/s
                     if 0 , writes 'records'
                     if -1, get the maximum throughput
 -time <arg>         Number of seconds this SBK runs (24hrs by default)
 -version            Version
 -writers <arg>      Number of writers

Storage Benchmark Kit Version 0.71

27 Mar 11:50
5ccd4f4

This release, version 0.71, includes:

  • Performance benchmarking of HDFS, BookKeeper, RabbitMQ, and RocketMQ; see the example after this list.
  • SBK API updates.
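
Driver-specific options for the newly supported drivers can be listed by combining -class with -help; for example (the driver name here is illustrative):

./bin/sbk -class RabbitMQ -help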

How to Use:

  1. Untar the file sbk.tar.
    For example: tar -xvf sbk.tar -C ./

  2. For performance benchmarking, use the extracted binary "<SBK directory>/bin/sbk".
    Below is the help output:

usage: sbk
 -class <arg>        Benchmark Driver Class,
                     Available Drivers [BookKeeper, ConcurrentQ, File,
                     HDFS, Kafka, Pravega, Pulsar, RabbitMQ, RocketMQ]
 -context <arg>      Prometheus Metric context;default context:
                     8080/metrics; 'no' disables the  metrics
 -flush <arg>        Each Writer calls flush after writing <arg> number of
                     of events(records); Not applicable, if both writers
                     and readers are specified
 -help               Help message
 -readers <arg>      Number of readers
 -records <arg>      Number of records(events) if 'time' not specified;
                     otherwise, Maximum records per second by writer(s)
                     and/or Number of records per reader
 -size <arg>         Size of each message (event or record)
 -throughput <arg>   if > 0 , throughput in MB/s
                     if 0 , writes 'records'
                     if -1, get the maximum throughput
 -time <arg>         Number of seconds this SBK runs (24hrs by default)
 -version            Version
 -writers <arg>      Number of writers

Storage Benchmark Kit Version 0.7

01 Mar 06:11

This release, version 0.7, includes:

  • SBK benchmark output is directed to Grafana graphs through Prometheus and micrometer.io; see the sketch after this list.
  • SBK API updates.
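
Below is a minimal sketch of how a counter is exposed to Prometheus through micrometer.io, assuming the micrometer-registry-prometheus dependency; the metric name and wiring are illustrative, not SBK's actual instrumentation.

import io.micrometer.core.instrument.Counter;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

public final class MetricsSketch {
    public static void main(String[] args) {
        // Registry that renders metrics in the Prometheus text exposition format.
        PrometheusMeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);

        // Hypothetical counter; SBK's real metric names may differ.
        Counter records = Counter.builder("sbk.records.written").register(registry);
        records.increment(1000);

        // This text is what an HTTP endpoint (such as the 8080/metrics context shown
        // in the help output below) would serve for Prometheus to scrape and Grafana to graph.
        System.out.println(registry.scrape());
    }
}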

How to Use:

  1. Untar the file sbk.tar.
    For example: tar -xvf sbk.tar -C ./

  2. For performance benchmarking, use the extracted binary "<SBK directory>/bin/sbk".
    Below is the help output:

usage: sbk
 -class <arg>        Benchmark Driver Class,
                     Available Drivers [ConcurrentQ, File, Kafka, Pravega,
                     Pulsar]
 -context <arg>      Prometheus Metric context;default context:
                     8080/metrics; 'no' disables the  metrics
 -flush <arg>        Each Writer calls flush after writing <arg> number of
                     of events(records); Not applicable, if both writers
                     and readers are specified
 -help               Help message
 -readers <arg>      Number of readers
 -records <arg>      Number of records(events) if 'time' not specified;
                     otherwise, Maximum records per second by writer(s)
                     and/or Number of records per reader
 -size <arg>         Size of each message (event or record)
 -throughput <arg>   if > 0 , throughput in MB/s
                     if 0 , writes 'records'
                     if -1, get the maximum throughput
 -time <arg>         Number of seconds this SBK runs (24hrs by default)
 -version            Version
 -writers <arg>      Number of writers

Storage Benchmark Kit Version 0.61

17 Feb 13:57
f39ed88

This release, version 0.61, includes:

  • The Benchmark, Writer, and Reader interfaces are independent of the data type used for benchmarking; the developer can specify the data type, as sketched after this list.
  • Support for benchmarking a mounted file system.
  • Support for benchmarking Java concurrent queues.
  • SBK API updates.
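
Below is a minimal sketch of what data-type-independent Writer and Reader interfaces can look like, using a generic payload type; the interface and method names are illustrative, not the published SBK APIs.

// Illustrative only: the payload type T is chosen by the driver implementer,
// for example byte[] for raw buffers or String for text records.
interface Writer<T> {
    void write(T record) throws java.io.IOException;  // enqueue one record
    void flush() throws java.io.IOException;          // force buffered records out
    void close() throws java.io.IOException;
}

interface Reader<T> {
    T read() throws java.io.IOException;              // next record, or null at end
    void close() throws java.io.IOException;
}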

How to Use:

  1. Untar the file sbk.tar.
    For example: tar -xvf sbk.tar -C ./

  2. For performance benchmarking, use the extracted binary "<SBK directory>/bin/sbk".
    Below is the help output:

usage: sbk
 -class <arg>        Benchmark Driver Class,
                     Available Drivers [ConcurrentQ, File, Kafka, Pravega,
                     Pulsar]
 -csv <arg>          CSV file to record write/read latencies
 -flush <arg>        Each Writer calls flush after writing <arg> number of
                     of events(records); Not applicable, if both writers
                     and readers are specified
 -help               Help message
 -readers <arg>      Number of readers
 -records <arg>      Number of records(events) if 'time' not specified;
                     otherwise, Maximum records per second by writer(s)
                     and/or Number of records per reader
 -size <arg>         Size of each message (event or record)
 -throughput <arg>   if > 0 , throughput in MB/s
                     if 0 , writes 'events'
                     if -1, get the maximum throughput
 -time <arg>         Number of seconds this SBK runs (24hrs by default)
 -version            Version
 -writers <arg>      Number of writers

Storage Benchmark Kit Version 0.6

01 Feb 08:41
042b4c7

This version 0.6 is a major release, including:

  • Inclusion of the Apache Pulsar distributed streaming storage driver for performance benchmarking.
  • Publishing the SBK APIs.
  • Using the SBK APIs, developers can add their own storage device (driver or client) for performance benchmarking; a sketch follows this list.
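
Below is a minimal sketch of what a custom storage driver could look like when plugged in through the SBK APIs; the class and method names are hypothetical, not the published API.

// Hypothetical driver skeleton for a custom storage client.
// SBK selects the driver through the -class option; the real interface may differ.
public class MyStorageDriver {
    public void openStorage(String uri) { /* connect to the storage cluster */ }

    public MyWriter createWriter() { return new MyWriter(); }
    public MyReader createReader() { return new MyReader(); }

    public static final class MyWriter {
        public void write(byte[] record) { /* issue one write to the storage system */ }
        public void flush() { /* flush buffered records */ }
    }

    public static final class MyReader {
        public byte[] read() { return null; /* next record, or null at end */ }
    }
}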

How to Use:

  1. Untar the file sbk.tar.
    For example: tar -xvf sbk.tar -C ./

  2. For performance benchmarking, use the extracted binary "<SBK directory>/bin/sbk".
    Below is the help output:

<SBK directory>$ ./bin/sbk  -help
usage: sbk
 -class <arg>        Benchmark Driver Class,
                     Available Drivers [Kafka, Pravega, Pulsar]
 -csv <arg>          CSV file to record write/read latencies
 -flush <arg>        Each Writer calls flush after writing <arg> number of
                     of events(records); Not applicable, if both writers
                     and readers are specified
 -help               Help message
 -readers <arg>      Number of readers
 -records <arg>      Number of records(events) if 'time' not specified;
                     otherwise, Maximum records per second by writer(s)
                     and/or Number of records per reader
 -size <arg>         Size of each message (event or record)
 -throughput <arg>   if > 0 , throughput in MB/s
                     if 0 , writes 'events'
                     if -1, get the maximum throughput
 -time <arg>         Number of seconds this SBK runs (24hrs by default)
 -version            Version
 -writers <arg>      Number of writers
  3. Apache Pulsar performance benchmarking

You can check the Apache Pulsar benchmarking command-line arguments with the -help option, as follows:

<SBK directory>/run/sbk/bin/sbk  -class Pulsar -help
usage: sbk -class Pulsar
 -ackQuorum <arg>       AckQuorum (default: 1)
 -admin <arg>           Admin URI, required to create the partitioned
                        topic
 -broker <arg>          Broker URI
 -class <arg>           Benchmark Driver Class,
                        Available Drivers [Kafka, Pravega, Pulsar]
 -cluster <arg>         Cluster name (optional parameter)
 -csv <arg>             CSV file to record write/read latencies
 -deduplication <arg>   Enable or Disable Deduplication; by default
                        disabled
 -ensembleSize <arg>    EnsembleSize (default: 1)
 -flush <arg>           Each Writer calls flush after writing <arg> number
                        of of events(records); Not applicable, if both
                        writers and readers are specified
 -help                  Help message
 -partitions <arg>      Number of partitions of the topic (default: 1)
 -readers <arg>         Number of readers
 -records <arg>         Number of records(events) if 'time' not specified;
                        otherwise, Maximum records per second by writer(s)
                        and/or Number of records per reader
 -size <arg>            Size of each message (event or record)
 -threads <arg>         io threads per Topic; by default (writers +
                        readers)
 -throughput <arg>      if > 0 , throughput in MB/s
                        if 0 , writes 'events'
                        if -1, get the maximum throughput
 -time <arg>            Number of seconds this SBK runs (24hrs by default)
 -topic <arg>           Topic name
 -version               Version
 -writeQuorum <arg>     WriteQuorum (default: 1)
 -writers <arg>         Number of writers

Apache Pulsar performance benchmarking examples

NOTE: The examples below were executed against an Apache Pulsar standalone cluster.

  1. Writers benchmarking (burst mode)
./bin/sbk -class Pulsar -admin http://localhost:8080 -broker tcp://localhost:6650 -topic topic-km-1  -partitions 5  -writers 1  -size 100  -time 60

Writing     657454 records,  131464.5 records/sec,   12.54 MB/sec,     9.4 ms avg latency,   186.0 ms max latency
Writing     739374 records,  147815.7 records/sec,   14.10 MB/sec,     7.1 ms avg latency,   114.0 ms max latency
Writing     778979 records,  155764.6 records/sec,   14.85 MB/sec,     6.4 ms avg latency,    81.0 ms max latency
Writing     772386 records,  154446.3 records/sec,   14.73 MB/sec,     6.8 ms avg latency,   118.0 ms max latency
Writing     761702 records,  152309.9 records/sec,   14.53 MB/sec,     6.9 ms avg latency,   118.0 ms max latency
Writing     701513 records,  140274.5 records/sec,   13.38 MB/sec,     9.9 ms avg latency,   410.0 ms max latency
Writing     775071 records,  154983.2 records/sec,   14.78 MB/sec,     6.7 ms avg latency,   116.0 ms max latency
Writing     792580 records,  158484.3 records/sec,   15.11 MB/sec,     6.3 ms avg latency,    83.0 ms max latency
Writing     772949 records,  154558.9 records/sec,   14.74 MB/sec,     6.7 ms avg latency,    83.0 ms max latency
Writing     778950 records,  155758.8 records/sec,   14.85 MB/sec,     6.5 ms avg latency,    97.0 ms max latency
Writing (Total)     8133358 records,  148410.8 records/sec,   14.15 MB/sec,     7.1 ms avg latency,   410.0 ms max latency
Writing Latencies 5 ms 50th, 6 ms 75th, 11 ms 95th, 71 ms 99th, 114 ms 99.9th, 402 ms 99.99th.
  2. Readers benchmarking
./bin/sbk -class Pulsar -admin http://localhost:8080 -broker tcp://localhost:6650 -topic topic-km-1  -partitions 5  -readers 1  -size 100  -time 60

Reading     113725 records,   22740.5 records/sec,    2.17 MB/sec,     0.0 ms avg latency,   186.0 ms max latency
Reading     132604 records,   26515.5 records/sec,    2.53 MB/sec,     0.0 ms avg latency,     7.0 ms max latency
Reading     135958 records,   27186.2 records/sec,    2.59 MB/sec,     0.0 ms avg latency,    10.0 ms max latency
Reading     137325 records,   27459.5 records/sec,    2.62 MB/sec,     0.0 ms avg latency,     1.0 ms max latency
Reading     134015 records,   26797.6 records/sec,    2.56 MB/sec,     0.0 ms avg latency,     3.0 ms max latency
Reading     135575 records,   27109.6 records/sec,    2.59 MB/sec,     0.0 ms avg latency,     3.0 ms max latency
Reading     136086 records,   27211.8 records/sec,    2.60 MB/sec,     0.0 ms avg latency,     7.0 ms max latency
Reading     137191 records,   27432.7 records/sec,    2.62 MB/sec,     0.0 ms avg latency,     9.0 ms max latency
Reading     137605 records,   27515.5 records/sec,    2.62 MB/sec,     0.0 ms avg latency,     1.0 ms max latency
Reading     137094 records,   27413.3 records/sec,    2.61 MB/sec,     0.0 ms avg latency,     6.0 ms max latency
Reading     137122 records,   27418.9 records/sec,    2.61 MB/sec,     0.0 ms avg latency,    15.0 ms max latency
Reading (Total)     1509579 records,   26286.0 records/sec,    2.51 MB/sec,     0.0 ms avg latency,   186.0 ms max latency
Reading Latencies 0 ms 50th, 0 ms 75th, 0 ms 95th, 1 ms 99th, 1 ms 99.9th, 1 ms 99.99th.

Data Storage Benchmark Tool Version 0.5

29 Dec 13:08
9694577

Data Storage Benchmark Tool

This is the first release of the benchmarking tool for the Pravega and Apache Kafka streaming storage systems.
By default, this tool uses Pravega version 0.5 and Kafka version 2.3.0.

How to Use:

  1. Untar the file DSB.tar.
    For example: tar -xvf DSB.tar -C ./

  2. For performance benchmarking, use the extracted binary "/DSB/bin/pravega-benchmark".
    Below is the help output:

<dir>/DSB$ ./run/DSB/bin/DSB  -help
 -consumers <arg>               Number of consumers
 -controller <arg>              Controller URI
 -events <arg>                  Number of events/records if 'time' not
                                specified;
                                otherwise, Maximum events per second by
                                producer(s) and/or Number of events per
                                consumer
 -flush <arg>                   Each producer calls flush after writing
                                <arg> number of of events/records; Not
                                applicable, if both producers and
                                consumers are specified
 -fork <arg>                    Use Fork join Pool
 -help                          Help message
 -kafka <arg>                   Kafka Benchmarking
 -producers <arg>               Number of producers
 -readcsv <arg>                 CSV file to record read latencies
 -recreate <arg>                If the stream is already existing, delete
                                and recreate the same
 -scope <arg>                   Scope name
 -segments <arg>                Number of segments
 -size <arg>                    Size of each message (event or record)
 -stream <arg>                  Stream name
 -throughput <arg>              if > 0 , throughput in MB/s
                                if 0 , writes 'events'
                                if -1, get the maximum throughput
 -time <arg>                    Number of seconds the code runs
 -transactionspercommit <arg>   Number of events before a transaction is
                                committed
 -writecsv <arg>                CSV file to record write latencies
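
For example, a hypothetical producer benchmark against a local Pravega standalone cluster (the controller URI, scope, and stream names here are illustrative) looks like this:

./DSB/bin/pravega-benchmark -controller tcp://localhost:9090 -scope myscope -stream mystream -producers 1 -size 100 -time 60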