Reworked build scripts and added JDK11 support #126

Open · wants to merge 7 commits into base: master
66 changes: 45 additions & 21 deletions README.md
@@ -21,10 +21,9 @@ features, e.g., per-transaction-type latency and throughput logs.
```
+ Run the following commands to build:
```bash
-ant bootstrap
-ant resolve
-ant build
+mvn clean install
```
+ Copy and unpack the `target/tpcc.tar.gz` file on the client machine
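For the copy-and-unpack step, a minimal sketch (the user, hostname, and destination path are placeholders; per the assembly descriptor added in this PR, the archive extracts into a `tpcc/` directory):

```bash
# Copy the build artifact to the client machine (user/host are placeholders).
scp target/tpcc.tar.gz user@client-host:~

# Unpack it there; files land under ./tpcc (configs, libs, tpccbenchmark script).
ssh user@client-host 'tar -xzf tpcc.tar.gz'
```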

## Setup of the Database
The DB connection details should be as follows:
@@ -45,38 +44,63 @@ The workload descriptor works the same way as it does in the upstream branch and


## Running the Benchmark
-A utility script (./tpccbenchmark) is provided for running the benchmark. The options are
+A utility script `./tpccbenchmark` is provided for running the benchmark. The options are:

```
- -c,--config <arg>                [required] Workload configuration file
-    --clear <arg>                 Clear all records in the database for this
-                                  benchmark
-    --create <arg>                Initialize the database for this benchmark
-    --execute <arg>               Execute the benchmark workload
- -h,--help                        Print this help
-    --histograms                  Print txn histograms
-    --load <arg>                  Load data using the benchmark's data loader
- -o,--output <arg>                Output file (default System.out)
-    --runscript <arg>             Run an SQL script
- -s,--sample <arg>                Sampling window
- -v,--verbose                     Display Messages
+ -c,--config <arg>                Workload configuration file
+                                  [default: config/workload_all.xml]
+    --clear <arg>                 Clear all records in the database
+                                  for this benchmark
+    --create <arg>                Initialize the database for this
+                                  benchmark
+    --create-sql-procedures <arg> Creates the SQL procedures
+    --dir <arg>                   Directory containing the csv files
+    --enable-foreign-keys <arg>   Whether to enable foreign keys
+    --execute <arg>               Execute the benchmark workload
+ -gpc,--geopartitioned-config <arg>  GeoPartitioning configuration file
+                                  [default:
+                                  config/geopartitioned_workload.xml]
+ -h,--help                        Print this help
+    --histograms                  Print txn histograms
+ -im,--interval-monitor <arg>     Throughput Monitoring Interval in
+                                  milliseconds
+    --initial-delay-secs <arg>    Delay in seconds for starting the
+                                  benchmark
+    --load <arg>                  Load data using the benchmark's data
+                                  loader
+    --loaderthreads <arg>         Number of loader threads (default 10)
+    --merge-results <arg>         Merge results from various output
+                                  files
+    --nodes <arg>                 Comma-separated list of nodes
+                                  (default 127.0.0.1)
+    --num-connections <arg>       Number of connections used
+    --output-raw <arg>            Output raw data
+    --output-samples <arg>        Output sample data
+    --start-warehouse-id <arg>    Start warehouse id
+    --total-warehouses <arg>      Total number of warehouses across
+                                  all executions
+    --vv                          Output verbose execute results
+    --warehouses <arg>            Number of warehouses (default 10)
+    --warmup-time-secs <arg>      Warmup time in seconds for the
+                                  benchmark
```
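The listing above is the script's own option summary; it can be reproduced locally with:

```bash
./tpccbenchmark -h   # prints the option summary shown above
```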

## Example
-The following command for example initiates a tpcc database (--create=true --load=true) and a then run a workload as described in config/workload_all.xml file. The results (latency, throughput) are summarized and written into two files: outputfile.res (aggregated) and outputfile.raw (detailed):
+The first step is to create the tables and indexes, which can be done with the following command:

```
-./tpccbenchmark -c config/workload_all.xml --create=true --load=true --execute=true -s 300 -o outputfile
+./tpccbenchmark --nodes $COMMA_SEPARATED_IPS --create true --vv
```

-Since data loading can be a lengthy process, one could first create a and populate a database which can be reused for multiple experiments:
+Since data loading can be a lengthy process, the following command can be used to populate a database which can then be reused for multiple experiments:

```
-./tpccbenchmark -c config/workload_all.xml --create=true --load=true
+./tpccbenchmark --nodes $COMMA_SEPARATED_IPS --load true --warehouses $WAREHOUSES --loaderthreads $LOADER_THREADS --vv
```

Then an experiment can be run with the following command, on either a fresh or a previously used database.

```
-./tpccbenchmark -c config/workload_all.xml --execute=true -s 300 -o outputfile
+./tpccbenchmark --nodes $COMMA_SEPARATED_IPS --execute true --warehouses $WAREHOUSES --warmup-time-secs 30 --vv
```
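When the workload is driven from several client machines, the per-client outputs can be combined afterwards. A hypothetical invocation based on the `--merge-results` and `--dir` options listed above (the directory name is a placeholder for wherever the per-client csv output files were collected):

```bash
# Merge per-client csv outputs collected in results-dir (placeholder path).
./tpccbenchmark --merge-results true --dir results-dir --warehouses $WAREHOUSES
```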
50 changes: 50 additions & 0 deletions assembly/bin.xml
@@ -0,0 +1,50 @@
<assembly>

<formats>
<format>tar.gz</format>
</formats>

<includeBaseDirectory>false</includeBaseDirectory>

<fileSets>

<!-- configs -->
<fileSet>
<directory>config</directory>
<outputDirectory>tpcc/config</outputDirectory>
<includes>
<include>*.xml</include>
</includes>
</fileSet>

<!-- executable script, LICENSE, README.md and log4j.properties file -->
<fileSet>
<outputDirectory>tpcc</outputDirectory>
<includes>
<include>tpccbenchmark</include>
<include>LICENSE</include>
<include>README.md</include>
<include>log4j.properties</include>
</includes>
</fileSet>

<!-- oltpbench jar file -->
<fileSet>
<directory>target</directory>
<outputDirectory>tpcc/libs</outputDirectory>
<includes>
<include>oltpbench-*.jar</include>
</includes>
</fileSet>

<!-- dependencies except hsqldb (provided scope, excluded from assembly) -->
<fileSet>
<directory>target/libs</directory>
<outputDirectory>tpcc/libs</outputDirectory>
<includes>
<include>*.jar</include>
</includes>
</fileSet>

</fileSets>
</assembly>
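As a quick sanity check of this descriptor: since `includeBaseDirectory` is false and every fileSet targets `tpcc/...`, all archive entries should sit under a top-level `tpcc/` path. A sketch, assuming the Maven build wires in this descriptor:

```bash
mvn clean install                    # builds the jar and assembles target/tpcc.tar.gz
tar -tzf target/tpcc.tar.gz | head   # expect entries like tpcc/config/*.xml and tpcc/libs/*.jar
```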
158 changes: 0 additions & 158 deletions build.xml

This file was deleted.

10 changes: 0 additions & 10 deletions classpath.sh

This file was deleted.

53 changes: 0 additions & 53 deletions ivy.xml

This file was deleted.

10 changes: 0 additions & 10 deletions ivysettings.xml

This file was deleted.
