```shell
cd spark-sql-perf
sbt 'set test in assembly := {}' clean assembly
```
Modify the parameters in the script `run.sh` before running it. If you also want to save table metadata to Hive, use `--enableHive true` for the TPC-DS benchmark.
```shell
./run.sh
```
After the benchmark succeeds, the results are printed to the console and saved to `$location/results`.
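If `$location` points at a Hadoop-accessible path, a quick way to confirm the output was written is to list the result directory with the Hadoop CLI (a minimal check, assuming the same `$location` configured in `run.sh`):

```shell
# List the result files produced by the benchmark run
hadoop fs -ls $location/results
```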
- create

  ```shell
  hadoop jar juicefs-hadoop.jar com.juicefs.Main nnbench create -files 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench -local
  ```

  This command creates 10000 empty files (a quick way to verify them is sketched after this list).
- open

  ```shell
  hadoop jar juicefs-hadoop.jar com.juicefs.Main nnbench open -files 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench -local
  ```

  This command opens 10000 files without reading their data.
- rename

  ```shell
  hadoop jar juicefs-hadoop.jar com.juicefs.Main nnbench rename -files 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench -local
  ```

  This command renames 10000 files.
- delete

  ```shell
  hadoop jar juicefs-hadoop.jar com.juicefs.Main nnbench delete -files 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench -local
  ```

  This command deletes 10000 files.
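To verify what the local-mode runs leave behind (for example after `create`), a plain Hadoop CLI count over the benchmark directory is enough; the path below mirrors the `-baseDir` used in the commands above:

```shell
# Count directories, files and bytes under the NNBench base directory
hadoop fs -count jfs://{JFS_NAME}/tmp/benchmarks/NNBench
```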
The following commands start distributed MapReduce tasks to test metadata and I/O performance. During the test, make sure the cluster has enough resources to launch the required map tasks.
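Before launching them, it can help to confirm the cluster actually has room for those map tasks. Assuming YARN is the resource manager, listing the nodes gives a quick overview (`-showDetails` is available on recent Hadoop releases):

```shell
# List cluster nodes; -showDetails adds per-node resource utilization
yarn node -list -showDetails
```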
- create

  ```shell
  hadoop jar juicefs-hadoop.jar com.juicefs.Main nnbench create -maps 10 -threads 10 -files 1000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench
  ```

  10 map tasks, each with 10 threads; each thread creates 1000 empty files, 100000 files in total.
- open

  ```shell
  hadoop jar juicefs-hadoop.jar com.juicefs.Main nnbench open -maps 10 -threads 10 -files 1000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench
  ```

  10 map tasks, each with 10 threads; each thread opens 1000 files, 100000 files in total.
- rename

  ```shell
  hadoop jar juicefs-hadoop.jar com.juicefs.Main nnbench rename -maps 10 -threads 10 -files 1000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench
  ```

  10 map tasks, each with 10 threads; each thread renames 1000 files, 100000 files in total.
- delete

  ```shell
  hadoop jar juicefs-hadoop.jar com.juicefs.Main nnbench delete -maps 10 -threads 10 -files 1000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench
  ```

  10 map tasks, each with 10 threads; each thread deletes 1000 files, 100000 files in total. The benchmark directory itself can be cleaned up afterwards, as shown below.
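Once the benchmarks are done, removing the working directory keeps leftover control files from skewing later runs; this is an ordinary Hadoop CLI delete of the `-baseDir` used above:

```shell
# Remove the NNBench working directory created by the benchmarks
hadoop fs -rm -r -skipTrash jfs://{JFS_NAME}/tmp/benchmarks/NNBench
```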