Leveraging Static Analysis to Accelerate Dynamic Race Detection in RMA Programs - Supplemental Material
This is supplemental material for the paper "Leveraging Static Analysis to Accelerate Dynamic Race Detection in RMA Programs" submitted to the C3PO workshop.
- classification_quality: RMARaceBench results for MUST-RMA with the different filters applied
- MUST: Source code of MUST-RMA
- RMAOptimizerPlugin: Static analysis passes that are presented in the paper
- performance_evaluation: Source codes, results, and plotting scripts of the performance evaluation
We used the set of test cases provided by RMARaceBench and extended it with a misc category that contains several test cases that are challenging for static analysis tools to understand (due to aliasing, nesting of function calls, etc.). The codes of the misc category are available at classification_quality/rmaracebench/MPIRMA/misc.
We ran the RMARaceBench tests in three variants: 1) without any optimization, 2) with BDX(10), 3) with BDX(∞). The classification quality results are available here:
- Results with no optimization applied
- Results for run with BDX(10),CLUSTER
- Results for run with BDX(∞),CLUSTER
If no optimization is applied, MUST-RMA has a recall of 1 for the misc category. For BDX(∞), recall drops to 0.87, since function pointer aliasing is not detected. For BDX(10), it is 0.81, since the test cases with deeply nested pointer aliasing are not detected correctly. The precision of all variants is 1.
The results of the performance evaluation with MUST-RMA for the different benchmarks are available at:
- performance_evaluation/benchmark_results/PRK_stencil/result/result.dat
- performance_evaluation/benchmark_results/PRK_transpose/result/result.dat
- performance_evaluation/benchmark_results/miniMD/result/result.dat
- performance_evaluation/benchmark_results/lulesh/result/result.dat
- performance_evaluation/benchmark_results/BT-RMA/result/result.dat
- performance_evaluation/benchmark_results/miniVite/result/result.dat
The filter statistics of the performance evaluation for the different benchmarks are available at:
- performance_evaluation/benchmark_results/PRK_stencil/result/filterstats_result.dat
- performance_evaluation/benchmark_results/PRK_transpose/result/filterstats_result.dat
- performance_evaluation/benchmark_results/miniMD/result/filterstats_result.dat
- performance_evaluation/benchmark_results/lulesh/result/filterstats_result.dat
- performance_evaluation/benchmark_results/BT-RMA/result/filterstats_result.dat
- performance_evaluation/benchmark_results/miniVite/result/filterstats_result.dat
Our benchmark suite is based on the JUBE benchmarking environment and can be used to reproduce our experiments. The setup can be found at performance_evaluation/rma_codes.
For all setups, we ran
jube run <benchmarkname>.xml --tag tsan-opt M filterstats ignorelist
to obtain our results. The plotting scripts are available in performance_evaluation and the plots themselves in performance_evaluation/plots.
The sources of the different codes are available at:
- PRK_Stencil: performance_evaluation/rma_codes/benchmarks/PRK_stencil/prk
- PRK_Transpose: performance_evaluation/rma_codes/benchmarks/PRK_transpose
- miniMD (RMA port): performance_evaluation/rma_codes/benchmarks/miniMD/miniMD
- LULESH (RMA port): performance_evaluation/rma_codes/benchmarks/lulesh/lulesh
- BT-RMA (RMA port): performance_evaluation/rma_codes/benchmarks/BT-RMA/npb
- miniVite: performance_evaluation/rma_codes/benchmarks/miniVite
The input graph for miniVite was taken from https://www.cise.ufl.edu/research/sparse/matrices/Schenk/nlpkkt240.html.