diff --git a/scripts/benchmarks/README.md b/scripts/benchmarks/README.md
index 170b647da4..0985536977 100644
--- a/scripts/benchmarks/README.md
+++ b/scripts/benchmarks/README.md
@@ -15,6 +15,22 @@ This will download and build everything in `~/benchmarks_workdir/` using the com
 The scripts will try to reuse the files stored in `~/benchmarks_workdir/`, but the benchmarks will be rebuilt every time. To avoid that, use `-no-rebuild` option.
 
+## Running in CI
+
+The benchmark scripts are used in a GitHub Actions workflow and can be executed automatically on a preconfigured system against any Pull Request.
+
+![compute benchmarks](workflow.png "Compute Benchmarks CI job")
+
+To execute the benchmarks in CI, navigate to the `Actions` tab and open the `Compute Benchmarks` workflow. There you will find a list of previous runs and a "Run workflow" button. Clicking the button opens a form for customizing your benchmark run. The only mandatory field is the `PR number`, which identifies the Pull Request against which the benchmarks will run.
+
+You can also pass additional benchmark parameters, such as environment variables or filters. For a complete list of options, see `$ ./main.py --help`.
+
+Once all the required information is entered, click the "Run workflow" button to start a new workflow run. It will execute the benchmarks and post the results as a comment on the specified Pull Request.
+
+By default, all benchmark runs are compared against `baseline`, a well-established set of recent results.
+
+You must be a member of the `oneapi-src` organization to access these features.
+
 ## Requirements
 
 ### Python
 
diff --git a/scripts/benchmarks/workflow.png b/scripts/benchmarks/workflow.png
new file mode 100644
index 0000000000..1db06cad9d
Binary files /dev/null and b/scripts/benchmarks/workflow.png differ