For benchmarking.
The idea is to:
- Have a plan of what to run
- Divide the plan into sub-tasks
- Recover when things go wrong (not yet implemented)
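The plan/sub-task idea above can be sketched roughly as follows. This is an illustrative Python sketch only, with made-up names (`make_plan`, `run_subtask`, `checkpoint`); it is not gtp-measure's actual implementation, just the general pattern of dividing a plan into sub-tasks and checkpointing progress so an interrupted run can continue:

```python
def make_plan(targets):
    """Divide the overall plan into one sub-task per target."""
    return [{"target": t} for t in targets]

def run_subtask(task):
    # Stand-in for actually measuring one target.
    return f"timings for {task['target']}"

def run_plan(plan, checkpoint):
    """Run every unfinished sub-task, recording progress in `checkpoint`
    so a stopped run can pick up where it left off."""
    results = []
    for task in plan:
        if task["target"] in checkpoint:
            continue  # already measured in a previous run; skip it
        results.append(run_subtask(task))
        checkpoint.add(task["target"])
    return results

checkpoint = set()
first = run_plan(make_plan(["a.rkt", "b.rkt"]), checkpoint)
# Re-running with the same checkpoint does no new work:
second = run_plan(make_plan(["a.rkt", "b.rkt"]), checkpoint)
```

Here `checkpoint` plays the role of the saved state in a data directory: a second invocation with the same state skips finished sub-tasks.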
See the documentation for the details.
To start benchmarking <TARGET> with the default settings,
or to continue an interrupted run:
$ raco gtp-measure <TARGET>
To resume a previously-stopped task:
$ raco gtp-measure --resume <DATA-DIR>
For more options:
$ raco gtp-measure --help
https://github.com/nuprl/gradual-typing-performance?path=tools/benchmark-run