CompilerGym v0.2.0
This release adds two new compiler optimization problems to CompilerGym: GCC command line flag optimization and CUDA loop nest optimization.
- [GCC] A new `gcc-v0` environment, authored by @hughleat, exposes the command line flags of GCC as a reinforcement learning environment. GCC is a production-grade compiler for C and C++ used throughout industry. The environment provides several datasets and a large, high-dimensional action space, and works with several GCC versions. For further details, check out the reference documentation.
- [loop_tool] A new `loop_tool-v0` environment, authored by @bwasti, provides an experimental intermediate representation of n-dimensional data computation that can be lowered to both CPU and GPU backends. This provides a reinforcement learning environment for manipulating nests of loop computations to maximize throughput. For further details, check out the reference documentation.
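Both new environments follow CompilerGym's standard Gym-style API. The following is a minimal sketch, assuming CompilerGym v0.2.0 is installed; `gcc-v0` additionally needs a working GCC (or the `chriscummins/compiler_gym` Docker image) available on the host. The helper name `run_random_episode` and the step count are illustrative, not part of the library:

```python
# A hedged usage sketch for the new environments, not a definitive example.
import importlib.util


def run_random_episode(env_id: str = "gcc-v0", num_steps: int = 5) -> None:
    """Take a few random actions in a CompilerGym environment."""
    import compiler_gym  # deferred import so this sketch loads without it

    with compiler_gym.make(env_id) as env:
        env.reset()
        for _ in range(num_steps):
            # Sample a random action (e.g. a GCC flag choice) and apply it.
            action = env.action_space.sample()
            observation, reward, done, info = env.step(action)
            if done:
                break


if importlib.util.find_spec("compiler_gym") is not None:
    run_random_episode()
else:
    print("compiler_gym not installed; skipping the episode")
```

The same loop works for `loop_tool-v0` by swapping the environment id.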
Other highlights of this release include:
- [Docker] Published a `chriscummins/compiler_gym` Docker image that can be used to run CompilerGym services in standalone isolated containers (#424).
- [LLVM] Fixed a bug in the experimental `Runtime` observation space that caused observations to slow down over time (#398).
- [LLVM] Added a new utility module to compute observations from bitcodes (#405).
- Overhauled the continuous integration services to reduce computational requirements by 59.4% while increasing test coverage (#392).
- Improved error reporting if computing an observation fails (#380).
- Changed the return type of `compiler_gym.random_search()` to a `CompilerEnv` (#387).
- Numerous other bug fixes and improvements.
Many thanks to code contributors: @thecoblack, @bwasti, @hughleat, and @sahirgomez1!