This is the repo for the egglog tool accompanying the paper "Better Together: Unifying Datalog and Equality Saturation" (ACM DL, arXiv). If you use this work, please use this citation.
See also the Python binding, which provides a bit more documentation: https://egglog-python.readthedocs.io/
There is a Zulip chat about egglog here: https://egraphs.zulipchat.com/#narrow/stream/375765-egglog
apt-get install make cargo
cargo install cargo-nextest
make all
cargo run [-f fact-path] [-naive] [--to-json] [--to-dot] [--to-svg] <files.egg>
or just
cargo run
for the REPL.
- The --to-dot command will save a Graphviz dot file at the end of the program, replacing the .egg extension with .dot.
- The --to-svg command, which requires Graphviz to be installed, will save a Graphviz svg file at the end of the program, replacing the .egg extension with .svg.
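For a sense of what such a file contains, here is a small, hypothetical example.egg (not one of the shipped examples) that you could pass to cargo run; it declares a datatype, adds a commutativity rewrite, runs equality saturation, and checks the result.

```
; hypothetical example.egg: a tiny rewrite over a user-defined datatype
(datatype Math
  (Num i64)
  (Add Math Math))

; commutativity of Add
(rewrite (Add a b) (Add b a))

(let e (Add (Num 1) (Num 2)))
(run 10)
(check (= e (Add (Num 2) (Num 1))))
```

Saving this as example.egg and running cargo run example.egg should report that the check passes.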
- @hatoo maintains an egglog-language extension in VS Code (just search for "egglog" in VS Code).
- @segeljakt maintains a Neovim plugin for egglog using tree-sitter.
To run the tests, use make test.
We run all of our "examples" as benchmarks in CodSpeed. These run in CI for every commit on main and for all PRs. CodSpeed runs the examples with extra instrumentation added so that it can capture a single trace of the CPU interactions (src):
CodSpeed instruments your benchmarks to measure the performance of your code. A benchmark will be run only once and the CPU behavior will be simulated. This ensures that the measurement is as accurate as possible, taking into account not only the instructions executed but also the cache and memory access patterns. The simulation gives us an equivalent of the CPU cycles that includes cache and memory access.
Since many of the shorter-running benchmarks have unstable timings due to non-deterministic performance (for example, in the memory allocator), we "ignore" them in CodSpeed. That way, we still capture their performance, but their timings don't show up in our reports by default.
We currently use 50 ms as the cutoff: any benchmark shorter than that is ignored. This number was selected so that benchmarks whose timings change by more than 1% when they haven't been modified are ignored. Note that all of the ignoring is done manually, so if you add another short example, an admin on the CodSpeed project will need to manually ignore it.
One way to profile egglog is to use samply. Here's how you can use it:
# install samply
cargo install --locked samply
# build a profile build which includes debug symbols
cargo build --profile profiling
# run the egglog file and profile
samply record ./target/profiling/egglog tests/extract-vec-bench.egg
# [optional] run the egglog file without logging or printing messages, which can help reduce stdout noise
# when you are profiling extraction of a large expression
env RUST_LOG=error samply record ./target/profiling/egglog --dont-print-messages tests/extract-vec-bench.egg
To view the documentation, run cargo doc --open.
TODO migrate the following documentation to cargo doc:
Signed 64-bit integers supporting these primitives:
+ - * / % ; arithmetic
& | ^ << >> not-i64 ; bit-wise operations
< > <= >= ; comparisons
min max log2
to-f64
to-string
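As a quick illustration of how these i64 primitives read in an egglog program (the values below are made up for illustration, not taken from the shipped examples):

```
; i64 arithmetic, bit-wise operations, and conversions (illustrative values)
(check (= (+ 3 4) 7))
(check (= (& 6 3) 2))          ; 0b110 & 0b011 = 0b010
(check (= (min 3 4) 3))
(check (= (to-string 7) "7"))
```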
64-bit floating point numbers supporting these primitives:
+ - * / % ; arithmetic
< > <= >= ; comparisons
min max neg
to-i64
to-string
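A similar hedged sketch for f64, again with made-up values:

```
; f64 arithmetic and conversions (illustrative values)
(check (= (+ 1.5 2.5) 4.0))
(check (= (min 1.5 2.5) 1.5))
(check (= (neg (neg 1.5)) 1.5))
(check (= (to-i64 4.0) 4))
```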
A map from a key type to a value type supporting these primitives:
empty
insert
get
not-contains
contains
set-union
set-diff
set-intersect
map-remove
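A hedged sketch of the map primitives in use; the sort declaration and the exact primitive spellings below are assumptions based on the list above and may differ between egglog versions, so check the map tests for the authoritative syntax:

```
; illustrative: a map from i64 keys to String values
; (sort declaration and primitive names are assumptions, not verified against your version)
(sort MyMap (Map i64 String))

(let m (insert (empty) 1 "one"))
(check (= (get m 1) "one"))
(let m2 (map-remove m 1))
```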
Rational numbers (fractions) with 64-bit precision for numerator and denominator with these primitives:
+ - * / ; arithmetic
min max neg abs floor ceil round
rational ; construct from a numerator and denominator
numer denom ; get numerator and denominator
pow log sqrt
< > <= >= ; comparisons
These primitives are only defined when the result itself is a pure rational.
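A hedged sketch of the rational primitives, assuming the Rational sort is available by default (the arithmetic is illustrative):

```
; illustrative: rationals built with (rational numer denom)
(let half  (rational 1 2))
(let third (rational 1 3))
(check (= (+ half third) (rational 5 6)))   ; 1/2 + 1/3 = 5/6
(check (= (numer half) 1))
(check (= (denom half) 2))
```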
To include a double-quote character in a string, write it twice: "Foo "" Bar" is the string Foo " Bar.
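For example (a minimal sketch of the quoting rule above):

```
; the doubled quote inside the literal denotes a single " character
(let s "Foo "" Bar")
```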
No primitives defined.