Zigache is an efficient caching library built in Zig, offering customizable cache eviction policies for various application needs.
> [!IMPORTANT]
> Zigache is currently in early development and tracks Zig's latest build on the `master` branch.
## Features

Zigache offers a rich set of features designed to meet various caching needs:
- Multiple Eviction Algorithms:
  - W-TinyLFU | TinyLFU: A Highly Efficient Cache Admission Policy
  - S3-FIFO | FIFO queues are all you need for cache eviction
  - SIEVE | SIEVE is Simpler than LRU: an Efficient Turn-Key Eviction Algorithm for Web Caches
  - LRU | Least Recently Used
  - FIFO | First-In-First-Out
- Extensive Configuration Options:
  - Configurable cache size with pre-allocation for performance tuning
  - Fine-tunable cache policies (e.g., TinyLFU, S3FIFO)
  - Time-To-Live (TTL) support to expire cache entries
  - Thread safety and sharding settings for concurrent environments
  - Adjustable maximum load factor for the cache
- Heavy testing and benchmarking for stability and performance under various workloads
## Getting Started

To use Zigache in your project, follow these steps:
1. Run this command in your project's root directory:

   ```sh
   zig fetch --save git+https://github.com/jaxron/zigache.git
   ```
2. In your `build.zig`, add:

   ```zig
   const std = @import("std");

   pub fn build(b: *std.Build) void {
       // Options
       const target = b.standardTargetOptions(.{});
       const optimize = b.standardOptimizeOption(.{});

       // Build
       const zigache = b.dependency("zigache", .{
           .target = target,
           .optimize = optimize,
       }).module("zigache");

       const exe = b.addExecutable(.{
           .name = "your-project",
           .root_source_file = b.path("src/main.zig"),
           .target = target,
           .optimize = optimize,
       });
       exe.root_module.addImport("zigache", zigache);
       b.installArtifact(exe);

       const run_cmd = b.addRunArtifact(exe);
       run_cmd.step.dependOn(b.getInstallStep());

       const run_step = b.step("run", "Run the program");
       run_step.dependOn(&run_cmd.step);
   }
   ```
3. Now you can import and use Zigache in your code like this:

   ```zig
   const std = @import("std");
   const Cache = @import("zigache").Cache;

   pub fn main() !void {
       var gpa: std.heap.GeneralPurposeAllocator(.{}) = .init;
       defer _ = gpa.deinit();
       const allocator = gpa.allocator();

       // Create a cache with string keys and values
       var cache: Cache([]const u8, []const u8, .{}) = try .init(allocator, .{
           .cache_size = 1,
           .policy = .SIEVE,
       });
       defer cache.deinit();

       // your code...
   }
   ```
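Once the cache is created, entries can be inserted and looked up. The method names in the sketch below (`set`, `get`, `remove`) are assumptions based on common cache APIs, not confirmed Zigache signatures; check the library's documentation for the exact calls:

```zig
// Hypothetical usage sketch; `set`, `get`, and `remove` are assumed names.
try cache.set("sheep", "ewe");        // insert or update an entry
if (cache.get("sheep")) |value| {     // look up an entry; null if absent
    std.debug.print("{s}\n", .{value});
}
_ = cache.remove("sheep");            // manually evict an entry
```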
## Examples

Explore the usage scenarios in our examples directory. To run an example:

```sh
zig build [example-id]
```

For instance:

```sh
zig build 01
```
## Configuration

Zigache offers flexible configuration options to adjust the cache to your needs:

```zig
var cache: Cache([]const u8, []const u8, .{
    .thread_safety = true,     // Enable thread safety for multi-threaded environments
    .ttl_enabled = true,       // Enable Time-To-Live (TTL) functionality
    .max_load_percentage = 60, // Maximum load factor for the cache (60% occupancy)
}) = try .init(allocator, .{
    .cache_size = 10000,       // Maximum number of items the cache can store
    .pool_size = 1000,         // Pre-allocated nodes to optimize performance
    .shard_count = 16,         // Number of shards for concurrent access handling
    .policy = .SIEVE,          // Eviction policy in use
});
```
For more detailed information, refer to the full documentation.
## Benchmarks

The benchmarks use a Zipfian distribution and were run on an Intel® Core™ i7-8700 CPU, at commit 7a12b1f of this library.

> [!NOTE]
> These results are not conclusive, as performance depends on workload and environment. These benchmarks compare eviction policies within this library, not against other languages or libraries. You can customize the benchmarks using various flags; for details, run `zig build -h`.

### Single Threaded (zipf 0.9, 10m keys)

```sh
zig build bench -Doptimize=ReleaseFast
```

or

```sh
zig build bench -Doptimize=ReleaseFast -Dreplay=true -Dshards=1 -Dthreads=1 -Dauto='20:50000' -Dzipf='0.9' -Dkeys=10000000 -Dduration=10000
```

### Single Threaded (zipf 0.7, 10m keys)

```sh
zig build bench -Doptimize=ReleaseFast -Dzipf='0.7'
```

or

```sh
zig build bench -Doptimize=ReleaseFast -Dreplay=true -Dshards=1 -Dthreads=1 -Dauto='20:50000' -Dzipf='0.7' -Dkeys=10000000 -Dduration=10000
```
## Roadmap

Zigache is in its early stages. Our main priority is implementing features, with performance improvements as a secondary goal. Here are some things we have planned for the future:
- 🧪 Improved benchmarking suite
- ⚙️ Runtime-configurable API
- 📦 Batch operations support
- 📊 Metrics and monitoring
- 🔄 Configuration to adjust eviction policies
- 🔓 Lock-free data structures
- 📚 More extensive examples
- ⚡️ Async (non-blocking) I/O operations
💡 We value your input! Have suggestions for our roadmap? Feel free to open an issue or start a discussion.
## License

This project is licensed under the MIT License. See the LICENSE.md file for details.
## FAQ

### Is Zigache production-ready?
Zigache is currently in early development. Although it has been tested and benchmarked, it may not yet be suitable for all production environments. If you decide to use it in a production setting, please report any problems you encounter.
### Which eviction policy should I choose?
It depends on your use case:
- SIEVE: Best for high throughput and high hit rate. (recommended)
- TinyLFU: Best for customizability and high hit rate.
- S3FIFO: Decent throughput with a decent hit rate.
- LRU: Reliable for standard needs but falls behind compared to other options.
- FIFO: High throughput, but lowest hit rates.
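Switching between these policies is a one-field change in the options passed to `init`. In this sketch, `.SIEVE` is taken from the examples above, while other variant names such as `.TinyLFU` are assumptions inferred from the policy list and may differ:

```zig
// `.SIEVE` is confirmed by the earlier examples; `.TinyLFU` is an assumed
// variant name for the W-TinyLFU policy.
var cache: Cache([]const u8, []const u8, .{}) = try .init(allocator, .{
    .cache_size = 10000,
    .policy = .TinyLFU,
});
defer cache.deinit();
```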
### Can I use Zigache in a multi-threaded environment?
Yes, Zigache supports thread-safe operations and sharding. Sharding reduces contention between threads, and there are plans to improve concurrent performance further in the future.
### What type of keys does Zigache support?
Zigache supports most key types like strings, integers, structs, arrays, pointers, enums, and optionals. However, floats are not supported due to precision issues.
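For example, an integer-keyed cache is declared the same way as the string-keyed caches above; only the key type parameter changes (a minimal sketch reusing the `Cache` type and `init` options from the earlier examples):

```zig
// A cache mapping u64 IDs to string values; same API as the string-keyed examples.
var id_cache: Cache(u64, []const u8, .{}) = try .init(allocator, .{
    .cache_size = 100,
    .policy = .SIEVE,
});
defer id_cache.deinit();
```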
### How can I contribute to Zigache?
We welcome contributions! Please follow the Zig Style Guide and ensure that your changes include appropriate tests.