ConcurrentLru Quickstart
ConcurrentLru is a thread-safe, bounded-size pseudo-LRU.
ConcurrentLru is intended as a drop-in replacement for ConcurrentDictionary, with the added benefit of a bounded size enforced by an LRU eviction policy.
The code samples below illustrate how to create an LRU, then get, update, and remove items:
```csharp
int capacity = 128;
var lru = new ConcurrentLru<int, SomeItem>(capacity);

// fetch a value if it exists, without adding
bool success1 = lru.TryGet(1, out var value);

// fetch a value, creating it with the factory delegate if not present
var value1 = lru.GetOrAdd(1, (k) => new SomeItem(k));
var value2 = lru.GetOrAdd(1, (k, arg) => new SomeItem(k, arg), new Arg());

// async variants of GetOrAdd
var value3 = await lru.GetOrAddAsync(0, (k) => Task.FromResult(new SomeItem(k)));
var value4 = await lru.GetOrAddAsync(0, (k, arg) => Task.FromResult(new SomeItem(k, arg)), new Arg());

bool success2 = lru.TryRemove(1); // remove item with key == 1
lru.Clear();
lru.Eviction.Policy.Value.Trim(1); // remove the coldest item

// update an existing value
var item = new SomeItem(1);
bool success3 = lru.TryUpdate(1, item);
lru.AddOrUpdate(1, item);

// read cache metrics
Console.WriteLine(lru.Metrics.Value.HitRatio);

// enumerate keys
foreach (var k in lru.Keys)
{
    Console.WriteLine(k);
}

// enumerate key value pairs
foreach (var kvp in lru)
{
    Console.WriteLine($"{kvp.Key} {kvp.Value}");
}

// register an event handler invoked when an item is removed
lru.Events.Value.ItemRemoved += (source, args) => Console.WriteLine($"{args.Reason} {args.Key} {args.Value}");
```
Below is an example combining many of the builder options (note that the time-based expiry options are mutually exclusive, so only one is shown):
```csharp
var lru = new ConcurrentLruBuilder<string, Disposable>()
    .AsAsyncCache()
    .AsScopedCache()
    .WithAtomicGetOrAdd()
    .WithCapacity(3)
    .WithMetrics()
    .WithExpireAfterWrite(TimeSpan.FromSeconds(1))
    .WithKeyComparer(StringComparer.OrdinalIgnoreCase)
    .WithConcurrencyLevel(8)
    .Build();
```
| Builder Method | Description |
|---|---|
| `AsAsyncCache` | Build an `IAsyncCache`; the `GetOrAdd` method becomes `GetOrAddAsync`. |
| `AsScopedCache` | Build an `IScopedCache`. `IDisposable` values are wrapped in a lifetime scope. Scoped caches return lifetimes that prevent values from being disposed until the calling code completes (see the first sketch after this table). |
| `WithAtomicGetOrAdd` | Execute the cache's `GetOrAdd` method atomically, such that the value factory is invoked at most once per key. Other threads attempting to update the same key are blocked until the value factory completes. Incurs a small performance penalty. |
| `WithCapacity` | Sets the maximum number of values to keep in the cache. If more items than this are added, the cache eviction policy determines which values to remove. If omitted, the default capacity is 128. |
| `WithMetrics` | Collect cache metrics, such as hit ratio. Metrics incur a small performance penalty. |
| `WithExpireAfterAccess` | Evict after a fixed duration since an entry's most recent read or write. |
| `WithExpireAfterWrite` | Evict after a fixed duration since an entry's creation or most recent replacement. The underlying cache used is `ConcurrentTLru`. |
| `WithExpireAfter` | Evict after a duration calculated for each item using the specified `IExpiryCalculator`. Expiry time is fully configurable, and may be set independently at creation, after a read, and after a write (see the second sketch after this table). |
| `WithKeyComparer` | Use the specified equality comparison implementation to compare keys. If omitted, the default comparer is `EqualityComparer<K>.Default`. |
| `WithConcurrencyLevel` | Sets the estimated number of threads that will update the cache concurrently. If omitted, the default concurrency level is `Environment.ProcessorCount`. |
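As a concrete example of `AsScopedCache`, the sketch below builds a scoped cache of `IDisposable` values. It assumes a `Disposable` class implementing `IDisposable` (not shown here); the value factory wraps values in `Scoped<T>`, and `ScopedGetOrAdd` returns a lifetime that guarantees the value is not disposed while the lifetime is held:

```csharp
var scopedLru = new ConcurrentLruBuilder<int, Disposable>()
    .AsScopedCache()
    .WithCapacity(128)
    .Build();

// the value factory wraps the IDisposable value in Scoped<T> so the cache can track references
using (var lifetime = scopedLru.ScopedGetOrAdd(1, k => new Scoped<Disposable>(new Disposable())))
{
    // while the lifetime is held, cache eviction cannot dispose the value
    Console.WriteLine(lifetime.Value);
}
```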
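`WithExpireAfter` computes a time to live per item. As a minimal sketch, assuming `IExpiryCalculator<K, V>` exposes `GetExpireAfterCreate`, `GetExpireAfterRead`, and `GetExpireAfterUpdate` methods, the calculator below renews the time to live on write but leaves it unchanged on read (the `WriteRenewingExpiry` class is illustrative, not part of the library):

```csharp
var expiringLru = new ConcurrentLruBuilder<int, SomeItem>()
    .WithExpireAfter(new WriteRenewingExpiry())
    .WithCapacity(128)
    .Build();

// illustrative calculator: fixed 5 minute TTL, reset on update, unchanged by reads
public class WriteRenewingExpiry : IExpiryCalculator<int, SomeItem>
{
    public TimeSpan GetExpireAfterCreate(int key, SomeItem value)
        => TimeSpan.FromMinutes(5);

    public TimeSpan GetExpireAfterRead(int key, SomeItem value, TimeSpan currentTtl)
        => currentTtl;

    public TimeSpan GetExpireAfterUpdate(int key, SomeItem value, TimeSpan currentTtl)
        => TimeSpan.FromMinutes(5);
}
```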
The table below summarizes which cache options are compatible with each eviction policy. Time-based expiry policies may not be combined. If an option is not listed, it is supported in all variants of ConcurrentLru.
| | Bounded Size (Default) | ExpireAfterWrite | ExpireAfterAccess | ExpireAfter |
|---|---|---|---|---|
| Default | Supported | Supported | Supported | Supported |
| `WithAtomicGetOrAdd` | Supported | Supported | Supported | Not Supported |
| `AsAsyncCache` | Supported | Supported | Supported | Supported |
| `AsScopedCache` | Supported | Supported | Supported | Not Supported |