genlz77 dictionary should have a runtime sizing option
Don't statically allocate RAM for the tables used per call, or at least provide a configuration option to allocate them on the heap and build them dynamically with init/close bookends. This is essential for using the library on IoT devices.
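Something along these lines, as a minimal sketch of what I mean. The names (`lz77_state`, `lz77_init`, `lz77_close`) are just illustrative, not uzlib's actual API:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical compressor state: the hash table lives on the heap
 * instead of in a static array, and is sized at runtime. */
struct lz77_state {
    uint16_t *hash_table;   /* one entry per hash bucket */
    unsigned  hash_size;    /* number of buckets, chosen at init time */
};

/* Allocate the per-call tables; returns 0 on success, -1 on failure. */
int lz77_init(struct lz77_state *st, unsigned hash_size)
{
    st->hash_size  = hash_size;
    st->hash_table = calloc(hash_size, sizeof *st->hash_table);
    return st->hash_table ? 0 : -1;
}

/* Release the tables when the caller is done. */
void lz77_close(struct lz77_state *st)
{
    free(st->hash_table);
    st->hash_table = NULL;
    st->hash_size  = 0;
}
```

That way a small IoT target can pass a tiny `hash_size` (or keep the RAM free between calls), while desktop builds can go large.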
Consider the option of using an N-way hash (e.g. N buckets × 4 ways), as this improves compression ratio for a small runtime penalty: 1024×4 performs better than 4096×1.
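To illustrate the idea (hash function, table sizes and names here are all made up for the sketch, not taken from genlz77): each bucket keeps the last N positions that hashed to it, and the matcher tries all N candidates and keeps the longest match, instead of only the single most recent one.

```c
#include <stdint.h>
#include <string.h>

#define HASH_BITS 10                  /* 1024 buckets */
#define HASH_SIZE (1u << HASH_BITS)
#define HASH_WAYS 4                   /* candidate positions kept per bucket */

/* N-way table: each bucket holds the last HASH_WAYS positions whose
 * 3-byte prefix hashed to it; entries are stored +1 so 0 means "empty". */
static uint32_t hash_table[HASH_SIZE][HASH_WAYS];

/* Toy 3-byte hash, for illustration only. */
static unsigned hash3(const uint8_t *p)
{
    return (p[0] * 33u * 33u + p[1] * 33u + p[2]) & (HASH_SIZE - 1);
}

/* Record position pos in its bucket, evicting the oldest entry. */
static void hash_insert(const uint8_t *data, uint32_t pos)
{
    uint32_t *bucket = hash_table[hash3(data + pos)];
    memmove(bucket + 1, bucket, (HASH_WAYS - 1) * sizeof *bucket);
    bucket[0] = pos + 1;
}

/* Return the length of the longest match at pos among the bucket's
 * candidates, writing the winning position to *match_pos. */
static unsigned best_match(const uint8_t *data, uint32_t pos, uint32_t end,
                           uint32_t *match_pos)
{
    uint32_t *bucket = hash_table[hash3(data + pos)];
    unsigned best = 0;
    for (int w = 0; w < HASH_WAYS; w++) {
        if (!bucket[w])
            continue;
        uint32_t cand = bucket[w] - 1;
        unsigned len = 0;
        while (pos + len < end && data[cand + len] == data[pos + len])
            len++;
        if (len > best) {
            best = len;
            *match_pos = cand;
        }
    }
    return best;
}
```

The cost is HASH_WAYS match attempts per position instead of one, which is where the small runtime penalty comes from; the win is that an older-but-longer match is no longer evicted by a newer-but-shorter one.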
PS: Here are some initial results on one of my test images, as % size reduction compared to the 2048/1 case.
The (real) time varied from 4 ms in the 128/1 case up to 5 ms for 2048/8. By way of comparison, my M×N brute-force algorithm achieved a -17.60% size reduction at a (real) runtime of 125 ms. Not sure whether you think that's worth it.
@pfalcon, one specific suggestion: the line where you realloc the output buffer every 64 bytes dominates your runtime. I use a simple iterative allocator: I initially use a guessed compression factor (6×) to preallocate the output buffer based on the input file size. If and when this is exhausted, I use the actual compression factor so far to extrapolate the final size and do the next realloc. OK, this might allocate a few extra KB, but a few reallocs are a lot better than 4K reallocs on a 256 KB file.
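Roughly like this (a sketch of my scheme, not code from the library; the `outbuf_*` names and the 6× guess are my own):

```c
#include <stdint.h>
#include <stdlib.h>

/* Growable output buffer: start from a guessed compression factor and,
 * when space runs out, extrapolate the final size from the ratio seen
 * so far, instead of growing by a fixed 64 bytes each time. */
struct outbuf {
    uint8_t *buf;
    size_t   cap;       /* bytes allocated */
    size_t   len;       /* bytes written so far */
    size_t   in_size;   /* total input size, known up front */
};

int outbuf_init(struct outbuf *ob, size_t in_size)
{
    ob->in_size = in_size;
    ob->len = 0;
    ob->cap = in_size / 6 + 64;     /* guess ~6x compression */
    ob->buf = malloc(ob->cap);
    return ob->buf ? 0 : -1;
}

/* Ensure room for `need` more bytes; `in_done` is how much input has
 * been consumed so far, used to project the final output size. */
int outbuf_reserve(struct outbuf *ob, size_t need, size_t in_done)
{
    if (ob->len + need <= ob->cap)
        return 0;
    /* ratio so far = len/in_done; project over the whole input,
     * then add a ~12% safety margin so we rarely realloc again */
    size_t projected = in_done
        ? (size_t)((double)ob->len / (double)in_done * (double)ob->in_size)
        : 0;
    size_t new_cap = projected + projected / 8 + need + 64;
    if (new_cap < ob->len + need)
        new_cap = ob->len + need;
    uint8_t *p = realloc(ob->buf, new_cap);
    if (!p)
        return -1;
    ob->buf = p;
    ob->cap = new_cap;
    return 0;
}
```

With this, a well-behaved file hits one malloc and at most one or two reallocs, instead of one realloc per 64 output bytes.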