Release v0.10.2
What's Changed
- Fix (QuantLayer): make bias for QuantLayer optional by @fabianandresgrob in #846 (see the sketch after this list)
- Fix (examples/llm): set `group_size` only for groupwise quantization by @nickfraser in #853
- Fix (gpfq): updating input processing and L1-norm constraints for GPFA2Q by @i-colbert in #852
- ImageNet PTQ example fix by @Giuseppe5 in #863
- Feat (gen/quantize): added device flag to `quantize_model` by @nickfraser in #860
- Docs: update README for 0.10.2 release by @Giuseppe5 in #865
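With the #846 fix, a quantized layer can be constructed without a bias term. A minimal sketch, assuming Brevitas is installed and using `QuantLinear` (which mirrors `torch.nn.Linear`) as the example layer:

```python
import torch
from brevitas.nn import QuantLinear

# Bias-free quantized layer; QuantLinear accepts the same bias flag as nn.Linear.
layer = QuantLinear(in_features=16, out_features=8, bias=False)

# Forward pass runs without a bias parameter being present.
out = layer(torch.randn(4, 16))
print(out.shape)  # torch.Size([4, 8])
```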
Full Changelog: v0.10.1...v0.10.2