
v0.9.8.3 - essential fixes and improvements

Released by @bghira on 18 Aug 22:36 · da4bc81

What's Changed

(sample image: that woman you've probably seen so many times!)

General

  • Non-BF16-capable optimisers have been removed in favour of a series of new Optimi options
  • new crop_aspect option "closest", which treats crop_aspect_buckets as a list of candidate aspect ratios (see the sketch after this list)
  • fewer images are discarded; a minimum image size is no longer set for you by default
  • better behaviour with mixed datasets, sampling large and small sets more equally
    • caveat: Dreambooth training now probably wants --data_backend_sampling=uniform instead of the auto-weighting default
  • multi-caption fixes; previously only the first caption was ever used (whoops)
  • TF32 is now enabled by default for users who configure via configure.py
  • new --custom_transformer_model_name_or_path argument to load a flat repository or local directory containing just the transformer model
  • InternVL captioning script contributed by @frankchieng
  • ability to change the constant learning rate when resuming
  • fixed SDXL ControlNet training, allowing it to work with quanto
  • DeepSpeed fixes (caveat: validations remain broken)
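
A minimal sketch of the new cropping and sampling behaviour, as referenced in the list above. The dataloader file name and the id/type/instance_data_dir/crop keys are illustrative assumptions, not a verified config; crop_aspect, crop_aspect_buckets, --data_backend_sampling, and --custom_transformer_model_name_or_path are the options named in this release:

```bash
# Sketch only: file name and surrounding keys are assumptions.
# crop_aspect "closest" picks the nearest entry from crop_aspect_buckets
# instead of cropping every image to a single fixed aspect.
cat > multidatabackend.json <<'EOF'
[
  {
    "id": "photos",
    "type": "local",
    "instance_data_dir": "/data/photos",
    "crop": true,
    "crop_aspect": "closest",
    "crop_aspect_buckets": [0.75, 1.0, 1.5]
  }
]
EOF

# Dreambooth-style runs now likely want uniform sampling rather than the
# auto-weighting default; the entrypoint name and model path are assumed.
python train.py \
  --data_backend_sampling=uniform \
  --custom_transformer_model_name_or_path=/models/transformer_only
```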

Flux

  • New LoRA targets ai-toolkit and context-ffs, with context-ffs behaving more like text-encoder training
  • new LoRA training resumption support via --init_lora (see the sketch after this list)
  • LyCORIS support
  • novel attention masking implementation via --flux_attention_masked_training, thanks to @AmericanPresidentJimmyCarter (#806)
  • Schnell's --flux_fast_schedule fixed (still not great)
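
A minimal sketch of the new Flux switches combined on one hypothetical run; the entrypoint name and the LoRA path are assumptions, and the flags are only paired here for illustration, but they are the ones named above:

```bash
# Sketch only: entrypoint name and LoRA path are assumptions.
# --flux_attention_masked_training enables the new attention-masking (#806),
# --init_lora resumes training from an existing LoRA file,
# --flux_fast_schedule is the (fixed, still imperfect) Schnell schedule.
python train.py \
  --flux_attention_masked_training \
  --init_lora=/checkpoints/previous_lora.safetensors \
  --flux_fast_schedule
```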

Full Changelog: v0.9.8.2...v0.9.8.3