Merge pull request #816 from bghira/main
lycoris updates
bghira authored Aug 19, 2024
2 parents 2dc5a1f + d68eb61 commit 3172256
Showing 2 changed files with 9 additions and 7 deletions.
documentation/LYCORIS.md (8 changes: 5 additions & 3 deletions)
````diff
@@ -2,7 +2,7 @@

## Background

-[LyCORIS](https://github.com/KohakuBlueleaf/LyCORIS) is a wrapper for models that allows various methods of low-rank (LoRA) training, which allows you to finetune models while using less VRAM and produces smaller distributable weights.
+[LyCORIS](https://github.com/KohakuBlueleaf/LyCORIS) is an extensive suite of parameter-efficient fine-tuning (PEFT) methods that allow you to finetune models while using less VRAM and produce smaller distributable weights.

## Using LyCORIS

@@ -68,9 +68,8 @@ transformer = FluxTransformer2DModel.from_pretrained(bfl_repo, subfolder="transf

lycoris_safetensors_path = 'pytorch_lora_weights.safetensors'
wrapper, _ = create_lycoris_from_weights(1.0, lycoris_safetensors_path, transformer)
-wrapper.apply_to()
+wrapper.merge_to() # using apply_to() will be slower.

-wrapper.to(device, dtype=dtype)
transformer.to(device, dtype=dtype)

pipe = FluxPipeline(
@@ -95,4 +94,7 @@ with torch.inference_mode():
guidance_scale=3.5,
).images[0]
image.save('image.png')
+
+# optionally, save a merged pipeline with the LyCORIS weights baked in:
+pipe.save_pretrained('/path/to/output/pipeline')
```
````
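The documentation change swaps `wrapper.apply_to()` for `wrapper.merge_to()`: `apply_to()` keeps the LyCORIS modules hooked alongside the base weights and adds overhead on every forward pass, while `merge_to()` folds the low-rank deltas into the base weights once. Because `pipe.save_pretrained()` then writes the already-merged weights, the saved pipeline can be reloaded without any LyCORIS code at all. A minimal sketch of reloading, assuming the output path from the example above (the prompt, dtype, and output filename are illustrative):

```python
import torch
from diffusers import FluxPipeline

# The LyCORIS weights were merged before saving, so this is a plain
# diffusers load; no LyCORIS import or wrapper is needed here.
pipe = FluxPipeline.from_pretrained(
    '/path/to/output/pipeline',  # path passed to save_pretrained() above
    torch_dtype=torch.bfloat16,
).to('cuda')

image = pipe(
    'a photo of a cat',  # hypothetical prompt
    guidance_scale=3.5,
).images[0]
image.save('merged-pipeline-test.png')
```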
helpers/arguments.py (8 changes: 4 additions & 4 deletions)
```diff
@@ -173,7 +173,7 @@ def parse_args(input_args=None):
default=4.0,
)
parser.add_argument(
-'--flux_attention_masked_training',
+"--flux_attention_masked_training",
action="store_true",
default=False,
help="Use attention masking while training flux.",
@@ -235,9 +235,9 @@
)
parser.add_argument(
"--lora_type",
-type=str,
-choices=["Standard", "lycoris"],
-default="Standard",
+type=str.lower,
+choices=["standard", "lycoris"],
+default="standard",
help=(
"When training using --model_type=lora, you may specify a different type of LoRA to train here."
" Standard refers to training a vanilla LoRA via PEFT, lycoris refers to training with KohakuBlueleaf's library of the same name."
```
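The `--lora_type` change works because argparse applies the `type` callable to the raw argument string before validating it against `choices`, so `type=str.lower` makes the flag case-insensitive while keeping the canonical lowercase values. A standalone sketch of that behavior (only the flag name, choices, and default are taken from the diff):

```python
import argparse

parser = argparse.ArgumentParser()
# argparse runs the `type` callable first and only then checks `choices`,
# so "Standard", "LYCORIS", etc. all normalize before validation.
parser.add_argument(
    "--lora_type",
    type=str.lower,
    choices=["standard", "lycoris"],
    default="standard",
)

args = parser.parse_args(["--lora_type", "Standard"])
assert args.lora_type == "standard"
```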
