Merge pull request #1150 from bghira/feature/clip-evaluation
add documentation updates
bghira authored Nov 13, 2024
2 parents c9f8025 + 7088595 commit 7aecc52
Showing 4 changed files with 11 additions and 10 deletions.
10 changes: 5 additions & 5 deletions OPTIONS.md
@@ -203,7 +203,7 @@ A lot of settings are instead set through the [dataloader config](/documentation
- **What**: Output image resolution, measured in pixels, formatted as `widthxheight` (e.g. `1024x1024`). Multiple resolutions can be defined, separated by commas.
- **Why**: All images generated during validation will be rendered at this resolution. Useful if the model is being trained at a different resolution.

-### `--validation_model_evaluator`
+### `--evaluation_type`

- **What**: Enable CLIP evaluation of generated images during validations.
- **Why**: CLIP scores measure how closely the features of a generated image match the features of the validation prompt. This can give an idea of whether prompt adherence is improving, though it requires a large number of validation prompts to have any meaningful value.
@@ -472,8 +472,8 @@ usage: train.py [-h] [--snr_gamma SNR_GAMMA] [--use_soft_min_snr]
[--model_card_note MODEL_CARD_NOTE]
[--model_card_safe_for_work] [--logging_dir LOGGING_DIR]
[--benchmark_base_model] [--disable_benchmark]
-                [--validation_model_evaluator {clip,none}]
-                [--pretrained_validation_model_name_or_path PRETRAINED_VALIDATION_MODEL_NAME_OR_PATH]
+                [--evaluation_type {clip,none}]
+                [--pretrained_evaluation_model_name_or_path PRETRAINED_EVALUATION_MODEL_NAME_OR_PATH]
[--validation_on_startup] [--validation_seed_source {gpu,cpu}]
[--validation_torch_compile]
[--validation_torch_compile_mode {max-autotune,reduce-overhead,default}]
@@ -1243,12 +1243,12 @@ options:
--disable_benchmark By default, the model will be benchmarked on the first
batch of the first epoch. This can be disabled with
this option.
-  --validation_model_evaluator {clip,none}
+  --evaluation_type {clip,none}
Validations must be enabled for model evaluation to
function. The default is to use no evaluator, and
'clip' will use a CLIP model to evaluate the resulting
model's performance during validations.
-  --pretrained_validation_model_name_or_path PRETRAINED_VALIDATION_MODEL_NAME_OR_PATH
+  --pretrained_evaluation_model_name_or_path PRETRAINED_EVALUATION_MODEL_NAME_OR_PATH
Optionally provide a custom model to use for ViT
evaluations. The default is currently clip-vit-large-
patch14-336, allowing for lower patch sizes (greater
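For context on what the renamed `clip` evaluator measures: a CLIP score is the cosine similarity between a CLIP model's embedding of the generated image and its embedding of the validation prompt. Below is a minimal sketch using the default `openai/clip-vit-large-patch14-336` checkpoint via the Hugging Face `transformers` API; it illustrates the metric only and is not the evaluator code from this commit. The image path and prompt are hypothetical.

```python
# Minimal sketch of a CLIP score: cosine similarity between image and text
# features. Illustrative only; not the project's evaluator implementation.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-large-patch14-336"  # default for --pretrained_evaluation_model_name_or_path
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("validation_sample.png")  # hypothetical validation output
prompt = "a photo of a corgi wearing sunglasses"  # hypothetical validation prompt

inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    image_features = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_features = model.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )

# Normalise each embedding, then take the dot product (cosine similarity).
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
clip_score = (image_features * text_features).sum(dim=-1).item()
print(f"CLIP score: {clip_score:.4f}")
```

As the help text above notes, a single score is noisy; the trend averaged over many validation prompts is what carries signal.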
1 change: 1 addition & 0 deletions documentation/quickstart/FLUX.md
@@ -413,6 +413,7 @@ Currently, the lowest VRAM utilisation (9090M) can be attained with:
- Batch size: 1, zero gradient accumulation steps
- DeepSpeed: disabled / unconfigured
- PyTorch: 2.6 Nightly (Sept 29th build)
+- Using `--quantize_via=cpu` to avoid an out-of-memory error during startup on <=16G cards.

Speed was approximately 1.4 iterations per second on a 4090.

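The idea behind the new `--quantize_via=cpu` bullet is that quantising while the weights are still in system RAM means the full-precision model never has to fit in VRAM. A rough sketch of that idea, assuming `diffusers` and `optimum-quanto`; this is not SimpleTuner's internal implementation.

```python
# Sketch: quantise on CPU first, then move to the GPU, so startup never
# materialises full-precision weights in VRAM. Illustrative only.
import torch
from diffusers import FluxTransformer2DModel
from optimum.quanto import freeze, qint8, quantize

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
)  # loads into CPU / system RAM by default
quantize(transformer, weights=qint8)  # quantise while still on CPU
freeze(transformer)                   # replace weights with their int8 form
transformer.to("cuda")                # only the quantised weights reach VRAM
```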
4 changes: 2 additions & 2 deletions helpers/configuration/cmd_args.py
@@ -1331,7 +1331,7 @@ def get_argument_parser():
        ),
    )
    parser.add_argument(
-        "--validation_model_evaluator",
+        "--evaluation_type",
        type=str,
        default=None,
        choices=["clip", "none"],
@@ -1341,7 +1341,7 @@
        )
    )
    parser.add_argument(
-        "--pretrained_validation_model_name_or_path",
+        "--pretrained_evaluation_model_name_or_path",
        type=str,
        default="openai/clip-vit-large-patch14-336",
        help=(
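Stripped of surrounding context, the two renamed arguments parse as in the sketch below; the help strings are paraphrased, not the ones from `cmd_args.py`.

```python
# Self-contained sketch mirroring the renamed arguments in this diff.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--evaluation_type",
    type=str,
    default=None,
    choices=["clip", "none"],
    help="Which evaluator to run during validations; unset or 'none' disables it.",
)
parser.add_argument(
    "--pretrained_evaluation_model_name_or_path",
    type=str,
    default="openai/clip-vit-large-patch14-336",
    help="ViT model used for CLIP evaluations.",
)

args = parser.parse_args(["--evaluation_type", "clip"])
assert args.evaluation_type == "clip"
assert args.pretrained_evaluation_model_name_or_path == "openai/clip-vit-large-patch14-336"
```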
6 changes: 3 additions & 3 deletions helpers/training/evaluation.py
@@ -25,9 +25,9 @@ def from_config(args):
"""Instantiate a ModelEvaluator from the training config, if set to do so."""
if not StateTracker.get_accelerator().is_main_process:
return None
if args.validation_model_evaluator is not None and args.validation_model_evaluator.lower() != "" and args.validation_model_evaluator.lower() != "none":
model_evaluator = model_evaluator_map[args.validation_model_evaluator]
return globals()[model_evaluator](args.pretrained_validation_model_name_or_path)
if args.evaluation_type is not None and args.evaluation_type.lower() != "" and args.evaluation_type.lower() != "none":
model_evaluator = model_evaluator_map[args.evaluation_type]
return globals()[model_evaluator](args.pretrained_evaluation_model_name_or_path)

return None

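A plausible call site for the factory above; only `from_config` itself appears in this diff, so the `ModelEvaluator` class name and the surrounding wiring are assumptions.

```python
# Hypothetical usage; ModelEvaluator as the enclosing class is an assumption.
from types import SimpleNamespace

from helpers.training.evaluation import ModelEvaluator  # assumed export

args = SimpleNamespace(
    evaluation_type="clip",
    pretrained_evaluation_model_name_or_path="openai/clip-vit-large-patch14-336",
)
evaluator = ModelEvaluator.from_config(args)
if evaluator is not None:
    ...  # e.g. score each validation image against its prompt and log the average
```

Note that the guard returns `None` both off the main accelerator process and when `--evaluation_type` is unset or `none`, so callers can treat `None` uniformly as "evaluation disabled".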
