Accelerate 1.0.0 is here!
🚀 Accelerate 1.0 🚀
With `accelerate` 1.0, we are officially stating that the core parts of the API are now "stable" and ready for the future of what the world of distributed training and PyTorch has to handle. In these release notes, we will focus first on the major breaking changes so you can fix your code, followed by what is new specifically between 0.34.0 and 1.0.
To read more, check out our official blog here
Migration assistance
- Passing in `dispatch_batches`, `split_batches`, `even_batches`, and `use_seedable_sampler` to the `Accelerator()` should now be handled by creating an `accelerate.utils.DataLoaderConfiguration()` and passing this to the `Accelerator()` instead (`Accelerator(dataloader_config=DataLoaderConfiguration(...))`); see the sketch after this list
- `Accelerator().use_fp16` and `AcceleratorState().use_fp16` have been removed; this should be replaced by checking `accelerator.mixed_precision == "fp16"`
- `Accelerator().autocast()` no longer accepts a `cache_enabled` argument. Instead, an `AutocastKwargs()` instance should be used, which handles this flag (among others) and is passed to the `Accelerator` (`Accelerator(kwargs_handlers=[AutocastKwargs(cache_enabled=True)])`)
- `accelerate.utils.is_tpu_available` should be replaced with `accelerate.utils.is_torch_xla_available`
- `accelerate.utils.modeling.shard_checkpoint` should be replaced with `split_torch_state_dict_into_shards` from the `huggingface_hub` library
- `accelerate.tqdm.tqdm()` no longer accepts `True`/`False` as the first argument; instead, `main_process_only` should be passed in as a named argument
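To illustrate the first three items, here is a minimal before/after sketch (assuming defaults for any arguments not shown):

```python
from accelerate import Accelerator
from accelerate.utils import AutocastKwargs, DataLoaderConfiguration

# v0.x: Accelerator(dispatch_batches=True, split_batches=False, ...)
# v1.0: the dataloader flags live on a DataLoaderConfiguration
dataloader_config = DataLoaderConfiguration(
    dispatch_batches=True,
    split_batches=False,
    even_batches=True,
    use_seedable_sampler=True,
)

# v0.x: accelerator.autocast(cache_enabled=True)
# v1.0: pass an AutocastKwargs handler to the Accelerator instead
accelerator = Accelerator(
    dataloader_config=dataloader_config,
    kwargs_handlers=[AutocastKwargs(cache_enabled=True)],
)

# v0.x: accelerator.use_fp16
# v1.0: check the mixed_precision attribute directly
training_in_fp16 = accelerator.mixed_precision == "fp16"
```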
Multiple Model DeepSpeed Support
After many requests, we finally have multiple model DeepSpeed support in Accelerate (though it is still quite early). Read the full tutorial here; essentially:
When using multiple models, a DeepSpeed plugin should be created for each model (and, as a result, a separate config). A few examples are below:
Knowledge distillation
(Where we train only one model, under ZeRO-2, and use the other, under ZeRO-3, for inference)
```python
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

zero2_plugin = DeepSpeedPlugin(hf_ds_config="zero2_config.json")
zero3_plugin = DeepSpeedPlugin(hf_ds_config="zero3_config.json")

deepspeed_plugins = {"student": zero2_plugin, "teacher": zero3_plugin}
accelerator = Accelerator(deepspeed_plugins=deepspeed_plugins)
```
To then select which plugin should be used at a given time (i.e., when calling `prepare`), call `accelerator.state.select_deepspeed_plugin("name")`. The first plugin is active by default:
```python
accelerator.state.select_deepspeed_plugin("student")
student_model, optimizer, scheduler = ...
student_model, optimizer, scheduler, train_dataloader = accelerator.prepare(student_model, optimizer, scheduler, train_dataloader)

accelerator.state.select_deepspeed_plugin("teacher")  # This will automatically enable zero init
teacher_model = AutoModel.from_pretrained(...)
teacher_model = accelerator.prepare(teacher_model)
```
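To round out the distillation example, here is a hedged sketch of a training step using both prepared models; the KL-divergence loss and the `.logits` attribute are illustrative assumptions, not part of these release notes:

```python
import torch
import torch.nn.functional as F

for batch in train_dataloader:
    # The teacher (ZeRO-3) is used for inference only
    with torch.no_grad():
        teacher_logits = teacher_model(**batch).logits
    # The student (ZeRO-2) is the model being trained
    student_logits = student_model(**batch).logits
    # Match the student's output distribution to the teacher's
    loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    accelerator.backward(loss)
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```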
Multiple disjoint models
For disjoint models, a separate accelerator should be used for each model, and each accelerator's own `.backward()` should be called separately:
```python
for batch in dl:
    outputs1 = first_model(**batch)
    first_accelerator.backward(outputs1.loss)
    first_optimizer.step()
    first_scheduler.step()
    first_optimizer.zero_grad()

    outputs2 = second_model(**batch)
    second_accelerator.backward(outputs2.loss)
    second_optimizer.step()
    second_scheduler.step()
    second_optimizer.zero_grad()
```
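The loop above assumes each model was prepared by its own `Accelerator`. A hedged sketch of one possible setup, with placeholder config filenames and pre-existing model/optimizer/scheduler objects (see the linked tutorial for the exact pattern):

```python
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

# One Accelerator, each with its own DeepSpeed plugin/config, per disjoint model
first_accelerator = Accelerator(deepspeed_plugin=DeepSpeedPlugin(hf_ds_config="ds_config1.json"))
second_accelerator = Accelerator(deepspeed_plugin=DeepSpeedPlugin(hf_ds_config="ds_config2.json"))

first_model, first_optimizer, first_scheduler, dl = first_accelerator.prepare(
    first_model, first_optimizer, first_scheduler, dl
)
second_model, second_optimizer, second_scheduler = second_accelerator.prepare(
    second_model, second_optimizer, second_scheduler
)
```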
FP8
We've enabled MS-AMP support up to FSDP. At this time we are not going forward with implementing FSDP support with MS-AMP, due to design issues between the two libraries that prevent them from interoperating easily.
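For reference, a minimal sketch of opting in to FP8 training via the MS-AMP backend; treat the `opt_level` here as illustrative:

```python
from accelerate import Accelerator
from accelerate.utils import FP8RecipeKwargs

# Request FP8 mixed precision backed by MS-AMP
kwargs = [FP8RecipeKwargs(backend="msamp", opt_level="O2")]
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=kwargs)
```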
FSDP
- Fixed FSDP auto_wrap using characters instead of full str for layers
- Re-enable setting state dict type manually
Big Modeling
- Removed cpu restriction for bnb training
What's Changed
- Fix FSDP auto_wrap using characters instead of full str for layers by @muellerzr in #3075
- Allow `DataLoaderAdapter` subclasses to be pickled by implementing `__reduce__` by @byi8220 in #3074
- Fix three typos in src/accelerate/data_loader.py by @xiabingquan in #3082
- Re-enable setting state dict type by @muellerzr in #3084
- Support sequential cpu offloading with torchao quantized tensors by @a-r-r-o-w in #3085
- fix bug in `_get_named_modules` by @faaany in #3052
- use the correct available memory API for XPU by @faaany in #3076
- fix `skip_keys` usage in forward hooks by @152334H in #3088
- Update README.md to include distributed image generation gist by @sayakpaul in #3077
- MAINT: Upgrade ruff to v0.6.4 by @BenjaminBossan in #3095
- Revert "Enable Unwrapping for Model State Dicts (FSDP)" by @SunMarc in #3096
- MS-AMP support (w/o FSDP) by @muellerzr in #3093
- [docs] DataLoaderConfiguration docstring by @stevhliu in #3103
- MAINT: Permission for GH token in stale.yml by @BenjaminBossan in #3102
- [docs] Doc sprint by @stevhliu in #3099
- Update image ref for docs by @muellerzr in #3105
- No more t5 by @muellerzr in #3107
- [docs] More docstrings by @stevhliu in #3108
- 🚨🚨🚨 The Great Deprecation 🚨🚨🚨 by @muellerzr in #3098
- POC: multiple model/configuration DeepSpeed support by @muellerzr in #3097
- Fixup test_sync w/ deprecated stuff by @muellerzr in #3109
- Switch to XLA instead of TPU by @SunMarc in #3118
- [tests] skip pippy tests for XPU by @faaany in #3119
- Fixup multiple model DS tests by @muellerzr in #3131
- remove cpu restriction for bnb training by @jiqing-feng in #3062
- fix deprecated `torch.cuda.amp.GradScaler` FutureWarning for pytorch 2.4+ by @Mon-ius in #3132
- 🐛 [HotFix] Handle Profiler Activities Based on PyTorch Version by @yhna940 in #3136
- only move model to device when model is in cpu and target device is xpu by @faaany in #3133
- fix tip brackets typo by @davanstrien in #3129
- typo of "scalar" instead of "scaler" by @tonyzhaozh in #3116
- MNT Permission for PRs for GH token in stale.yml by @BenjaminBossan in #3112
New Contributors
- @xiabingquan made their first contribution in #3082
- @a-r-r-o-w made their first contribution in #3085
- @152334H made their first contribution in #3088
- @sayakpaul made their first contribution in #3077
- @Mon-ius made their first contribution in #3132
- @davanstrien made their first contribution in #3129
- @tonyzhaozh made their first contribution in #3116
Full Changelog: v0.34.2...v1.0.0