Bump accelerate from 1.6.0 to 1.7.0 in /.github/actions/compile-models
Bumps accelerate from 1.6.0 to 1.7.0.
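The change itself is a one-line version bump. A minimal sketch of the expected diff, assuming the pin lives in a `requirements.txt` under `/.github/actions/compile-models` (the exact file name is an assumption):

```diff
-accelerate==1.6.0
+accelerate==1.7.0
```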
Release notes
Sourced from accelerate's releases.
v1.7.0: Regional compilation, Layerwise casting hook, FSDPv2 + QLoRA
Regional compilation
Instead of compiling the entire model at once, regional compilation targets repeated blocks (such as decoder layers) first. This allows the compiler to cache and reuse optimized code for subsequent blocks, significantly reducing the cold-start compilation time typically seen during the first inference. Thanks @IlyasMoutawwakil for the feature! You can view the full benchmark here, and check out our updated compilation guide for more details!
To enable this feature, set `use_regional_compilation=True` in the `TorchDynamoPlugin` configuration.

```python
from accelerate import Accelerator
from accelerate.utils import TorchDynamoPlugin

# Configure the compilation backend
dynamo_plugin = TorchDynamoPlugin(
    use_regional_compilation=True,
    # ... other parameters
)

# Initialize accelerator with the plugin
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)

# This will apply compile_regions to your model
model = accelerator.prepare(model)
```
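The notes say `prepare()` applies `compile_regions` to the model. A hedged sketch of calling the helper directly; the `accelerate.utils` import path and the return-value behavior are assumptions, not confirmed by the notes:

```python
# Sketch only: assumes compile_regions is exposed via accelerate.utils
# and returns a model whose repeated blocks are compiled individually.
from accelerate.utils import compile_regions

model = compile_regions(model)
```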
Layerwise casting hook
We've introduced a new hook that enables per-layer upcasting and downcasting (e.g., for Linear layers) during inference. This allows users to run models with separate storage and compute dtypes, resulting in memory savings. The concept was first implemented in diffusers, where downcasting models to FP8 proved effective without major quality degradation. Contributed by @sayakpaul in huggingface/accelerate#3427.

```python
import torch

# Import path assumed: the hook is expected to live in accelerate's hooks module
from accelerate.hooks import attach_layerwise_casting_hooks

model = ...  # your model

storage_dtype = torch.float8_e4m3fn
compute_dtype = torch.bfloat16

attach_layerwise_casting_hooks(
    model,
    storage_dtype=storage_dtype,
    compute_dtype=compute_dtype,
)
```
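As a quick sanity check (a sketch, not from the release notes; that weights sit in the storage dtype between forward passes is an assumption based on the description above):

```python
# Sketch: if the hook stores weights in FP8 and upcasts around each
# layer's forward, the parameter dtype should report the storage dtype.
print(next(model.parameters()).dtype)  # expected: torch.float8_e4m3fn
```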
Better FSDP2 support
This release includes numerous new features and bug fixes. Notably, we've added support for `FULL_STATE_DICT`, a widely used option in FSDP, which now enables `.save_pretrained()` in transformers to work with FSDP2-wrapped models (see the sketch after the change list below). QLoRA training is now supported as well, but more testing is needed. We have also resolved a backend issue related to parameter offloading to CPU. Additionally, a significant memory spike that occurred when `cpu_ram_efficient_loading=True` was enabled has been fixed. Several other minor improvements and fixes are also included; see the What's Changed section for full details.
- `FULL_STATE_DICT` has been enabled by @S1ro1 in huggingface/accelerate#3527
- QLoRA support by @winglian in huggingface/accelerate#3546
- set backend correctly for CUDA+FSDP2+cpu-offload in huggingface/accelerate#3574
- memory spike fixed when using `cpu_ram_efficient_loading=True` by @S1ro1 in huggingface/accelerate#3482
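A minimal sketch of saving an FSDP2-wrapped model with `FULL_STATE_DICT`; the plugin parameter names `fsdp_version` and `state_dict_type` are assumptions, not confirmed by the notes:

```python
from accelerate import Accelerator, FullyShardedDataParallelPlugin

# Assumptions: fsdp_version=2 selects FSDP2, and state_dict_type accepts
# the FULL_STATE_DICT option mentioned in the release notes.
fsdp_plugin = FullyShardedDataParallelPlugin(
    fsdp_version=2,
    state_dict_type="FULL_STATE_DICT",
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)

model = accelerator.prepare(model)
# ... training ...

# With FULL_STATE_DICT enabled, transformers' save_pretrained should work
model = accelerator.unwrap_model(model)
model.save_pretrained("checkpoint/")
```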
Better HPU support
We have added documentation for Intel Gaudi hardware! The support has been available since v1.5.0 through this PR.
... (truncated)
Commits
- 9cb1a6b Release: v1.7.0
- 97c93c4 enable test_dispatch_model_tied_weights_memory_with_nested_offload_cpu on xpu...
- cd37bbb set backend correctly for CUDA+FSDP2+cpu-offload (#3574)
- 7aa3b56 Fix prevent duplicate GPU usage in distributed processing (#3526)
- 14f4306 reenable FSDP2+qlora support (#3546)
- e6e7175 Add regional compilation to cli tools and env vars (#3572)
- 1f6efce tune env command output (#3570)
- 9fa97f9 simplify model.to logic (#3562)
- 764eee4 add xpu synchronize (#3563)
- 202e6c1 Update dynamic env handling to preserve None when USE_DYNAMIC is unset (#3567)
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)