Set `is_sequential_cpu_offload = True` only when a component is on CPU and has an `AlignDevicesHook` simultaneously
What does this PR do?
This PR is about when `is_sequential_cpu_offload` should be set to `True`.

Before the `device_map` feature for pipelines existed, we set `is_sequential_cpu_offload = True` whenever any component in the pipeline had an `AlignDevicesHook` (the hook that moves input data to the model's device). But when `device_map` is used, an `AlignDevicesHook` is also added to the model.

Besides that, if someone adds an `AlignDevicesHook` to a model manually, `is_sequential_cpu_offload` will also be set to `True`.

That triggers a bug in the `load_lora_weights()` method.

So maybe we should set `is_sequential_cpu_offload = True` only when a component is on CPU *and* has an `AlignDevicesHook` at the same time.
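A minimal sketch of the proposed check (the helper name is hypothetical; `accelerate` does attach its hook to the module as `_hf_hook`, and the class name is matched here by string so the sketch does not require `accelerate` to be installed):

```python
import torch


def is_sequentially_offloaded(component: torch.nn.Module) -> bool:
    """Sketch: treat a component as sequentially CPU-offloaded only when BOTH
    conditions hold, because device_map loading and manual hook attachment
    also install an AlignDevicesHook."""
    # (1) an AlignDevicesHook is attached (accelerate stores it as `_hf_hook`)
    hook = getattr(component, "_hf_hook", None)
    has_align_hook = type(hook).__name__ == "AlignDevicesHook"
    # (2) the component's weights currently live on CPU
    first_param = next(component.parameters(), None)
    on_cpu = first_param is not None and first_param.device.type == "cpu"
    return on_cpu and has_align_hook
```

With this check, a model placed on GPU via `device_map` (or with a manually attached hook) no longer gets misclassified as sequentially offloaded.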
Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the contributor guideline?
- [ ] Did you read our philosophy doc (important for complex PRs)?
- [ ] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- [ ] Did you write any new necessary tests?
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
Let us know when this PR is ready for a review.
Alright, I will update everything related to `is_sequential_cpu_offload`.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
I have checked everything related to `is_sequential_cpu_offload`. Please take a look @sayakpaul, thanks!
A gentle ping here @sayakpaul @yiyixuxu
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Hi, sorry for the delay here. I've asked Sayak for a review on this.
Hi,
Thanks for your PR. Could you demonstrate your use-case with some minimal code for us to understand this better?
Sorry for the delay.

> Thanks for your PR. Could you demonstrate your use-case with some minimal code for us to understand this better?
There are two cases that can error out with the current version of diffusers, both arising when you manually add an `AlignDevicesHook`:

(1) The present `device_map` feature is not granular enough for precise memory control. Sometimes it is better for the user to decide which model goes on which GPU, instead of relying on `"balanced"` or `"auto"`.

(2) When you want to implement a custom offloading strategy, such as block swap, for some model in the pipeline.
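For case (2), here is a toy sketch of what such a custom strategy might look like (all names are hypothetical, and it stays on CPU purely to show the pattern; a real block-swap hook would move weights between CPU and a CUDA device). The point is that hooks like this implement a placement policy, not sequential CPU offload, so they should not flip `is_sequential_cpu_offload`:

```python
import torch


class ToyBlockSwapHook:
    """Hypothetical custom offloading hook: keep a block's weights on a
    storage device and move them to the compute device only for the
    duration of forward (the essence of block swapping)."""

    def __init__(self, module: torch.nn.Module,
                 storage: str = "cpu", compute: str = "cpu"):
        self.storage, self.compute = storage, compute
        module.to(storage)
        # move weights in before forward, back out after
        module.register_forward_pre_hook(self._before)
        module.register_forward_hook(self._after)

    def _before(self, module, args):
        module.to(self.compute)

    def _after(self, module, args, output):
        module.to(self.storage)


# usage: swap a single block of a larger model
block = torch.nn.Linear(8, 8)
ToyBlockSwapHook(block, storage="cpu", compute="cpu")  # e.g. compute="cuda:1" in practice
out = block(torch.randn(1, 8))
```

After the forward pass, the block's weights are back on the storage device, even though the pipeline as a whole was never sequentially CPU-offloaded.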