
[bugfix] update results of state_dict loading, embedding resizing to secondary partitions (hpz)

Open cyr0930 opened this issue 11 months ago • 6 comments

After this commit (https://github.com/deepspeedai/DeepSpeed/pull/4906), secondary partitioned tensors are updated only after optimizer.step().

However, the secondary partitioned tensors should also be updated when loading a state_dict or resizing embeddings after initialization.

e.g., https://github.com/huggingface/transformers/blob/1c4b62b219323a31011bac3bd3cece7675d9e4c3/src/transformers/integrations/deepspeed.py#L344
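For context, the in-place update pattern that hits this path looks roughly like the sketch below. This is illustrative only: `overwrite_param` and `new_values` are made-up names, and I'm assuming the usual `deepspeed.zero.GatheredParameters(..., modifier_rank=0)` idiom that transformers uses to modify ZeRO-3 parameters in place.

```python
import torch
import deepspeed

def overwrite_param(param: torch.nn.Parameter, new_values: torch.Tensor) -> None:
    """Illustrative sketch: write new values into a ZeRO-3 partitioned parameter,
    e.g. when loading a state_dict entry or resizing an embedding."""
    # Gather the full parameter, modify it on rank 0, and let DeepSpeed
    # re-partition it when the context exits.
    with deepspeed.zero.GatheredParameters(param, modifier_rank=0):
        if torch.distributed.get_rank() == 0:
            param.data.copy_(new_values)
    # With hpz, the re-partition only refreshes the secondary partition if it has
    # been invalidated (which currently only happens in optimizer.step()), so the
    # secondary shard can keep the old values after this call.
```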

cyr0930 avatar Mar 11 '25 08:03 cyr0930

@cyr0930, thanks for this PR. Can you provide more context about the failure this fixes? For example, did you encounter convergence issues after checkpoint loading?

tjruwase avatar Mar 12 '25 10:03 tjruwase

Partitioned parameters are updated only when ds_secondary_partition_tensor is None, because of this commit (https://github.com/deepspeedai/DeepSpeed/pull/4906). The ds_secondary_partition_tensors only become None after optimizer.step() is called (that function contains logic that invalidates the secondary tensors). But there are other cases where the partitioned parameters should be updated, such as loading a state_dict or resizing embeddings. (The explanation above is based on the transformers implementation.)

Currently, the ds_secondary_partition_tensors are created during parameter initialization and exist for every parameter from then on, so they are not updated when we perform state_dict loading or embedding resizing. We could add a function that invalidates ds_secondary_partition and call it after every parameter change, but IMO that is quite messy, so I decided to revert the has_been_updated part of the previous commit.
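To make the behaviour concrete, here is a toy model of the update logic as I understand it. This is not the actual DeepSpeed code; ToyParam, repartition, and the list-based "shards" are placeholders that only mimic the guard introduced by the commit above.

```python
# Toy model of the hpz update logic discussed above (not real DeepSpeed code;
# all names are placeholders, and plain lists stand in for the shards).
class ToyParam:
    def __init__(self, values):
        self.full = values                # stand-in for the gathered weights
        self.ds_tensor = None             # primary ZeRO-3 partition
        self.ds_secondary_tensor = None   # hpz secondary partition

def repartition(param: ToyParam) -> None:
    param.ds_tensor = list(param.full)     # primary partition is always refreshed
    if param.ds_secondary_tensor is None:  # guard added by the commit above
        param.ds_secondary_tensor = list(param.full)
    # else: the stale secondary partition is kept, so values written by
    # state_dict loading or embedding resizing never reach it.

p = ToyParam([1.0, 2.0])
repartition(p)          # parameter init: secondary partition is created
p.full = [3.0, 4.0]     # e.g. load_state_dict overwrites the weights
repartition(p)          # primary refreshed, secondary not
assert p.ds_secondary_tensor == [1.0, 2.0]  # still stale -> the reported issue
```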

cyr0930 avatar Mar 12 '25 12:03 cyr0930

> Currently, the ds_secondary_partition_tensors are created during parameter initialization and exist for every parameter from then on, so they are not

@cyr0930, apologies for the delay on this. My understanding is that the ds_secondary_partition_tensors are created by the forward pass, correct? But I expect state_dict loading or embedding resizing to happen before the forward pass. Can you please clarify? Thanks

tjruwase avatar Apr 09 '25 18:04 tjruwase

I'm not sure because this logic is a bit complicated, but IMO:

While HfArgumentParser.parse_args_into_dataclasses is executed, DeepSpeed ZeRO-3 is enabled here (https://github.com/huggingface/transformers/blob/v4.51.2/src/transformers/training_args.py#L2046).

Then, while the model is loaded via the from_pretrained method, the deepspeed.zero.Init context is entered here (https://github.com/huggingface/transformers/blob/v4.51.2/src/transformers/modeling_utils.py#L3727).

This context wraps module initialization, so the parameters are converted to DeepSpeed parameters during initialization here (https://github.com/deepspeedai/DeepSpeed/blob/master/deepspeed/runtime/zero/partition_parameters.py#L1107).
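Roughly, the sequence looks like this (a sketch, not the exact transformers/DeepSpeed code paths; the ds_config dict and the toy Linear model are illustrative, and it needs to run under a distributed launcher):

```python
import deepspeed
import torch

# Sketch: with ZeRO-3 enabled, the model is built inside deepspeed.zero.Init,
# so parameters are converted to DeepSpeed parameters at construction time --
# before from_pretrained copies the checkpoint weights in, and long before any
# forward pass or optimizer.step().
ds_config = {"zero_optimization": {"stage": 3}, "train_micro_batch_size_per_gpu": 1}

with deepspeed.zero.Init(config_dict_or_path=ds_config):
    model = torch.nn.Linear(1024, 1024)

print(hasattr(model.weight, "ds_id"))  # True: already a DeepSpeed (ZeRO-3) parameter
```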

That's what I've found from debugging so far. Thanks for the comment though.

cyr0930 avatar Apr 10 '25 11:04 cyr0930

@cyr0930, thanks for sharing this information and for debugging. As this is a critical part of hpz, can you please share your repro steps with me so I can try on my side?

tjruwase avatar Apr 10 '25 14:04 tjruwase

This is the smallest reproduction I could put together. Running it with the command at the bottom reproduces the issue I encountered :)

deepspeed_init.py

```python
from transformers import AutoModel
from transformers.hf_argparser import HfArgumentParser
from transformers.training_args import TrainingArguments
from transformers.integrations.deepspeed import is_deepspeed_zero3_enabled

# Parsing the training arguments enables ZeRO-3 (via the accelerate/DeepSpeed setup
# referenced above).
parser = HfArgumentParser((TrainingArguments,))
(training_args,) = parser.parse_args_into_dataclasses()
assert is_deepspeed_zero3_enabled()

# from_pretrained builds the model under deepspeed.zero.Init and then loads the
# checkpoint weights. The assert below shows that the secondary partition already
# exists at this point, i.e. before any forward pass or optimizer.step().
model = AutoModel.from_pretrained("Qwen/Qwen2.5-0.5B")
assert hasattr(model.embed_tokens.weight, "ds_secondary_tensor")
```

# accelerate launch --use_deepspeed --zero_stage 3 deepspeed_init.py

cyr0930 avatar Apr 14 '25 12:04 cyr0930