DreamBooth full fine-tune of FLUX.1-dev fails with CLIPTextModel `max_position_embeddings: 0`
Describe the bug
```
ValueError: Sequence length must be less than max_position_embeddings (got sequence length: 77 and max_position_embeddings: 0)
```
I used four A100 GPUs to do full fine-tuning of the FLUX.1-dev model, following https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md.
I used the toy dog dataset (5 images) for fine-tuning and ran into a `max_position_embeddings` problem in CLIPTextModel:
Reproduction
```
[rank1]: Traceback (most recent call last):
[rank1]:   File "/data/AIGC/diffusers/examples/dreambooth/train_dreambooth_flux.py", line 1812, in ...
[rank1]: ValueError: Sequence length must be less than max_position_embeddings (got sequence length: 77 and max_position_embeddings: 0)
```
I tried overriding `max_position_embeddings` when loading the CLIPTextModel, but it doesn't help:

```python
text_encoder_one = class_one.from_pretrained(
    args.pretrained_model_name_or_path,
    subfolder="text_encoder",
    revision=args.revision,
    variant=args.variant,
    max_position_embeddings=77,
    ignore_mismatched_sizes=True,
)
```
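For context, here is a simplified, hedged reconstruction of the kind of guard that raises this error. This is plain Python, not transformers' actual implementation; `check_sequence_length` and `num_positions` are hypothetical names. It only illustrates that when the position-embedding table is reported as empty, any 77-token prompt trips the check:

```python
# Hedged sketch: a simplified stand-in for the sequence-length guard in a
# CLIP-style text embedding layer. NOT transformers' actual code; it only
# shows how a zero-sized position-embedding table yields the reported error.
def check_sequence_length(seq_len: int, num_positions: int) -> None:
    # num_positions stands in for position_embedding.weight.shape[0];
    # if the weight is seen as empty, this is 0 and every prompt fails.
    if seq_len > num_positions:
        raise ValueError(
            f"Sequence length must be less than max_position_embeddings "
            f"(got sequence length: {seq_len} and max_position_embeddings: "
            f"{num_positions}"
        )

check_sequence_length(77, 77)  # normal CLIP-L table (77 positions): passes
try:
    check_sequence_length(77, 0)  # empty table: reproduces the error message
except ValueError as e:
    print(e)
```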
My training script is as follows:
```shell
export MODEL_NAME="black-forest-labs/FLUX.1-dev"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-flux"

accelerate launch train_dreambooth_flux.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision="bf16" \
  --instance_prompt="a photo of sks dog" \
  --resolution=1024 \
  --train_batch_size=1 \
  --guidance_scale=1 \
  --gradient_accumulation_steps=4 \
  --optimizer="prodigy" \
  --learning_rate=1. \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=500 \
  --validation_prompt="A photo of sks dog in a bucket" \
  --validation_epochs=25 \
  --seed="0" \
  --push_to_hub
```
Logs
System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-5.4.0-146-generic-x86_64-with-glibc2.31
- Running on Google Colab?: No
- Python version: 3.10.16
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.29.1
- Transformers version: 4.49.0
- Accelerate version: 1.4.0
- PEFT version: 0.14.0
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA A100-SXM4-40GB, 40960 MiB
  NVIDIA A100-SXM4-40GB, 40960 MiB
  NVIDIA A100-SXM4-40GB, 40960 MiB
  NVIDIA A100-SXM4-40GB, 40960 MiB
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
Who can help?
No response
same here, following
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
same question
same question as well
I run into the same problem when switching from DeepSpeed ZeRO stage 2 to stage 3 (by modifying the accelerate config).

Stage 2 config (works):

```yaml
compute_environment: LOCAL_MACHINE
debug: false
deepspeed_config:
  gradient_accumulation_steps: 1
  offload_optimizer_device: none
  offload_param_device: none
  zero3_init_flag: false
  zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
enable_cpu_affinity: false
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```

Stage 3 modification (fails):

```yaml
zero_stage: 3
```
For reference, maybe this is related to accelerate or DeepSpeed?
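That reading is consistent with how ZeRO stage 3 behaves: parameters are partitioned across ranks, and outside an explicit gather a rank can see a zero-sized tensor, so code that derives `max_position_embeddings` from the embedding weight's shape would read 0. A toy illustration in plain Python (`FakeShardedParam` is a hypothetical stand-in, not a deepspeed class; this is an assumption about the failure mode, not a confirmed diagnosis):

```python
# Toy stand-in for a ZeRO-3 partitioned parameter (hypothetical class, not
# part of deepspeed). Under stage 3 each rank holds only a shard of every
# parameter; outside a gather context the locally visible tensor can be
# empty, which would explain "max_position_embeddings: 0".
class FakeShardedParam:
    def __init__(self, full_shape):
        self.full_shape = full_shape  # logical shape of the parameter
        self.local_shape = (0,)       # what a rank sees while partitioned

    def gathered_shape(self):
        # inside a gather context the full shape is visible again
        return self.full_shape

pos_emb = FakeShardedParam((77, 768))  # CLIP-L position-embedding table
print(pos_emb.local_shape[0])          # 0: what the guard apparently sees
print(pos_emb.gathered_shape()[0])     # 77: the real number of positions
```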
anyone fix this?