CogVideo
Loss Not Decreasing When Fine-Tuning CogVideoX Models with Default Settings
Hello, I fine-tuned CogVideoX-2B with LoRA on my own dataset of around 10k samples. The learning rate was set to 1e-4 and the LoRA rank to 256, but the loss kept fluctuating around 0.1 and did not decrease. I also tried SFT fine-tuning of CogVideoX-5B and hit the same issue. All other parameters were kept the same as in the original script. Has anyone else experienced this? Are there any directions I could try?
The batch_size is 32.
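One thing worth checking before changing hyperparameters: in diffusion training the per-step loss is dominated by the randomly sampled timestep and noise, so the raw curve fluctuates even when training is working. A smoothed view of the loss makes the trend easier to judge. Below is a minimal, hypothetical helper (not from the CogVideoX scripts) that applies a bias-corrected exponential moving average to the logged losses:

```python
import random


def ema_smooth(losses, beta=0.98):
    """Return a bias-corrected EMA of the loss curve.

    The raw diffusion loss is noisy because each step draws a fresh
    timestep and noise sample; the EMA exposes the underlying trend.
    """
    smoothed = []
    avg = 0.0
    for step, loss in enumerate(losses, start=1):
        avg = beta * avg + (1.0 - beta) * loss
        smoothed.append(avg / (1.0 - beta ** step))  # correct startup bias
    return smoothed


# Illustrative only: a noisy curve with a slight downward drift,
# standing in for the logged training losses.
random.seed(0)
raw = [0.1 - 1e-6 * i + random.uniform(-0.05, 0.05) for i in range(5000)]
smoothed = ema_smooth(raw)
print(smoothed[0], smoothed[-1])
```

If the smoothed curve is genuinely flat, a lower learning rate (e.g. 1e-5 for SFT) or a smaller LoRA rank would be the usual knobs to try; rank 256 with lr 1e-4 is on the aggressive side.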