How should the loss graph look when finetuning an inpainting model?
@thedarkzeno @patil-suraj, thank you for the amazing work here: https://github.com/huggingface/diffusers/tree/main/examples/research_projects/dreambooth_inpaint
Questions:
- Should loss be decreasing while running train_dreambooth_inpaint.py?
- If it should, how would you go about debugging why it isn't?
Background Context for the Questions:
I've run through the first example (copied below) a few times, and after uploading to the Hugging Face Hub I keep seeing in TensorBoard that the loss is not decreasing (https://huggingface.co/lakshman111/traditional-512-2ksteps-model/tensorboard).
```bash
export MODEL_NAME="runwayml/stable-diffusion-inpainting"
export INSTANCE_DIR="path-to-instance-images"
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_dreambooth_inpaint.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=400
```
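From what I understand, the raw per-step loss in diffusion training is very noisy, since each step samples a fresh random timestep and noise, so the TensorBoard curve can look flat even when training is working. I've been looking at a smoothed curve instead. A minimal sketch of the smoothing (the `ema` helper below is my own, not part of diffusers; it's the same kind of debiased exponential moving average TensorBoard's smoothing slider applies):

```python
import random


def ema(values, beta=0.98):
    """Bias-corrected exponential moving average of a loss series."""
    avg, out = 0.0, []
    for i, v in enumerate(values, start=1):
        avg = beta * avg + (1 - beta) * v
        out.append(avg / (1 - beta ** i))  # debias the early values
    return out


# Simulated loss log: a slow downward drift buried in large noise
# (made-up numbers, just to illustrate the smoothing).
random.seed(0)
raw = [0.5 - 0.0001 * i + random.uniform(-0.2, 0.2) for i in range(2000)]
smooth = ema(raw)
# The raw values jump all over the place; the smoothed tail sits
# clearly below the smoothed head, revealing the trend.
```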
- My goal: Define "traditional furniture" as a style of furniture
- My setup:
- I'm currently fine-tuning with 11 images (all 512x512 JPGs) on a GPU.
- Most of the images have some part of the furniture cut off, because they were cropped.
- I've tested with 400 steps, 1k steps, and 2k steps. I think 400 steps worked just as well as the others and took a fraction of the time.
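One debugging idea I've been considering: compute a separate eval loss on a fixed grid of timesteps (and fixed noise), so that successive measurements are actually comparable, since the training loss magnitude depends heavily on which timestep happened to be sampled. A toy simulation of why the raw training loss can hide the trend (all numbers below are made up, not from the real model):

```python
import random


def toy_loss(t, progress, rng):
    # Toy stand-in for the diffusion MSE loss: its magnitude depends
    # strongly on the sampled timestep t and only mildly on training
    # progress. Purely illustrative -- not real model loss.
    base = 0.05 + 1.5 * (t / 1000.0) ** 2
    return base * (1.0 - 0.3 * progress) + rng.gauss(0.0, 0.02)


rng = random.Random(42)

# Training-style logging: each measurement draws a fresh random
# timestep, so two readings are not directly comparable.
early_raw = toy_loss(rng.randrange(1000), progress=0.0, rng=rng)
late_raw = toy_loss(rng.randrange(1000), progress=1.0, rng=rng)

# Eval-style logging: average over the same fixed timestep grid
# every time, so the improvement shows up.
grid = list(range(50, 1000, 100))
eval_early = sum(toy_loss(t, 0.0, rng) for t in grid) / len(grid)
eval_late = sum(toy_loss(t, 1.0, rng) for t in grid) / len(grid)
# eval_late reliably lands below eval_early; early_raw vs late_raw
# can go either way depending on which timesteps were drawn.
```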