Added --save_every option in dreambooth script.
Added a --save_every option to the dreambooth training script. In contrast to --checkpointing_steps, it saves the model to the location where the final model will be stored, and it does so independently of the checkpointing settings. It is intended to ensure the model is saved at smaller intervals than checkpointing allows.
As far as I know, this behavior cannot be replicated with the existing options: if you set a small checkpoint interval, you end up with many copies (each ~4 GB), so you have to cap the number of checkpoints, which prevents you from recovering older ones. This additional option is a simple way to make sure the model is saved more regularly. Personally, I'm using it to trigger a generation server (watching the output directory with watchdog) that generates images in a separate process, so the trainer isn't slowed down.
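A minimal sketch of the save trigger described above (names here are illustrative, not the actual patch; in the real script the save callback would call `pipeline.save_pretrained(args.output_dir)`, overwriting the previous save in place rather than writing numbered checkpoint directories):

```python
def should_save(step: int, save_every: int) -> bool:
    """Return True when the model should be (re)saved to output_dir.

    A save_every of 0 (or less) disables the extra saves, mirroring how
    the flag would be off by default.
    """
    return save_every > 0 and step % save_every == 0


def train_loop(num_steps: int, save_every: int, save_fn) -> None:
    """Hypothetical training loop showing where the save hook fires."""
    for step in range(1, num_steps + 1):
        # ... forward pass, loss, optimizer step would happen here ...
        if should_save(step, save_every):
            save_fn(step)  # e.g. pipeline.save_pretrained(args.output_dir)
```

Because the model is always written to the same path, disk usage stays constant no matter how small the interval is, which is the point of the option.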
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
@williamberman gentle ping here
@williamberman @sayakpaul can you take a look here?
Appreciate this! But actually, the recommended way of limiting the number of saved checkpoints is the --checkpoints_total_limit flag.
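For reference, a typical invocation combining the two existing flags might look roughly like this (paths and values are placeholders; --checkpointing_steps and --checkpoints_total_limit are the flags already supported by the diffusers training scripts):

```shell
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./instance_images" \
  --output_dir="./dreambooth-out" \
  --checkpointing_steps=500 \
  --checkpoints_total_limit=3   # keep only the 3 most recent checkpoints
```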
But it still doesn't work as expected, no? See: https://github.com/huggingface/diffusers/issues/2466