AdamW degrades word embeddings in textual inversion training
Describe the bug
The original text embeddings are supposed to stay frozen/unchanged, but they don't. At first the new embedding trains against the real embeddings, but as training goes on, the frozen word embeddings decay toward zero because AdamW's weight decay is applied to the whole embedding matrix.
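A toy sketch of what I think is happening (not my actual training code; the lr and weight decay values are exaggerated just to make the effect visible). Only the first token receives a non-zero gradient, yet AdamW's decoupled weight decay shrinks every row of the embedding matrix:

```python
import torch

# Toy embedding matrix: only token 0 (the "new" embedding) is ever used in
# the forward pass, so the other rows get zero gradient.
emb = torch.nn.Embedding(4, 8)
frozen_rows_before = emb.weight[1:].detach().clone()

optimizer = torch.optim.AdamW(emb.parameters(), lr=1e-1, weight_decay=1e-1)

for _ in range(100):
    optimizer.zero_grad()
    loss = emb(torch.tensor([0])).pow(2).mean()  # forward touches token 0 only
    loss.backward()                              # grads for rows 1..3 are zero
    optimizer.step()                             # decay still hits rows 1..3

# The "frozen" rows have decayed toward zero even though their grads were zero.
print(frozen_rows_before.norm().item(), emb.weight[1:].norm().item())
```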
Reproduction
Not sure if this is really a bug.
Logs
No response
System Info
Example on Colab. Code is there too.
cc @patil-suraj
@hadaev8 You might take a look at #855 for a temporary fix.
@duongna21 Sure, I fixed it in my codebase by switching to Adam. It's not an ideal fix; if you want weight decay, it's easier to apply it manually to the new embedding only.
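Roughly what I mean (a toy sketch, not my actual code; in a real textual-inversion script the embedding would be `text_encoder.get_input_embeddings()` and `new_token_id` the placeholder token id):

```python
import torch

# Sketch of the manual-decay workaround: no weight decay in the optimizer,
# then decay only the new token's row by hand after each step.
num_tokens, dim, new_token_id = 4, 8, 3   # toy sizes
lr, weight_decay = 5e-4, 1e-2

emb = torch.nn.Embedding(num_tokens, dim)
optimizer = torch.optim.Adam(emb.parameters(), lr=lr)  # plain Adam, no decay

for _ in range(10):
    optimizer.zero_grad()
    loss = emb(torch.tensor([new_token_id])).pow(2).mean()
    loss.backward()
    optimizer.step()
    # Decoupled-style decay applied only to the affected embedding row.
    with torch.no_grad():
        emb.weight[new_token_id] *= 1.0 - lr * weight_decay
```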
Gently ping here again @patil-suraj :-)
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Bump
Hey @hadaev8,
Could you check whether this is fixed by: https://github.com/huggingface/diffusers/pull/1665
Well, I guess so, though I would still prefer applying the decay directly to the affected embeddings only.
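For reference, this is the restore-style approach as I understand it (a sketch only, not necessarily exactly what the PR does): let the optimizer update everything, then copy the original values back for every row except the new token.

```python
import torch

# Restore-style sketch: AdamW (with weight decay) touches the whole embedding
# matrix, but the original values are copied back for all frozen rows after
# each step, so only the new token's row can ever change.
num_tokens, dim, new_token_id = 4, 8, 3
emb = torch.nn.Embedding(num_tokens, dim)
orig_weights = emb.weight.detach().clone()
keep_mask = torch.arange(num_tokens) != new_token_id

optimizer = torch.optim.AdamW(emb.parameters(), lr=5e-4, weight_decay=1e-2)

for _ in range(10):
    optimizer.zero_grad()
    loss = emb(torch.tensor([new_token_id])).pow(2).mean()
    loss.backward()
    optimizer.step()
    # Undo whatever the step (including weight decay) did to the frozen rows.
    with torch.no_grad():
        emb.weight[keep_mask] = orig_weights[keep_mask]
```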