Morgon Kanter
> @mx any opinion on this implementation

Seems fine to me; I'll withdraw my PR. Do we even need the additional setting, though? The user already has to opt in to...
Could this be considered again for inclusion? It would be really helpful to be able to stop training the text encoder at a certain point but continue to train the...
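For illustration, a minimal sketch of what such a cutoff could look like in a plain PyTorch loop; the toy models and the `stop_te_at_step` name are hypothetical stand-ins, not OneTrainer's actual API:

```python
import torch
from torch import nn

# Toy stand-ins for the real models (hypothetical, for illustration only).
text_encoder = nn.Linear(8, 8)
unet = nn.Linear(8, 8)
optimizer = torch.optim.AdamW(
    list(text_encoder.parameters()) + list(unet.parameters()), lr=1e-4
)
stop_te_at_step = 100  # hypothetical setting: freeze the TE after this step

for step in range(200):
    if step == stop_te_at_step:
        # Stop training the text encoder; the UNet keeps training.
        text_encoder.requires_grad_(False)

    batch = torch.randn(4, 8)
    loss = unet(text_encoder(batch)).pow(2).mean()
    loss.backward()
    optimizer.step()
    # set_to_none=True leaves frozen params with grad=None, so the
    # optimizer skips them entirely on later steps.
    optimizer.zero_grad(set_to_none=True)
```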
@DevArqSangoi If you do it that way, could active regularization effects still reduce the weights, depending on the optimizer in use? That would be a problem if they could.
It shouldn't matter for Adam, AdamW, or the optimizers based on them, either. I'm just not sure what weirdness is out there. I suppose it should really work fine for...
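For context on the regularization question: with stock `torch.optim.AdamW`, what matters is whether a frozen parameter's `.grad` is `None` (the optimizer skips it entirely) or an all-zero tensor (decoupled weight decay still shrinks it). A small self-contained demonstration:

```python
import torch
from torch import nn

w = nn.Parameter(torch.ones(3))
opt = torch.optim.AdamW([w], lr=0.1, weight_decay=0.5)

# Case 1: grad is None -> AdamW skips the parameter; weights untouched.
w.grad = None
opt.step()
print(w.data)  # tensor([1., 1., 1.])

# Case 2: grad is an all-zero tensor -> decoupled weight decay still
# multiplies w by (1 - lr * weight_decay) even though the gradient is zero.
w.grad = torch.zeros(3)
opt.step()
print(w.data)  # tensor([0.9500, 0.9500, 0.9500])
```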
> It is a brutish way.

I didn't say it wouldn't work, just that it is a sledgehammer for the lazy. The real way no current "dev" wants to tackle if...
The CPU warning is superfluous; it appears because the models are temporarily stashed there. I suspect you are getting black samples because of a settings error causing NaNs. Without further details...
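If it helps to confirm the NaN theory, here is a small generic helper for probing tensors before they are converted to images; it isn't tied to OneTrainer's sampling code:

```python
import torch

def check_tensor(name: str, t: torch.Tensor) -> None:
    # A black sample often means the decoded image tensor is NaN/Inf
    # and gets clamped to zeros on conversion.
    nans = torch.isnan(t).sum().item()
    infs = torch.isinf(t).sum().item()
    if nans or infs:
        print(f"{name}: {nans} NaNs, {infs} Infs")
    else:
        print(f"{name}: ok, min={t.min().item():.4f}, max={t.max().item():.4f}")
```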
Those parts should be resizable now.
Your diffusers version looks wrong. The output of "pip freeze" in your venv should be something like:

> -e git+https://github.com/huggingface/diffusers.git@5d848ec#egg=diffusers

How did you actually install OneTrainer?
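As a quick cross-check independent of "pip freeze", you could also ask Python directly which diffusers it imports; a standard-library-only sketch:

```python
import importlib.metadata

import diffusers

# Where the package is actually imported from: a path inside the OneTrainer
# checkout for an editable git install, site-packages otherwise.
print(diffusers.__file__)

# The version pip believes is installed.
print(importlib.metadata.version("diffusers"))
```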
The "pip freeze" still looks suspect to me. Is that the pip freeze from inside the venv? I'm seeing stuff there that wouldn't have been installed by the requirements.txt, like...
For reference, this is my own "venv\scripts\pip.exe freeze". Note the different torch version, which might also be your problem:

```
absl-py==2.1.0
accelerate==0.25.0
aiohttp==3.9.5
aiosignal==1.3.1
anndata==0.10.6
antlr4-python3-runtime==4.9.3
array_api_compat==1.5.1
async-timeout==4.0.3
attrs==23.2.0
bitsandbytes==0.43.0
...
```