Anton Lozhkov
Hi @falcaopetri! It looks like `test_decoder_batch_1` indeed fails on Linux too; maybe you could take a look and find a workaround?
Hi @taki0112! Pinging @patil-suraj to check, but it's probably a bit of logic that was left for compatibility with future models :)
Hi @skuma307! Could you try hitting "Restart runtime" in the colab, and re-running `!pip install diffusers transformers` before any imports?
Hi @jfdelgad! You can set the pipeline's `torch_device` explicitly like so:

```python
images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6, torch_device="cuda")["sample"]
```

However, it should have used CUDA by default if `torch.cuda.is_available()` returns `True`.
Hi @Lyn-Qiu! Judging by the stack trace, you're using something other than [`diffusers`](https://pypi.org/project/diffusers/) to run your training script, could this be a typo during installation? :slightly_smiling_face:
This wasn't the case with a pip install from git, so something might be missing in our pypi pipeline (maybe `MANIFEST.in` isn't getting picked up?).
@patrickvonplaten could you please check that the file is getting packaged when doing a release? Asking because of this: https://github.com/huggingface/diffusers/pull/136#issuecomment-1193327577
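For reference, an explicit `include` directive in `MANIFEST.in` is one way to guarantee a data file ends up in the sdist even if automatic discovery misses it. The path below is a hypothetical placeholder, not the actual file in question:

```
# Hypothetical MANIFEST.in entries; adjust the path to the real data file
include src/diffusers/utils/some_template.md
recursive-include src/diffusers *.json
```

Note that `MANIFEST.in` only affects the source distribution; for wheels, `package_data`/`include_package_data` in the setup config also needs to pick the file up.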
@patrickvonplaten for context: @nateraw is already adding `modelcards` to `hf_hub`, so this was a POC for that integration: https://github.com/huggingface/huggingface_hub/pull/940
Hi @jfdelgad and @taki0112! To log in from a colab notebook, it's enough to simply run `!huggingface-cli login` and paste your API token from https://huggingface.co/settings/tokens into the field that appears...
Hi @NeethanShetty, thanks for bringing it up! It's now possible to train the model on a local image folder by passing `--train_data_dir` instead of `--dataset`/`--dataset_name`. Also added a...
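A minimal sketch of what `--train_data_dir` expects: a plain folder of images. The directory and script names below are hypothetical examples, not taken from the repo:

```shell
# Create a plain folder and drop your training images into it,
# e.g. my_dataset/img_0001.png, my_dataset/img_0002.png, ...
mkdir -p my_dataset

# Then point the training script at the folder (script name is an assumption):
# python train_unconditional.py --train_data_dir ./my_dataset --output_dir ./output
```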