Running diffusers with GPU
Running the example code, I see that the CPU is used instead of the GPU. Is there a way to use the GPU?
Can you be more specific? This was not the intent.
Are you looking at this file or something different?
Thanks for your reply. I just installed the repo and ran the example in the README. It ran on the CPU, although a CUDA card is present. Is there a documentation page for the repo?
# !pip install diffusers transformers
from diffusers import DiffusionPipeline
model_id = "CompVis/ldm-text2im-large-256"
# load model and scheduler
ldm = DiffusionPipeline.from_pretrained(model_id)
# run pipeline in inference (sample random noise and denoise)
prompt = "A painting of a squirrel eating a burger"
images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6)["sample"]
# save the generated images to disk
for idx, image in enumerate(images):
    image.save(f"squirrel-{idx}.png")
Hi @jfdelgad! You can set the pipeline's torch_device explicitly like so:
images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6, torch_device="cuda")["sample"]
However, it should have used CUDA by default if torch.cuda.is_available() returns True in your environment, so the problem may be with your PyTorch installation.
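A quick way to confirm this is to check whether PyTorch itself can see the CUDA device before suspecting the pipeline. A minimal sketch (assumes only that torch is installed):

```python
import torch

# If this prints False, PyTorch cannot see a CUDA device -- typically a
# CPU-only PyTorch build or a driver problem -- and diffusers will fall
# back to running on the CPU.
print(torch.cuda.is_available())

# A common pattern is to pick the device explicitly and pass it along:
device = "cuda" if torch.cuda.is_available() else "cpu"
print(device)
```

If this prints False on a machine that does have a CUDA card, reinstalling PyTorch with the matching CUDA build (per the install selector on pytorch.org) usually resolves it.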
The docs for diffusers are still in progress, but they will be out in the next couple of weeks :)
The issue turned out to be on my machine; it works well now.