Img2Img RuntimeError
I cannot start the Img2Img process normally after installing. I'm getting "RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same". No matter how I change the input file size (the input images were generated with txt2img), it still cannot process the picture. Can anyone help me?
```
Error completing request
Arguments: (0, 'a poster for a anime event with a girl in a dress and a cat on her head, and a cat in a dress, and a cat in the background, by NHK Animation', '(((bad anatomy disfigured mutated))), pablo picasso, large breasts, mutated hands, mutated fingers', 'None', 'None', <PIL.Image.Image image mode=RGB size=512x512 at 0x2ACB77960B0>, None, None, None, 0, 34, 0, 4, 1, False, False, 2, 2, 6.5, 0.68, 2777841156.0, -1.0, 0, 0, 0, False, 512, 512, 0, False, 32, 0, '', '', 0, '', '', 1, 50, 0, False, 4, 1, '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, None, '', '<p style="margin-bottom:0.75em">Will upscale the image to twice the dimensions; use width and height sliders to set tile size</p>', 64, 0, 1, '', 4, '', True, False) {}
Traceback (most recent call last):
  File "C:\Users\User\Desktop\Stable-diffusion\stable-diffusion-webui\modules\ui.py", line 184, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\User\Desktop\Stable-diffusion\stable-diffusion-webui\webui.py", line 64, in f
    res = func(*args, **kwargs)
  File "C:\Users\User\Desktop\Stable-diffusion\stable-diffusion-webui\modules\img2img.py", line 124, in img2img
    processed = process_images(p)
  File "C:\Users\User\Desktop\Stable-diffusion\stable-diffusion-webui\modules\processing.py", line 334, in process_images
    p.init(all_prompts, all_seeds, all_subseeds)
  File "C:\Users\User\Desktop\Stable-diffusion\stable-diffusion-webui\modules\processing.py", line 630, in init
    self.init_latent = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(image))
  File "C:\Users\User\Desktop\Stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\User\Desktop\Stable-diffusion\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 863, in encode_first_stage
    return self.first_stage_model.encode(x)
  File "C:\Users\User\Desktop\Stable-diffusion\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\autoencoder.py", line 325, in encode
    h = self.encoder(x)
  File "C:\Users\User\Desktop\Stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\User\Desktop\Stable-diffusion\stable-diffusion-webui\repositories\stable-diffusion\ldm\modules\diffusionmodules\model.py", line 439, in forward
    hs = [self.conv_in(x)]
  File "C:\Users\User\Desktop\Stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\User\Desktop\Stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\User\Desktop\Stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same
```
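For context: the error means the image tensor handed to the VAE encoder is float32 (`torch.cuda.FloatTensor`) while the model weights were loaded in half precision (`torch.cuda.HalfTensor`), so the `conv2d` at the bottom of the traceback refuses the mix. Here is a minimal sketch that reproduces the mismatch and shows the usual fix, casting the input to the weights' dtype; the conv layer is a stand-in for the VAE's `conv_in`, not the webui's actual code:

```python
import torch

# Half-precision conv layer on the GPU, standing in for the VAE's conv_in.
conv = torch.nn.Conv2d(3, 128, kernel_size=3, padding=1).cuda().half()

# Float32 image batch, as produced by the usual PIL -> tensor path.
image = torch.rand(1, 3, 512, 512, device="cuda")  # dtype: torch.float32

try:
    conv(image)
except RuntimeError as e:
    print(e)  # Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) ...

# Fix: cast the input to match the weights before the forward pass.
out = conv(image.to(conv.weight.dtype))  # equivalently image.half()
print(out.dtype)  # torch.float16
```

Note that changing the input *image size* cannot fix this, since the problem is the tensor's dtype, not its dimensions.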
First, try to help yourself by seeing whether you can post your issue with the code quoted properly, using the icons at the top of the edit box. Second, post your code and information about your setup: how you installed it, your GPU, your Python version, and your PyTorch version. Third, tell us some things you've tried.
I'm getting this error when I enable the "full_precision" flag. I've tried different width settings (512, 384, and 256). I've also tried a lower-resolution input image (600x600), but that didn't help either.
Details
- Graphics card: GeForce GTX 1660 Ti
- OS: Windows 10
- System RAM: 64 GB
- Python version: 3.9.12
- Launch script: img2img_gradio.py
Is there anything I can change in the script?
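One way to narrow this down is to print the dtypes of the model weights and of the init image just before the first-stage encode, and cast the input if they differ. A hedged sketch, assuming the script exposes the loaded LatentDiffusion model and the preprocessed image tensor under names like `model` and `init_image` (adapt to whatever img2img_gradio.py actually calls them):

```python
# Drop this in just before the encode_first_stage / encoder call.
# `model` and `init_image` are placeholder names for the loaded model
# and the preprocessed input tensor.
weight_dtype = next(model.first_stage_model.parameters()).dtype
print("weights:", weight_dtype)
print("input:  ", init_image.dtype)

# If they differ, cast the input to the weights' dtype:
if init_image.dtype != weight_dtype:
    init_image = init_image.to(weight_dtype)
```

Also note that GTX 16xx-series cards are widely reported to have trouble with half precision in Stable Diffusion, so on a 1660 Ti the usual workaround is to run everything in full precision (float32) end to end, making sure the precision flag governs both the model weights and the input tensors consistently.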
How do I change the output image size for img2img.sh?
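In the CompVis-style img2img scripts the output size is taken from the init image, so the usual approach is to resize the input before running the script; dimensions divisible by 64 are the safe choice for Stable Diffusion. A minimal PIL sketch (filenames are illustrative):

```python
from PIL import Image

# img2img output size generally follows the init image, so resize the
# input before invoking the script.
img = Image.open("input.png").convert("RGB")
img = img.resize((768, 512), resample=Image.LANCZOS)
img.save("input_resized.png")
```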