
Cannot switch back from 'Low VRAM' mode without errors

shadowlocked opened this issue on Mar 15, 2023 · 0 comments

I have a 2070S (8GB), and sometimes want a higher resolution or settings beyond what that allows, so I turn on Low VRAM mode.

What I find is that, for any particular model, I cannot turn Low VRAM mode off again without it throwing errors and failing. Here is an example error, produced after using Low VRAM with the Normal model and then turning it off and running that model again in regular mode:

Error completing request
Arguments: ('task(qlfi15bpdgxtra4)', 0, '[NAME OF PROMPT] ', '', [], <PIL.Image.Image image mode=RGBA size=866x1300 at 0x28D216421A0>, None, None, None, None, None, None, 115, 16, 4, 0, 1, False, False, 1, 1, 8, 1.5, 0.4, -1.0, -1.0, 0, 0, 0, False, 704, 448, 0, 0, 32, 0, '', '', '', [], 0, True, False, 'LoRA', '[NAME OF LORA]([HASH OF LORA])', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'Refresh models', True, True, 'normal_map', 'controlnetPreTrained_normalV10 [63f96f7c]', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 512, 0.4, 64, 0, 1, False, False, False, 'Denoised', 5.0, 0.0, 0.0, False, 'mp4', 'h264', 2.0, 0.0, 0.0, False, 0.0, True, True, False, False, False, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, None, 50) {}
Traceback (most recent call last):
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\modules\img2img.py", line 171, in img2img
    processed = process_images(p)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\modules\processing.py", line 632, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\modules\processing.py", line 1048, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 322, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 225, in launch_sampling
    return func()
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 322, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 553, in sample_dpmpp_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 117, in forward
    x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 193, in forward2
    return forward(*args, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 136, in forward
    control = param.control_model(x=x_in, hint=param.hint_cond, timesteps=timesteps, context=context)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 115, in forward
    return self.control_model(*args, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 368, in forward
    emb = self.time_embed(t_emb)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
    input = module(input)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\[USERNAME]\Desktop\SD2023\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)
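
For context, the final RuntimeError is PyTorch's generic device-mismatch error: it is raised whenever a module whose weights sit on the CPU receives a CUDA tensor. A minimal standalone sketch of how it arises (the layer sizes here are illustrative, matching the usual 320 -> 1280 time_embed width rather than taken from the actual model, and a CUDA device is assumed):

```python
import torch
import torch.nn as nn

# A Linear layer left on the CPU, as Low VRAM offloading would leave it.
time_embed_layer = nn.Linear(320, 1280)

# The sampler's timestep embedding lives on the GPU.
t_emb = torch.randn(2, 320, device="cuda:0")

try:
    time_embed_layer(t_emb)  # F.linear mixes a cuda:0 input with CPU weights
except RuntimeError as e:
    print(e)  # "Expected all tensors to be on the same device, ..."
```

So the traceback suggests that after a Low VRAM run, some or all of the ControlNet weights stay on the CPU even once the option is unticked.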

This bug is per-model. Nothing stops me from selecting a different ControlNet model, and all will be well, until I do the same thing with that one; then that model too will not run in anything but Low VRAM mode until the WebUI is restarted.
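
A hedged sketch of the kind of fix or workaround I would expect to help, assuming the extension keeps the cached model on param.control_model as the hook.py frame in the traceback suggests (the helper name here is hypothetical, not an existing function in the extension):

```python
import torch

def restore_control_model_device(control_model, device=torch.device("cuda:0")):
    # Hypothetical helper: move any weights a previous Low VRAM run left on
    # the CPU back to the GPU before running in regular mode again.
    if any(p.device != device for p in control_model.parameters()):
        control_model.to(device)
    return control_model
```

Until something along these lines runs when the Low VRAM checkbox is cleared, restarting the WebUI seems to be the only way to reset an affected model.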

shadowlocked · Mar 15 '23, 15:03