
Re: Getting 512-depth-ema.ckpt to work in notebook version of Deforum

reallybigname opened this issue 3 years ago

I am specifically interested in the new depth2img model after playing with it using img2img on webui. I think it will work very nicely with hybrid video, but I don't know anything about model wrappers. However, I did get past the initial error you hit if you try to use the depth ckpt in a deforum notebook.

You'll get an error about a midas model not being found in a specific directory (I forget the name, it's like midas_models or something, not important). Anyway, in tracking that down I realized that we never set that path in our code; it was being chosen by the api.

Then, I looked at AUTOMATIC1111's function for auto-downloading midas models and redirecting their path, and he left very useful comments (see function below). I adapted the function to work in my local deforum, but it is mostly the same: I passed in root to get models_path, and changed the folder to "models", since deforum already puts midas models there (webui puts them in a subfolder).

After that, it successfully downloads the model and I get past that error. This seems like a good approach, because it can handle future depth models like dpt_hybrid should they require a different midas model. I'm putting all this here in case someone who knows this area of the code wants to use what I've learned so far to resolve this pressing issue!


I call the function right before the ckpt is loaded in the load_model function. Here's where I call it:

    if load_on_run_all and ckpt_valid:
        local_config = OmegaConf.load(f"{ckpt_config_path}")

        if ckpt_path.find("depth") != -1:
            enable_midas_autodownload(root)

        model = load_model_from_config(local_config, f"{ckpt_path}", half_precision=root.half_precision)

I put the function itself right after that, at the end of model_load.py:

def enable_midas_autodownload(root):
    """
    Gives the ldm.modules.midas.api.load_model function automatic downloading.

    When the 512-depth-ema model, and other future models like it, is loaded,
    it calls midas.api.load_model to load the associated midas depth model.
    This function applies a wrapper to download the model to the correct
    location automatically.
    """

    midas_path = root.models_path

    # stable-diffusion-stability-ai hard-codes the midas model path to
    # a location that differs from where other scripts using this model look.
    # HACK: Overriding the path here.
    for k, v in midas.api.ISL_PATHS.items():
        file_name = os.path.basename(v)
        midas.api.ISL_PATHS[k] = os.path.join(midas_path, file_name)

    midas_urls = {
        "dpt_large": "https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt",
        "dpt_hybrid": "https://github.com/intel-isl/DPT/releases/download/1_0/dpt_hybrid-midas-501f0c75.pt",
        "midas_v21": "https://github.com/AlexeyAB/MiDaS/releases/download/midas_dpt/midas_v21-f6b98070.pt",
        "midas_v21_small": "https://github.com/AlexeyAB/MiDaS/releases/download/midas_dpt/midas_v21_small-70d6b9c8.pt",
    }

    midas.api.load_model_inner = midas.api.load_model

    def load_model_wrapper(model_type):
        path = midas.api.ISL_PATHS[model_type]
        if not os.path.exists(path):
            if not os.path.exists(midas_path):
                os.makedirs(midas_path, exist_ok=True)
    
            print(f"Downloading midas model weights for {model_type} to {path}")
            urllib.request.urlretrieve(midas_urls[model_type], path)
            print(f"{model_type} downloaded")

        return midas.api.load_model_inner(model_type)

    midas.api.load_model = load_model_wrapper
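The pattern above is a monkey-patch: stash the original loader under a new name, then replace it with a wrapper that fetches missing weights before delegating. Here is a minimal, self-contained sketch of that pattern (the `api` object, paths, and `fake_fetch` are stand-ins for illustration, not the real `midas.api` or real download URLs):

```python
import os
import tempfile
import types

# Stand-in for ldm.modules.midas.api: a model-type -> path table plus a
# loader that requires the weight file to already exist on disk.
api = types.SimpleNamespace()
api.ISL_PATHS = {}

def _load_model(model_type):
    path = api.ISL_PATHS[model_type]
    if not os.path.exists(path):
        raise FileNotFoundError(path)
    return f"loaded {model_type}"

api.load_model = _load_model

def enable_autodownload(models_path, fetch):
    """Redirect api.ISL_PATHS into models_path and wrap api.load_model
    so missing weights are fetched before the real loader runs."""
    # Re-point every hard-coded path into our own models folder.
    for k, v in api.ISL_PATHS.items():
        api.ISL_PATHS[k] = os.path.join(models_path, os.path.basename(v))

    # Keep a handle to the original loader, then swap in the wrapper.
    api.load_model_inner = api.load_model

    def load_model_wrapper(model_type):
        path = api.ISL_PATHS[model_type]
        if not os.path.exists(path):
            os.makedirs(models_path, exist_ok=True)
            fetch(model_type, path)  # real code: urllib.request.urlretrieve
        return api.load_model_inner(model_type)

    api.load_model = load_model_wrapper

# Demo: the weight file does not exist yet, so the wrapper "downloads" it
# (here by just writing a placeholder file) before the loader runs.
with tempfile.TemporaryDirectory() as models_path:
    api.ISL_PATHS = {"dpt_hybrid": "/hardcoded/old/dir/dpt_hybrid.pt"}

    def fake_fetch(model_type, path):
        with open(path, "w") as f:
            f.write("weights")

    enable_autodownload(models_path, fake_fetch)
    result = api.load_model("dpt_hybrid")
    print(result)  # loaded dpt_hybrid
```

The nice property is that callers inside ldm never change: they still call `midas.api.load_model(model_type)`, and the download happens transparently on first use.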

The model loads, and then you get a long error trace. Here's the last bit of it, from model_wrap.py:

File ~\python\deforum-0.6dev\deforum-stable-diffusion-dev\helpers\model_wrap.py:133, in CFGDenoiserWithGrad.forward(self, x, sigma, uncond, cond, cond_scale)
    130 if self.cond_uncond_sync:
    131     # x0 = self.cfg_cond_model_fn_(x, sigma, uncond=uncond, cond=cond, cond_scale=cond_scale)
    132     cond_in = torch.cat([uncond, cond])
--> 133     x0 = self.cond_model_fn_(x, sigma, cond=cond_in, inner_model=_cfg_model)
    135 # Calculate cond and uncond separately
    136 else:
    137     if self.gradient_add_to == "uncond":

File ~\python\deforum-0.6dev\deforum-stable-diffusion-dev\helpers\model_wrap.py:94, in CFGDenoiserWithGrad.cond_model_fn_(self, x, sigma, inner_model, **kwargs)
     92 elif self.gradient_wrt == 'x0_pred':
     93     with torch.no_grad():
---> 94         denoised = inner_model(x, sigma, **kwargs)
     95     with torch.enable_grad():
     96         cond_grad = cond_fn(x, sigma, denoised=denoised.detach().requires_grad_(), **kwargs).detach()

File ~\python\deforum-0.6dev\deforum-stable-diffusion-dev\helpers\model_wrap.py:122, in CFGDenoiserWithGrad.forward.<locals>._cfg_model(x, sigma, cond, **kwargs)
    119 x_in = torch.cat([x] * 2)
    120 sigma_in = torch.cat([sigma] * 2)
--> 122 denoised = self.inner_model(x_in, sigma_in, cond=cond, **kwargs)
    123 uncond_x0, cond_x0 = denoised.chunk(2)
    124 x0_pred = uncond_x0 + (cond_x0 - uncond_x0) * cond_scale

File ~\anaconda3\envs\dsd\lib\site-packages\torch\nn\modules\module.py:1190, in Module._call_impl(self, *input, **kwargs)
   1186 # If we don't have any hooks, we want to skip the rest of the logic in
   1187 # this function, and just call forward.
   1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190     return forward_call(*input, **kwargs)
   1191 # Do not call functions when jit is used
   1192 full_backward_hooks, non_full_backward_hooks = [], []

File ~\python\deforum-0.6dev\deforum-stable-diffusion-dev\src\k_diffusion\external.py:112, in DiscreteEpsDDPMDenoiser.forward(self, input, sigma, **kwargs)
    110 def forward(self, input, sigma, **kwargs):
    111     c_out, c_in = [utils.append_dims(x, input.ndim) for x in self.get_scalings(sigma)]
--> 112     eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
    113     return input + eps * c_out

File ~\python\deforum-0.6dev\deforum-stable-diffusion-dev\src\k_diffusion\external.py:138, in CompVisDenoiser.get_eps(self, *args, **kwargs)
    137 def get_eps(self, *args, **kwargs):
--> 138     return self.inner_model.apply_model(*args, **kwargs)

File ~\python\deforum-0.6dev\deforum-stable-diffusion-dev\src\ldm\models\diffusion\ddpm.py:858, in LatentDiffusion.apply_model(self, x_noisy, t, cond, return_ids)
    855     key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn'
    856     cond = {key: cond}
--> 858 x_recon = self.model(x_noisy, t, **cond)
    860 if isinstance(x_recon, tuple) and not return_ids:
    861     return x_recon[0]

File ~\anaconda3\envs\dsd\lib\site-packages\torch\nn\modules\module.py:1190, in Module._call_impl(self, *input, **kwargs)
   1186 # If we don't have any hooks, we want to skip the rest of the logic in
   1187 # this function, and just call forward.
   1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1189         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190     return forward_call(*input, **kwargs)
   1191 # Do not call functions when jit is used
   1192 full_backward_hooks, non_full_backward_hooks = [], []

File ~\python\deforum-0.6dev\deforum-stable-diffusion-dev\src\ldm\models\diffusion\ddpm.py:1331, in DiffusionWrapper.forward(self, x, t, c_concat, c_crossattn, c_adm)
   1329     out = self.diffusion_model(x, t, context=cc)
   1330 elif self.conditioning_key == 'hybrid':
-> 1331     xc = torch.cat([x] + c_concat, dim=1)
   1332     cc = torch.cat(c_crossattn, 1)
   1333     out = self.diffusion_model(xc, t, context=cc)

TypeError: can only concatenate list (not "NoneType") to list
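One plausible reading of this trace (a guess on my part, not confirmed): the depth model runs with `conditioning_key == 'hybrid'`, so `DiffusionWrapper.forward` expects `c_concat` (a list of tensors, the depth conditioning) alongside `c_crossattn`, but the sampling path here only supplies the cross-attention conditioning, so `c_concat` arrives as `None` and `[x] + c_concat` blows up. A tiny reproduction of that failure mode, using plain lists in place of tensors:

```python
# DiffusionWrapper.forward for conditioning_key == 'hybrid' does roughly:
#   xc = torch.cat([x] + c_concat, dim=1)
# c_concat must be a *list* of tensors. If the sampler only builds
# c_crossattn and passes c_concat=None, the list concatenation itself
# fails before torch.cat is ever reached.

def hybrid_concat(x, c_concat):
    # Stand-in for torch.cat([x] + c_concat, dim=1), lists instead of tensors.
    return [x] + c_concat

# Works when the caller supplies the depth conditioning:
print(hybrid_concat("latent", ["depth_map"]))  # ['latent', 'depth_map']

# Fails exactly like the traceback when c_concat is None:
try:
    hybrid_concat("latent", None)
except TypeError as e:
    print(e)  # can only concatenate list (not "NoneType") to list
```

If that reading is right, the fix would be in the sampling code: build the midas depth map for the input image and pass it through as `c_concat`, the way webui's depth2img path does, rather than anything in the model loader itself.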

reallybigname · Jan 05 '23 20:01