
Error loading inpainting model in float16

Open EnricoBeltramo opened this issue 3 years ago • 0 comments

Describe the bug

Following this tutorial: https://huggingface.co/runwayml/stable-diffusion-inpainting

I tried to load the model in float16, but I get the following error: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

If I try without float16, the model loads correctly.
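For context, the same error can be reproduced outside of diffusers: PyTorch's CPU LayerNorm kernel has no float16 implementation (at least up to the 1.13 used here), so any fp16 module whose input stays on the CPU hits it, while on GPU it works. A minimal sketch (not the pipeline itself; the device/dtype selection below is my addition for illustration) that picks the dtype the current device actually supports:

```python
import torch

# float16 LayerNorm is implemented on CUDA; on CPU (up to at least
# PyTorch 1.13) it raises:
#   RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
# So pick float16 only when running on GPU, float32 otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

ln = torch.nn.LayerNorm(8).to(device=device, dtype=dtype)
x = torch.randn(2, 8, device=device, dtype=dtype)
out = ln(x)  # succeeds because dtype matches what the device supports
print(out.dtype)
```

This suggests the error appears when the fp16 text encoder runs on the CPU, e.g. if the pipeline was never moved to the GPU, though I haven't confirmed that is what happens here.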

Reproduction

import torch
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    revision="fp16",
    torch_dtype=torch.float16,
)

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
# image and mask_image should be PIL images.
# The mask structure is white for inpainting and black for keeping as is.
image = pipe(prompt=prompt, image=img, mask_image=mask).images[0]

Logs

/opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py:27 in decorate_context        │
│                                                                                                  │
│    24 │   │   @functools.wraps(func)                                                             │
│    25 │   │   def decorate_context(*args, **kwargs):                                             │
│    26 │   │   │   with self.clone():                                                             │
│ ❱  27 │   │   │   │   return func(*args, **kwargs)                                               │
│    28 │   │   return cast(F, decorate_context)                                                   │
│    29 │                                                                                          │
│    30 │   def _wrap_generator(self, func):                                                       │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diff │
│ usion_inpaint.py:786 in __call__                                                                 │
│                                                                                                  │
│   783 │   │   │   do_classifier_free_guidance,                                                   │
│   784 │   │   │   negative_prompt,                                                               │
│   785 │   │   │   prompt_embeds=prompt_embeds,                                                   │
│ ❱ 786 │   │   │   negative_prompt_embeds=negative_prompt_embeds,                                 │
│   787 │   │   )                                                                                  │
│   788 │   │                                                                                      │
│   789 │   │   # 4. Preprocess mask and image                                                     │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diff │
│ usion_inpaint.py:396 in _encode_prompt                                                           │
│                                                                                                  │
│   393 │   │   │                                                                                  │
│   394 │   │   │   prompt_embeds = self.text_encoder(                                             │
│   395 │   │   │   │   text_input_ids.to(device),                                                 │
│ ❱ 396 │   │   │   │   attention_mask=attention_mask,                                             │
│   397 │   │   │   )                                                                              │
│   398 │   │   │   prompt_embeds = prompt_embeds[0]                                               │
│   399                                                                                            │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1194 in _call_impl             │
│                                                                                                  │
│   1191 │   │   # this function, and just call forward.                                           │
│   1192 │   │   if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o  │
│   1193 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                   │
│ ❱ 1194 │   │   │   return forward_call(*input, **kwargs)                                         │
│   1195 │   │   # Do not call functions when jit is used                                          │
│   1196 │   │   full_backward_hooks, non_full_backward_hooks = [], []                             │
│   1197 │   │   if self._backward_hooks or _global_backward_hooks:                                │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/transformers/models/clip/modeling_clip.py:822 in forward  │
│                                                                                                  │
│    819 │   │   │   position_ids=position_ids,                                                    │
│    820 │   │   │   output_attentions=output_attentions,                                          │
│    821 │   │   │   output_hidden_states=output_hidden_states,                                    │
│ ❱  822 │   │   │   return_dict=return_dict,                                                      │
│    823 │   │   )                                                                                 │
│    824                                                                                           │
│    825                                                                                           │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1194 in _call_impl             │
│                                                                                                  │
│   1191 │   │   # this function, and just call forward.                                           │
│   1192 │   │   if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o  │
│   1193 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                   │
│ ❱ 1194 │   │   │   return forward_call(*input, **kwargs)                                         │
│   1195 │   │   # Do not call functions when jit is used                                          │
│   1196 │   │   full_backward_hooks, non_full_backward_hooks = [], []                             │
│   1197 │   │   if self._backward_hooks or _global_backward_hooks:                                │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/transformers/models/clip/modeling_clip.py:731 in forward  │
│                                                                                                  │
│    728 │   │   │   causal_attention_mask=causal_attention_mask,                                  │
│    729 │   │   │   output_attentions=output_attentions,                                          │
│    730 │   │   │   output_hidden_states=output_hidden_states,                                    │
│ ❱  731 │   │   │   return_dict=return_dict,                                                      │
│    732 │   │   )                                                                                 │
│    733 │   │                                                                                     │
│    734 │   │   last_hidden_state = encoder_outputs[0]                                            │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1194 in _call_impl             │
│                                                                                                  │
│   1191 │   │   # this function, and just call forward.                                           │
│   1192 │   │   if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o  │
│   1193 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                   │
│ ❱ 1194 │   │   │   return forward_call(*input, **kwargs)                                         │
│   1195 │   │   # Do not call functions when jit is used                                          │
│   1196 │   │   full_backward_hooks, non_full_backward_hooks = [], []                             │
│   1197 │   │   if self._backward_hooks or _global_backward_hooks:                                │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/transformers/models/clip/modeling_clip.py:658 in forward  │
│                                                                                                  │
│    655 │   │   │   │   │   hidden_states,                                                        │
│    656 │   │   │   │   │   attention_mask,                                                       │
│    657 │   │   │   │   │   causal_attention_mask,                                                │
│ ❱  658 │   │   │   │   │   output_attentions=output_attentions,                                  │
│    659 │   │   │   │   )                                                                         │
│    660 │   │   │                                                                                 │
│    661 │   │   │   hidden_states = layer_outputs[0]                                              │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1194 in _call_impl             │
│                                                                                                  │
│   1191 │   │   # this function, and just call forward.                                           │
│   1192 │   │   if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o  │
│   1193 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                   │
│ ❱ 1194 │   │   │   return forward_call(*input, **kwargs)                                         │
│   1195 │   │   # Do not call functions when jit is used                                          │
│   1196 │   │   full_backward_hooks, non_full_backward_hooks = [], []                             │
│   1197 │   │   if self._backward_hooks or _global_backward_hooks:                                │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/transformers/models/clip/modeling_clip.py:382 in forward  │
│                                                                                                  │
│    379 │   │   """                                                                               │
│    380 │   │   residual = hidden_states                                                          │
│    381 │   │                                                                                     │
│ ❱  382 │   │   hidden_states = self.layer_norm1(hidden_states)                                   │
│    383 │   │   hidden_states, attn_weights = self.self_attn(                                     │
│    384 │   │   │   hidden_states=hidden_states,                                                  │
│    385 │   │   │   attention_mask=attention_mask,                                                │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py:1194 in _call_impl             │
│                                                                                                  │
│   1191 │   │   # this function, and just call forward.                                           │
│   1192 │   │   if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o  │
│   1193 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                   │
│ ❱ 1194 │   │   │   return forward_call(*input, **kwargs)                                         │
│   1195 │   │   # Do not call functions when jit is used                                          │
│   1196 │   │   full_backward_hooks, non_full_backward_hooks = [], []                             │
│   1197 │   │   if self._backward_hooks or _global_backward_hooks:                                │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/torch/nn/modules/normalization.py:191 in forward          │
│                                                                                                  │
│   188 │                                                                                          │
│   189 │   def forward(self, input: Tensor) -> Tensor:                                            │
│   190 │   │   return F.layer_norm(                                                               │
│ ❱ 191 │   │   │   input, self.normalized_shape, self.weight, self.bias, self.eps)                │
│   192 │                                                                                          │
│   193 │   def extra_repr(self) -> str:                                                           │
│   194 │   │   return '{normalized_shape}, eps={eps}, ' \                                         │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/torch/nn/functional.py:2515 in layer_norm                 │
│                                                                                                  │
│   2512 │   │   return handle_torch_function(                                                     │
│   2513 │   │   │   layer_norm, (input, weight, bias), input, normalized_shape, weight=weight, b  │
│   2514 │   │   )                                                                                 │
│ ❱ 2515 │   return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.c  │
│   2516                                                                                           │
│   2517                                                                                           │
│   2518 def group_norm(                                                                           │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

System Info

diffusers version: 0.14.0.dev0
Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
Python version: 3.7.12
PyTorch version (GPU?): 1.13.1+cu116 (True)
Huggingface_hub version: 0.12.1
Transformers version: 4.26.1
Accelerate version: 0.16.0
xFormers version: 0.0.16
Using GPU in script?: YES
Using distributed or parallel set-up in script?: NO

EnricoBeltramo, Feb 20 '23