
StableDiffusionControlNetImg2ImgPipeline call reports "argument of type 'NoneType' is not iterable"

Open hjj-lmx opened this issue 1 year ago • 1 comment

Describe the bug

Calling the pipeline raises `argument of type 'NoneType' is not iterable`:

    checkpoint = os.path.join(hub_dir, "checkpoints/StableDiffusionXL/model/ud_sdxl-动漫二次元.safetensors")
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_single_file(
        checkpoint,
        controlnet=controlnet,
        torch_dtype=torch.float16,
        use_safetensors=True,
        variant="fp16"
    )

The locally downloaded ud_sdxl-动漫二次元.safetensors model cannot be loaded.
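For context, this exact error text is what CPython raises when an `in` membership test is applied to `None`. A minimal sketch of how it arises (a hypothetical illustration of an internal check, not the actual diffusers call path):

```python
# Reproduce the exact TypeError text that the pipeline call surfaces.
# This mimics a hypothetical internal check like `if key in config:`
# where `config` ended up as None instead of a dict.
def lookup(config, key):
    return key in config  # raises TypeError when config is None

try:
    lookup(None, "unet")
except TypeError as exc:
    print(exc)  # argument of type 'NoneType' is not iterable
```

This suggests that somewhere during `from_single_file`, a value expected to be a dict (or other container) resolved to `None` for this checkpoint.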

Reproduction

    params = request.json
    # Fetch the input image
    img_url = params.get('image_url')
    response = requests.get(img_url)
    # Check whether the request succeeded
    if response.status_code == 200:
        # Load the image data as a PIL image object
        init_image = Image.open(BytesIO(response.content))
    else:
        return jsonify({"message": "Failed to read image"}), 500
    # Load the BLIP model and processor
    model_name = "Salesforce/blip-image-captioning-large"
    processor = BlipProcessor.from_pretrained(model_name)
    model = BlipForConditionalGeneration.from_pretrained(model_name, torch_dtype=torch.float16).to("cuda")
    # Preprocess the image input
    inputs = processor(images=init_image, return_tensors="pt").to("cuda")
    # Generate a caption with the BLIP model
    with torch.no_grad():
        outputs = model.generate(**inputs)
    description = processor.decode(outputs[0], skip_special_tokens=True)

    hub_dir = get_dir()
    # Load the pretrained ControlNet model
    # controlnet_model = os.path.join(hub_dir, "checkpoints/StableDiffusionXL/controlnet/ud_canny.safetensors")
    controlnet = ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
    # Load the pretrained img2img checkpoint
    checkpoint = os.path.join(hub_dir, "checkpoints/StableDiffusionXL/model/ud_sdxl-动漫二次元.safetensors")
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_single_file(
        checkpoint,
        controlnet=controlnet,
        torch_dtype=torch.float16,
        use_safetensors=True,
        variant="fp16"
    )
    pipe.to("cuda")

    # Set the sampler (scheduler)
    scheduler_name = params.get('scheduler', 'DPM++ 2M Karras')
    set_scheduler(pipe, scheduler_name)

    # Check whether a list of LoRA weight paths was provided, e.g. [["checkpoints/lora1.safetensors", 0.8], ["checkpoints/lora2.safetensors", 0.5]]
    lora_weights_list = params.get('lora_weights_list', None)
    if lora_weights_list:
        lora_weights_list = [(os.path.join(hub_dir, lora_path), scale) for lora_path, scale in lora_weights_list]
        apply_multiple_lora_weights(pipe, lora_weights_list)

    # Set up the random generator if a seed was provided
    seed = params.get('seed', None)
    if seed is not None and seed != -1:
        torch.manual_seed(seed)
        generator = torch.Generator().manual_seed(seed)
    else:
        generator = None

    # Convert to a NumPy array and run Canny edge detection
    input_image_np = np.array(init_image)
    edges = cv2.Canny(input_image_np, 100, 200)
    # Convert back to a PIL image
    edges_image = Image.fromarray(edges)

    model_params = {
        "prompt": params.get('prompt', "") + description,                           # prompt
        "negative_prompt": params.get('negative_prompt', ""),                       # negative prompt
        "image": init_image,                                                        # source image
        "strength": params.get('strength', 0.85),                                   # denoising strength
        "height": params.get('height', 1024),                                       # image height
        "width": params.get('width', 1024),                                         # image width
        "num_inference_steps": params.get('num_inference_steps', 20),               # sampling steps
        "guidance_scale": params.get('guidance_scale', 7),                          # prompt guidance scale
        "num_images_per_prompt": params.get('num_images_per_prompt', 1),            # images per prompt
        "generator": generator,
        "control_image": edges_image,
        "controlnet_conditioning_scale": params.get('controlnet_conditioning_scale', 0.4),
        "control_guidance_start": params.get('control_guidance_start', 0),
        "control_guidance_end": params.get('control_guidance_end', 1)
    }

    image = pipe(**model_params).images[0]
    # image_byte = pil_to_bytes(image,"png")
    image.save("D:\\AIdata\\image\\aaaaa.png")

Logs

Fetching 17 files: 100%|██████████| 17/17 [00:00<?, ?it/s]
Loading pipeline components...:  20%|██        | 1/5 [00:00<00:00,  6.98it/s]Some weights of the model checkpoint were not used when initializing CLIPTextModel: 
 ['text_model.embeddings.position_ids']
Loading pipeline components...: 100%|██████████| 5/5 [00:01<00:00,  4.07it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.controlnet.pipeline_controlnet_img2img.StableDiffusionControlNetImg2ImgPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
D:\Program Files\JetBrains\PyCharm 2023.2.3\plugins\python\helpers\pydev\_pydevd_bundle\pydevd_xml.py:340: FutureWarning: Accessing config attribute `__len__` directly via 'ControlNetModel' object attribute is deprecated. Please access '__len__' over 'ControlNetModel's config object instead, e.g. 'unet.config.__len__'.
  elif hasattr(v, '__len__') and not is_string(v):
E:\Project\UD-REAMSAI\venv\lib\site-packages\diffusers\configuration_utils.py:140: FutureWarning: Accessing config attribute `__len__` directly via 'StableDiffusionControlNetImg2ImgPipeline' object attribute is deprecated. Please access '__len__' over 'StableDiffusionControlNetImg2ImgPipeline's config object instead, e.g. 'scheduler.config.__len__'.
  deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False)
  0%|          | 0/17 [00:05<?, ?it/s]
2024-06-20 18:24:26.209 | ERROR    | __main__:image_to_anime:219 - Failed to fetch data: argument of type 'NoneType' is not iterable
192.168.1.99 - - [20/Jun/2024 18:24:26] "POST /image_to_anime HTTP/1.1" 200 -

System Info

Python 3.8, SDXL, Windows 10

Who can help?

No response

hjj-lmx avatar Jun 20 '24 10:06 hjj-lmx

Can you please provide a more minimal reproducible error snippet? We don't know what `controlnet` is or how it was initialized. Furthermore, we don't know anything about `checkpoint`.

Additionally, since all the maintainers of this library understand English, it would be great to have the snippet in pure English, if possible.

sayakpaul avatar Jun 21 '24 04:06 sayakpaul

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Sep 14 '24 15:09 github-actions[bot]