
Disabling safety-model or fixing false positives?

Open bartekleon opened this issue 3 years ago • 22 comments

I really wanted to try this project, so I tried using it with diffusers (the default configuration for this repo runs out of memory for me, while the diffusers one actually runs). I am getting the "Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed." error all the time, even with the default prompt, or just "star" (as if "riding a horse" were NSFW). Is it possible to debug the runs so this hand-made safety feature can be fixed or checked? Or is there a way to run this without the safety model, or with it switched off? I am running the default:

import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    revision="fp16", 
    torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "star"
with autocast("cuda"):
    image = pipe(prompt).images[0]  
image.save("star.png")

(I tried seed 12345 if someone wants to reproduce this. Maybe my PC is just not safe for work or something.) Thanks in advance

bartekleon avatar Sep 09 '22 04:09 bartekleon

I agree an NSFW toggle would be nice; sometimes just asking for a swimsuit triggers it.

tails101 avatar Sep 09 '22 09:09 tails101

Disabling it is easy; you can do this:

pipe.safety_checker = lambda images, clip_input: (images, False)

rogeriochaves avatar Sep 09 '22 11:09 rogeriochaves

I also found the checker was inaccurately flagging some of my prompts as NSFW, so I disabled it in scripts/txt2img.py by replacing check_safety with:

def check_safety(x_image):
    # safety_checker_input = safety_feature_extractor(numpy_to_pil(x_image), return_tensors="pt")
    # x_checked_image, has_nsfw_concept = safety_checker(images=x_image, clip_input=safety_checker_input.pixel_values)
    # assert x_checked_image.shape[0] == len(has_nsfw_concept)
    # for i in range(len(has_nsfw_concept)):
    #     if has_nsfw_concept[i]:
    #         x_checked_image[i] = load_replacement(x_checked_image[i])
    # return x_checked_image, has_nsfw_concept
    return x_image, False

peterbayerle avatar Sep 11 '22 08:09 peterbayerle

It seems my issue was a problem with the GTX 1660 and numpy with fp16. Basically it was producing all-green images, which were then flagged by the NSFW filter and turned black. It is still strange behaviour, but using the low-memory fork with full precision fixed it.

bartekleon avatar Sep 12 '22 16:09 bartekleon

Disabling it is easy; you can do this:

pipe.safety_checker = lambda images, clip_input: (images, False)

All this did was disable the message; the image still came out black.

TheConceptBoy avatar Sep 23 '22 07:09 TheConceptBoy

It seems my issue was a problem with the GTX 1660 and numpy with fp16. Basically it was producing all-green images, which were then flagged by the NSFW filter and turned black. It is still strange behaviour, but using the low-memory fork with full precision fixed it.

Could you share the link to the low-memory fork? The issue persists with a GTX 1650 as well.

TheConceptBoy avatar Sep 23 '22 07:09 TheConceptBoy

It seems my issue was a problem with the GTX 1660 and numpy with fp16. Basically it was producing all-green images, which were then flagged by the NSFW filter and turned black. It is still strange behaviour, but using the low-memory fork with full precision fixed it.

Could you share the link to the low-memory fork? The issue persists with a GTX 1650 as well.

It was this one: https://github.com/basujindal/stable-diffusion. I had to add "--precision full" though in order for it to work correctly. Although it does increase VRAM usage, it is the only thing that makes it work: "If you have a Nvidia GTX series GPU, the output images maybe entirely green in color. This is because GTX series do not support half precision calculation, which is the default mode of calculation in this repository. To overcome the issue, use the --precision full argument. The downside is that it will lead to higher GPU VRAM usage." It should take ~3 GB of VRAM for 512x512.
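
For the diffusers route from the original post, the equivalent workaround should be to load the weights in full precision instead of fp16. A rough sketch (it roughly doubles VRAM use, so attention slicing is enabled to help it fit on ~6 GB cards):

from diffusers import StableDiffusionPipeline

# No revision="fp16" / torch_dtype=torch.float16: full precision avoids the
# green or black outputs that GTX 16xx cards produce with half precision.
pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-4")
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()  # lowers peak VRAM so fp32 can fit on smaller cards

image = pipe("star").images[0]
image.save("star.png")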

bartekleon avatar Oct 02 '22 12:10 bartekleon

Disabling it is easy; you can do this:

pipe.safety_checker = lambda images, clip_input: (images, False)

All this did was disable the message; the image still came out black.

It actually worked for me. This line overrides the safety_checker of the StableDiffusionPipeline so that it just returns the original images; simple and effective.

Just overwrite the safety_checker after the StableDiffusionPipeline is initialized.

Nukami avatar Oct 24 '22 17:10 Nukami

I agree an NSFW toggle would be nice; sometimes just asking for a swimsuit triggers it.

I added a simple toggle for the txt2img.py script. #442

andresberejnoi avatar Nov 02 '22 21:11 andresberejnoi

Disabling it is easy; you can do this:

pipe.safety_checker = lambda images, clip_input: (images, False)

Please note that this should be placed after calling pipe.enable_model_cpu_offload(). Otherwise, you may encounter an exception with the message "AttributeError: 'function' object has no attribute 'forward'".
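
For reference, a minimal ordering sketch (the model path mirrors the earlier examples; enable_model_cpu_offload() needs accelerate installed):

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # set up the offload hooks first...
# ...and only then replace the checker, so the hooks are not clobbered
pipe.safety_checker = lambda images, clip_input: (images, False)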

polym avatar Mar 28 '23 01:03 polym

I was just looking through the code and found that we can set it like this without requiring a lambda. Though if they change this in the future, the lambda will probably be necessary again.

pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False
)

You can also change it later if necessary by doing this.

pipe.safety_checker = None
pipe.requires_safety_checker = False
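
A quick usage sketch with the arguments above (prompt and file name are just examples):

pipe = pipe.to("cuda")
image = pipe("a person in a swimsuit at the beach").images[0]  # no black image substituted
image.save("swimsuit.png")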

JemiloII avatar May 02 '23 21:05 JemiloII

It seems my issue was a problem with the GTX 1660 and numpy with fp16. Basically it was producing all-green images, which were then flagged by the NSFW filter and turned black. It is still strange behaviour, but using the low-memory fork with full precision fixed it.

Could you share the link to the low-memory fork? The issue persists with a GTX 1650 as well.

It was this one: https://github.com/basujindal/stable-diffusion. I had to add "--precision full" though in order for it to work correctly. Although it does increase VRAM usage, it is the only thing that makes it work: "If you have a Nvidia GTX series GPU, the output images maybe entirely green in color. This is because GTX series do not support half precision calculation, which is the default mode of calculation in this repository. To overcome the issue, use the --precision full argument. The downside is that it will lead to higher GPU VRAM usage." It should take ~3 GB of VRAM for 512x512.

Sorry, can you explain exactly what to do? I couldn't add the "--precision full" argument; it pops up SyntaxError: invalid syntax. Please help me.

1318980306 avatar May 24 '23 21:05 1318980306

@1318980306 Your graphics card is too old. I had to upgrade mine in order to use full precision.

JemiloII avatar Jun 22 '23 01:06 JemiloII

Disabling it is easy; you can do this:

pipe.safety_checker = lambda images, clip_input: (images, False)

It keeps giving me this error

TypeError: 'bool' object is not iterable

adnanshussain avatar Jul 06 '23 07:07 adnanshussain

Hey, folks. I'm having the same issue as the above commenter, with the lambda approach throwing a TypeError because 'bool' object is not iterable. I also tried the other suggested syntax, but StableDiffusion continues to return black images when they would be NSFW. This is being run in Google Colab.

pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    safety_checker = None,
    requires_safety_checker = False
)

Any advice?

cleverNamesAreHard avatar Jul 08 '23 16:07 cleverNamesAreHard

Hey, folks. I'm having the same issue as the above commenter, with the lambda approach throwing a TypeError because 'bool' object is not iterable. I also tried the other suggested syntax, but StableDiffusion continues to return black images when they would be NSFW. This is being run in Google Colab.

pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    safety_checker = None,
    requires_safety_checker = False
)

Any advice?

I did the following and it worked for me

from diffusers.pipelines.stable_diffusion import safety_checker

def sc(self, clip_input, images):
    # Report every image as safe.
    return images, [False for _ in images]

# Override StableDiffusionSafetyChecker.forward so that, when called, it just
# returns the images and a list of False values (nothing gets blacked out).
safety_checker.StableDiffusionSafetyChecker.forward = sc
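
As a side note, the earlier one-liner fails on newer diffusers releases because the pipeline iterates over the second return value, so it has to be a per-image list rather than a single bool. If you prefer the instance-level override, something like this should behave the same (a sketch, version-dependent):

# Returning a list of flags avoids "TypeError: 'bool' object is not iterable"
# on diffusers versions that loop over has_nsfw_concept.
pipe.safety_checker = lambda images, clip_input, **kwargs: (images, [False] * len(images))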

adnanshussain avatar Jul 09 '23 05:07 adnanshussain

Hey, folks. I'm having the same issue as the above commenter, with the lambda approach throwing a TypeError because 'bool' object is not iterable. I also tried the other suggested syntax, but StableDiffusion continues to return black images when they would be NSFW. This is being run in Google Colab.

I got the same error, but it went away when I used this other reply:

I was just looking through the code and found that we can set it like this without requiring a lambda. Though if they change this in the future, the lambda will probably be necessary again.

pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False
)

You can also change it later if necessary by doing this.

pipe.safety_checker = None
pipe.requires_safety_checker = False

This disabled the NSFW filter without any errors.

adpadillar avatar Oct 27 '23 14:10 adpadillar

Hey, folks. I'm having the same issue as the above commenter, with the lambda approach throwing a TypeError because 'bool' object is not iterable. I also tried the other suggested syntax, but StableDiffusion continues to return black images when they would be NSFW. This is being run in Google Colab.

I got the same error, but it went away by using this other reply

I was just looking through the code and found that we can set it like this without requiring a lambda. Though if they change this in the future, the lambda will probably be necessary again.

pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False
)

You can also change it later if necessary by doing this.

pipe.safety_checker = None
pipe.requires_safety_checker = False

This disabled the NSFW filter without any errors.

This is currently the best solution. Thanks!

YuyangXueEd avatar Jun 23 '24 11:06 YuyangXueEd

I want to keep the safety check, but is there any way to replace the black image with another image? How can I do it?

thuongvovan avatar Jul 12 '24 03:07 thuongvovan
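
One way to keep the check but swap the black image for something else is to wrap the stock checker's forward, replacing flagged entries with a custom placeholder. A rough sketch, assuming the pipeline hands the checker numpy arrays scaled to [0, 1] (as the versions discussed above do); placeholder.png and the 512x512 size are stand-ins:

import numpy as np
from PIL import Image
from diffusers.pipelines.stable_diffusion import safety_checker

# Load a custom replacement image once and scale it to [0, 1].
_replacement = np.array(
    Image.open("placeholder.png").convert("RGB").resize((512, 512))
) / 255.0

_original_forward = safety_checker.StableDiffusionSafetyChecker.forward

def forward_with_replacement(self, clip_input, images):
    # Run the real checker first so flagging still happens.
    images, has_nsfw_concept = _original_forward(self, clip_input, images)
    for i, flagged in enumerate(has_nsfw_concept):
        if flagged:
            # The checker has already blacked this image out; swap in the placeholder.
            images[i] = _replacement
    return images, has_nsfw_concept

safety_checker.StableDiffusionSafetyChecker.forward = forward_with_replacement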