Disabling the safety model or fixing false positives?
I really wanted to try this project, so I tried using it with diffusers (the default configuration for this repo throws an out-of-memory error, and the diffusers one actually runs). I am getting the error "Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed." all the time, even with a default prompt, or with just "star", as if "riding horse" were NSFW. Is it possible to debug the runs so that this "handmade" safety feature can be fixed or checked?
Or is there a possibility to run this without the safety model, or with it switched off?
Running the default:
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "star"
with autocast("cuda"):
    image = pipe(prompt).images[0]
image.save("star.png")
(I tried seed 12345 if someone wants to try to reproduce it. Maybe my PC is not safe for work or something.) Thanks in advance.
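For anyone trying to reproduce this: in diffusers the seed is usually fixed by passing a torch.Generator to the pipeline call rather than relying on the global RNG. A minimal sketch, reusing the prompt and seed 12345 from above:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16
).to("cuda")

# seed the sampler explicitly so the run can be reproduced
generator = torch.Generator(device="cuda").manual_seed(12345)
image = pipe("star", generator=generator).images[0]
image.save("star_seed12345.png")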
I agree an NSFW toggle would be nice; sometimes just asking for a swimsuit triggers it.
Disabling it is easy; you can do this:
pipe.safety_checker = lambda images, clip_input: (images, False)
I also found the checker was inaccurately flagging some of my prompts as NSFW, so I disabled it in scripts/txt2img.py by replacing check_safety with:
def check_safety(x_image):
    # safety_checker_input = safety_feature_extractor(numpy_to_pil(x_image), return_tensors="pt")
    # x_checked_image, has_nsfw_concept = safety_checker(images=x_image, clip_input=safety_checker_input.pixel_values)
    # assert x_checked_image.shape[0] == len(has_nsfw_concept)
    # for i in range(len(has_nsfw_concept)):
    #     if has_nsfw_concept[i]:
    #         x_checked_image[i] = load_replacement(x_checked_image[i])
    # return x_checked_image, has_nsfw_concept
    return x_image, False
It seems my issue was a problem with the GTX 1660 and numpy with fp16: basically it was producing all-green images, which were later flagged by the NSFW filter and turned black. It is still strange behaviour, but using the low-memory fork with full precision fixed it.
Disabling it is easy; you can do this:
pipe.safety_checker = lambda images, clip_input: (images, False)
All this did was disable the message; the image still came out black.
It seems my issue was a problem with the GTX 1660 and numpy with fp16: basically it was producing all-green images, which were later flagged by the NSFW filter and turned black. It is still strange behaviour, but using the low-memory fork with full precision fixed it.
Could you share a link to the low-memory fork? The issue persists with a GTX 1650 as well.
It seems my issue was a problem with the GTX 1660 and numpy with fp16: basically it was producing all-green images, which were later flagged by the NSFW filter and turned black. It is still strange behaviour, but using the low-memory fork with full precision fixed it.
Could you share a link to the low-memory fork? The issue persists with a GTX 1650 as well.
https://github.com/basujindal/stable-diffusion It was this one. I had to add "--precision full", though, for it to work correctly. Although it does increase VRAM usage, it is the only thing that makes it work: "If you have a Nvidia GTX series GPU, the output images may be entirely green in color. This is because GTX series do not support half precision calculation, which is the default mode of calculation in this repository. To overcome the issue, use the --precision full argument. The downside is that it will lead to higher GPU VRAM usage." It should take ~3 GB of VRAM for 512x512.
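For reference, the flag is meant to be passed on the command line when running the fork's txt2img script, not inside the Python code; a hedged example (the script path and other arguments are taken from the fork's README as I remember it and may have changed):

python optimizedSD/optimized_txt2img.py --prompt "star" --H 512 --W 512 --precision full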
Disabling it is easy; you can do this:
pipe.safety_checker = lambda images, clip_input: (images, False)
All this did was disable the message; the image still came out black.
It actually worked for me. This line overrides the safety_checker of StableDiffusionPipeline so that it just returns the original images; graceful and effective.
Just overwrite the safety_checker right after the point where the StableDiffusionPipeline is initialized.
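A minimal sketch of that placement (the model path and prompt are just placeholders):

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-4").to("cuda")

# override the checker right after the pipeline is created, before any generation call
# note: newer diffusers versions iterate over the second return value, so a list such as
# [False] * len(images) may be needed instead of a bare False (see further down the thread)
pipe.safety_checker = lambda images, clip_input: (images, False)

image = pipe("a star over the mountains").images[0]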
I agree an NSFW toggle would be nice; sometimes just asking for a swimsuit triggers it.
I added a simple toggle for the txt2img.py script. #442
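I haven't looked at the PR itself, but a hypothetical sketch of such a toggle for scripts/txt2img.py could look roughly like this (the flag name is made up, and I'm assuming the parser/opt/x_samples_ddim names used around check_safety in that script):

# hypothetical option added to the script's existing argparse setup
parser.add_argument(
    "--skip-safety-check",
    action="store_true",
    help="return generated images unmodified instead of running the NSFW checker",
)

# and where the samples are currently passed through check_safety:
if opt.skip_safety_check:
    x_checked_image, has_nsfw_concept = x_samples_ddim, [False] * len(x_samples_ddim)
else:
    x_checked_image, has_nsfw_concept = check_safety(x_samples_ddim)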
Disabling it is easy; you can do this:
pipe.safety_checker = lambda images, clip_input: (images, False)
Please note that this should be placed after calling pipe.enable_model_cpu_offload(). Otherwise, you may encounter an exception with the message "AttributeError: 'function' object has no attribute 'forward'".
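A short sketch of that ordering (assumes accelerate is installed, which enable_model_cpu_offload requires):

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    torch_dtype=torch.float16
)
# enable offloading first...
pipe.enable_model_cpu_offload()
# ...then replace the checker; doing it the other way round makes the offload hook
# look for .forward on the lambda, which seems to be where the AttributeError comes from
pipe.safety_checker = lambda images, clip_input: (images, False)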
I was just looking through the code and found that we can set it like this, without requiring a lambda. Though if they change this in the future, the lambda will probably become necessary.
pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False
)
You can also change it later if necessary by doing this.
pipe.safety_checker = None
pipe.requires_safety_checker = False
It seems my issue was a problem with the GTX 1660 and numpy with fp16: basically it was producing all-green images, which were later flagged by the NSFW filter and turned black. It is still strange behaviour, but using the low-memory fork with full precision fixed it.
Could you share a link to the low-memory fork? The issue persists with a GTX 1650 as well.
https://github.com/basujindal/stable-diffusion It was this one. I had to add "--precision full", though, for it to work correctly. Although it does increase VRAM usage, it is the only thing that makes it work: "If you have a Nvidia GTX series GPU, the output images may be entirely green in color. This is because GTX series do not support half precision calculation, which is the default mode of calculation in this repository. To overcome the issue, use the --precision full argument. The downside is that it will lead to higher GPU VRAM usage." It should take ~3 GB of VRAM for 512x512.
Sorry, can you explain exactly what to do? I couldn't add the "--precision full" argument; it pops up "SyntaxError: invalid syntax". Please help me.
@1318980306 Your graphics card is too old. I had to upgrade mine in order to use full precision.
Disabling it is easy; you can do this:
pipe.safety_checker = lambda images, clip_input: (images, False)
It keeps giving me this error:
TypeError: 'bool' object is not iterable
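For what it's worth, newer diffusers versions iterate over the second return value (one flag per image), which seems to be why a bare False breaks; a hedged variant of the lambda:

# return one flag per image instead of a single bool, so the pipeline can iterate over it
pipe.safety_checker = lambda images, clip_input: (images, [False] * len(images))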
Hey, folks. I'm having the same issue as the above commenter, with the lambda approach throwing a TypeError because 'bool' object is not iterable. I also tried the other suggested syntax, but StableDiffusion continues to return black images when they would be NSFW. This is being run in Google Colab.
pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False
)
Any advice?
Hey, folks. I'm having the same issue as the above commenter, with the lambda approach throwing a TypeError because 'bool' object is not iterable. I also tried the other suggested syntax, but StableDiffusion continues to return black images when they would be NSFW. This is being run in Google Colab.
pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False
)
Any advice?
I did the following and it worked for me:
from diffusers.pipelines.stable_diffusion import safety_checker

def sc(self, clip_input, images):
    return images, [False for i in images]

# edit the StableDiffusionSafetyChecker class so that, when called, it just returns the images and an array of False values
safety_checker.StableDiffusionSafetyChecker.forward = sc
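For context, a sketch of how this patch fits together with pipeline creation (the model path and prompt are placeholders, and the import path is the one from the snippet above, which may differ between diffusers versions):

import torch
from diffusers import StableDiffusionPipeline
from diffusers.pipelines.stable_diffusion import safety_checker

def sc(self, clip_input, images):
    # report every image as safe
    return images, [False for _ in images]

# patch the checker class before the pipeline is instantiated
safety_checker.StableDiffusionSafetyChecker.forward = sc

pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    torch_dtype=torch.float16
).to("cuda")
image = pipe("a star over the mountains").images[0]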
Hey, folks. I'm having the same issue as the above commenter, with the lambda approach throwing a TypeError because 'bool' object is not iterable. I also tried the other suggested syntax, but StableDiffusion continues to return black images when they would be NSFW. This is being run in Google Colab.
I got the same error, but it went away by using this other reply:
I was just looking through the code and found that we can set it like this, without requiring a lambda. Though if they change this in the future, the lambda will probably become necessary.
pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False
)
You can also change it later if necessary by doing this:
pipe.safety_checker = None
pipe.requires_safety_checker = False
This disabled the NSFW filter without any errors.
Hey, folks. I'm having the same issue as the above commenter, with the lambda approach throwing a TypeError because 'bool' object is not iterable. I also tried the other suggested syntax, but StableDiffusion continues to return black images when they would be NSFW. This is being run in Google Colab.
I got the same error, but it went away by using this other reply:
I was just looking through the code and found that we can set it like this, without requiring a lambda. Though if they change this in the future, the lambda will probably become necessary.
pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False
)
You can also change it later if necessary by doing this:
pipe.safety_checker = None
pipe.requires_safety_checker = False
This disabled the NSFW filter without any errors.
This is currently the best solution. Thanks!
I want to keep the safety check, but is there any way to replace the black image with another image? How would I do that?
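One possible approach is to leave the checker enabled and swap the flagged outputs for your own image afterwards; a minimal sketch, assuming the pipeline output exposes nsfw_content_detected (placeholder.png is a hypothetical file of yours):

import torch
from PIL import Image
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    torch_dtype=torch.float16
).to("cuda")

placeholder = Image.open("placeholder.png").resize((512, 512))

result = pipe("a star over the mountains")
# the pipeline reports which outputs were flagged; replace those with the placeholder
images = [
    placeholder if flagged else img
    for img, flagged in zip(result.images, result.nsfw_content_detected)
]
images[0].save("star.png")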