stable_diffusion.openvino

Uncensor

Open chaos4455 opened this issue 3 years ago • 10 comments

The project is very good, but it looks like it has some censorship around nudity. Is it possible to turn off all the filters in the models used?

chaos4455 avatar Sep 08 '22 08:09 chaos4455

I believe this implementation doesn't have any nudity filters built in.

Mikhael-Danilov avatar Sep 08 '22 13:09 Mikhael-Danilov

@chaos4455 we don't use any safety checkers for output images, but we hope that users will not use our project to create inappropriate or illegal content.

bes-dev avatar Sep 08 '22 14:09 bes-dev

@bes-dev, thanks for the reply. "but we hope that users will not use our project to create inappropriate or illegal content." I hope so too; unfortunately, from my point of view we have no control over that. You did very good work, very good.

Here is an example. I used some inputs with sexy and sensual words such as "nude", and I don't get any nude results.

Examples: it looks like the model tries to censor nude parts or words related to those themes.

(attached: example outputs from nude-themed prompts)

Otherwise, if we put in something random, the results look pretty awesome.

(attached: example outputs from random prompts)

chaos4455 avatar Sep 08 '22 19:09 chaos4455

Here is an example: the model tries to hide the sexual body parts. Thanks again!

chaos4455 avatar Sep 08 '22 19:09 chaos4455

This is because there is not a lot of explicit material in the LAION dataset (on which SD was trained), and it was probably additionally filtered before training SD. So SD as-is is not suitable as a porn generator. (The architecture itself is quite capable of it, but a proper dataset and a lot of computing power would be required to achieve that goal.)

Mikhael-Danilov avatar Sep 08 '22 20:09 Mikhael-Danilov

Thanks for the answer. Is it possible to edit the code of this project to use a more suitable dataset? I don't have further knowledge about this, but if you point me to some good information, I think I'm capable of doing it. Is it possible to use other datasets in this project? Can you please help me do that? I see that when I start the project for the first time, it downloads a 3 GB dataset. Is it possible to force the project to use another one more suitable for this?

chaos4455 avatar Sep 08 '22 21:09 chaos4455

There is no censorship happening here; I have folders full of images containing artistic nudes, both intentional and accidental. I can easily assure you that nothing is being censored:

(attached: example images)

Now, with that said, this is not a random porn generation tool, and I'd hate to see problems arising from its use as such. Honestly, what you're likely seeing is a limitation of the training image set, not intentional censorship.

Zoranvedek avatar Sep 08 '22 22:09 Zoranvedek

The original LAION-5B dataset does have an "NSFW" score:

> Additionally, we provide several nearest neighbor indices, an improved web interface for exploration & subset creation as well as detection scores for watermark and NSFW.

I do not have a reference for that, but I think Stable Diffusion was trained on a subset of LAION-5B with the NSFW-tagged content removed (or rather: never included). As such, the model has "seen" less nudity and explicit content than it could have.
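The kind of pre-filtering described above can be sketched in plain Python. The `NSFW` field name and its values (`"UNLIKELY"`, `"UNSURE"`, `"NSFW"`) follow the published LAION metadata convention, but treat them as assumptions here and verify them against the actual parquet schema:

```python
# Sketch: drop NSFW-tagged samples from LAION-style metadata before training.
# Field name and values are assumed from the LAION metadata convention.

def filter_sfw(records):
    """Keep only records the NSFW detector scored as unlikely to be NSFW."""
    return [r for r in records if r.get("NSFW") == "UNLIKELY"]

samples = [
    {"url": "a.jpg", "NSFW": "UNLIKELY"},
    {"url": "b.jpg", "NSFW": "NSFW"},
    {"url": "c.jpg", "NSFW": "UNSURE"},
]
print([r["url"] for r in filter_sfw(samples)])  # prints ['a.jpg']
```

If the filter ran before training, the model simply never saw the excluded images, which matches the "not included" behaviour rather than any runtime censorship.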

That said, it certainly has seen nudity and can be coerced into reproducing it with (in)appropriate prompts.

@chaos4455 The thing is that it is not as simple as using "a more suitable dataset". There is literally no open alternative to the Stable Diffusion model that this project relies on. And (if I am not mistaken) it cost USD 600,000 to train when it was first released.

(Sidenote: am I the only one not trying to produce porn with stable diffusion? :-) )

fhaust avatar Sep 09 '22 08:09 fhaust

Yeah, thanks, that's right. For the moment this is pretty impressive. I'm using an i3-9100F and the performance is quite impressive too: around 6.3 seconds per instance. Thanks for the information you've given us.

chaos4455 avatar Sep 09 '22 15:09 chaos4455

However, fine-tuning should be possible, and it is not as expensive as the initial training. And we already have proof of it: https://github.com/harubaru/waifu-diffusion
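Fine-tuning reuses the same denoising objective as the original training: noise a latent at a random timestep and train the U-Net to predict that noise, so only the data changes, not the objective. A dependency-free toy sketch of the forward-noising step (the schedule value `alpha_bar` below is illustrative, not taken from the real SD schedule):

```python
import math
import random

def add_noise(x0, alpha_bar, eps):
    """DDPM forward process: x_t = sqrt(a_bar)*x0 + sqrt(1 - a_bar)*eps."""
    a = math.sqrt(alpha_bar)
    b = math.sqrt(1.0 - alpha_bar)
    return [a * x + b * e for x, e in zip(x0, eps)]

random.seed(0)
x0 = [0.5, -0.2, 0.1]                       # toy "latent"
eps = [random.gauss(0.0, 1.0) for _ in x0]  # Gaussian noise
xt = add_noise(x0, alpha_bar=0.9, eps=eps)  # noised latent at some timestep t
# During fine-tuning, the loss is the MSE between the model's predicted
# noise (given xt and t) and the true eps, exactly as in initial training.
```

Because the pretrained weights already encode most visual concepts, a fine-tuning run like waifu-diffusion only needs a domain-specific dataset and far fewer GPU-hours than training from scratch.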

Mikhael-Danilov avatar Sep 15 '22 23:09 Mikhael-Danilov