
Noticeable gradient banding for high blur strength

Open bakana808 opened this issue 5 years ago • 7 comments

Hi everyone,

When using the dual_kawase blur with strength = 20, there is noticeable gradient banding (or "color banding"), a result of my monitor not being able to represent all the intermediate colors in the blur:

(I can see it in this screencap but I'm not sure how hard it is to see for anyone else)

I was wondering if there is a way to alleviate this, perhaps with some kind of dithering effect applied after the blur, or with the ability to set an overlay image over the transparency so that I could overlay noise (which is probably what Windows 10 does in its Fluent apps to solve the same problem).

bakana808 avatar Mar 13 '21 02:03 bakana808

that's a good idea. i think it's not that hard to do dithering on the blurred image with a shader.

absolutelynothelix avatar Mar 15 '21 07:03 absolutelynothelix

Bump, having the same banding.

DeathKhan avatar Sep 05 '21 10:09 DeathKhan

Especially eye-catching on darker images; it seems to be an even bigger issue with dark window backgrounds (a blending issue?).

PickNicko13 avatar Nov 12 '21 07:11 PickNicko13

These color bands are especially apparent at large blur radii, which result in very fine gradients. They are more noticeable on darker images (usually the case with transparent backgrounds).

  • Are we okay with just providing configurable random-noise dithering for the dual_kawase method, or should this apply to the opengl blur in general? Something like --blur-noise-strength in range [0, 1].
  • Should the xrender backend get noise dithering as well? A visually similar implementation has a significant negative impact on performance. Since large blur radii don't make much sense in real-world use cases, I think we can skip this one.
  • Similarly, I think an implementation for the legacy backends isn't really required.

tryone144 avatar Nov 12 '21 13:11 tryone144

@tryone144, i think it definitely should be configurable, because no matter how you implement it, it will affect performance, and on some machines (old thinkpads, etc.) that may be noticeable. i don't think there is a need to implement it for xrender (idk, who uses it? kinda a fallback for very old machines?) or for the legacy backends (since sooner or later they will be removed afaik).

absolutelynothelix avatar Nov 17 '21 16:11 absolutelynothelix

Ping @yshui:

  • Are we okay with an implementation only in the experimental backends? (I think yes)
  • Are we okay with only implementing noise dithering for the glx backend? (As mentioned before, visually pleasing dithering with xrender is hard to do and has negative impact on performance)
  • Do we want this to be blur (or even blur-type) specific or allow noise-dithering of the background for all windows? The latter does require additional changes in the backend interface. (I think, it's fine to implement this as part of the blur step)

tryone144 avatar Feb 05 '22 20:02 tryone144

@tryone144

  1. Agreed, yes.
  2. I think it's fine, high blur strength would be really slow on xrender already.
  3. I don't have a preference here. Pick the one that's easier to do, maybe?

yshui avatar Feb 05 '22 23:02 yshui

@tryone144 i just thought of this, i think before we do any dithering, we should try using 16bit textures to store the intermediary results

Edit: never mind, didn't help.

yshui avatar Nov 28 '22 20:11 yshui

Did some research on dithering algorithms. Ordered dithering looks... poor, and error-diffusion is tricky to implement on GPU.

mpv is using a compute shader for error diffusion; maybe we could consider requiring OpenGL >= 4.3 for dithering.

yshui avatar Nov 28 '22 23:11 yshui

@yshui, maybe you don't remember me already, but you've awakened me :D

i resurrected my arch installation that i hadn't booted for about two years, and maybe i'll try to play with this, since i have some experience with dithering (floyd-steinberg in particular, but afaik there are issues with implementing it as a shader) and i'm interested in this issue (at least for now, so no promises + i don't have much free time lately).

absolutelynothelix avatar Nov 29 '22 00:11 absolutelynothelix

the first, dumb and pretty straightforward approach is to apply some noise. it won't get rid of the color banding completely, but i can see the difference (and it adds a kinda aesthetic effect). it could be a fallback for old devices (if mpv's approach is used and opengl >= 4.3 is required). zzz

absolutelynothelix avatar Nov 29 '22 03:11 absolutelynothelix

the first, dumb and pretty straightforward approach is to apply some noise. it won't get rid of the color banding completely, but i can see the difference (and it adds a kinda aesthetic effect). it could be a fallback for old devices (if mpv's approach is used and opengl >= 4.3 is required). zzz

While that approach may seem dumb/naïve at first glance, I think it probably would be one of the better ways to go about doing it.

Unordered dithering algorithms such as Floyd-Steinberg will probably not produce good results because of "jittering" artifacts that result from the unpredictable error diffusion: adding or moving a single pixel will cause a large region of the image to change.

The following GIF shows the artifacts that appear when using a non-ordered dithering algorithm such as Floyd-Steinberg. It is from Joel Yliluoma's page on his dithering algorithm. From that page:

... A single yellow pixel was added to the image and moved around. The animation has been quantized to 16 colors and dithered using Floyd-Steinberg dithering. An entire cone of jittering artifacts gets spawned from that single point downwards and to the right.

This is not a problem with ordered dithering algorithms (e.g. Bayer dithering), but ordered dithering instead creates a very noticeable regular pattern across the image, and I think most would agree that it wouldn't fit very well on a desktop.

The "dumb" approach of simply overlaying a noise texture and having it repeat over the blurred regions will not suffer from these issues, and should still be effective at reducing gradient banding. Additionally, this approach should be much more performant than trying to do dithering in real time. Of course, for this to work properly, the noise texture should be large enough and random enough that no seams are visible when tiled.

Perhaps the difficult thing to work out would be how to actually blend the noise texture in: whether it should be multiplied with the pixels, subtly added on top, etc.

Also, I believe KDE's compositor (KWin) uses noise to reduce colour banding. You may want to check it out for ideas; from what I've seen it seems to achieve some pretty decent results.

mikejzx avatar Nov 29 '22 05:11 mikejzx

@mikejzx, yep, i did some research on this topic overnight and it's generally a bad idea to do real-time dithering based on image contents (due to performance issues and artifacts, as you mentioned); it's better to apply some random noise. from what i've found, blue noise is a good source of non-distracting noise for dithering.

absolutelynothelix avatar Nov 29 '22 05:11 absolutelynothelix

i'm not that good at glsl, but here is a snippet i used on shadertoy to demonstrate applying noise:

const float NOISE_STRENGTH = 0.015; // 3.5 / 255.0

// one-liner hash: deterministic pseudo-random value in [0, 1) per coordinate
float random(in vec2 coords) {
    return fract(sin(dot(coords.xy, vec2(12.9898, 78.2330))) * 43758.5453);
}

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy;
    vec4 col = texture(iChannel0, uv);

    // offset each pixel by a random value in [-NOISE_STRENGTH, NOISE_STRENGTH]
    fragColor = col + mix(-NOISE_STRENGTH, NOISE_STRENGTH, random(uv));
}

and the "texture" i used to test it on (load with https://www.shadertoy.com/view/lsGGDd):
https://i.ibb.co/PC3x3Sr/bands.png

(source, i've adapted and simplified it a bit)

absolutelynothelix avatar Nov 29 '22 06:11 absolutelynothelix

@yshui

Edit: never mind, didn't help.

That's what I expected. The banding occurs when drawing the final image to the screen; it doesn't matter if we have higher precision internally.

@mikejzx already mentioned the relevant points. :smile:

  • Dithering either produces visible artifacts (ordered) or visual unrest (error diffusion)
  • Distributed noise is "simple" and sufficiently disturbs the regular pattern of the color bands
  • KDE uses a noise texture that gets "added" to the pixel values (centered around 0)

@mighty9245 interesting approach with calculating a random value for each pixel. I have played with a pre-calculated noise texture (screen dimensions, turbulence noise), adding the noise value (distributed around 0) to the fragment color. The main drawback was the relatively expensive computation needed to generate this texture (and the added memory consumption).

TL;DR: Adding/subtracting some kind of random noise should be the easiest approach with a pleasing visual appearance. Dithering probably introduces more artifacts than the additional computational overhead is worth.

Regardless of what we choose, the xrender backend is more than likely too slow for any of the above approaches.

tryone144 avatar Nov 29 '22 23:11 tryone144

I did say I don't like how ordered dithering looks, but I will try to put something together quickly to see what it looks like in practice.

Computationally, ordered dithering should be pretty cheap.

yshui avatar Nov 30 '22 03:11 yshui

Actually, I was wrong, it looks pretty decent. See the dither branch.

The approach I've taken is to use a 16-bit intermediary back buffer (the ->back_texture) and apply dithering at the final step. Otherwise, if I only used dithering in the blur shaders, other compositing work would still cause precision loss (alpha blending, etc.) and banding would still be visible.

With this approach I don't see any banding anymore, and the dithering itself is also unnoticeable (i have a 4k screen, so maybe that helps). I don't know how expensive this is, though.

yshui avatar Nov 30 '22 04:11 yshui

fun side effect: i can use this to emulate a low color depth screen! :smile:

(this is what blur looks like on an 8-color screen, i turned up the dither grain size as well)

yshui avatar Nov 30 '22 04:11 yshui

hmm, btw, why do blur textures have an alpha channel?

yshui avatar Nov 30 '22 05:11 yshui

i decided to test the dithering, which i believe is enabled with --dithered-present, but it fails for me with

workstation:~/Downloads/picom/build/src$ ./picom --dithered-present
[ 12/01/2022 21:21:03.279 gl_init ERROR ] Framebuffer attachment failed at line 929: GL_FRAMEBUFFER_UNSUPPORTED
[ 12/01/2022 21:21:03.279 glx_init ERROR ] Failed to setup OpenGL
[ 12/01/2022 21:21:03.280 initialize_backend FATAL ERROR ] Failed to initialize backend, aborting...

am i having a bad gpu (gtx 1650) or what?

absolutelynothelix avatar Dec 01 '22 18:12 absolutelynothelix

Hmm, does nvidia not support 16-bit textures for framebuffers? i thought it was a required format.

OK, only RGBA16 is required, RGB16 is not. I need to check both of them.

yshui avatar Dec 01 '22 18:12 yshui

can you upload a trace from apitrace?

yshui avatar Dec 01 '22 18:12 yshui

yep, i've used my favorite debug tool called commenting things out, and after commenting out this it at least started, but i don't see any dithering (does it not work without it, or is it just unnoticeable?)

absolutelynothelix avatar Dec 01 '22 18:12 absolutelynothelix

can you upload a trace from apitrace?

sure, but i've never used apitrace before. iirc there were instructions for debugging picom with apitrace, can you link them if they exist? or i'll figure it out on my own a bit later

absolutelynothelix avatar Dec 01 '22 18:12 absolutelynothelix

let me try something first before you run apitrace.

yshui avatar Dec 01 '22 18:12 yshui

@mighty9245 ok, can you try again with the latest change?

yshui avatar Dec 01 '22 18:12 yshui

@yshui, fixed :)

[ 12/01/2022 22:31:42.531 gl_init INFO ] Using back buffer format 0x805b

and a difference showcase from an average fhd screen enjoyer:

incredible, i can't even notice the dithering, it just removes the banding and that's it. very good.

also, if someone wants to use my noisy approach, maybe for some kind of aesthetic, i believe you could put it into a custom shader.

absolutelynothelix avatar Dec 01 '22 19:12 absolutelynothelix

we could add a custom present shader. by the looks of it it would be simple to do.

yshui avatar Dec 01 '22 21:12 yshui

The approach I've taken is to use a 16-bit intermediary back buffer (the ->back_texture) and apply dithering at the final step. Otherwise, if I only used dithering in the blur shaders, other compositing work would still cause precision loss (alpha blending, etc.) and banding would still be visible.

Nice :+1: Compositing at a higher depth resolution and only applying dithering when compressing down to 8 bits for the final result sounds logical. This should keep the visual fidelity and disturb the color steps enough to be unnoticeable. Otherwise we would just use higher precision internally but lose it when presenting (and still have color banding), or the dithering would just be a different kind of noise to disturb the color bands.

also, if someone wants to use my noisy approach, maybe for some kind of aesthetics, i believe you can put it into a custom shader.

Since it's self-contained, this should be no problem once we have a way to specify a full-screen present shader.

we could add a custom present shader. by the looks of it it would be simple to do.

Adding --present-shader analogous to --window-shader-fg? A config refresh causes a backend re-init, so that should work fine.

tryone144 avatar Dec 02 '22 00:12 tryone144

Closing as it's been merged to next

yshui avatar Dec 03 '22 18:12 yshui