Designing and implementing a migration path for porting assets to the linear pipeline

Open illwieckz opened this issue 7 months ago • 17 comments

So, the linear pipeline has been merged:

  • https://github.com/DaemonEngine/Daemon/pull/1687

So for now the engine supports:

  • rendering maps built the old way using the old behavior (messing with colorspaces like Q3/Tremulous), also called “the naive pipeline”.
  • rendering maps built the new way using the new behavior (properly handling colorspace conversion at every step of the baking and rendering pipeline), also called “the linear pipeline”.

It means maps already built are rendered just like before, but contributors can start creating brand new content targeting the new linear pipeline without having to care about the pitfalls, workarounds, weirdness, and painful wizardry of the old naive pipeline.

But what we haven't taken care of yet is migrating assets from the old naive pipeline to the new linear one.

For example, we know existing textures meant to be used in blending operations were wrongly calibrated for the old naive and broken pipeline. When used in the new, correct linear pipeline, they don't render properly, because they were biased to work around bugs that don't exist anymore.

We may want to provide some kind of conversion or specially crafted workaround for when a blended asset from one pipeline is used on the other.

This would help to solve two problems:

  1. Modifying many assets is painful and tedious. If we can just flag either all old assets or all new assets to tell the engine they were meant for a different computation, without actually checking and modifying assets one by one, we can port all of them blindly just by doing some flagging and letting the engine do the magic.
  2. Because we provide reusable texture packs, some textures may be used both in maps that will never be ported and in maps that will. So having a way to adapt usage from one pipeline to the other would help. The Unvanquished maps (including community maps) can probably be ported at some point, as we either have the sources or the authors are still around, but that's only a hope; we cannot enforce it. The problem is stronger for a package like res-tremulous, where some legacy maps will never be rebuilt (no source, no author around) while other legacy maps may be ported to the new pipeline and still use those legacy assets.

illwieckz avatar Aug 01 '25 10:08 illwieckz

Well, now that I posted it, I notice I could have posted it to the Unvanquished issue tracker instead (it's more a project management thread), but 95% of the solutions will likely be some technical engine implementation anyway.

illwieckz avatar Aug 01 '25 10:08 illwieckz

I propose adding 2 mechanisms for defining colorspace-dependent shaders:

1. Stage-level enablement based on matching rendering mode

Let's say we wanted to adjust the strength of the additive stage here depending on the rendering mode. The colorspace naive stage would be disabled when using linear blending and vice versa.

textures/shared_ex/light2_red_6000
{
	qer_editorImage textures/shared_ex_src/light2_d

	q3map_surfaceLight 6000
	q3map_lightRGB 1.000 0.424 0.380

	{
		diffuseMap textures/shared_ex_src/light2_d
		normalMap textures/shared_ex_src/light2_n
		heightMap textures/shared_ex_src/light2_h
		specularMap textures/shared_ex_src/light2_s
	}
	{
		colorspace naive
		map textures/shared_ex_src/light2_a
		blend add
		red 0.6
		green 0.4
		blue 0.3
	}
	
	{
		colorspace linear
		map textures/shared_ex_src/light2_a
		blend add
		red 0.8
		green 0.6
		blue 0.45
	}
}

2. Shaders which can only be loaded in a particular rendering mode

Let's say you're running in linear mode. Then if you try to load a (3D mode) shader named textures/shared_ex/light2_red_6000, the renderer should first attempt to look up textures/shared_ex/light2_red_6000:linear, and only if that doesn't exist, try the unsuffixed textures/shared_ex/light2_red_6000. If RSF_2D was used, the suffixed form should be skipped.

textures/shared_ex/light2_red_6000:naive
{
    ...
}

textures/shared_ex/light2_red_6000:linear
{
    ...
}
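The lookup order described above can be sketched as follows (a standalone illustration, not engine code; the helper name and the boolean standing in for RSF_2D are made up):

```python
def find_shader(name, mode, is_2d, shader_table):
    """Resolve a shader name for the current rendering mode.

    3D lookups first try the name suffixed with the mode ("linear" or
    "naive"); 2D (RSF_2D) lookups skip the suffixed form entirely.
    """
    if not is_2d:
        suffixed = name + ":" + mode
        if suffixed in shader_table:
            return shader_table[suffixed]
    # Fall back to the unsuffixed shader, if any.
    return shader_table.get(name)
```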

In most cases, method 1 (stage-level enablement) should probably be used to minimize the amount of typing. But method 2 could come in handy in certain scenarios:

  • Override shaders in another pak
  • Prevent a shader from being loaded at all in the wrong colorspace
  • Cleaner code for very complex shaders which can't share any stages

slipher avatar Aug 01 '25 20:08 slipher

I like the idea of the colorspace stage keyword.

We may define that by default the colorspace is naive, since all of our existing assets are naive: porting them would just mean implementing the feature, without flagging anything. Then rebuilding any legacy map with the new pipeline would make the engine use the naive way for blended textures without extra effort.

I'm proposing a similar mechanism (new shaders using new thing use a specific keyword) for q3map2 and the point/area backsplash light problem: https://gitlab.com/xonotic/netradiant/-/merge_requests/220

I also wonder if there can be a way to define a shader-wide option, like in a special comment (to not break third-party software like the various Radiant editors):

// SOMEMAGICKEYWORD colorspace linear

The SOMEMAGICKEYWORD token would be defined in a way that a real comment cannot be confused with such a global setting by mistake.

illwieckz avatar Aug 01 '25 20:08 illwieckz

I like the idea of the colorspace stage keyword.

We may define that by default the colorspace is naive

A default of naive for the stage-level colorspace? That would be bad since then we would have to have 2 versions of every stage in every shader, for all shaders we want to use in both modes. My idea was that by default, a stage would be rendered in all modes.

I also wonder if there can be a way to define a shader-wide option, like in a special comment (to not break third-party software like the various Radiant editors):

// SOMEMAGICKEYWORD colorspace linear

That's a good point that the suffixed names could break the map compiler. Though maybe we could still get that to work if instead of defining dual variants like foo:linear + foo:naive, we did it like foo + foo:naive or foo:linear + foo. I was trying to come up with a way that would easily work with our shader hash table that doesn't parse the inside of a shader until it's loaded. But with a bit more effort, we could implement a version that stores multiple variants and backs out upon seeing a forbidden colorspace. Why would it need to be a magic comment instead of a normal stage-level keyword though? Surely q3map2/radiants don't just refuse to process the shader if they see an unknown keyword?

Of course the map compiler won't understand stage-level enablement either, but that's probably OK? Can we assume that q3map2 ignores all shader stage contents, other than using the first colormap for light bouncing?

slipher avatar Aug 01 '25 21:08 slipher

A default of naive for the stage-level colorspace? That would be bad since then we would have to have 2 versions of every stage in every shader, for all shaders we want to use in both modes. My idea was that by default, a stage would be rendered in all modes.

The idea is that operations that would be different in naive or linear pipeline would be assumed naive otherwise.

So a stage with no blending would be linear, a stage with blending would be naive, unless colorspace linear is used. So maybe another name like blendingcolorspace would be better.

I don't see the point of rendering an opaque stage the naive way if the lightmap was done the linear way, so only blended stuff would have a meaning of being naive or linear.
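Under that convention, only blended stages would ever need a flag. A sketch reusing the hypothetical colorspace keyword from above (the defaults shown in the comments are the proposal, not current engine behavior):

```
textures/shared_ex/light2_red_6000
{
	{
		// opaque stage: always rendered the linear way, no flag needed
		diffuseMap textures/shared_ex_src/light2_d
	}
	{
		// blended stage without a flag: assumed calibrated for the naive
		// pipeline; a linear-calibrated stage would add “colorspace linear”
		map textures/shared_ex_src/light2_a
		blend add
	}
}
```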

I also wonder if there can be a way to define a shader-wide option, like in a special comment (to not break third-party software like the various Radiant editors):

// SOMEMAGICKEYWORD colorspace linear

That's a good point that the suffixed names could break the map compiler. Though maybe we could still get that to work if instead of defining dual variants like foo:linear + foo:naive, we did it like foo + foo:naive or foo:linear + foo. I was trying to come up with a way that would easily work with our shader hash table that doesn't parse the inside of a shader until it's loaded. But with a bit more effort, we could implement a version that stores multiple variants and backs out upon seeing a forbidden colorspace. Why would it need to be a magic comment instead of a normal stage-level keyword though? Surely q3map2/radiants don't just refuse to process the shader if they see an unknown keyword?

I believe q3map2 / radiant skip unknown keywords (unlike legacy renderers that stupidly stop on an unknown token… we had to fix that in our engine).

Of course the map compiler won't understand stage-level enablement either, but that's probably OK? Can we assume that q3map2 ignores all shader stage contents, other than using the first colormap for light bouncing?

Radiant and Q3map2 parsing of shaders is much more simplistic than you think it is. 😁

Both Radiant and Q3map2 entirely skip the stages.

All Radiant reads are the qer_ keywords. qer means QuakeEdRadiant, from QuakeEd, the NeXTSTEP in-house Quake editor by id Software, so basically the qer_ keywords are for the editor, whatever the exact editor name. It also reads the surfaceparms, but that's all.

Then all the map compiler reads are the q3map_ keywords, yet again whatever the exact map compiler name. It also reads the surfaceparm keywords. And it makes one exception: it also reads qer_editorImage, to sample colors from the said texture when the shader name itself is not the name of an actual file (exactly like the editor does). This is also why we can't put anything stupid in the editor image, as it may be used to sample colors for light shaders or when light bounces on the surface. For example, at some point we had a tool that created JPEG previews for CRN files, painting pink in alpha pixels so they were easy to notice in the editor, but then q3map2 sampled the pink color when bouncing light on windows and grates. 🤡

Both the editor and the map compiler have no idea what renderers do with stages. In some ways this helped keep the tools compatible with many games while the renderers were all doing their own thing without coordination. NetRadiant has some advanced stage parsing for Doom 3, where it can mimic the Doom 3 renderer (and parse normal maps, for example), but that's an exception. The usual way in those editors is just to render the textures fullbright without any special effect, and the editor doesn't parse any blend operation, so we work around it with qer_trans to give some transparency, so we can at least notice in the editor that the surface is not opaque, and poorly emulate windows and such.

But, if I assume the map compiler and the editor skip unknown tokens (and very likely do it silently) within shaders, I'm not assuming they will skip unknown tokens outside of shaders. Any token that is outside first level braces is considered a shader name that should be followed by braces.

illwieckz avatar Aug 01 '25 22:08 illwieckz

But, if I assume the map compiler and the editor skip unknown tokens (and very likely do it silently) within shaders, I'm not assuming they will skip unknown tokens outside of shaders. Any token that is outside first level braces is considered a shader name that should be followed by braces.

What about a special shader name that would have these settings? Something like __daemon_light_settings or whatever. Assuming editor/q3map2 won't just throw it out as unused.

VReaperV avatar Aug 01 '25 23:08 VReaperV

I also thought about it, but editors may still uselessly process them at some point.

NetRadiant only lists shaders whose names start with textures/, because the Q3 map format strips the textures/ prefix when storing the shaders applied on brushes and surfaces, but it still loads all shaders whatever their names, because models can have any path applied to their surfaces.

Some editors like DarkRadiant may list all paths (the D3 map format may have stopped stripping the textures/ prefix; stripping it was probably inherited from Quake 1).

illwieckz avatar Aug 01 '25 23:08 illwieckz

Assuming editor/q3map2 won't just throw it out as unused.

Editors and q3map2 don't edit shaders. Q3map2 can generate the light style shaders, but no shader is edited by the software.

illwieckz avatar Aug 01 '25 23:08 illwieckz

Such configuration header could be done that way:

__config
{
	__linearblend 1
}

Most tools should simply ignore that, or at worst print some warnings for unknown keywords (though I doubt it, as the tools are meant to be permissive because of the many forks), and uselessly list the shader in some cases.

illwieckz avatar Aug 01 '25 23:08 illwieckz

@slipher Thanks for #1721!

But I don't like the idea of what is naive or linear being the colorspace.

The blending operation is naive or linear.

The image colorspace is sRGB or linear.

What if we extend the stage map keyword?

Like:

textures/castle/brick
{
	{
		diffuseMap sRGB textures/castle/brick
	}
	{
		colorMap linear textures/castle/dirt
		blendFunc blend
	}

}

The good thing is that keywords like diffuseMap in stages are specific to Dæmon (Doom 3 only supports them at the root of the shader, without stage blending), so we are free to extend them and will break nothing, not even the NetRadiant Doom 3-like renderer, which already doesn't support those stages (if one day we rely on it to get a preview of the game, we would have to implement the multitextured stage to begin with).

To not break software that may render the legacy stages with the map keyword, we can rely on the colorMap one instead.

So basically: if the first word is sRGB or linear, set the format and then read the texture name; otherwise it's a texture name.
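That parsing rule can be sketched like this (a standalone Python illustration, not engine code; the sRGB default for a bare texture name is an assumption):

```python
def parse_map_args(tokens):
    """Parse the arguments of a colorMap/diffuseMap/glowMap keyword.

    If the first token is a known colorspace name, it selects the image
    format and the next token is the texture path; otherwise the first
    token is the texture path itself.
    """
    colorspaces = ("sRGB", "linear")
    if tokens and tokens[0] in colorspaces:
        return tokens[0], tokens[1]
    # Assumed default when no colorspace is given.
    return "sRGB", tokens[0]
```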

illwieckz avatar Aug 01 '25 23:08 illwieckz

Keywords that would receive this can be colorMap, diffuseMap and glowMap, I guess…

illwieckz avatar Aug 02 '25 00:08 illwieckz

Welp, I misread the PR: I thought the colorspace keyword was for configuring the image loading, but it's for skipping a whole stage entirely. For such a “skip the stage” feature, we'd better use some operation-related name rather than “colorspace”.

illwieckz avatar Aug 02 '25 00:08 illwieckz

How about "blendspace"? The "space" suffix gives a hint that it is related to colorspaces. And "blend" because shaders with blending is where it makes most of the difference.

slipher avatar Aug 02 '25 04:08 slipher

How about "blendspace"? The "space" suffix gives a hint that it is related to colorspaces. And "blend" because shaders with blending is where it makes most of the difference.

Why not, but before defining keywords, we'd better define the solutions and workarounds we may implement to provide compatibility from one pipeline to the other.

For example:

  • When the engine finds a blended stage that was calibrated with the naive pipeline, what are the operations it can do to reuse its components (images, rgb values…) in the linear pipeline to minimize the weirdness?

Even if not exact or perfect, if we can find “good enough” workarounds for these problems, it would make maintenance very easy: we would just state which pipeline each stage was calibrated for.

For example, if we have a linear stage and a naive stage, can we render the linear stage like the linear pipeline does, but then render the naive stage in a framebuffer flagged as sRGB, with textures kept in sRGB, so the first stage output is delinearized to blend with the naive stage, then the result is relinearized again to be blended with other things or fed to the camera GLSL shader?

Edit: I meant naive, not native (typo).

illwieckz avatar Aug 06 '25 00:08 illwieckz

  • When the engine finds a blended stage that was calibrated with the naive pipeline, what are the operations it can do to reuse its components (images, rgb values…) in the linear pipeline to minimize the weirdness?

I doubt there's much we can do here. See https://github.com/DaemonEngine/Daemon/pull/1687#issuecomment-3060496794 for how some of the simpler blend functions behave in naive blendspace. Naive blendfunc add produces a result that is systematically too large, but the relative 'error' is more when the color inputs are close together. blendfunc blend produces a result that is systematically too small, but the relative error is more when the color inputs are far apart. I don't believe there is any good approximation for these functions using the built-in OpenGL blend modes in linear mode. For a blendfunc add shader running naively, the amount of light that is added, linearly speaking, depends on the environment: the brighter the map, the more light is added. So to get a result that looks the same in linear blendspace as it did before, the choice of shader needs to take the brightness of the map location into account.
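The two biases are easy to check numerically with the standard sRGB transfer functions (a standalone sketch, not engine code):

```python
def srgb_to_linear(c):
    """IEC 61966-2-1 sRGB decoding."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """IEC 61966-2-1 sRGB encoding."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def naive_add(a, b):
    """blendfunc add computed on sRGB-encoded values, decoded afterwards."""
    return srgb_to_linear(min(linear_to_srgb(a) + linear_to_srgb(b), 1.0))

def naive_blend(a, b, alpha):
    """blendfunc blend (a lerp) computed on sRGB-encoded values."""
    sa, sb = linear_to_srgb(a), linear_to_srgb(b)
    return srgb_to_linear(sa * (1.0 - alpha) + sb * alpha)

# naive add overshoots the true linear sum, while naive blend
# undershoots the true linear interpolation:
a, b = 0.1, 0.2
assert naive_add(a, b) > a + b
assert naive_blend(a, b, 0.5) < 0.5 * a + 0.5 * b
```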

And that's just a couple of the simplest, most common blend functions. Legacy maps use a bewildering variety of blend modes for glass textures. A lot of them don't make much sense, e.g. GL_DST_COLOR GL_SRC_ALPHA (used with an RGB texture, so GL_SRC_ALPHA just means 1), which enhances the brightness of objects behind the glass, which physically makes no sense. That one actually could be emulated in linear mode if we rewrote the values in the texture (since it behaves like a blendfunc filter but with texture values ranging from 1-2 instead of 0-1), but I don't see the point since it's nonsense anyway. I think we should pick 1 or 2 sane blending operations for glass, with adjustable constants to tune the brightness, and just adjust those for each map to get something that looks about the same.

One place where I do think we could use some automatic adjustment is specular maps. Specularity seems to be much weaker in linear mode, so maybe we could apply an automatic adjustment to the specular scale so that it looks about the same between blendspaces.

For example, if we have a linear stage and a naive stage, can we render the linear stage like the linear pipeline does, but then render the naive stage in a framebuffer flagged as sRGB, with textures kept in sRGB, so the first stage output is delinearized to blend with the naive stage, then the result is relinearized again to be blended with other things or fed to the camera GLSL shader?

This sounds a bit like my proposals for rendering mixed naive blendspace content and linear blendspace content in the forum thread, which you were opposed to. Have you changed your mind?

slipher avatar Aug 06 '25 08:08 slipher

I did some tests by hacking the GLSL shaders and converting blended images back to sRGB, and that gives acceptable results. Things may look a bit different, but for example the translucency of windows becomes correct; this gives very good results in forlorn, for example. The color of force fields may differ a bit, but at least they aren't almost transparent like now. Also, the render is expected to be a bit different in linear space even if the assets were made for the linear pipeline anyway.

So here is what we can do:

  • rewrite the shader parsing code to delay the loading of all images to the end of the parsing, so we know if a stage is blended or not at the time we upload the colormap,
  • implement the mechanism that makes sure that when a stage is blended, the colormap is uploaded without transformation (no linearization), so legacy assets work in maps rebuilt the non-legacy way,
  • implement a new blend keyword, like linearBlend, that tells the image should be linearized, for non-legacy blend assets to be used in rebuilt maps.
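For instance, a non-legacy blend asset in a rebuilt map might then look like this (purely hypothetical syntax: the linearBlend keyword name comes from the suggestion above, and the texture path is made up):

```
textures/example/window
{
	{
		map textures/example/window_glass
		linearBlend	// tell the engine to linearize this image on upload
		blendFunc blend
	}
}
```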

I'm not sure we have to care about non-legacy blend assets being loadable in maps built the legacy way, but we can probably implement the opposite if we want to: by flagging the image for linearization.

illwieckz avatar Oct 09 '25 07:10 illwieckz

If we are really interested in getting a bunch of maps with sRGB lightmaps out as soon as possible, it's still worth considering trying sRGB lightmaps + naive blending as an intermediate step on the way to a fully linear pipeline. (Cf. https://github.com/DaemonEngine/Daemon/pull/1855#issuecomment-3388650542 where I suggest this possibility, but only as a debugging tool.) I have already demonstrated this idea on the slipher/srgb-map-old-colorspace branch. When rendering only opaque surfaces, it's practically indistinguishable from the fully linear blend regime. Doing it this way makes it easy to start using sRGB lightmaps with very low risk of fucking up translucent surfaces.

I'm curious to test whether Xonotic is actually doing this, as opposed to a fully linear rendering pipeline, but I've had trouble finding a Xonotic map that actually makes good use of transparency...

slipher avatar Oct 10 '25 08:10 slipher