debugPrintfEXT with %g formatting on GPU doubles (dvec2, dvec4, etc.) is broken
Environment:
- OS: Linux
- GPU and driver version: NVIDIA 535.86.05
Describe the Issue
Sorry for being brief; I don't have a ton of time these days.
```glsl
layout(binding = 0) readonly buffer SegmentedCurveA
{
    int numPoints;
    dvec2 curve[];
} curveA;

// ...
debugPrintfEXT("%g", curveA.curve[0].x);
```
Expected behavior
The double value is printed correctly.
Additional context: Note that I believe the issue is in https://github.com/KhronosGroup/Vulkan-ValidationLayers/blob/4bd62a370f8ed8e5c7e349d345a665e2644b0c15/layers/gpu_validation/debug_printf.cpp#L175
varfloat handles doubles via `float*` pointer casts, which is incorrect.
I looked into this now; the GLSL_EXT_debug_printf spec says:

> Interpretation of the format specifiers is specified by the client API. The set of format specifiers is implementation-dependent, but must include at least "%d" and "%i" (int), "%u" (uint), and "%f" (float).
Looking at our code, I see we only ever added %lu and %lx for 64-bit unsigned ints; 64-bit doubles were never added.
So I guess this is not "broken" but rather "currently not supported". I do plan to get to this, just not sure how soon.
An update on this: I spent a lot of time this week on DebugPrintf.
The core issue is that we use snprintf on the CPU side (RenderDoc does as well), where %f or %g works for both 32-bit and 64-bit floats... but in the GPU world, not all devices even support 64-bit floats.
The current implementation doesn't know whether the buffer contents are 32-bit or 64-bit, because of the way we swap out the non-semantic SPIR-V op for a function call that does our own custom write to the buffer.
Basically, this works in RenderDoc, so it really is a VVL bug that needs to be fixed. I have tests; it will just take a bit of reworking (which is luckily already needed to allow DebugPrintf to work with GPU-AV).