[SPIR-V] Casting 16-bit integral data types
Description
DXC generates invalid SPIR-V code when casting 16-bit integral data types.
Steps to Reproduce
https://godbolt.org/z/5vKcqxT76
Compile the following shader with the options `-T lib_6_4 -spirv -fspv-target-env=vulkan1.2 -enable-16bit-types -HV 2021`:
```hlsl
[[vk::binding(0,4)]] RWStructuredBuffer<uint16_t> a;

[shader("raygeneration")]
void main() {
    int16_t v = -1;
    uint16_t vu = uint16_t(v);
    a[0] = vu;
}
```
Actual Behavior
Shader compilation fails with:

```
fatal error: generated SPIR-V is invalid: The high-order bits of a literal number in instruction 15 must be 0 for a floating-point type, or 0 for an integer type with Signedness of 0, or sign extended when Signedness is 1
  %ushort_4294967295 = OpConstant %ushort 4294967295
```
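For illustration, here is a minimal C++ sketch of the rule the validator is enforcing; the helper name is hypothetical and this is not SPIRV-Tools code. A literal for an unsigned integer type narrower than 32 bits must have its high-order bits set to 0 in the 32-bit instruction word, which is why 4294967295 is rejected for `%ushort`:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper mirroring the validation rule quoted above: for an
// unsigned type of width < 32, the high-order bits of the literal word
// must all be zero.
bool IsValidUnsignedLiteral(uint32_t word, unsigned bitWidth) {
  if (bitWidth >= 32) return true;
  uint32_t highBitsMask = ~((1u << bitWidth) - 1u);
  return (word & highBitsMask) == 0;
}

int main() {
  assert(IsValidUnsignedLiteral(65535u, 16));        // expected constant: valid
  assert(!IsValidUnsignedLiteral(4294967295u, 16));  // emitted constant: invalid
}
```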
Note that both
```hlsl
void main() {
    uint16_t vu = uint16_t(-1);
    a[0] = vu;
}
```
and
```hlsl
void main() {
    uint16_t vu = uint16_t(int16_t(-1));
    a[0] = vu;
}
```
compile to the expected result of storing a ushort constant of 65535 (2^16 - 1) to a[0], as would happen with ordinary integer wraparound. Also note that the constant generated in the failing case is 4294967295, i.e. 2^32 - 1. It seems the wraparound is incorrectly handled at 32-bit precision instead of 16-bit precision.
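For comparison, a small C++ sketch of the two behaviors, using the same values as the shader above:

```cpp
#include <cassert>
#include <cstdint>

int main() {
  int16_t v = -1;  // bit pattern 0xFFFF

  // Correct behavior: the conversion wraps at 16-bit width.
  uint16_t vu = static_cast<uint16_t>(v);
  assert(vu == 65535u);  // 2^16 - 1, the expected constant

  // What the emitted constant suggests happened instead: the value was
  // sign-extended to 32 bits and the conversion folded at 32-bit width.
  uint32_t wide = static_cast<uint32_t>(static_cast<int32_t>(v));
  assert(wide == 4294967295u);  // 2^32 - 1, the invalid literal
}
```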
Environment
- DXC version: libdxcompiler.so: 1.8(dev;1-df588beb)
- Host Operating System: Compiler Explorer
I believe this is actually an issue with how SPIRV-Tools folds the OpBitcast instruction for unsigned 16-bit integers. I've sent in a PR to fix it upstream.
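For context, a minimal sketch of the kind of masking such a constant fold needs; this is hypothetical illustration code, not the actual SPIRV-Tools change. Folding a bitcast to an integer type narrower than 32 bits has to truncate the word to the destination width (and, per the rule quoted in the error message, sign-extend it when the destination is signed):

```cpp
#include <cstdint>

// Hypothetical fold of a bitcast result word to a narrower integer type.
uint32_t FoldBitcastWord(uint32_t word, unsigned dstBitWidth, bool dstSigned) {
  if (dstBitWidth >= 32) return word;
  uint32_t mask = (1u << dstBitWidth) - 1u;
  uint32_t truncated = word & mask;  // clear the high-order bits
  if (dstSigned && (truncated & (1u << (dstBitWidth - 1)))) {
    truncated |= ~mask;  // sign-extend when Signedness is 1
  }
  return truncated;
}

// FoldBitcastWord(0xFFFFFFFFu, 16, false) yields 65535, the expected
// %ushort constant, instead of the invalid 4294967295.
```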