suggested intBitsToFloat() clarification
The docs already mention that passing bits that produce a NaN yields undefined results. A case that isn't mentioned, though, is passing bits that produce a denormalized float, which is very common when working with bitmasks or just integers you want to store in a float output. Since section 4.7.1 states that denormalized floats may be flushed to 0 after any operation, I assume this includes intBitsToFloat(). Both AMD and Intel GPUs appear to exhibit this behavior: intBitsToFloat(0x1) results in 0x0 in the output buffer. Since a user of intBitsToFloat() would be particularly concerned with cases like this, I think it would be helpful to mention the denormalized caveat as well.
Are you asking to add a statement like the following?
Passing bits that represent a denormalized value might cause the result to be flushed to 0.
I haven't checked yet whether that wording is consistent with the existing language; I'm just asking if this is the right idea.
Yep, exactly. I think that clarification would be very helpful for this function.