Make hash computation stable across x86 and x86_64 builds
I found that the hash value differs between the 32-bit and 64-bit builds. In particular:
- RenderStateCacheImpl.cpp: ComputeDeviceAttribsHash()
- XXH128Hasher::Update(const ShaderCreateInfo& ShaderCI)
This makes it very painful to reuse the render state cache between desktop and wasm WebGPU. I patched the hash value computation in my own build, and the WebGPU render state cache finally works.
Maybe for anything that can be serialized to disk, the hash computation should be stable.
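The root cause can be sketched as follows. This is not the actual DiligentCore code; `ByteCountHasher` is a hypothetical stand-in for a streaming hasher like `XXH128Hasher`, reduced to counting bytes so the platform dependence is visible: feeding a raw `size_t` hashes `sizeof(size_t)` bytes, which is 4 on wasm32/x86 and 8 on x86_64, so the two builds produce different digests for the same logical value.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical stand-in for a streaming hasher such as XXH128Hasher.
// It only counts the bytes fed to it, which is enough to show why
// the digest differs between 32-bit and 64-bit builds.
struct ByteCountHasher
{
    size_t BytesFed = 0;
    void Update(const void* pData, size_t Size) { BytesFed += Size; }
};

// Non-portable: hashes sizeof(size_t) bytes, so the digest input
// is 4 bytes on a 32-bit build and 8 bytes on a 64-bit build.
inline void UpdateRaw(ByteCountHasher& H, size_t Value)
{
    H.Update(&Value, sizeof(Value));
}

// Portable: widen to a fixed 64-bit type first, so every build
// feeds the same 8 bytes into the digest.
inline void UpdateStable(ByteCountHasher& H, size_t Value)
{
    uint64_t Value64 = static_cast<uint64_t>(Value);
    H.Update(&Value64, sizeof(Value64));
}
```

(Byte order would also matter for a serialized cache shared across architectures, but x86, x86_64, and wasm32 are all little-endian, so widening alone is enough here.)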
The cache was not intended to be reused across different platforms. However, why don't you build a 32-bit desktop version?
Due to its multi-process design, wasm32 WebGPU can use 4 GB of memory and more than 4 GB of VRAM, while a 32-bit desktop WebGPU build can barely use a 2 GB memory address space.
So for a medium-sized scene, the render cache is warmed up and saved with 64-bit desktop WebGPU, then reused on the browser's wasm32 WebGPU platform.
A 32-bit desktop WebGPU build can also use more than 4 GB of VRAM.
So how did you make XXH128Hasher work across 32 and 64 bits?
I switched to 64-bit desktop WebGPU after finding that scene rendering had memory failures in the 32-bit desktop WebGPU build; it works fine in wasm32 and 64-bit desktop WebGPU.
It is strange: the scene renders fine with the 32-bit D3D11 backend, but fails with the Dawn backend.
By calling hasher.Update() after converting some size_t fields to Uint32.
Ah, OK, that makes sense: size_t is either 32 or 64 bits.
Can you send a PR with your changes?
However, you should convert size_t to Uint64 instead to avoid losing data.
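The difference between the two conversions can be sketched with these hypothetical helpers (not the actual DiligentCore code): truncating to Uint32 makes any size_t value at or above 2^32 on a 64-bit build collide with a small value, so two different inputs would hash identically, while widening to Uint64 is lossless on every target involved (wasm32, x86, x86_64).

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Truncating conversion: on a 64-bit build, any value >= 2^32
// wraps around and collides with a small value.
inline uint32_t ToUint32(size_t v) { return static_cast<uint32_t>(v); }

// Widening conversion: lossless, since size_t is at most 64 bits
// on all targets involved, and identical on 32- and 64-bit builds.
inline uint64_t ToUint64(size_t v) { return static_cast<uint64_t>(v); }
```

For example, on a 64-bit build ToUint32 maps both 0x100000001 and 1 to the same value, whereas ToUint64 keeps them distinct.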
@WangHoi, let me know if this works now or whether other size_t members also need to be handled.
Added a PR to make RenderStateCache's DeviceHash consistent. Tested after the above PR: the WebGPU render state cache can now be reused between 32-bit and 64-bit builds.