[Bug]: OPTIX_ERROR_UNKNOWN: Error initializing RTX library while rendering with LuisaRender
Bug Description
Hi there!
We're trying to set up Genesis on our system using Docker, and we hit an error during headless rendering with LuisaRender that does not occur when we use the Rasterizer option for rendering.
Steps to Reproduce
We set up the Docker image as follows:
git clone https://github.com/Genesis-Embodied-AI/Genesis
cd Genesis
docker build -t genesis -f docker/Dockerfile docker
docker run --gpus all --rm -it -e DISPLAY=$DISPLAY -v /dev/dri:/dev/dri -v /tmp/.X11-unix/:/tmp/.X11-unix -v $PWD:/workspace genesis
and then execute python examples/rendering/demo.py.
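Since the error comes from OptiX initialization, one quick sanity check (an assumption on our part, based on how the NVIDIA Container Toolkit mounts driver libraries) is whether the driver's OptiX library is visible inside the container at all:

```shell
# Check whether the host driver's OptiX library was mounted into the
# container (path assumes an Ubuntu/Debian-based image like this one).
docker run --gpus all --rm genesis \
    sh -c 'ls /usr/lib/x86_64-linux-gnu/libnvoptix* 2>/dev/null || echo "libnvoptix not mounted"'
```

If this prints "libnvoptix not mounted", the failure is an environment problem rather than a LuisaRender bug.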
Expected Behavior
We expect to see a series of lines such as "Running at 41.84 FPS", which is what we get when we switch the renderer to gs.renderers.Rasterizer(). With the Rasterizer we are also able to save the video to an .mp4 file by wrapping the scene.step() loop with cam_0.start_recording and cam_0.stop_recording.
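For reference, this is the recording pattern we wrap around the step loop. A stub camera class stands in for the real Genesis camera here so the sketch runs standalone; with Genesis, cam_0 would come from scene.add_camera(...) and stop_recording() would write the .mp4:

```python
# Sketch of the recording pattern around the scene.step() loop.
# StubCamera is a placeholder for the real Genesis camera object.
class StubCamera:
    def __init__(self):
        self.recording = False
        self.frames = []

    def start_recording(self):
        self.recording = True

    def render(self):
        # With Genesis, cam.render() captures the current frame.
        if self.recording:
            self.frames.append("frame")

    def stop_recording(self, save_to_filename=None):
        self.recording = False
        return save_to_filename

cam_0 = StubCamera()
cam_0.start_recording()
for _ in range(120):           # stands in for the scene.step() loop
    # scene.step() would go here
    cam_0.render()             # capture one frame per step
cam_0.stop_recording(save_to_filename="video.mp4")
```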
Screenshots/Videos
No response
Relevant log output
Compiling simulation kernels...
Building visualizer...
Resetting Scene <c1ea3d5>.
Running at 8008.25 FPS.
[console] [warning] Logs from OptiX (DEVICECTX): Error initializing RTX library
[console] [error] OPTIX_ERROR_UNKNOWN: Unknown error [/workspace/Genesis/genesis/ext/LuisaRender/src/compute/src/backends/cuda/cuda_device.cpp:1140]
0 [0x7f7f822158ee]: /opt/conda/lib/python3.11/site-packages/genesis/ext/LuisaRender/build/bin/liblc-backend-cuda.so :: luisa::compute::cuda::CUDADevice::Handle::optix_context() const + 638
1 [0x7f7f82289e25]: /opt/conda/lib/python3.11/site-packages/genesis/ext/LuisaRender/build/bin/liblc-backend-cuda.so :: luisa::compute::cuda::CUDAPrimitive::_build(luisa::compute::cuda::CUDACommandEncoder&) + 181
2 [0x7f7f822997cf]: /opt/conda/lib/python3.11/site-packages/genesis/ext/LuisaRender/build/bin/liblc-backend-cuda.so :: luisa::compute::cuda::CUDAMesh::build(luisa::compute::cuda::CUDACommandEncoder&, luisa::compute::MeshBuildCommand*) + 383
3 [0x7f7f82205b08]: /opt/conda/lib/python3.11/site-packages/genesis/ext/LuisaRender/build/bin/liblc-backend-cuda.so :: luisa::compute::cuda::CUDAStream::dispatch(luisa::compute::CommandList&&) + 200
4 [0x7f7f82216d0e]: /opt/conda/lib/python3.11/site-packages/genesis/ext/LuisaRender/build/bin/liblc-backend-cuda.so :: luisa::compute::cuda::CUDADevice::dispatch(unsigned long, luisa::compute::CommandList&&) + 94
5 [0x7f7f836b5574]: /opt/conda/lib/python3.11/site-packages/genesis/ext/LuisaRender/build/bin/liblc-runtime.so :: luisa::compute::Stream::operator<<(luisa::compute::CommandList::Commit&&) + 52
6 [0x7f7f839060d9]: /opt/conda/lib/python3.11/site-packages/genesis/ext/LuisaRender/build/bin/libluisa-render-base.so :: luisa::render::Geometry::_process_shape(luisa::render::CommandBuffer&, float, luisa::render::Shape const*, luisa::render::Surface const*, luisa::render::Light const*, luisa::render::Medium const*, luisa::render::Subsurface const*, bool, unsigned long) + 1753
7 [0x7f7f839082ea]: /opt/conda/lib/python3.11/site-packages/genesis/ext/LuisaRender/build/bin/libluisa-render-base.so :: luisa::render::Geometry::update(luisa::render::CommandBuffer&, ankerl::unordered_dense::v2_0_2::detail::table<luisa::render::Shape*, void, luisa::hash<luisa::render::Shape*>, std::equal_to<void>, luisa::allocator<luisa::render::Shape*>, ankerl::unordered_dense::v2_0_2::bucket_type::standard, eastl::vector<luisa::render::Shape*, eastl::allocator> > const&, float) + 362
8 [0x7f7f838cd348]: /opt/conda/lib/python3.11/site-packages/genesis/ext/LuisaRender/build/bin/libluisa-render-base.so :: luisa::render::Pipeline::update(luisa::compute::Stream&) + 792
9 [0x7f7f9c07b216]: /opt/conda/lib/python3.11/site-packages/genesis/ext/LuisaRender/build/bin/LuisaRenderPy.cpython-311-x86_64-linux-gnu.so :: unknown + 320022
10 [0x7f7f9c053c8c]: /opt/conda/lib/python3.11/site-packages/genesis/ext/LuisaRender/build/bin/LuisaRenderPy.cpython-311-x86_64-linux-gnu.so :: unknown + 158860
11 [0x5642b48bc276]: python :: unknown + 2069110
12 [0x5642b489975b]: python :: _PyObject_MakeTpCall + 667
13 [0x5642b48a6dda]: python :: _PyEval_EvalFrameDefault + 1802
14 [0x5642b48cbb4f]: python :: _PyFunction_Vectorcall + 383
15 [0x5642b48ab1b4]: python :: _PyEval_EvalFrameDefault + 19172
16 [0x5642b495e01d]: python :: unknown + 2732061
17 [0x5642b495d75f]: python :: PyEval_EvalCode + 159
18 [0x5642b497b6ca]: python :: unknown + 2852554
19 [0x5642b4977353]: python :: unknown + 2835283
20 [0x5642b498c8f0]: python :: unknown + 2922736
21 [0x5642b498c27c]: python :: _PyRun_SimpleFileObject + 444
22 [0x5642b498c014]: python :: _PyRun_AnyFileObject + 68
23 [0x5642b49865e3]: python :: Py_RunMain + 899
24 [0x5642b494d9d7]: python :: Py_BytesMain + 55
25 [0x7f824399ed90]: /usr/lib/x86_64-linux-gnu/libc.so.6 :: unknown + 171408
26 [0x7f824399ee40]: /usr/lib/x86_64-linux-gnu/libc.so.6 :: __libc_start_main + 128
27 [0x5642b494d88a]: python :: unknown + 2664586
Environment
Host Machine: Debian GNU/Linux 11 (bullseye), 8× NVIDIA RTX A6000 GPUs, CUDA 12.6, Driver Version 560.35.05
However, since we are using the Docker image, we have also tried various CUDA versions, from 12.1 (the original Dockerfile version) to 12.6 (via the PyTorch 2.6.0 Docker image), and consistently get this error.
Release version or Commit ID
e3b7e94213ce87ce2b6f750ee2138f04ff8ddd79
Additional Context
No response
Have you solved this problem? I am seeing the same error.
I have encountered the same issue. Has this bug been fixed? I also tried installing OptiX manually, which did not help.
For anyone encountering the same problem: this issue turned out to be related to the Docker environment. I rebuilt the Docker image from the repository's Dockerfile and the issue was resolved.
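For others hitting this inside Docker: a common cause of OptiX-in-container failures (an assumption here, not confirmed for this exact image) is that the NVIDIA Container Toolkit only mounts the driver's libnvoptix when the "graphics" driver capability is requested. A variant of the reproduction command that requests all capabilities would look like:

```shell
# Same docker run as in the reproduction steps, but with
# NVIDIA_DRIVER_CAPABILITIES=all so the driver's OptiX library
# (libnvoptix) is mounted into the container.
docker run --gpus all --rm -it \
    -e NVIDIA_DRIVER_CAPABILITIES=all \
    -e DISPLAY=$DISPLAY \
    -v /dev/dri:/dev/dri \
    -v /tmp/.X11-unix/:/tmp/.X11-unix \
    -v $PWD:/workspace \
    genesis
```

Equivalently, `ENV NVIDIA_DRIVER_CAPABILITIES=all` can be set in the Dockerfile itself, which may be why a rebuilt image resolved it.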