Yian Wang
Hi, I ran into a similar problem when I tried to use KuafuRenderer in SAPIEN 2.1.0.
My script is:
```
import sapien.core as sapien

renderer_config = sapien.KuafuConfig()
renderer_config.use_viewer = False
renderer_config.spp = 64
renderer_config.max_bounces = 8
renderer_config.use_denoiser = True

renderer = sapien.KuafuRenderer(renderer_config)
print("done well?")
```
And...
It reports the same bug even when I use only one GPU. Also, I got this warning: `/home/vipuser/miniconda3/envs/brax/lib/python3.8/site-packages/flax/core/frozen_dict.py:169: FutureWarning: jax.tree_util.register_keypaths is deprecated, and will be removed in a future...
I just realized that it might be because some elements are NaN and `nan == nan` is False, so the equality check on the replicated values might return False.
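To illustrate (a minimal sketch in plain JAX, not brax code): since `nan != nan` under IEEE-754, an element-wise equality check between replicated arrays reports a mismatch as soon as any element is NaN, while a NaN-aware comparison or an explicit NaN check avoids the false negative.
```
import jax.numpy as jnp

# nan != nan under IEEE-754, so a plain equality check fails on NaNs.
a = jnp.array([1.0, jnp.nan])
b = jnp.array([1.0, jnp.nan])
print(a == b)                              # [ True False]

# NaN-tolerant comparison:
print(jnp.allclose(a, b, equal_nan=True))  # True

# Or detect NaNs explicitly:
print(jnp.isnan(a).any())                  # True
```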
Yeah, it happens when I use the "generalized" backend. I've also tried the "positional" backend, which works without this bug.
Also, I've tried to locate where the NaN is produced. It appears after [this line](https://github.com/google/brax/blob/b373f5a45e62189a4a260131c17b10181ccda96a/brax/training/agents/apg/train.py#L148). So I guess the gradient might explode during back-propagation with the "generalized" backend....
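In case it helps with debugging, here is a hedged sketch (the helper below is hypothetical, not part of brax) for listing which gradient leaves contain NaNs, plus the standard JAX flag for raising at the first NaN-producing op:
```
import jax
import jax.numpy as jnp

# Hypothetical helper (not part of brax): list the leaves of a gradient
# pytree that contain NaNs, to narrow down where they first show up.
def find_nan_leaves(grads):
    flat, _ = jax.tree_util.tree_flatten_with_path(grads)
    return [jax.tree_util.keystr(path)
            for path, leaf in flat
            if jnp.isnan(leaf).any()]

# Alternatively, make JAX raise an error at the first NaN-producing op:
# jax.config.update("jax_debug_nans", True)
```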
I haven't found a function to do this either. It's weird... I think Genesis should have one. As a workaround, you can try the `solver._kernel_set_particles_pos` function, for example: `...
@zswang666
What if you try this:
```
import torch
print(torch.cuda.is_available())
```
If it's not available, maybe try to set your CUDA path manually:
```
export CUDA_HOME=/usr/local/cuda-11.7  # change it to your cuda...
```
Saw something on Twitter; maybe it will help? https://github.com/MizuhoAOKI/genesis_docker