tushardmaske
Thank you very much for your response. The training started after I set `export NCCL_P2P_DISABLE=1`, as suggested in [this](https://discuss.pytorch.org/t/distributed-data-parallel-freezes-without-error-message/8009/28) thread. But now the problem is that it is not actually training...
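For reference, the same environment variable can also be set from inside the training script, as long as it happens before `torch.distributed` initializes NCCL. This is a minimal sketch; the setting itself (`NCCL_P2P_DISABLE=1`) is from the thread above, everything else is illustrative:

```python
import os

# Disable NCCL peer-to-peer transport (workaround for DDP hangs on some
# multi-GPU setups). Must be set BEFORE the process group is initialized,
# e.g. before torch.distributed.init_process_group(backend="nccl").
os.environ["NCCL_P2P_DISABLE"] = "1"

print(os.environ["NCCL_P2P_DISABLE"])  # 1
```

Setting it in the shell (`export NCCL_P2P_DISABLE=1`) before launching is equivalent and applies to all spawned worker processes.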
```
File "pyorbbecsdk/examples/two_devices_sync.py", line 178, in main
    sync_config_json = multi_device_sync_config[serial_number]
KeyError: 'CL8S93P007M'
```
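The `KeyError` means the attached camera's serial number (`CL8S93P007M` here) is not a key in the multi-device sync configuration the example loads, so the fix is usually to add that serial to the config file. A hedged sketch of the failing lookup with a clearer error message (the config structure below is an assumption for illustration, not the SDK's actual schema):

```python
# Hypothetical shape of the sync config: serial number -> sync settings.
# The real file shipped with pyorbbecsdk may differ.
multi_device_sync_config = {
    "CL8S93P007M": {"config": {"mode": "PRIMARY"}},
}

def lookup_sync_config(serial_number: str) -> dict:
    """Look up a device's sync settings, failing with a readable message
    instead of a bare KeyError when the serial is missing."""
    if serial_number not in multi_device_sync_config:
        raise RuntimeError(
            f"Serial {serial_number!r} not found in sync config; "
            f"known serials: {sorted(multi_device_sync_config)}"
        )
    return multi_device_sync_config[serial_number]

print(lookup_sync_config("CL8S93P007M")["config"]["mode"])  # PRIMARY
```

If you hit this error, check which serials your config file actually lists and make sure they match the serials printed by the SDK when it enumerates devices.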
cd into `pyorbbecsdk` and paste the commands below into the terminal:

```shell
export PYTHONPATH=$PYTHONPATH:$(pwd)/install/lib/
sudo bash ./scripts/install_udev_rules.sh
sudo udevadm control --reload-rules && sudo udevadm trigger
```

OR you can put these commands in...
Hey, I am facing the same issue. Did you find any solution? Best regards