L1bertad

Results 3 comments of L1bertad

Hello! Have you implemented it? I'd also like to know how to achieve this.

You can achieve this with the following command: `CUDA_VISIBLE_DEVICES="1" python -m torch.distributed.launch --nnodes 1 --node_rank 0 --master_addr "127.0.0.1" --nproc_per_node 1 --master_port 29500 tools/train.py configs/xxx --seed 0 --launcher pytorch`, where `CUDA_VISIBLE_DEVICES` is set to the desired gpu-id and `--nproc_per_node` is set to the number of GPUs.
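For readability, the same single-node launch can be written across multiple lines; the sketch below generalizes it to two GPUs (the config path `configs/xxx`, the port, and the GPU ids are placeholders to adjust for your setup):

```shell
# Restrict PyTorch to GPUs 0 and 1; --nproc_per_node must match
# the number of GPU ids listed in CUDA_VISIBLE_DEVICES.
CUDA_VISIBLE_DEVICES="0,1" python -m torch.distributed.launch \
    --nnodes 1 \
    --node_rank 0 \
    --master_addr "127.0.0.1" \
    --master_port 29500 \
    --nproc_per_node 2 \
    tools/train.py configs/xxx \
    --seed 0 \
    --launcher pytorch
```

`torch.distributed.launch` spawns one training process per `--nproc_per_node`, each bound to one of the visible GPUs.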

I'd also like to know this.