
Problems with Usage of SyncSN

Open stillwaterman opened this issue 6 years ago • 12 comments

Very nice work! I tried to use your training code in face recognition, but I ran into some problems. First, rank = int(os.environ['RANK']) and world_size = int(os.environ['WORLD_SIZE']) don't have values, so I added os.environ['RANK'] = str(0) and os.environ['WORLD_SIZE'] = str(4). Is that right? Second, my code gets stuck at dist.broadcast; there is no error message, it just hangs. Could you give me some advice?

stillwaterman avatar Mar 31 '19 02:03 stillwaterman
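
[Editor's note] For context, a minimal sketch of the env:// initialization that RANK and WORLD_SIZE usually feed into; this is an assumption for illustration, not the repository's actual setup. Hard-coding RANK=0 and WORLD_SIZE=4 while launching only one process makes collectives such as dist.broadcast wait forever for the missing ranks, which matches the hang described above.

```python
# Minimal single-node sketch of env:// initialization; the actual
# face_recognition/train.py may wire these variables differently.
import os
import torch
import torch.distributed as dist

def init_distributed(rank, world_size):
    # These variables are what init_method='env://' expects. Setting
    # RANK/WORLD_SIZE by hand only works if every process gets its own
    # distinct rank, not rank 0 for all of them.
    os.environ.setdefault('MASTER_ADDR', '127.0.0.1')
    os.environ.setdefault('MASTER_PORT', '29500')
    os.environ['RANK'] = str(rank)
    os.environ['WORLD_SIZE'] = str(world_size)
    dist.init_process_group(backend='nccl', init_method='env://')
    torch.cuda.set_device(rank)

def broadcast_example(rank):
    # broadcast is a collective: it blocks until every rank in the group
    # calls it, which is why one process with WORLD_SIZE=4 hangs forever.
    t = torch.zeros(1, device='cuda')
    if rank == 0:
        t.fill_(42.0)
    dist.broadcast(t, src=0)
    return t
```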

What problem? I will give an example soon.

JiaminRen avatar Apr 01 '19 07:04 JiaminRen

@JiaminRen My code gets stuck at dist.broadcast with no error message; the backend is nccl. Did you test the training code, or is there some configuration I missed?

stillwaterman avatar Apr 01 '19 07:04 stillwaterman

Which task did you test, ImageNet or face recognition?

JiaminRen avatar Apr 01 '19 07:04 JiaminRen

@JiaminRen I just tried to imitate your training code in face recognition to use SyncSN in my own code, but I didn't succeed. I ran into two problems: first, rank = int(os.environ['RANK']) and world_size = int(os.environ['WORLD_SIZE']) don't have values, so I added os.environ['RANK'] = str(0) and os.environ['WORLD_SIZE'] = str(4); second, the code hangs at dist.broadcast.

stillwaterman avatar Apr 01 '19 07:04 stillwaterman

Have you changed any code? Just running the script face_recognition/train.sh should work.

JiaminRen avatar Apr 01 '19 07:04 JiaminRen

@JiaminRen I quickly tested face_recognition/train.py and unfortunately ran into the same problems. I think I may have missed some system configuration.

stillwaterman avatar Apr 01 '19 08:04 stillwaterman

@JiaminRen My system is Ubuntu 18.04 and I installed PyTorch with Anaconda; the program is stuck at dist.broadcast.

stillwaterman avatar Apr 01 '19 08:04 stillwaterman

This is a distributed framework; it should be run on multiple GPUs using torch.distributed.launch.

JiaminRen avatar Apr 01 '19 08:04 JiaminRen
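
[Editor's note] A hedged sketch of the pattern torch.distributed.launch expects; the exact flags used by face_recognition/train.sh may differ. The launcher starts one process per GPU, exports RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT, and passes --local_rank to each process, so every rank reaches dist.broadcast and the collective completes.

```python
# Sketch of a script started via torch.distributed.launch (launch
# command shown as a comment); the real train.py may differ in details.
#
#   python -m torch.distributed.launch --nproc_per_node=4 face_recognition/train.py
#
import argparse
import os
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
# torch.distributed.launch appends --local_rank=<gpu index> for each process
parser.add_argument('--local_rank', type=int, default=0)
args = parser.parse_args()

# The launcher also exports RANK and WORLD_SIZE, so lookups like
# int(os.environ['RANK']) succeed without manual assignment.
rank = int(os.environ['RANK'])
world_size = int(os.environ['WORLD_SIZE'])

torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend='nccl', init_method='env://')
```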

Thanks, torch.distributed.launch solved those problems. But the sync version consumes a lot of GPU memory, and I keep getting out-of-memory errors.

stillwaterman avatar Apr 01 '19 09:04 stillwaterman

Sorry to bother you again. When I was using SyncSN, I ran into some different errors. I tried to imitate the way it is used in train.py, but my model outputs NaNs, which does not happen with SN. Another error is subprocess.CalledProcessError: command returned non-zero exit status 1. Do you have any idea? Thanks

stillwaterman avatar Jul 05 '19 03:07 stillwaterman