monosdf
Training without distributed training (single GPU)
Hello author, if I don't use distributed training and train with only one GPU, will it affect the results?
Hi, we use a single GPU for training in all the experiments in the paper, except for the Tanks and Temples dataset with high-resolution cues (C.3 in the paper). In general, though, I think using a larger batch size (more GPUs) will give better results. In simple scenes like DTU or Replica, the difference will not be significant.
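To make the batch-size point above concrete: under PyTorch-style data parallelism, each GPU processes its own mini-batch, so the batch size seen by one gradient step grows with the number of GPUs. A minimal sketch (the function name `effective_batch_size` is hypothetical, not from the MonoSDF codebase):

```python
def effective_batch_size(per_gpu_batch: int, num_gpus: int) -> int:
    """Under data parallelism, each GPU draws its own mini-batch,
    so one optimizer step effectively averages gradients over
    per_gpu_batch * num_gpus samples."""
    return per_gpu_batch * num_gpus

# A per-GPU batch of 1024 rays on 4 GPUs behaves like a
# single-GPU batch of 4096 rays per step.
print(effective_batch_size(1024, 4))
```

So training on a single GPU simply means a smaller effective batch, which matters less on simple scenes and more on large, high-resolution ones.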
Thank you for your reply!