Christian Reiser
This effectively deactivates dropout when using multiple GPUs. State variables that are modified in the forward pass need to be registered as buffers. Here is the fix: ``` class BasicBlock(nn.Module):...
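For context, a minimal sketch of the buffer pattern the fix relies on; the module body here is hypothetical (a simple step counter), not the repository's actual BasicBlock:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self):
        super().__init__()
        # register_buffer makes the tensor part of the module's state, so it
        # is copied to every GPU replica of the module (and included in
        # state_dict), unlike a plain Python attribute mutated in forward().
        self.register_buffer("step", torch.zeros(1, dtype=torch.long))

    def forward(self, x):
        self.step += 1  # update the registered buffer, not a bare attribute
        return x
```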
Yes, definitely. We should give it a try and also use half precision. Real-time rendering really sets your technique apart from other recent methods.
@ichsan2895 Thanks for this list. I had not yet figured out that you need ceres-solver 2.1.0 and hloc 1.4, which is not documented anywhere. Also: For COLMAP 3.8 you...
Hi, this is probably again due to an insufficient number of registers, as was the case in #1. Can you try to decrease the number of threads by e.g....
Thanks for your interest in our work. We did not implement multi-GPU training, but it should be possible to add it in the usual way.
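For reference, the usual pattern would look roughly like this; the model below is a stand-in, not the repository's actual network:

```python
import torch
import torch.nn as nn

# Stand-in network; a real run would construct the repository's model instead.
model = nn.Sequential(nn.Linear(3, 256), nn.ReLU(), nn.Linear(256, 4))

if torch.cuda.device_count() > 1:
    # nn.DataParallel replicates the module on each GPU and splits the input
    # batch along dimension 0; gradients are reduced on the default device.
    model = nn.DataParallel(model)
model = model.to("cuda")
```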
Hi! Yeah, we experienced the same problem on an RTX 2080 Ti, which is probably quite close to your RTX Titan. Despite these cards being newer than the GTX 1080...
An easy fix should be to run the network_eval kernel with fewer threads per block, e.g. 512 instead of 640, but then the performance suffers. It should also be possible...
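The actual kernel lives in the repository's CUDA extension, but as a rough illustration of the launch-configuration change, here is a numba sketch with a placeholder kernel body (this network_eval is not the real kernel):

```python
import numpy as np
from numba import cuda

@cuda.jit
def network_eval(x, out):
    # Placeholder per-thread work; the real kernel evaluates the network.
    i = cuda.grid(1)
    if i < x.size:
        out[i] = 2.0 * x[i]

x = np.arange(1 << 20, dtype=np.float32)
out = np.zeros_like(x)

# Fewer threads per block lowers the per-block register demand, which can
# avoid "too many resources requested for launch" at the cost of occupancy.
threads_per_block = 512  # reduced from 640
blocks = (x.size + threads_per_block - 1) // threads_per_block
network_eval[blocks, threads_per_block](x, out)
```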
I fixed the problem and it now runs on an RTX 2080 Ti, so it should also work for you. Despite the suboptimal fix, I measured on the Lego scene 17...
@bruinxiong Yeah, that does not make sense with a GTX 1080 Ti. Did you use the pre-compiled CUDA extension or did you compile the code yourself? In case you have...
Can you try to run it with the precompiled extension, please?