hturki
@ialhashim - did you get 346 or 363 as described in the ticket you linked to? I'm pretty sure that I'm compiling against 70 (the V100 arch). Also things seem...
I still run into this issue, unfortunately.
@Tom94 I ran into this issue as well. I just checked out the latest commit on master and tested it with the fox dataset on a V100. When I...
Just to confirm, does this mean that we can't use functorch to compute jacobians for anything that relies on a custom backward function? Pretty unfortunate considering the state of https://pytorch.org/docs/stable/generated/torch.autograd.functional.jacobian.html...
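For context, here's a minimal sketch (not from this thread; the names are purely illustrative) of the kind of setup in question: a custom `autograd.Function` with a handwritten backward. `torch.autograd.functional.jacobian` goes through the regular autograd engine, so it can differentiate through the custom backward, whereas functorch-style transforms may need extra support for `autograd.Function`s:

```python
# Illustrative sketch, not from the thread: a custom backward that
# torch.autograd.functional.jacobian can differentiate through, because it
# uses the regular autograd engine rather than functorch's transforms.
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out  # handwritten gradient of x**2

def f(x):
    return Square.apply(x)

x = torch.randn(4)
# Works: the Jacobian is assembled by repeated calls into the custom backward.
jac = torch.autograd.functional.jacobian(f, x)
print(jac)  # diagonal matrix with 2*x on the diagonal

# By contrast, functorch / torch.func transforms (e.g. jacrev) may error on
# custom autograd.Functions unless additional support is implemented for them.
```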
Thanks for the update. I'd actually like to hook up a third-party library (tiny-cuda-nn: https://github.com/NVlabs/tiny-cuda-nn/blob/master/bindings/torch/tinycudann/modules.py#L41) which has pytorch bindings. It just currently happens to integrate with torch via the current...
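For reference, a rough sketch of how those bindings are typically used, following the tiny-cuda-nn README (the config values below are just the common defaults and are only illustrative):

```python
# Rough sketch of the tiny-cuda-nn torch bindings referenced above, based on
# the repo's README; config values are the usual defaults and may need tuning.
import torch
import tinycudann as tcnn

encoding_config = {
    "otype": "HashGrid",
    "n_levels": 16,
    "n_features_per_level": 2,
    "log2_hashmap_size": 19,
    "base_resolution": 16,
    "per_level_scale": 2.0,
}
network_config = {
    "otype": "FullyFusedMLP",
    "activation": "ReLU",
    "output_activation": "None",
    "n_neurons": 64,
    "n_hidden_layers": 2,
}

model = tcnn.NetworkWithInputEncoding(
    n_input_dims=3,
    n_output_dims=4,
    encoding_config=encoding_config,
    network_config=network_config,
)

x = torch.rand(1024, 3, device="cuda")  # batch size a multiple of 128
y = model(x)        # forward runs through the fused CUDA kernels
y.sum().backward()  # backward goes through the custom autograd.Function in modules.py
```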
I think it's currently listed on the readme: https://drive.google.com/file/d/1SCJf2JJmyCbxpDuy4njFaDw7xPqurpaQ/view?usp=sharing
I've also tried this with a 16x16 matrix instead of 8x8 - sad times still occur.
One more thing for folks to try in conjunction with `QT_QPA_PLATFORM=offscreen`:

```
export DISPLAY=:0
```

This allowed me to use the conda colmap in a headless environment.
@jb-ye hopefully the latest update addresses your PR feedback!
I *think* the idea is that depending on where you're getting your semantic ground truth from, it might not be multi-view consistent, and you don't want those inconsistencies to negatively...