Andres Milioto
Hi, The ZED depth benchmark is mostly there to benchmark the depth on the Jetsons for our own record, so that we can quickly see what the best configuration...
I recommend having a look at [this paper](https://arxiv.org/pdf/1808.03833.pdf). The early fusion approach I told you about is definitely suboptimal, but it's a start. As soon as you have that working you...
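For reference, a minimal sketch of what early fusion means in practice, assuming RGB and depth are simply concatenated channel-wise before the first convolution. The module, shapes, and channel counts here are illustrative placeholders, not the Bonnetal code:

```python
import torch
import torch.nn as nn

class EarlyFusionStem(nn.Module):
    """Early fusion: RGB and depth are concatenated channel-wise,
    so the first convolution sees 4 input channels."""
    def __init__(self, out_channels=32):
        super().__init__()
        self.conv = nn.Conv2d(4, out_channels, kernel_size=3, padding=1)

    def forward(self, rgb, depth):
        # rgb: (B, 3, H, W), depth: (B, 1, H, W)
        x = torch.cat([rgb, depth], dim=1)  # (B, 4, H, W)
        return self.conv(x)

# quick shape check
rgb = torch.randn(1, 3, 240, 320)
depth = torch.randn(1, 1, 240, 320)
features = EarlyFusionStem()(rgb, depth)
```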
Hi, Have you made the model directory deploy-ready? Check the instructions [here](https://github.com/PRBonn/bonnetal/tree/master/train/tasks/segmentation#make-inference-model). After that you should have .pytorch and .onnx model files in the pretrained directory! Let...
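As a rough illustration of what that step produces, here is a minimal sketch that saves a PyTorch checkpoint and exports an ONNX graph. The stand-in model, file names, and input resolution are placeholders; the actual export is handled by the Bonnetal scripts linked above:

```python
import torch
import torch.nn as nn

# Stand-in for the trained segmentation network (hypothetical).
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                      nn.ReLU(),
                      nn.Conv2d(16, 5, kernel_size=1))
model.eval()

dummy = torch.randn(1, 3, 256, 512)  # example input resolution

# Save the PyTorch weights and export an ONNX graph, mirroring the
# two files the deploy step is expected to leave in the directory.
torch.save(model.state_dict(), "model.pytorch")
torch.onnx.export(model, dummy, "model.onnx", opset_version=11,
                  input_names=["input"], output_names=["logits"])
```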
Hi, We've seen this problem before, and usually it was caused by the number of workers being set too high. The only other thing that may be going on is that...
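If you want to check whether the workers are the culprit, a minimal sketch (generic PyTorch, not Bonnetal-specific) is to set `num_workers=0` on the DataLoader, which disables multiprocessing entirely:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset just to exercise the loader.
dataset = TensorDataset(torch.randn(64, 3, 32, 32),
                        torch.randint(0, 5, (64,)))

# If the loader hangs or crashes, lower num_workers; 0 disables
# worker processes and is the safest setting to test with.
loader = DataLoader(dataset, batch_size=8, num_workers=0, shuffle=True)

for images, labels in loader:
    pass  # training step would go here
```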
Hi, The whole reason for doing it in monochrome is that I need a way to parse the labels that is more or less standard for all datasets, since the...
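To make the monochrome idea concrete, here is a minimal sketch of remapping grayscale label values to training class ids with a lookup table; the mapping itself is hypothetical:

```python
import numpy as np

# Hypothetical remapping from the dataset's label values to training class ids.
label_remap = {0: 0, 7: 1, 24: 2, 26: 3}

# A monochrome label image: each pixel value is the dataset's class id.
label_img = np.random.choice(list(label_remap), size=(240, 320)).astype(np.uint8)

# Build a 256-entry lookup table once, then remap the whole image
# with a single indexing operation.
lut = np.zeros(256, dtype=np.uint8)
for src, dst in label_remap.items():
    lut[src] = dst
train_ids = lut[label_img]
```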
I don't have a 1080Ti to try it out right now, or a Windows machine, but 10 fps sounds a bit low to me. You should give the TensorRT model a...
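Before comparing against the TensorRT engine, it's worth measuring the PyTorch fps carefully, with a warm-up and GPU synchronization, roughly like this sketch (the stand-in model and resolution are placeholders):

```python
import time
import torch
import torch.nn as nn

model = nn.Conv2d(3, 16, 3, padding=1)  # stand-in for the real network
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()
x = torch.randn(1, 3, 512, 1024, device=device)

with torch.no_grad():
    for _ in range(10):              # warm-up: lets cuDNN pick its algorithms
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()     # make sure the GPU is idle before timing
    start = time.time()
    n = 100
    for _ in range(n):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()     # wait for all kernels before stopping the clock

print(f"{n / (time.time() - start):.1f} fps")
```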
Hi, Thanks for the props :) Is it peak GPU consumption at the beginning or all the time during inference? Which model are you using?
Hi, It may be that at the beginning of inference cuDNN is trying lots of different strategies and some of them use a lot of memory. Another possibility (which...
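A minimal sketch of the cuDNN autotuning knob in PyTorch, in case you want to rule it out (the flag is standard PyTorch; the memory print is just for inspection):

```python
import torch

# cuDNN autotuning tries several convolution algorithms during the first
# forward passes and keeps the fastest; some candidates need a lot of
# workspace memory, which shows up as a spike at the start of inference.
torch.backends.cudnn.benchmark = True     # fast steady state, spike up front
# torch.backends.cudnn.benchmark = False  # no autotuning, no startup spike

if torch.cuda.is_available():
    # Peak memory seen so far, in MiB: useful to tell a startup spike from a leak.
    print(torch.cuda.max_memory_allocated() / 2**20, "MiB peak")
```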
Hello, I am not having any problems running the hello world. Can you check that you are using the proper Docker setup? You should be using nvidia-docker [link](https://github.com/NVIDIA/nvidia-docker), not the...
I'm glad to hear that! There are sometimes caveats for each architecture, which I try to minimize, but some slip through. The `/usr/local/cuda/lib64/stubs/libcuda.so.1` thing should definitely not be happening, so...