RoyAmoyal
Hey, how can I use your implementation on a video? Thanks!
How can I contribute an implementation of a new algorithm to OpenCV? Tell me if I'm right: 1. Fork this repository (should I fork the main repository of opencv...
Hey, I want to use your model on a real live stream with the YCB dataset. I have already managed to run it live on my Intel RealSense camera. Just to...
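For reference, this is roughly how I grab the live frames from the RealSense before passing them to the model (a minimal sketch using pyrealsense2; `run_inference` is a placeholder for the actual model call, and the resolution/frame rate are just the values I happen to use):

```
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        color_frame = frames.get_color_frame()
        if not color_frame:
            continue
        color_image = np.asanyarray(color_frame.get_data())  # HxWx3 BGR frame
        run_inference(color_image)  # placeholder for the pose-estimation call
finally:
    pipeline.stop()
```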
Hey, I am using your pre-trained weights for the YCB dataset on my custom camera (RealSense) stream/pictures, but I am getting bad results. I have changed the K intrinsics and the...
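To be concrete, this is how I build the K matrix from the RealSense colour-stream intrinsics (a minimal sketch with pyrealsense2; the stream settings are just my values):

```
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Colour-stream intrinsics: focal lengths (fx, fy) and principal point (ppx, ppy)
intr = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
K = np.array([[intr.fx, 0.0,     intr.ppx],
              [0.0,     intr.fy, intr.ppy],
              [0.0,     0.0,     1.0]])

pipeline.stop()
```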
Hey, my university is using Slurm version 21.08.1, and I am trying to run ConvNeXt with the run_with_submitit.py script provided in the ConvNeXt repository: https://github.com/facebookresearch/ConvNeXt/blob/main/run_with_submitit.py. How can I fix that? Thanks!
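For context, my understanding is that run_with_submitit.py essentially does something like the sketch below (a simplified illustration of the submitit API, not the actual script; the partition name, resources, and `train` function are placeholders):

```
import submitit

def train():
    # placeholder for the ConvNeXt training entry point
    pass

executor = submitit.AutoExecutor(folder="submitit_logs")
executor.update_parameters(
    timeout_min=60,          # placeholder job time limit
    slurm_partition="gpu",   # placeholder partition name on the cluster
    nodes=1,
    tasks_per_node=1,
    gpus_per_node=1,
)
job = executor.submit(train)
print(job.job_id)
print(job.result())  # blocks until the Slurm job finishes
```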
Hey, I am trying to use

```
import pptk

v = pptk.viewer(points)              # open the viewer with the initial point cloud
for i in range(100):
    points = updates_points(points)  # my update step
    v.clear()                        # remove the previous cloud
    v.load(points)                   # load the updated cloud
    ...
```

but I am getting the following error:

```
...
```
Hello, I'm unsure where in the code I can access the 3D points vector. I want to access it while COLMAP continues to add new 3D points. Additionally,...
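In case it clarifies what I'm after, this is how I currently read the points offline with pycolmap from an exported sparse model; what I'd like is equivalent access while the reconstruction is still growing (the path is a placeholder):

```
import pycolmap

# Load a sparse model exported by COLMAP (placeholder path)
rec = pycolmap.Reconstruction("path/to/sparse/0")

for point3D_id, point in rec.points3D.items():
    xyz = point.xyz      # 3D position of the point
    error = point.error  # mean reprojection error
    # point.track holds the 2D observations across images
```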
How can I get the real (metric) depth if I know the camera parameters?
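For example, for a rectified stereo pair I would compute it like this (a minimal sketch; fx, the baseline, and the disparity values are placeholders taken from a calibration):

```
import numpy as np

fx = 615.0        # focal length in pixels, from the intrinsics K (placeholder)
baseline = 0.055  # distance between the two cameras in metres (placeholder)

disparity = np.array([[12.4, 30.1],
                      [ 8.7, 22.0]])   # disparity map in pixels (placeholder)
depth = fx * baseline / disparity      # metric depth in metres
```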
How can I render and visualize all the images (or a sample of them), both training and eval, during training, including backscattering, J, depth, etc., in subsea-nerf (SeaThru-NeRF)? I want to...
Hey, let's say I have a LiDAR/rangefinder together with my camera. Can I use it in the monocular mode (without the IMU) to recover the scale (a pixel-to-metre technique)? (For example, for...
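What I have in mind is something like the sketch below: pair a few rangefinder readings with the corresponding map depths and solve for a single scale factor (the numbers are made up for illustration):

```
import numpy as np

def scale_from_rangefinder(range_m, est_depth):
    """Least-squares scale s such that s * est_depth ≈ range_m, where
    range_m are rangefinder distances in metres and est_depth are the
    matching up-to-scale depths from the monocular map."""
    range_m = np.asarray(range_m, dtype=float)
    est_depth = np.asarray(est_depth, dtype=float)
    return float(np.dot(est_depth, range_m) / np.dot(est_depth, est_depth))

# Made-up paired measurements for illustration
s = scale_from_rangefinder([1.20, 2.35, 0.80], [0.41, 0.79, 0.27])
# Multiplying the map points / trajectory by s would put them in metres
```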