World 3D coordinate to image pixel coordinate
Hi, I'm trying to project a 3D point from a sparse point cloud onto the image. Below are my steps. My image is a 360° image, so I'm using the SPHERE camera model.
Ref: https://github.com/nerfstudio-project/nerfstudio/blob/main/nerfstudio/process_data/colmap_utils.py#L390
```python
import numpy as np

# qvec2rotmat and parse_colmap_camera_params are from the referenced colmap_utils.py
rotation = qvec2rotmat(im_data.qvec)
translation = im_data.tvec.reshape(3, 1)
w2c = np.concatenate([rotation, translation], 1)  # 3x4 world-to-camera
```
```python
out = parse_colmap_camera_params(cam_id_to_camera[camera_id])
camera_matrix = np.array([[out["fl_x"], 0, out["cx"]],
                          [0, out["fl_y"], out["cy"]],
                          [0, 0, 1]], np.float32)
```
```python
p_m = np.matmul(camera_matrix, w2c)                       # 3x4
p_m = np.concatenate([p_m, np.array([[0, 0, 0, 1]])], 0)  # 4x4
```
Now I'm multiplying the homogeneous 3D point by the projection matrix (p_m) to get the pixel coordinates:
```python
p_2d = np.matmul(p_m, [x, y, z, 1])
pixel = (p_2d[0] / p_2d[2], p_2d[1] / p_2d[2])
```
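For reference, here is the same math split into the extrinsic and intrinsic steps (`project_pinhole` is just a hypothetical helper name; this is equivalent to the matrices above):

```python
def project_pinhole(p_world, w2c, camera_matrix):
    # World point -> camera frame (COLMAP convention: x right, y down, z forward).
    p_cam = np.matmul(w2c, np.array([*p_world, 1.0]))  # w2c is 3x4
    # Pinhole intrinsics, then dehomogenize.
    uvw = np.matmul(camera_matrix, p_cam)
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```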
But I'm not getting the expected results. Can someone suggest where it's going wrong?
Thanks
For the conversion, you can check the code of `SphericalBundleAdjustmentCostFunction` in `base/cost_functions.h`.
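For example, here is a minimal sketch of that conversion, assuming an equirectangular 360°×180° image and the camera-frame convention x right, y down, z forward; the exact angle-to-pixel mapping may differ, so verify it against `SphericalBundleAdjustmentCostFunction`:

```python
import numpy as np

def project_spherical(p_world, w2c, width, height):
    # World point -> camera frame (x right, y down, z forward).
    x, y, z = np.matmul(w2c, np.array([*p_world, 1.0]))
    # Camera-frame direction -> longitude/latitude on the unit sphere.
    lon = np.arctan2(x, z)                         # [-pi, pi], 0 at image center
    lat = np.arcsin(y / np.sqrt(x*x + y*y + z*z))  # [-pi/2, pi/2], positive looking down
    # Angles -> equirectangular pixel coordinates.
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return u, v
```

Note that the pinhole `camera_matrix` is replaced entirely by the angle-to-pixel mapping here, which is why multiplying by pinhole intrinsics doesn't give the expected pixels for a 360 image.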