Ziyi Wu
> Hello, this code can visualize the 3D detection box in the point cloud. May I ask if the 3D detection box can also be visualized in the 2D image, and how...
BTW, another minor question: when I convert between a rotation matrix and Euler angles, why isn't the round trip reversible?

```
>>> angle = np.array([3.0546, 0.0007, 1.6675])
>>> mat2euler(euler2mat(angle))
array([-3.1325, -0.0865, -1.6679], dtype=float32)
```
...
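On the round-trip point: a Tait-Bryan Euler triple is not unique, so matrix → Euler can legitimately return a different but equivalent triple. A minimal sketch using SciPy as a stand-in for `euler2mat`/`mat2euler` (assuming extrinsic xyz order; the library behind the original snippet may use a different convention):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# For extrinsic xyz Euler angles, (a, b, c) and (a - pi, pi - b, c - pi)
# encode the *same* rotation, so angles -> matrix -> angles need not
# return the triple you started with.
a = np.array([3.0546, 0.0007, 1.6675])                    # original triple (rad)
b = np.array([a[0] - np.pi, np.pi - a[1], a[2] - np.pi])  # equivalent triple

# Both triples produce the same rotation matrix.
assert np.allclose(R.from_euler("xyz", a).as_matrix(),
                   R.from_euler("xyz", b).as_matrix(), atol=1e-6)
```

A round trip that returns a *different but equivalent* triple is harmless; a triple whose matrix actually differs (as reported on Panda below) points to a real bug rather than Euler ambiguity.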
@cremebrule Thank you so much for the reply, that helps a lot! Yes, I understand there is ambiguity in Euler angles, which answers my second question. Regarding the rotation, assume...
@cremebrule Hi, just FYI, I can confirm the rotation difference only happens on Panda, not on other robots, e.g. Sawyer.
Just a workaround: if you want to map predictions on the voxelized points back to the original points, you can set `return_index` and `return_inverse` to `True` in the `sparse_quantize` function [here](https://github.com/NVIDIA/MinkowskiEngine/blob/master/MinkowskiEngine/utils/quantization.py#L130-L131)....
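The idea behind `index`/`inverse` can be sketched with plain NumPy (`np.unique` plays the role of MinkowskiEngine's `sparse_quantize` here; the points, voxel size, and labels are made up for illustration):

```python
import numpy as np

# Quantize points into voxels, keeping the index/inverse mappings.
points = np.array([[0.12, 0.0], [0.14, 0.0], [0.9, 0.3]])
voxel_size = 0.1
coords = np.floor(points / voxel_size).astype(np.int32)

# `index`: one representative point per voxel; `inverse`: voxel id per point.
uniq, index, inverse = np.unique(coords, axis=0,
                                 return_index=True, return_inverse=True)
inverse = inverse.reshape(-1)  # guard against NumPy-version shape differences

# A per-voxel prediction can then be scattered back to every original point.
voxel_pred = np.array([10, 20, 30])[:len(uniq)]  # hypothetical voxel labels
point_pred = voxel_pred[inverse]                 # one label per input point
print(point_pred)                                # -> [10 10 20]
```

The first two points fall into the same voxel, so they share that voxel's prediction after the inverse mapping.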
Hi! Actually, some of the pre-trained weights provided in this repo are indeed trained on all 13 classes of ShapeNet, for example `configs/img/onet_pretrained.yaml`, `configs/pointcloud/onet_pretrained.yaml`, and `configs/voxels/onet_pretrained.yaml`. I have tested...
My workaround for this is to first spin up a [GCP](https://cloud.google.com/) server and run `wget` on that server. GCP servers are not in China, so you can access the data....
As you can see from this line, `for p in self.D.parameters():`, this is not performing gradient clipping but parameter-value (weight) clipping.
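To make the distinction concrete, here is a minimal sketch of both operations (the discriminator `D` and clipping bounds are illustrative, not from the repo in question):

```python
import torch
import torch.nn as nn

D = nn.Linear(4, 1)  # stand-in for a discriminator

# Weight clipping (what the quoted loop does): clamp parameter *values*
# in-place, e.g. WGAN-style clipping to [-0.01, 0.01].
with torch.no_grad():
    for p in D.parameters():
        p.clamp_(-0.01, 0.01)

# Gradient clipping, by contrast, bounds p.grad after backward(),
# leaving the parameter values themselves untouched.
x = torch.randn(8, 4)
D(x).mean().backward()
nn.utils.clip_grad_norm_(D.parameters(), max_norm=1.0)

# All parameter values are now within the clipping range.
assert all(float(p.abs().max()) <= 0.01 for p in D.parameters())
```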
Yeah, I got weird visualization results using w2c; using c2w seems to make more sense. Have you figured this out?
Thanks, I just use the common c2w matrix (which is basically `np.linalg.inv(w2c)`) and that seems to work : )
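For a rigid-body transform, the inverse has a closed form, so `np.linalg.inv(w2c)` can equivalently be written as `[R.T | -R.T @ t]`. A small sketch (the rotation and translation below are made-up values):

```python
import numpy as np

# Build an example w2c = [R | t; 0 0 0 1] from an arbitrary rotation R
# and translation t.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 2.0, 3.0])

w2c = np.eye(4)
w2c[:3, :3] = R
w2c[:3, 3] = t

# Closed-form inverse of a rigid transform: c2w = [R.T | -R.T @ t].
c2w = np.eye(4)
c2w[:3, :3] = R.T
c2w[:3, 3] = -R.T @ t

assert np.allclose(c2w, np.linalg.inv(w2c))
```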