Guangda Ji
Happy New Year! Best wishes!
Hi, I would like to ask if there are any updates on the joint training config. Thanks!
The current pipeline needs the camera pose, intrinsics, depth, and RGB images.
I think this function will give you the correct IDs: [here](https://github.com/cvg/LabelMaker/blob/ba56b548156d76d899fdd715fdcb0d4a50f03058/labelmaker/label_data.py#L1395)
Check [here](https://github.com/cvg/LabelMaker/blob/6dd6d72059d42efc7bab93d2526083753a8d6daa/scripts/utils_3d.py#L85): the pose is the inverse of the extrinsic matrix.
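To illustrate the relationship, here is a minimal sketch (the matrix values are made up; only the inversion itself matters): the extrinsic maps world to camera coordinates, so the pose (camera to world) is its inverse.

```python
import numpy as np

# Hypothetical 4x4 extrinsic (world-to-camera) matrix:
# a rotation plus a translation in homogeneous form.
extrinsic = np.array([
    [0.0, -1.0, 0.0, 0.5],
    [1.0,  0.0, 0.0, 0.2],
    [0.0,  0.0, 1.0, 1.0],
    [0.0,  0.0, 0.0, 1.0],
])

# The pose (camera-to-world) is the inverse of the extrinsic.
pose = np.linalg.inv(extrinsic)

# Round-trip check: composing the two gives the identity.
assert np.allclose(pose @ extrinsic, np.eye(4))
```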
I think you have to add the conda path (I can't remember it exactly, maybe /miniconda3/bin/conda) to your PATH environment variable. An example is here: https://github.com/cvg/LabelMaker/blob/b3397d0be8897c8fcf7bf83b46c26e7d5f9bfe9e/pipeline/activate_labelmaker.sh#L1
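As a rough sketch (the install prefix `$HOME/miniconda3` is an assumption; adjust it to wherever conda is installed on your machine), adding the bin directory to PATH looks like:

```shell
# Assumed install prefix: $HOME/miniconda3 (adjust to your install).
export PATH="$HOME/miniconda3/bin:$PATH"

# Sanity check: the directory is now on PATH.
echo "$PATH" | grep -q "miniconda3/bin" && echo "PATH updated"
```

Putting the `export` line in your `~/.bashrc` (or sourcing an activation script like the one linked above) makes it persistent across shells.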
I remember that in Pointcept's pre-processing code, they use some form of sampling as data augmentation, so the evaluation score during training is noisy. Please check the details...
I am sorry, I cannot help you with this: the PTv3 and PPT code was contributed by the original authors.