Wang Zhaohui

Results: 10 issues by Wang Zhaohui

ch5: When running **pcd_to_bird_eye.cc** and similar code, reading the PCD files always produces **Failed to find match for field 'intensity'.** Has anyone else run into this?
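For context, that PCL message usually just warns that the file's `FIELDS` line declares no `intensity` column while the code loads it as a point type that expects one (e.g. `pcl::PointXYZI`). A minimal Python sketch to check which fields a PCD header actually declares (the `pcd_fields` helper is hypothetical, not from this repo):

```python
# Sketch, assuming a standard PCD v0.7 text header: list the declared FIELDS
# so you can see whether 'intensity' is actually present in the file.
def pcd_fields(header_text):
    """Return the field names from a PCD header string."""
    for line in header_text.splitlines():
        if line.startswith("FIELDS"):
            return line.split()[1:]
    return []

header = "VERSION 0.7\nFIELDS x y z\nSIZE 4 4 4\nTYPE F F F\n"
print(pcd_fields(header))  # -> ['x', 'y', 'z']  (no 'intensity' field)
```

If the file really lacks `intensity`, loading it as XYZ-only (or regenerating the PCD with an intensity column) silences the warning.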

I wonder how the "-d" parser argument is created. I find "-s" in the original 3DGS code, but in Scaffold-GS there is "-d" instead. Then I find: ` parser = ArgumentParser(description="Training...
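For reference, short flags like "-d" are ordinarily registered via `add_argument` on the `ArgumentParser`; the flag names and defaults below are illustrative only, not Scaffold-GS's actual code:

```python
# Hedged sketch of how a short "-d" flag is typically declared with argparse.
# The long names and defaults here are assumptions for illustration.
from argparse import ArgumentParser

parser = ArgumentParser(description="Training script")
parser.add_argument("-d", "--data", type=str, default="data")        # short + long form
parser.add_argument("-s", "--source_path", type=str, default="")

args = parser.parse_args(["-d", "scenes/room"])
print(args.data)  # -> scenes/room
```

In 3DGS-style codebases the flags are often added inside parameter-group classes (e.g. a `ModelParams`-like wrapper) rather than in the training script itself, so searching the repo for `add_argument` or the attribute name usually locates where "-d" is defined.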

Thanks for your contribution! I am running PersFormer on the Apollo dataset, and I find that in their code, cam_intrinsics are set to: array([[2.015e+03, 0.000e+00, 9.600e+02], [0.000e+00, 2.015e+03, 5.400e+02], [0.000e+00, 0.000e+00, 1.000e+00]])...
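For readers unfamiliar with that matrix: it is a pinhole intrinsics matrix with fx = fy = 2015 and principal point (960, 540), i.e. the center of a 1920x1080 image. A small sketch of how it projects a camera-frame point to pixels:

```python
# Projecting a 3D point in camera coordinates with the intrinsics quoted above.
import numpy as np

K = np.array([[2.015e3, 0.0,     9.60e2],
              [0.0,     2.015e3, 5.40e2],
              [0.0,     0.0,     1.0]])

p_cam = np.array([1.0, 0.5, 10.0])   # (X, Y, Z) in camera frame, Z forward
uv_h = K @ p_cam                     # homogeneous pixel coordinates
uv = uv_h[:2] / uv_h[2]              # perspective divide
print(uv)  # -> [1161.5, 640.75]
```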

Thanks for your great work! But I have a small question about the visualization. I use: **python tools/video_demo/bev.py ./projects/configs/tracking/petr/f3_q5_800x320.py --result ./work_dir/f3_all/track_no_ext/results_nusc_tracking.json --show-dir ./work_dir/visualizations/** **(sparse4d) (base) wzh@wzh-pc:~/study/github/3Dobjectdetection/PF-Track$ python tools/video_demo/bev.py ./projects/configs/tracking/petr/f3_q5_800x320.py --result ./work_dir/f3_all/track_no_ext/results_nusc_tracking.json...

Thanks for your great work! But I have a small problem with the implementation. When I load the dataset, I find: Traceback (most recent call last): File "tools/train.py", line 260, in main()...

Thank you for your work! But when I train with: bash tools/dist_train.sh configs/openlane/anchor3dlane_mf_iter.py 8 --auto-resume I get an error about: /home/zhaohui1.wang/github/Anchor3DLane/data/OpenLane/prev_data_release/training/segment-7850521592343484282_4576_090_4596_090_with_camera_labels/152090848289187000.pkl even though I have downloaded prev_data_release.tar. Is this due to the Waymo version?

Many thanks to your research group for the work on 3D lane detection! However, I recently ran into some problems while reproducing the code. I wanted to reuse your visualization code directly to visualize the GT of OpenLane v1.2, but I found that the GT results show a very large offset. Could this be because the GT annotations of the latest OpenLane dataset conflict with PersFormer's visualization code? Looking forward to your reply.

Many thanks to your research group for the work on 3D lane detection! However, I recently ran into some problems while reproducing the code. I wanted to reuse your visualization code directly to visualize the GT of OpenLane v1.2, but I found that the GT results show a very large offset. Could this be because the GT annotations of the latest OpenLane dataset conflict with Anchor3DLane's visualization code? Looking forward to your reply.

Hello, thanks for your great work on BEV perception, but I have a small question: I find the points used in the article PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images; however,...

Thanks for your great work! But I have a small question: I want to use the depth map but found it is colorized. How can I convert it to grayscale?
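One caveat worth noting: if the "colorful" image was produced by applying a colormap to normalized depth, a plain RGB-to-gray conversion will not recover the depth values. Assuming the colormap's lookup table is known, the map can instead be inverted by nearest-neighbor lookup into that table. A hedged sketch with a toy LUT (the `invert_colormap` helper and the 4-entry table are illustrative assumptions, not this repo's code):

```python
# Sketch: invert a colormap-encoded depth image by nearest-neighbor lookup
# into the colormap's LUT. Returns per-pixel LUT indices, i.e. depth
# normalized to [0, N-1]; grayscale conversion would destroy this mapping.
import numpy as np

def invert_colormap(rgb_img, lut):
    """rgb_img: (H, W, 3) uint8; lut: (N, 3) uint8 colormap table."""
    flat = rgb_img.reshape(-1, 1, 3).astype(np.int32)
    dist = np.abs(flat - lut[None, :, :].astype(np.int32)).sum(axis=2)
    return dist.argmin(axis=1).reshape(rgb_img.shape[:2])

# toy 4-entry blue->red LUT and a 1x2 "colorized" image using entries 0 and 3
lut = np.array([[0, 0, 255], [0, 255, 255], [255, 255, 0], [255, 0, 0]], np.uint8)
img = np.stack([lut[0], lut[3]])[None, :, :]      # shape (1, 2, 3)
print(invert_colormap(img, lut))  # -> [[0 3]]
```

If the repository also ships the raw depth (e.g. a 16-bit PNG or .npy alongside the colorized preview), reading that file directly is far more reliable than inverting the colormap.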