Wu Yifan
 When I execute the following command, an error occurs: `torchpack dist-run -np 1 python tools/test.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml pretrained/bevfusion-det.pth --eval bbox` May I ask how to solve it? Due to memory...
I'm running the fusion model, and I want to change the lidar backbone from VoxelNet to PointPillars. I modified the configs, but it didn't work. I hope to receive your...
This is the training command: `torchpack dist-run -np 1 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth --load_from pretrained/lidar-only-det.pth` I want to know what **lidar-only-det.pth** is. It's been used as a lidar-pretrained...
Hi, thanks for open-sourcing this brilliant work! I read your paper in detail. You use ConvNeXt as the pretrained model for PointPillars, which is very pioneering. I want to...
Hello, thanks for open-sourcing your great work! I've run into a problem with depth2normal. I use your model metric3d_vit_giant2 with my image as input to obtain depth and normal. I call...