alexander
Hi, have you done the work mentioned above? Now I want to get the lane vector corresponding to the image view.
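Getting lane vectors in the image view usually comes down to projecting the BEV/ego-frame lane polylines through the camera. A minimal sketch, assuming a pinhole camera model and a lane polyline already expressed in the ego frame (the function name, frames, and toy values below are illustrative, not the devkit's API):

```python
import numpy as np

def project_lane_to_image(lane_xyz, cam_from_ego, K, img_w, img_h):
    """Project a lane polyline (N x 3, ego frame, metres) to pixel coords.

    cam_from_ego: 4x4 rigid transform from ego frame to camera frame.
    K:            3x3 pinhole intrinsics.
    Returns an M x 2 array of pixels for points in front of the camera
    and inside the image bounds.
    """
    n = lane_xyz.shape[0]
    pts_h = np.hstack([lane_xyz, np.ones((n, 1))])  # homogeneous coords
    cam = (cam_from_ego @ pts_h.T)[:3]              # 3 x N in camera frame
    cam = cam[:, cam[2] > 1e-3]                     # keep points ahead of camera
    uv = K @ cam
    uv = uv[:2] / uv[2]                             # perspective divide
    ok = (uv[0] >= 0) & (uv[0] < img_w) & (uv[1] >= 0) & (uv[1] < img_h)
    return uv[:, ok].T

# toy example: ego frame coincides with the camera frame, z is forward
K = np.array([[1000., 0., 800.], [0., 1000., 450.], [0., 0., 1.]])
lane = np.array([[0., 0., 5.], [0., 0., 10.], [0., 0., 20.]])
px = project_lane_to_image(lane, np.eye(4), K, 1600, 900)
# points on the optical axis all land on the principal point (800, 450)
```

With real nuScenes data you would build `cam_from_ego` from the calibrated sensor record and clip the polyline against the frustum before drawing.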
Thank you for the reply. (Sent by email on Sat, Sep 17, 2022, in reply to: Re: [lyft/nuscenes-devkit] Converting Map to Vector Forms (#94))
> Hello dear blogger, thank you very much for your great work. I modified the corresponding parameters in the bev_stereo_lss_r50_256x704_128x128_24e_2key.py file according to #146 to get a 256x256 BEV grid...
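For anyone adjusting these configs: the BEV grid size follows from the x/y bounds and the voxel size, so changing one without the other breaks the shape. A toy computation under the usual nuScenes bounds of ±51.2 m (the exact bounds in that config file are an assumption here):

```python
# BEV grid size along one axis = (upper - lower) / step.
def bev_grid_size(lower, upper, step):
    n = (upper - lower) / step
    assert abs(n - round(n)) < 1e-6, "bounds must be divisible by step"
    return int(round(n))

# 0.8 m voxels over [-51.2, 51.2] give a 128 x 128 grid;
# halving the voxel size to 0.4 m gives 256 x 256.
print(bev_grid_size(-51.2, 51.2, 0.8))  # 128
print(bev_grid_size(-51.2, 51.2, 0.4))  # 256
```

So a 128x128 -> 256x256 change means halving the voxel size while keeping the bounds, or doubling the bounds while keeping the voxel size.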
> The BEVFormer code was open-sourced around May of last year. Why did the authors first use IPM to generate BEV features and then detect map elements with a DETR-like head, instead of using BEVFormer to detect the map elements directly? Was it because the results were poor, or was it simply not tried?

Hello, have you tried BEVFormer? I used BEVDepth to extract BEV features, but training went poorly and the loss never decreased.
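For context on the IPM option discussed above: under the flat-ground assumption (z = 0), the mapping between ground-plane coordinates and image pixels is a single 3x3 homography, H = K [r1 r2 t], which is what makes IPM cheap compared with learned view transformers. A self-contained sketch with an illustrative camera (all values are toy assumptions):

```python
import numpy as np

def ipm_homography(K, R, t):
    """3x3 homography mapping ground-plane coords (x, y, 1) to homogeneous
    pixels, under the flat-ground (z = 0) assumption at the heart of IPM."""
    return K @ np.column_stack([R[:, 0], R[:, 1], t])

# Toy setup: camera 1.5 m above the ground with a level optical axis.
# Ground frame: x forward, y left, z up. Camera frame: x right, y down,
# z forward. R's columns are the ground axes expressed in camera coords.
K = np.array([[1000., 0., 800.], [0., 1000., 450.], [0., 0., 1.]])
R = np.array([[0., -1., 0.],
              [0., 0., -1.],
              [1., 0., 0.]])
t = np.array([0., 1.5, 0.])      # ground origin sits 1.5 m below the camera

H = ipm_homography(K, R, t)
p = H @ np.array([10., 0., 1.])  # ground point 10 m straight ahead
u, v = p[:2] / p[2]              # lands below the principal point, as expected
```

Inverting H lets you warp image pixels onto the ground plane to build a BEV feature map; the well-known failure mode is that anything off the ground plane (vehicles, barriers) gets smeared, which is one plausible reason to prefer learned BEV encoders for 3D objects while keeping IPM-style features for flat map elements.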
Hi, I ran into the same problem. Where is the bug?
I also have a question. Using VoxelNet directly can also produce some results, but the detected categories are obviously limited by the training set. Can this model achieve open-set performance like SAM? I don't think it can, so is this experiment still meaningful?
> That doesn't look normal. I have tried; the mAP is around 0.22. The LiDAR depth supervision is necessary.
> @Alexanderisgod Have you solved this? Where is the `depth_gt` supervision? I couldn't find it in the code. Yes, the `depth_gt` supervision is in base_exp.py (loss_depth).
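For readers hunting for that loss: the common pattern in BEVDepth-style code is to discretise the LiDAR-projected depth into bins, one-hot encode it, and apply cross-entropy against the predicted per-pixel depth distribution. A toy numpy sketch of that pattern (function name, uniform binning, and shapes are assumptions, not the repo's exact implementation):

```python
import numpy as np

def depth_bce_loss(pred_logits, gt_depth, d_min, d_max, n_bins):
    """Cross-entropy between a predicted per-pixel depth distribution and
    one-hot labels from projected LiDAR depth.
    Shapes: pred_logits (P, n_bins), gt_depth (P,)."""
    # uniform discretisation of the ground-truth depths into bin indices
    bin_idx = np.floor((gt_depth - d_min) / (d_max - d_min) * n_bins).astype(int)
    valid = (bin_idx >= 0) & (bin_idx < n_bins)        # drop out-of-range returns
    onehot = np.zeros_like(pred_logits)
    onehot[np.arange(len(gt_depth))[valid], bin_idx[valid]] = 1.0
    # softmax over depth bins, then cross-entropy with the one-hot target
    z = pred_logits - pred_logits.max(axis=1, keepdims=True)
    prob = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -(onehot * np.log(prob + 1e-12)).sum() / max(valid.sum(), 1)

# two pixels, four depth bins over [0, 100) m, uninformative predictions
pred = np.zeros((2, 4))
loss = depth_bce_loss(pred, np.array([25., 75.]), 0., 100., 4)
```

With uniform logits over 4 bins the loss is log 4 per pixel, and it goes to zero as the logit of the correct bin dominates, which is the signal the mAP comments above attribute to LiDAR depth supervision.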
On the nuScenes leaderboard, the camera-only NDS is not as good as BEVFormer's. Does that mean it is not feasible to transfer point...
> I'm just not sure whether the structural information uncovered by knowledge transfer in your paper can be learned by the Transformer architecture in BEVFormer, or whether the Transformer can learn structural...