Bo Jiang
We are not aware of such work at present, but you can check future works that cite VAD to see whether they have conducted experiments on other datasets: [link](https://scholar.google.com/scholar?hl=zh-CN&as_sdt=2005&sciodt=0,5&cites=2354835039696083600&scipsc=)
VAD has three MLP-based decoder heads corresponding to different driving commands to predict planning trajectories. In the training phase, the driving command is used as a mask to train the...
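To make the idea concrete, here is a minimal sketch (with hypothetical names and dimensions, not VAD's actual implementation) of command-conditioned MLP heads where the driving command selects which head's trajectory is used:

```python
import torch
import torch.nn as nn

class MultiCmdPlanningHead(nn.Module):
    """Sketch: one MLP decoder per driving command (straight/left/right);
    the integer command index acts as a mask selecting the active head."""

    def __init__(self, embed_dim=256, fut_steps=6, num_cmd=3):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(embed_dim, embed_dim),
                nn.ReLU(inplace=True),
                nn.Linear(embed_dim, fut_steps * 2),  # (x, y) per future step
            )
            for _ in range(num_cmd)
        )
        self.fut_steps = fut_steps

    def forward(self, ego_feat, cmd):
        # ego_feat: (B, embed_dim); cmd: (B,) long tensor of command indices.
        all_traj = torch.stack(
            [head(ego_feat) for head in self.heads], dim=1
        )  # (B, num_cmd, fut_steps * 2)
        idx = cmd.view(-1, 1, 1).expand(-1, 1, all_traj.size(-1))
        traj = all_traj.gather(1, idx).squeeze(1)  # keep only the commanded head
        return traj.view(-1, self.fut_steps, 2)

head = MultiCmdPlanningHead()
traj = head(torch.randn(4, 256), torch.tensor([0, 1, 2, 0]))
print(traj.shape)  # torch.Size([4, 6, 2])
```

During training, the same masking means only the head matching the ground-truth command receives a planning loss; the other heads are untouched for that sample.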
In fact, I think the navigation information ('go straight,' 'turn left,' 'turn right') used by VAD (as well as previous methods) is quite simplified. Navigation information at this level should...
> Navigation information at this level should be easily obtainable by the navigation system

VAD is using the accurate ego future pose (fine-grained waypoints), which is why VAD...
VAD under the CARLA environment is still under development. It is not ready to be released yet, but we will make it open source as soon as possible.
Could you describe the model and the config file you use, as well as the evaluation command, so that I can help you find the problem?
Hi Burhan, sorry for the late reply.
1. We trained VAD on CARLA and then reported the evaluation results.
2. The closed-loop scripts will be open-sourced along with VADv2 as...
Please refer to this [issue](https://github.com/hustvl/VAD/issues/33).
We haven't encountered this error when performing visualization. It seems that the shapes of the two inputs (`self.center` and `fut_coord`) are not correct in this line:

```
fut_coord = np.concatenate((self.center[np.newaxis, :2],...
```
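As a quick sanity check (a minimal sketch with made-up shapes, not the repository's actual values), printing the shapes right before the call is an easy way to locate the mismatch; `np.concatenate` along axis 0 requires every other axis to match:

```
import numpy as np

# Hypothetical data: self.center holds the agent's (x, y, ...) state,
# fut_coord holds T future (x, y) waypoints.
center = np.array([1.0, 2.0, 0.5])   # shape (3,)
fut_coord = np.random.rand(6, 2)     # shape (T, 2) with T = 6

# center[np.newaxis, :2] has shape (1, 2), so fut_coord must be (T, 2).
print(center[np.newaxis, :2].shape, fut_coord.shape)
traj = np.concatenate((center[np.newaxis, :2], fut_coord), axis=0)  # (T+1, 2)
```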
VAD **does not** take ego status information as input by default, but we provide results that use ego status information for ablation.