Chenhongyi Yang
I think it's a typo and should be 0.5, which is the ignore threshold.
Hi, this problem is caused by the APEX library. We have recently updated the whole repository, and you do not need APEX any more.
Hi, please try [train_visdrone.py](https://github.com/ChenhongyiYang/QueryDet-PyTorch/blob/main/train_visdrone.py) for VisDrone experiments.
Hi, we have recently updated a new VisDrone training config (gradient clip added), which can avoid NaN now.
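For reference, gradient clipping in a Detectron2-style config is controlled by the `SOLVER.CLIP_GRADIENTS` fields; a rough sketch is below (the field names follow Detectron2's defaults, but the exact type and value used in the updated VisDrone config may differ):

```python
# Hypothetical config fragment (assumes `cfg` is a Detectron2 CfgNode);
# the actual values in the updated VisDrone config may differ.
cfg.SOLVER.CLIP_GRADIENTS.ENABLED = True
cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE = "norm"   # clip by total gradient norm
cfg.SOLVER.CLIP_GRADIENTS.CLIP_VALUE = 1.0     # illustrative threshold
```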
Hi, we have recently updated the whole repository to support newer versions of PyTorch, Detectron2, and Spconv. Now, you can set up your environment by running the sample setup script...
Hi, is this the baseline RetinaNet model? We used the default setting for RetinaNet provided by Detectron2 and the final AP should be around 37.3. We recommend you first try...
Hi, thank you for your interest in our paper. We do not really need a spatial attention mask for regression because the regression loss is only applied to foreground areas....
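The point that the regression loss only sees foreground locations can be sketched as a simple mask over per-anchor losses (a simplified illustration; the function name and inputs here are hypothetical, and the real code operates on tensors rather than Python lists):

```python
def masked_reg_loss(per_anchor_loss, fg_mask):
    """Average the regression loss over foreground anchors only.

    per_anchor_loss: list of per-anchor regression loss values
    fg_mask: list of booleans, True where the anchor is foreground
    """
    # background anchors contribute nothing to the regression loss
    selected = [loss for loss, fg in zip(per_anchor_loss, fg_mask) if fg]
    # normalize by the number of foreground anchors (guard against zero)
    return sum(selected) / max(len(selected), 1)
```

Because background locations never enter this sum, there is nothing for a spatial attention mask to suppress on the regression branch.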
Hi. Yes, for each GT, the TopK is computed from all feature points across all feature levels, although in most cases those top-k points will be generated by the same feature...
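The idea of taking the top-k jointly across levels can be sketched in plain Python (a simplified illustration; the function name and list-based inputs are hypothetical, and the actual implementation would use `torch.topk` on concatenated per-level score tensors):

```python
import heapq

def topk_across_levels(scores_per_level, k):
    """Select the global top-k feature points for one GT across all levels.

    scores_per_level: list of per-level score lists
    Returns (score, level, index_within_level) triples, best first.
    """
    # flatten every feature point into a (score, level, local_index) triple
    flat = [(score, level, i)
            for level, level_scores in enumerate(scores_per_level)
            for i, score in enumerate(level_scores)]
    # one top-k over the flattened pool, not one top-k per level
    return heapq.nlargest(k, flat)
```

Since the selection is over the pooled candidates, nothing forces the k points onto one level; they usually end up on the same level simply because that level's scores dominate for a given GT.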
Hi, may I ask if you are using the latest code?
Hey, you can set your dataset path following https://github.com/ChenhongyiYang/QueryDet-PyTorch/blob/afc2a9f5aa89a6b4dd2bb6f5d2da9bab1dc46f6b/configs/custom_config.py#L73. For COCO, please refer to the Detectron2 documentation for dataset setup.