
Exploiting unlabeled data with vision and language models for object detection, ECCV 2022

15 VL-PLM issues

If I set num-gpu to 4 and try to train Mask R-CNN with Large Scale Jitter, I get `_pickle.UnpicklingError: pickle data was truncated`. How can...
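For context on the error above: `pickle data was truncated` means a serialized byte stream ended before the whole object arrived, which in multi-GPU training usually points at a data-loader worker or inter-process transfer dying mid-stream. A minimal stdlib-only reproduction of the exception itself (nothing here is VL-PLM-specific):

```python
import pickle

# Serialize an object, then cut the byte stream short to simulate
# a transfer that died partway through.
payload = pickle.dumps(list(range(1000)))
truncated = payload[: len(payload) // 2]

try:
    pickle.loads(truncated)
except (pickle.UnpicklingError, EOFError) as exc:
    # Depending on where the stream is cut, CPython raises either
    # UnpicklingError ("pickle data was truncated") or EOFError.
    print(type(exc).__name__, exc)
```

Seeing this during training therefore suggests checking worker memory limits and shared-memory size rather than the pickled object itself.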

Hi, I often see 'Giraffe' or other wild animals appear at around 70% confidence when running inference on images. I would just like to confirm that I am using the correct class set (COCO_CATEGORIES...

Hi, thanks for providing such nice work! I have a question about the categories you used for PL generation. It seems that you used only the novel COCO categories...

**Issue Description:** I encountered an issue while trying to download the pre-trained weights—the download link seems to be broken. **Request:** Could you please provide a new link for downloading the...

Through experiments I found that CLIP does not discriminate well on cropped images from an object detector; it pays more attention to coarse-grained content. For example, given a crop of a person eating a hot dog, it assigns a higher probability to 'hot dog' even though the ground-truth label is 'person'. Have you run into this problem? Is there any way to address it?

Where are the code and model for the semi-supervised experiment?

Hello! I have read your code and would like to ask two questions: 1. How do you save the weights of the best-performing round? The code seems to save only the last round's weights, but earlier checkpoints sometimes test better than later ones. 2. Why is the image-fusion part placed in the test code instead of in the training code as an end-to-end output? Is there a reason for this? Looking forward to your reply.
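On the first question, a common framework-agnostic pattern is to track the best evaluation score as training proceeds and snapshot that state, rather than keeping only the final round. A minimal sketch, where `evaluate` and `get_state` are hypothetical stand-ins for a real evaluation loop and `model.state_dict()`:

```python
import copy

def train_keep_best(epochs, evaluate, get_state):
    """Run `epochs` rounds and keep the state of the best-scoring one.

    `evaluate(epoch)` returns a validation metric for that round;
    `get_state(epoch)` returns the current weights. Only the
    bookkeeping pattern is the point here.
    """
    best_score, best_state = float("-inf"), None
    for epoch in range(epochs):
        score = evaluate(epoch)
        if score > best_score:  # strictly better: ties keep the earlier round
            best_score = score
            best_state = copy.deepcopy(get_state(epoch))
    return best_score, best_state
```

In detectron2-based code the same idea is usually wired in as an evaluation hook that writes a `model_best.pth` alongside the periodic checkpoints.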

Your work is outstanding. How can I train on my own dataset, which includes categories outside the 80 COCO categories?

During training:

```
Traceback (most recent call last):
  File "train_net.py", line 246, in <module>
    args=(args,),
  File "/media/cheng/dataset4/annaconda3/envs/clip/lib/python3.7/site-packages/detectron2-RegionCLIP-py3.7-linux-x86_64.egg/detectron2/engine/launch.py", line 82, in launch
    main_func(*args)
  File "train_net.py", line 203, in main
    model = build_model(cfg)
  File...
```