How to use the model for inference on images and videos?
I have just downloaded the model htc++_beit_adapter_large_fpn_3x_coco.pth and its config from this GitHub repo, but I cannot load the model with this code:

```python
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py'
checkpoint_file = 'checkpoint/htc++_beit_adapter_large_fpn_3x_coco.pth'
model = init_detector(config_file, checkpoint_file, device='cuda:0')

img = 'demo.jpg'
result = inference_detector(model, img)
```

Please help me.
Hello, I just updated the image demo and the video demo; you can use them by following the instructions below.
Prepare trained models
Before running inference with a trained model, you should first download the pre-trained backbone, for example BEiT-L. Alternatively, you can edit the config file and set `pretrained=None` so that you don't have to download the pre-trained backbone.
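If you prefer the config-editing route, a minimal override config could look like this (a sketch: the file name is hypothetical, and the `_base_` path and top-level `pretrained` key are assumptions based on this repo's config style):

```python
# htc++_no_pretrained_backbone.py -- hypothetical file name.
# Inherit the HTC++ config and drop the backbone's pre-trained weights,
# so the detector can be built without downloading BEiT-L first.
_base_ = './htc++_beit_adapter_large_fpn_3x_coco.py'
model = dict(pretrained=None)
```

The trained checkpoint you load afterwards already contains the backbone weights, so nothing should be lost at inference time.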
After that, you should download the trained checkpoint, for example ViT-Adapter-L-HTC++. Here, I put this file in a folder named checkpoint/.
Image Demo
You can run image_demo.py like this:

```shell
CUDA_VISIBLE_DEVICES=0 python image_demo.py \
  data/coco/val2017/000000226984.jpg \
  configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py \
  checkpoint/htc++_beit_adapter_large_fpn_3x_coco.pth.tar
```
The result will be saved in demo/:

Video Demo
You can run video_demo.py like this:

```shell
CUDA_VISIBLE_DEVICES=0 python video_demo.py \
  ./demo.mp4 \
  configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py \
  checkpoint/htc++_beit_adapter_large_fpn_3x_coco.pth.tar \
  --out demo/demo.mp4
```
Here we take the demo.mp4 provided by mmdetection as an example.
The result will be saved in demo/: link
Thank you for helping me. I ran your code, but I have three questions:
- I use an NVIDIA 2080 Ti 11GB for inference, but the program raises CUDA out of memory. Can I limit memory usage? I don't need the inference to be fast.
- I have two NVIDIA 2080 Ti 11GB cards; can the program run inference on multiple GPUs?
- What do I need to edit to run inference on the CPU?

Please help me, thank you very much.
Is it possible to have a Colaboratory notebook for this as well, similar to this?
Hey, I just made one similar to the previous notebook.
TODO
- change the dataset downloaded from ADE20K to COCO. If someone could help me identify the correct link to download the images from, that would be great.
- general testing and documentation
Hello! I have run this detection notebook, but I got this error while downloading the pretrained model:

```
CalledProcessError: Command 'cd /content/ViT-Adapter/detection
mkdir pretrained
cd pretrained
wget https://conversationhub.blob.core.windows.net/beit-share-public/beit/beit_large_patch16_224_pt22k_ft22k.pth' returned non-zero exit status 8.
```

It seems that I cannot reach this link. Could you help me solve this, please?
Maybe the authors can help you with this; the link was working at the time of notebook creation. Perhaps the weights were moved or the link needs to be refreshed.
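As a workaround, if you can obtain the BEiT weights from another mirror, one option (a sketch: the file name is hypothetical, and the `pretrained` key and local path are assumptions based on this repo's config style) is to point the config at the local file instead of the unreachable URL:

```python
# htc++_local_backbone.py -- hypothetical file name.
# Inherit the detection config and load the backbone weights from a
# manually downloaded file instead of the blob-storage URL.
_base_ = './htc++_beit_adapter_large_fpn_3x_coco.py'
model = dict(pretrained='pretrained/beit_large_patch16_224_pt22k_ft22k.pth')
```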
Hi, I tried to download the pre-trained backbone you mentioned here, BEiT-L, but it seems the link is invalid now. Could you please provide a new link? Thanks a lot!
You can consider searching for the download link at https://github.com/microsoft/unilm/tree/master/beit. However, note that the link provided there cannot be fetched with wget; you should open the link in a browser to download the file.
