
MobileSAM-v2 occupies ~18 GB of GPU memory

pvtoan opened this issue on Feb 27, 2024 · 0 comments

Hi,

Thank you for your contributions!

I ran "Inference.py", and the pre-trained models from MobileSAM-v2, once loaded onto my GPU, occupied around 18 GB.

To investigate, I checked all the pre-trained models loaded onto my GPU: Prompt_guided_Mask_Decoder.pt (16.3 MB), l2.pt (246 MB), and ObjectAwareModel.pt (141 MB). Together they total only about 400 MB on disk, far short of 18 GB.
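
For reference, this is roughly how I measured the per-checkpoint GPU footprint (a minimal sketch; it assumes each .pt file is an ordinary PyTorch checkpoint that torch.load can place directly on the GPU):

```python
import torch

# Checkpoints listed above; the paths are assumptions about the local layout.
ckpts = [
    "Prompt_guided_Mask_Decoder.pt",
    "l2.pt",
    "ObjectAwareModel.pt",
]

for path in ckpts:
    before = torch.cuda.memory_allocated()
    state = torch.load(path, map_location="cuda")  # tensors land on the GPU
    after = torch.cuda.memory_allocated()
    print(f"{path}: {(after - before) / 1024**2:.1f} MiB allocated")
```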

  1. Did I miss any other pre-trained models that should also be loaded?

  2. If there are no pre-trained models beyond the three above, could you please explain why inference consumes so much GPU VRAM?
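
Regarding question 2, I suspect the gap might come from inference-time activations rather than the weights themselves, so I plan to compare the peak allocation during a forward pass against the post-load baseline. A sketch, assuming PyTorch's CUDA allocator; run_inference is a hypothetical stand-in for whatever Inference.py executes:

```python
import torch

def run_inference():
    # Placeholder for the forward pass in Inference.py; a large dummy
    # tensor stands in for the real activations.
    x = torch.randn(1024, 1024, 64, device="cuda")
    return x.relu().sum()

torch.cuda.reset_peak_memory_stats()
run_inference()
print(f"peak allocated: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")
print(f"reserved by caching allocator: {torch.cuda.memory_reserved() / 1024**3:.2f} GiB")
```

Note that nvidia-smi also counts the CUDA context and the allocator's cached blocks, so the number it reports can sit well above what the weights alone need.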

pvtoan · Feb 27, 2024, 09:02