MobileSAM-v2 occupied ~18GB of GPU
Hi,
Thank you for your contributions!
I ran "Inference.py", and the pre-trained models from MobileSAM-v2, once loaded onto my GPU, occupied around 18 GB.
So I checked all the pre-trained models loaded onto my GPU: Prompt_guided_Mask_Decoder.pt (16.3 MB), l2.pt (246 MB), and ObjectAwareModel.pt (141 MB).
-
Did I miss other pre-trained models that should be loaded too?
-
If there are no pre-trained models other than the three above, could you please explain why inference takes so much GPU VRAM?
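For context, here is a minimal PyTorch sketch (using a stand-in model, not the actual MobileSAM-v2 checkpoints) of how I estimated the raw weight footprint: summing the storage of all parameters and buffers. The three checkpoints together are only ~400 MB on disk, so the weights alone cannot explain 18 GB; presumably the rest comes from intermediate activations and memory reserved by the CUDA caching allocator.

```python
import torch
import torch.nn as nn

def param_bytes(module: nn.Module) -> int:
    """Sum the raw storage of all parameters and buffers (weights only).

    This deliberately excludes activations and allocator overhead, which
    are what usually dominate VRAM usage during inference.
    """
    total = sum(p.numel() * p.element_size() for p in module.parameters())
    total += sum(b.numel() * b.element_size() for b in module.buffers())
    return total

# Stand-in model; with the real checkpoints you would build the modules
# (e.g. via torch.load on the .pt files) and inspect them the same way.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024))
print(f"weights: {param_bytes(model) / 1024**2:.1f} MiB")  # prints "weights: 8.0 MiB"
```

On a CUDA machine, comparing this number against `torch.cuda.memory_allocated()` before and after a forward pass would show how much of the 18 GB is activations rather than weights.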