
having trouble when doing batch inference

Open WilliamHoo opened this issue 4 years ago • 5 comments

Hi CLIP authors,

So I was trying to run CLIP using the code below, but when I do this in batch, I run into trouble with the model.encode_text method, which raises the error shown in the attached screenshot (Screen Shot 2021-11-07 at 12 49 07 PM).

The only difference from your sample code below is that my text has a shape of (batch_size, num_class, n_ctx). My code runs perfectly if my input is of shape (num_class, n_ctx), but I was trying to speed up the process by doing it in batch. Any advice or help would be really appreciated!

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)  # prints: [[0.9927937  0.00421068 0.00299572]]

WilliamHoo avatar Nov 07 '21 20:11 WilliamHoo

Can you try reshaping the array to (batch_size * num_class, n_ctx) and feeding it to the model?
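
A minimal sketch of that idea, assuming text is the (batch_size, num_class, n_ctx) token tensor from the question:

text_flat = text.reshape(-1, text.shape[-1])      # (batch_size * num_class, n_ctx)
with torch.no_grad():
    text_features = model.encode_text(text_flat)  # (batch_size * num_class, embed_dim)
# restore the per-sample class dimension
text_features = text_features.reshape(text.shape[0], text.shape[1], -1)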

jongwook avatar Nov 16 '21 01:11 jongwook

For batch inference, you can use https://github.com/jina-ai/clip-as-service/

hanxiao avatar Apr 10 '22 19:04 hanxiao

Did you find a solution?

Angtrim avatar Feb 28 '23 09:02 Angtrim

The solution that works best is to use torch.stack for batch inference.

# imgs is a list of PIL images; tokenized_text comes from clip.tokenize(...).to(device)
imgs = [preprocess(img) for img in imgs]

with torch.no_grad():
    logits_per_image, logits_per_text = model(
        torch.stack(imgs).to(device), tokenized_text
    )
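
As in the snippet from the original question, you can then take a softmax over the batched logits to get per-image probabilities:

probs = logits_per_image.softmax(dim=-1).cpu().numpy()  # shape (batch_size, num_texts)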

ahmadmustafaanis avatar Apr 19 '23 10:04 ahmadmustafaanis