Having trouble doing batch inference
Hi CLIP authors,
I was trying to run CLIP using the code below, but when I do this in batch I run into trouble with the model.encode_text method, which raises the error message shown below.

The only difference from your sample code below is that my text has a shape of (batch_size, num_class, n_ctx). My code runs perfectly when the input has shape (num_class, n_ctx), but I was trying to speed up the process by doing it in batch. Any advice or help would be really appreciated!
import torch
import clip
from PIL import Image
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)
with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()
print("Label probs:", probs) # prints: [[0.9927937 0.00421068 0.00299572]]
Can you try reshaping the array to (batch_size * num_class, n_ctx) and feeding it to the model?
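Something like this (a rough sketch; the batch_size/num_class/n_ctx values are example sizes, not anything specific to your setup):

import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

batch_size, num_class, n_ctx = 8, 3, 77                              # example sizes
class_text = clip.tokenize(["a diagram", "a dog", "a cat"])          # (num_class, n_ctx)
text = class_text.unsqueeze(0).repeat(batch_size, 1, 1).to(device)   # (batch_size, num_class, n_ctx)

with torch.no_grad():
    flat = text.reshape(batch_size * num_class, n_ctx)   # collapse the batch and class dims
    feats = model.encode_text(flat)                      # (batch_size * num_class, feature_dim)
    feats = feats.reshape(batch_size, num_class, -1)     # restore per-image, per-class layout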
For batch inference, you can use https://github.com/jina-ai/clip-as-service/
Did you find a solution?
The best working solution is to use torch.stack for batch inference.
# `imgs` is a list of PIL images; `tokenized_text` is the output of clip.tokenize(...)
imgs = [preprocess(img) for img in imgs]
logits_per_image, logits_per_text = model(
    torch.stack(imgs).to(device), tokenized_text
)
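For reference, an end-to-end sketch of that approach (the file names and prompt strings here are just placeholders):

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

imgs = [Image.open(p) for p in ["img0.png", "img1.png"]]        # placeholder file names
tokenized_text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    image_batch = torch.stack([preprocess(img) for img in imgs]).to(device)  # (B, 3, 224, 224)
    logits_per_image, logits_per_text = model(image_batch, tokenized_text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()                   # (B, num_class)

print("Label probs:", probs)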