Merlin von Trott
I think you just need to pass the model as --model "anthropic/claude-3-haiku". It works for me on Ubuntu.
It also works with OpenRouter.
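In case it helps, here is roughly the same call through litellm's Python API; this is a sketch, assuming an OpenRouter API key is set in the environment, and the "openrouter/" prefix is how litellm routes requests to OpenRouter:

```python
# Minimal sketch: calling the model above via litellm instead of the CLI.
# Assumes OPENROUTER_API_KEY is set; replace the placeholder with a real key.
import os
from litellm import completion

os.environ["OPENROUTER_API_KEY"] = "sk-or-..."  # placeholder

response = completion(
    model="openrouter/anthropic/claude-3-haiku",  # "openrouter/" prefix routes via OpenRouter
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```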
Here is OpenRouter's documentation on multimodal models: https://openrouter.ai/docs#images-_-multimodal-requests
Sorry, my mistake... litellm already implements this. I should have just omitted the api_base: https://openrouter.ai/api/v1/chat/completions
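For anyone landing here, a minimal sketch of such a multimodal request through litellm, with no api_base needed; the model id and image URL below are placeholders, and the message shape follows the OpenAI-style format from the docs linked above:

```python
# Sketch of a multimodal (vision) request routed through OpenRouter.
# Assumes OPENROUTER_API_KEY is set in the environment.
from litellm import completion

response = completion(
    model="openrouter/openai/gpt-4o",  # any multimodal model on OpenRouter
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder
        ],
    }],
)
print(response.choices[0].message.content)
```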
OpenRouter uses LiteLLM to serve its models. If this were implemented, it would give us the ability to use all kinds of models from local and cloud providers (OpenAI, Anthropic, Together, ...
The script does not add new models, right? It only updates existing ones? It would be really cool if new models were added automatically. One of the great...
Cool, thanks for the change! :) Could you add "input_cost_per_image" (model["pricing"]["image"]) and "supports_vision" (the model supports it if ["architecture"]["modality"] == "multimodal")? I am mostly using the multimodal models through...
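For reference, a rough sketch of the requested mapping, assuming the update script pulls OpenRouter's public model list; the output keys besides the two requested fields, and the exact response shape, are assumptions for illustration:

```python
# Sketch: map OpenRouter's /api/v1/models response into model-map entries.
import requests

models = requests.get("https://openrouter.ai/api/v1/models").json()["data"]

entries = {}
for model in models:
    pricing = model["pricing"]  # values are assumed to be strings, hence float()
    entry = {
        "input_cost_per_token": float(pricing["prompt"]),
        "output_cost_per_token": float(pricing["completion"]),
    }
    # The two additions requested above:
    if pricing.get("image") not in (None, "0"):
        entry["input_cost_per_image"] = float(pricing["image"])
    entry["supports_vision"] = model["architecture"]["modality"] == "multimodal"
    entries["openrouter/" + model["id"]] = entry
```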
> What if openrouter.ai has models that litellm does not support yet? Should I still add the missing models to the json file?

Should be fine, right? I don't think...
Would you mind just adding the input_cost_per_image and supports_vision? I think then we can ping someone to have a look at it. I really want to use some of...
Thanks so much :) Looking forward to the new models on OpenRouter, in particular Gemini 1.5 Flash and GPT-4o.