Merlin von Trott


I think you just need to pass the model as `--model "anthropic/claude-3-haiku"`. It works for me on Ubuntu. ![Screenshot from 2024-05-10 07-39-58](https://github.com/OpenInterpreter/open-interpreter/assets/33913822/e155346d-81cc-432c-b2db-ca391732f1d9)
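For reference, the invocation I used is roughly this (a sketch, assuming the `interpreter` CLI from open-interpreter is installed and `ANTHROPIC_API_KEY` is exported in your shell):

```shell
# Run open-interpreter against Claude 3 Haiku using LiteLLM's
# provider/model naming. Assumes ANTHROPIC_API_KEY is set.
interpreter --model "anthropic/claude-3-haiku"
```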

It also works with OpenRouter: ![Screenshot from 2024-05-10 07-43-11](https://github.com/OpenInterpreter/open-interpreter/assets/33913822/363c0b57-be08-49a5-a2af-485e45ce8965)

Here is the OpenRouter documentation on multimodal models: https://openrouter.ai/docs#images-_-multimodal-requests
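The request shape those docs describe can be sketched as a plain chat message with mixed content parts (the image URL here is only a placeholder):

```python
# Sketch of an OpenAI-style multimodal chat message, as used by the
# OpenRouter docs linked above: one text part plus one image part.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/cat.png"},
        },
    ],
}
```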

Sorry, my mistake ... LiteLLM already implements this. I should have just omitted the api_base: https://openrouter.ai/api/v1/chat/completions
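In other words, a minimal sketch of the call, assuming `litellm` is installed and `OPENROUTER_API_KEY` is exported — with the `openrouter/` model prefix, LiteLLM resolves the endpoint itself, so no `api_base` argument is needed:

```python
# Minimal sketch: LiteLLM knows the OpenRouter base URL from the
# "openrouter/" prefix, so api_base can be omitted entirely.
from litellm import completion

response = completion(
    model="openrouter/anthropic/claude-3-haiku",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```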

OpenRouter uses LiteLLM to serve the models. If this were implemented, it would give us the ability to use all kinds of models from local and cloud providers (OpenAI, Anthropic, Together, ...

The script does not add new models, right? It only updates existing ones? It would be really cool if new models were added automatically. One of the great...

Cool, thanks for the change! :) Could you add `"input_cost_per_image"` (`model["pricing"]["image"]`) and `"supports_vision"` (the model supports it if `["architecture"]["modality"] == "multimodal"`)? I am mostly using the multimodal models through...
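What I mean, as a sketch — the helper name and the exact shape of the input record (mirroring OpenRouter's `/api/v1/models` response) are my assumptions:

```python
# Hypothetical mapping from an OpenRouter model record to the two
# LiteLLM model-cost fields requested above.
def to_litellm_fields(model: dict) -> dict:
    return {
        # OpenRouter reports per-image pricing as a string.
        "input_cost_per_image": float(model["pricing"]["image"]),
        # Vision support is implied by a multimodal architecture.
        "supports_vision": model["architecture"]["modality"] == "multimodal",
    }

sample = {
    "pricing": {"image": "0.0048"},
    "architecture": {"modality": "multimodal"},
}
fields = to_litellm_fields(sample)
```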

> What if openrouter.ai has models that litellm does not support yet? Should I still add the missing models to the json file? Should be fine, right? I don't think...

Would you mind just adding `input_cost_per_image` and `supports_vision`? I think then we can ping someone to have a look at it. I really want to use some of...

Thanks so much :) Looking forward to the new models on OpenRouter, in particular Gemini 1.5 Flash and GPT-4o.