Add support for Ollama, Palm, Claude-2, Cohere, Replicate Llama2, CodeLlama, Hugging Face (100+ LLMs) using LiteLLM
```python
import os
from litellm import completion

# set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# OpenAI call
response = completion(model="gpt-3.5-turbo", messages=messages)

# Cohere call
response = completion(model="command-nightly", messages=messages)

# Anthropic call
response = completion(model="claude-instant-1", messages=messages)
```
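LiteLLM dispatches each call to a backend based on the model string alone. A minimal sketch of that dispatch idea (the `resolve_provider` helper and its tables are hypothetical illustrations, not LiteLLM's actual internals):

```python
# Hypothetical sketch of prefix-based provider routing, similar in
# spirit to how LiteLLM maps a model name to a backend. The tables
# and helper below are illustrative, not LiteLLM's real internals.
KNOWN_PREFIXES = {
    "ollama/": "ollama",
    "huggingface/": "huggingface",
    "replicate/": "replicate",
}

# Unprefixed model names that map directly to a known provider.
DEFAULT_MODELS = {
    "gpt-3.5-turbo": "openai",
    "command-nightly": "cohere",
    "claude-instant-1": "anthropic",
}

def resolve_provider(model: str) -> str:
    """Return the backend provider for a model string."""
    for prefix, provider in KNOWN_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    return DEFAULT_MODELS.get(model, "unknown")
```

This is why the three `completion(...)` calls above need no per-provider code: the model string carries the routing information.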
Addressing: https://github.com/vocodedev/vocode-python/issues/375 https://github.com/vocodedev/vocode-python/issues/8
Can I get a review on this PR @ajar98 @Kian1354 ?
Do you want one file called llm.py where all the litellm calls happen?
Hello. Maybe it's better to create a separate LiteLLM agent with its own config rather than overwriting the native OpenAI agent?
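A standalone agent could look roughly like this. The class names (`LiteLLMAgentConfig`, `LiteLLMAgent`) and fields are hypothetical, modeled loosely on the agent-config pattern in this repo, not actual vocode APIs:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a separate LiteLLM agent; class names and
# fields are illustrative assumptions, not vocode's actual API.
@dataclass
class LiteLLMAgentConfig:
    model: str                       # any LiteLLM-supported model string
    prompt_preamble: str = ""        # system prompt prepended to each call
    temperature: float = 0.7
    max_tokens: int = 256
    api_base: Optional[str] = None   # override for self-hosted backends

class LiteLLMAgent:
    def __init__(self, config: LiteLLMAgentConfig):
        self.config = config

    def build_messages(self, user_text: str) -> list:
        """Assemble the messages list to pass to litellm.completion()."""
        messages = []
        if self.config.prompt_preamble:
            messages.append({"role": "system", "content": self.config.prompt_preamble})
        messages.append({"role": "user", "content": user_text})
        return messages
```

Keeping this separate from the OpenAI agent would let the two evolve independently, which is the modularity argument raised below.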
This PR has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
hi @ishaan-jaff
I think creating a separate LiteLLM agent would be ideal to maintain modularity and ease of integration with different models. If needed, I'm willing to work on this based on the current PR, ensuring a smooth and efficient addition. Thanks for considering this approach, and I appreciate all the efforts in enhancing our project's capabilities.
This PR has been automatically closed due to inactivity. Thank you for your contributions.
Has anyone implemented this with LlamaCpp? If anyone has made progress or has code to share, it would be greatly appreciated.
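One possible route (an untested sketch, not a confirmed setup): llama-cpp-python ships an OpenAI-compatible server, and LiteLLM can target any OpenAI-compatible endpoint via the `openai/` model prefix plus `api_base`. All values below are placeholder assumptions:

```python
# Untested sketch: configuration for pointing LiteLLM at a local
# llama.cpp server. llama-cpp-python can serve a model with:
#   python -m llama_cpp.server --model <path-to-gguf-file>
# The model name, port, and path here are placeholders.
local_llama_kwargs = {
    "model": "openai/local-llama",           # "openai/" prefix = generic OpenAI-compatible endpoint
    "api_base": "http://localhost:8000/v1",  # llama_cpp.server's default address
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
}
# Then, with the server running:
#   from litellm import completion
#   response = completion(**local_llama_kwargs)
```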