Add support for Groq LLM inference
All of the OpenAI models are paid and Ollama can be slow, so please consider adding an integration for the Groq API to run Mixtral.
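For reference, Groq exposes an OpenAI-compatible endpoint, so the existing OpenAI code path could probably be reused by overriding the base URL. A minimal sketch, assuming the `openai` Python package is installed and `mixtral-8x7b-32768` is still Groq's Mixtral model ID:

```python
import os
from openai import OpenAI

# Groq's API is OpenAI-compatible; only the base URL and the key differ.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)

response = client.chat.completions.create(
    model="mixtral-8x7b-32768",  # Groq's Mixtral identifier (may change over time)
    messages=[{"role": "user", "content": "Hello from Groq!"}],
)
print(response.choices[0].message.content)
```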
Have you tried setting `GROQ_API_KEY` locally and leaving it out of the arguments?
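If the project uses the official `groq` Python SDK, the client falls back to the `GROQ_API_KEY` environment variable when no key is passed explicitly. A rough sketch of what that would look like:

```python
from groq import Groq

# Assumes the key was exported in the shell beforehand:
#   export GROQ_API_KEY="gsk_..."
client = Groq()  # no api_key argument; read from the environment

chat = client.chat.completions.create(
    model="mixtral-8x7b-32768",
    messages=[{"role": "user", "content": "ping"}],
)
print(chat.choices[0].message.content)
```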