[FEATURE] Perplexity and Groq LLM support
Please add support for two new LLM providers:
- Perplexity: https://docs.perplexity.ai/reference/post_chat_completions
- Groq: https://console.groq.com/docs/quickstart
Groq PR
Any contribution for Perplexity is welcome!
In the meantime, you can try Perplexity with an OpenAI Custom module:
- Add a ChatOpenAI Custom module.
- Set up the credential with your Perplexity API key.
- Enter a model name, e.g. 'sonar-medium-online' (see https://docs.perplexity.ai/docs/rate-limits for available models).
- Under Additional Parameters, set the base path to https://api.perplexity.ai.
- A Frequency Penalty is also required, e.g. 0.9.

Not all Agents/Chains that support OpenAI appear to work fully, but a simple LLM Chain is functional.
It also works (somewhat) with the new models released on 1 May '24: llama-3-sonar-large-32k-online and llama-3-sonar-small-32k-online.
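For anyone who wants to sanity-check the workaround outside Flowise, it boils down to pointing an OpenAI-compatible client at Perplexity's base URL. Here is a minimal Python sketch, assuming the `openai` package; the model name and the 0.9 frequency penalty mirror the steps above, and the API key is a placeholder:

```python
# Sketch: calling Perplexity through an OpenAI-compatible client.
# Assumption: the `openai` Python package is installed and
# "PPLX_API_KEY" is replaced with a real Perplexity API key.

def build_request(model: str = "sonar-medium-online") -> dict:
    """Build the chat-completion payload described in the steps above."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": "Hello"}],
        # Perplexity appears to require a non-default frequency penalty.
        "frequency_penalty": 0.9,
    }

# Uncomment to make the actual call:
# from openai import OpenAI
# client = OpenAI(api_key="PPLX_API_KEY",
#                 base_url="https://api.perplexity.ai")
# response = client.chat.completions.create(**build_request())
# print(response.choices[0].message.content)
```

This is essentially what the ChatOpenAI Custom module does under the hood: same payload, different base path.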
Is this issue open? Can i take it?
Did you take it? Can I take it? @cooldude6000 @HenryHengZJ
I tried accomplishing this in Flowise, but it doesn't work.
> You can try Perplexity now with an OpenAI Custom module:
> - Add a ChatOpenAI Custom module.
> - Set up the credential with your Perplexity API key.
> - Enter a model name, e.g. 'sonar-medium-online' (see https://docs.perplexity.ai/docs/rate-limits for available models).
> - Under Additional Parameters, set the base path to https://api.perplexity.ai.
> - A Frequency Penalty is also required, e.g. 0.9.
>
> Not all Agents/Chains that support OpenAI appear to work fully, but a simple LLM Chain is functional.
I tried this in Flowise, as you can see in the screenshots, but it didn't work. I also copied and pasted the Perplexity model name from the docs page, as shown in the second screenshot. I'm confused about how the ChatOpenAI Custom module can be used to store a Perplexity API key. My last point of confusion from @njfio's instructions is 'Requires a Frequency Penalty also, ex .9'. How is this implemented?
Is this issue still open? I would like to contribute to it.
