Add New AI Models via Groq
This PR introduces several new AI models to the project, enhancing its functionality and providing additional options for AI service users.
Changes
- New AI Models Added in `AiModelEnum`:
  - Meta
    - Llama31_70b
    - Llama32_11b_TextPreview
    - Llama31_8b_Instant
    - Llama32_3b_Preview
    - LlamaGuard38b
    - Llama32_90b_TextPreview
    - Llama32_1b_Preview
    - Llama32_11b_VisionPreview
  - Google
    - Gemma2_9b
    - Gemma7b
- Updated `models` Object with Configuration and Pricing Details:
  - Added `maxTokens`, `contextWindow`, `costInput`, `costOutput`, and `middlewareDeploymentName` for each model (see the sketch after this list).
- Documentation and Comments:
  - Updated documentation and inline comments to reflect the changes and provide clear usage guidelines for the new models and functions.
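For illustration, here is a minimal TypeScript sketch of the shapes described above. The enum members and field names follow the PR description, but the interface name, the concrete token limits, the pricing values, and the Groq deployment names are assumptions and may not match the actual source.

```ts
// Hypothetical sketch only; values below are placeholders, not project data.
export enum AiModelEnum {
	// ...existing models...
	Llama31_70b = 'Llama31_70b',
	Llama31_8b_Instant = 'Llama31_8b_Instant',
	Gemma2_9b = 'Gemma2_9b'
	// ...remaining Meta and Google additions...
}

// Assumed shape for a model's configuration entry.
export interface AiModelConfig {
	maxTokens: number;               // max completion tokens (placeholder)
	contextWindow: number;           // total context window (placeholder)
	costInput: number;               // assumed: price per 1M input tokens
	costOutput: number;              // assumed: price per 1M output tokens
	middlewareDeploymentName: string; // provider-side model id
}

export const models: Partial<Record<AiModelEnum, AiModelConfig>> = {
	[AiModelEnum.Llama31_70b]: {
		maxTokens: 8192,                // placeholder value
		contextWindow: 131072,          // placeholder value
		costInput: 0.59,                // placeholder pricing
		costOutput: 0.79,               // placeholder pricing
		middlewareDeploymentName: 'llama-3.1-70b-versatile' // assumed Groq model id
	}
	// ...entries for the other new models...
};
```

Presumably the config entry is looked up by its enum key when a request is routed to Groq, so each new enum member needs a matching entry in the object.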
@Anajrim01 I still want to implement the /models endpoints of our providers at some point so that we won't have to update the models manually anymore, but I'll merge your PR as a temporary solution when it's ready. Thanks!
Yep, I was looking into it, and I agree that the /models endpoint should be used. I added this PR for file uploads, since Llama 3's 8k token limit is too small to parse files correctly and then ask questions about them.
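For context, a rough sketch of what querying a provider's /models endpoint could look like, assuming Groq's OpenAI-compatible API. The response fields shown (`id`, `owned_by`, `context_window`) are assumptions based on that API style, not project code.

```ts
// Assumed shape of an entry returned by the provider's /models endpoint.
interface ProviderModel {
	id: string;
	owned_by: string;
	context_window?: number;
}

// Fetch the list of available models from Groq's OpenAI-compatible API.
export async function fetchGroqModels(apiKey: string): Promise<ProviderModel[]> {
	const response = await fetch('https://api.groq.com/openai/v1/models', {
		headers: { Authorization: `Bearer ${apiKey}` }
	});
	if (!response.ok) {
		throw new Error(`Failed to fetch models: ${response.status}`);
	}
	const body = await response.json();
	return body.data as ProviderModel[];
}
```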
The max/context token values are incorrect and will need to be fixed. Don't merge yet.