Support OpenAI spec across providers and models
Thank you for such an excellent library. I wanted to report some tests and discussions arising from https://discord.com/channels/1143393887742861333/1194248563199311942
The gateway currently does not support the latest message format from the OpenAI spec for providers other than OpenAI.
https://github.com/openai/openai-openapi/blob/master/openapi.yaml#L5421-L5429
The spec shows that a message can be one of several types:
```yaml
ChatCompletionRequestMessage:
  oneOf:
    - $ref: "#/components/schemas/ChatCompletionRequestSystemMessage"
    - $ref: "#/components/schemas/ChatCompletionRequestUserMessage"
    - $ref: "#/components/schemas/ChatCompletionRequestAssistantMessage"
    - $ref: "#/components/schemas/ChatCompletionRequestToolMessage"
    - $ref: "#/components/schemas/ChatCompletionRequestFunctionMessage"
```
Inside these types, the `content` field (here, of the user message) can be either a string or an array of message content parts:
```yaml
ChatCompletionRequestUserMessage:
  type: object
  title: User message
  properties:
    content:
      description: |
        The contents of the user message.
      oneOf:
        - type: string
          description: The text contents of the message.
          title: Text content
        - type: array
          description: An array of content parts with a defined type, each can be of type `text` or `image_url` when passing in images. You can pass multiple images by adding multiple `image_url` content parts. Image input is only supported when using the `gpt-4-visual-preview` model.
          title: Array of content parts
          items:
            $ref: "#/components/schemas/ChatCompletionRequestMessageContentPart"
          minItems: 1
      x-oaiExpandable: true
    role:
      type: string
      enum: ["user"]
      description: The role of the messages author, in this case `user`.
    name:
      type: string
      description: An optional name for the participant. Provides the model information to differentiate between participants of the same role.
  required:
    - content
    - role
```
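To make the two shapes concrete, here is the same user message in both forms allowed by the `oneOf` above (a minimal sketch with illustrative values):

```typescript
// Both shapes are valid for `content` per the oneOf in the schema above.
const stringForm = { role: 'user', content: 'Hello, world.' };

const arrayForm = {
  role: 'user',
  content: [{ type: 'text', text: 'Hello, world.' }],
};
```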
The following request, which uses the array form:
```json
{
  "messages": [
    {
      "content": [
        {
          "text": " ...",
          "type": "text"
        }
      ],
      "role": "user"
    }
  ],
  "model": "mistralai/Mistral-7B-Instruct-v0.1",
  "max_tokens": 500,
  "temperature": 0.4,
  "user": "user"
}
```
fails for the `anyscale` and `together-ai` providers with:
Anyscale:

```json
{"error":{"param":null,"code":null},"provider":"anyscale"}
```
Together AI:

```json
{"error":{"message":"(unknown path)\n Error: Unable to call `content[\"trim\"]`, which is undefined or falsey","type":null,"param":null,"code":null},"provider":"together-ai"}
```
The expectation is that every model, across all providers, can be called according to the openai.yaml spec, including support for functions, tools, the different message content types, etc.
Thank you!
It's my expectation as well, based on the documentation's promise:
> Portkey API is powered by its battle-tested open-source AI Gateway, which converts all incoming requests to the OpenAI signature and returns OpenAI-compliant responses.
But extensive testing, followed by reading the source code, suggests this isn't actually the case: to use what is called the Universal API, we have to use the Portkey SDK instead, which is not what I expected.
Since the original request is from a few months ago and not much has changed, it might make sense to update the docs to state this explicitly: the OpenAI SDK works only for OpenAI, and for the remaining services/providers the Portkey SDK has to be used.
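For contrast, the Portkey-SDK route the docs point to looks roughly like this (a sketch; the `portkey-ai` constructor options and the virtual-key setup are assumptions from the package docs, not verified here):

```typescript
import Portkey from 'portkey-ai';

// Assumption: a virtual key for the target provider has been created
// in Portkey; option names follow the portkey-ai Node package docs.
const portkey = new Portkey({
  apiKey: process.env.PORTKEY_API_KEY,
  virtualKey: process.env.TOGETHER_VIRTUAL_KEY,
});

const completion = await portkey.chat.completions.create({
  model: 'mistralai/Mistral-7B-Instruct-v0.1',
  messages: [{ role: 'user', content: 'Hello' }],
});

console.log(completion.choices[0]?.message?.content);
```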
+1
@raulraja @alexander-potemkin @tshu-w Valid points. Some contributors did not add support for message arrays while integrating new providers. We're enforcing this requirement for new providers and will add the missing support where it was skipped (it should not be more than a few).
I tried to make the request spec-compliant in https://github.com/Portkey-AI/gateway/pull/460, but realised that strictly typing the request object added maintenance overhead (imagine a case where Anthropic introduces a new role without a 1:1 mapping in OpenAI).
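One possible middle ground, rather than strictly typing the whole request: keep the request loosely typed and normalize only the `content` field in the per-provider transforms, e.g. collapsing text parts into a plain string for providers that reject arrays. A rough sketch; the names are hypothetical, not actual gateway internals:

```typescript
// Hypothetical helper, not actual gateway code: collapse OpenAI-style
// content-part arrays into a plain string for providers that only
// accept string content; string content passes through untouched.
type ContentPart = { type: string; text?: string; [key: string]: unknown };
type ChatMessage = {
  role: string;
  content: string | ContentPart[];
  [key: string]: unknown;
};

function flattenTextContent(message: ChatMessage): ChatMessage {
  if (typeof message.content === 'string') return message;
  const text = message.content
    .filter((part) => part.type === 'text' && typeof part.text === 'string')
    .map((part) => part.text)
    .join('\n');
  return { ...message, content: text };
}

// Example: the failing request body from above becomes string content.
const messages: ChatMessage[] = [
  { role: 'user', content: [{ type: 'text', text: ' ...' }] },
];
console.log(messages.map(flattenTextContent));
// -> [ { role: 'user', content: ' ...' } ]
```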
In the meantime, if you have any suggestions or changes for this, I'd be happy to review them.