GPT-5 not working with ResponseStreaming
**Describe the bug**
When setting `.gpt5` as the model in the query for response streaming, I get a 400.
Version 0.4.6
**Setup**
```swift
let query = CreateModelResponseQuery(
    input: .textInput("Tell me a joke"),
    model: .gpt5,
    stream: true
)
let stream = client.responses.createResponseStreaming(query: query)
```
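For reference, this is roughly how I consume the stream (a sketch, assuming `createResponseStreaming` returns an async sequence, as the assignment above suggests); the failure only surfaces as an opaque thrown error:

```swift
import OpenAI

// Sketch of how the stream is consumed; with .gpt5 the 400 surfaces
// as a thrown error in the catch block, with no access to the JSON body.
func tellJoke(client: OpenAI) async {
    let query = CreateModelResponseQuery(
        input: .textInput("Tell me a joke"),
        model: .gpt5,
        stream: true
    )
    do {
        for try await event in client.responses.createResponseStreaming(query: query) {
            print(event) // never reached with .gpt5
        }
    } catch {
        print("Streaming failed: \(error)") // only a generic error here
    }
}
```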
**Error Headers**
```
{
    "Alt-Svc" = (
        "h3=\":443\"; ma=86400"
    );
    "CF-RAY" = (
        "xxxxxxxxxxx-SJC"
    );
    Connection = (
        "keep-alive"
    );
    "Content-Length" = (
        191
    );
    "Content-Type" = (
        "application/json"
    );
    Date = (
        "Wed, 13 Aug 2025 22:27:17 GMT"
    );
    Server = (
        cloudflare
    );
    "Set-Cookie" = (
        "_cfuvid=xxxxxxxxx_xxxxxxxx-1755124037886-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None"
    );
    "Strict-Transport-Security" = (
        "max-age=31536000; includeSubDomains; preload"
    );
    "X-Content-Type-Options" = (
        nosniff
    );
    "cf-cache-status" = (
        DYNAMIC
    );
    "openai-organization" = (
        "user-xxxxxxxxx"
    );
    "openai-processing-ms" = (
        25
    );
    "openai-project" = (
        "proj_xxxxxxxxx"
    );
    "openai-version" = (
        "2020-10-01"
    );
    "x-envoy-upstream-service-time" = (
        29
    );
    "x-request-id" = (
        "req_xxxxxxxxx"
    );
}
```
**Smartphone:**
- Device: iPhone 13 mini and iOS Simulator
- OS: iOS 18.6.1
- Library version: 0.4.6
**[UPDATE]** After some investigation: when running the query using curl, I get the error:

> Your organization must be verified to stream this model. Please go to: https://platform.openai.com/settings/organization/general and click on Verify Organization. If you just verified, it can take up to 15 minutes for access to propagate
```sh
curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_TOKEN>" \
  -d '{
    "model": "gpt-5",
    "input": "Write a haiku about code.",
    "stream": true,
    "reasoning": { "effort": "low" }
  }'
```
I did verify my organization (personal account), and while this fixed the curl call, I still see the same error when using the library.
**[UPDATE]** The remaining problem was that GPT-5 does not accept `temperature` as an input parameter. However, I could only see the issue by using a tool like Charles or Proxyman to inspect the response body:
```json
{
    "error": {
        "message": "Unsupported parameter: 'temperature' is not supported with this model.",
        "type": "invalid_request_error",
        "param": "temperature",
        "code": null
    }
}
```
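If it helps anyone debug without a proxy, the same body can be seen by replaying the request with plain URLSession (a sketch; `<YOUR_TOKEN>` is a placeholder, and the `temperature` field is what triggers the error):

```swift
import Foundation

// Replays the failing request directly so the JSON error body is
// visible without Charles/Proxyman. Run from an async context.
var request = URLRequest(url: URL(string: "https://api.openai.com/v1/responses")!)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.setValue("Bearer <YOUR_TOKEN>", forHTTPHeaderField: "Authorization")
request.httpBody = """
{"model": "gpt-5", "input": "Tell me a joke", "stream": true, "temperature": 1}
""".data(using: .utf8)

let (data, response) = try await URLSession.shared.data(for: request)
print((response as? HTTPURLResponse)?.statusCode ?? 0) // 400
print(String(decoding: data, as: UTF8.self))           // the error body above
```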
I guess this is the error that should be passed through by `ResponseEndpoint.createResponseStreaming`. It would be great to improve the error API so the response body is surfaced to callers.
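The body is trivially decodable; something like this (type names are hypothetical, not the library's) could be attached to the thrown error:

```swift
// Hypothetical Codable mirror of the error body above; decoding it and
// attaching it to the thrown error would make failures like this visible.
struct APIErrorResponse: Decodable {
    struct APIError: Decodable {
        let message: String
        let type: String
        let param: String?
        let code: String?
    }
    let error: APIError
}

// Usage, given the raw `data` of a non-2xx response:
// let decoded = try JSONDecoder().decode(APIErrorResponse.self, from: data)
// print(decoded.error.message)
// -> Unsupported parameter: 'temperature' is not supported with this model.
```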