
Problems with Aitopia, oivscode, GeminiProxy and AUTO mode (empty answers, auth, and model filtering)

BuiltDavid opened this issue 2 months ago · 2 comments

Aitopia provider

It says it needs authentication, but the code appears to just fabricate a "hopekey" and some IDs automatically. Even with that, I keep getting "internal server error". Is something missing in the way auth works? Should we be sending a real API key or cookie, or is the server expecting something else? Right now this provider shows up, without working auth, among the models the AUTO class uses for returning responses. Also, the code sends a JSON body but declares the content type as "text/plain"; could that be causing problems?
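On the content-type point: a minimal sketch of the fix, assuming the provider uses `requests` (the URL and payload below are placeholders, not the real Aitopia endpoint). Passing the body via `json=` lets `requests` serialize it and set the matching header itself, instead of sending JSON bytes labeled `text/plain`:

```python
import requests

# Hypothetical endpoint and payload, for illustration only; the real
# Aitopia URL and body shape may differ.
req = requests.Request(
    "POST",
    "https://example.com/api/chat",
    json={"prompt": "Hello"},  # json= serializes the dict for us
)
prepared = req.prepare()

# With json=, requests sets Content-Type: application/json itself,
# so the declared type matches the actual body.
print(prepared.headers["Content-Type"])
```

If the server is strict about content types, a `text/plain` label on a JSON body could plausibly explain an internal server error, though that is a guess without server-side logs.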

oivscode

Only “gpt-4o-mini”, “grok-3-beta”, and “*” seem to work. Some endpoints don’t respond at all (https://oi-vscode-server.onrender.com/v1/chat/completions), and one now requires a valid ClientId (https://oi-vscode-server-5.onrender.com/v1/chat/completions), not just a userId. Can you help figure out how to get more models working again?
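For reference, here is roughly the request shape the report implies. This is a sketch under assumptions: the header names (`userid`, `clientId`) and whether the server wants them as headers rather than body fields are guesses from the issue, not confirmed API documentation.

```python
import uuid
import requests

# Assumed identifier headers; names and placement are unconfirmed.
headers = {
    "userid": str(uuid.uuid4()),
    "clientId": str(uuid.uuid4()),  # the -5 endpoint reportedly rejects requests without a valid one
}
payload = {
    "model": "gpt-4o-mini",  # one of the models reported to still work
    "messages": [{"role": "user", "content": "ping"}],
}
req = requests.Request(
    "POST",
    "https://oi-vscode-server-5.onrender.com/v1/chat/completions",
    headers=headers,
    json=payload,
).prepare()
```

Note that a randomly generated clientId may still be rejected if the server validates it against registered clients; that would explain why only some models respond.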

GeminiProxy returning generic filler text

Instead of answering the question, GeminiProxy returns this canned paragraph:

"Based on the information provided, I can see that this is a complex technical question requiring detailed analysis. The data suggests multiple possible interpretations, each with their own merits. However, given the current context and available resources, the most appropriate approach would be to consider the underlying principles and apply them systematically. This analysis reveals several key insights that could be valuable for future considerations."

AUTO mode

Sometimes AUTO returns an empty answer, or just “[No response generated]”, and treats it as a success. It would be better if it skipped those and tried another provider. Also, it would be great if AUTO skipped “search” models by default, since their answers aren’t really suitable for chat. Could AUTO also tell us which model produced the answer?

Here are some examples of AUTO mode responses:

{"response": "", "reply": "", "raw": "", "provider": "TypliAI", "logs": [{"provider": "TypliAI", "result": "success"}]}

{"response": "[No response generated]", "reply": "[No response generated]", "raw": "[No response generated]", "provider": "Flowith", "logs": [{"provider": "Flowith", "result": "success"}]}

(This is an example of a response from a search provider in AUTO mode.)

{ "raw": "Trusting to listen. risk decisions, knowing that, regardless of its outcome more about passions strengths. By following this selfFor more20s inner \"Unlocking Your 5 Ways In\" (bondibeauty.com.au/l-in-5-20s/) which similar themes.", "provider": "ExaAI" }
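The fallback behavior suggested above could look roughly like this. Everything here is a sketch: the provider objects, their `chat` method, `name` attribute, and `is_search` flag are hypothetical stand-ins for whatever interface Webscout's AUTO class actually uses internally.

```python
# Placeholder replies that should count as failures, not successes.
PLACEHOLDERS = {"", "[No response generated]"}

def is_usable(text):
    """Reject None, whitespace-only replies, and known placeholder strings."""
    return text is not None and text.strip() not in PLACEHOLDERS

def auto_chat(prompt, providers, skip_search=True):
    """Try providers in order; skip search models and empty replies.

    Returns the first usable reply along with the provider name, so the
    caller can see which model actually answered.
    """
    logs = []
    for provider in providers:
        if skip_search and getattr(provider, "is_search", False):
            logs.append({"provider": provider.name, "result": "skipped (search model)"})
            continue
        try:
            reply = provider.chat(prompt)  # hypothetical provider interface
        except Exception as exc:
            logs.append({"provider": provider.name, "result": f"error: {exc}"})
            continue
        if is_usable(reply):
            return {"reply": reply, "provider": provider.name, "logs": logs}
        logs.append({"provider": provider.name, "result": "empty response"})
    return {"reply": None, "provider": None, "logs": logs}
```

With this shape, the TypliAI and Flowith examples above would be logged as "empty response" and AUTO would fall through to the next provider instead of reporting success.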

BuiltDavid · Nov 23 '25 14:11

🐛 This issue has been labeled as a bug. The maintainers will review and address it as soon as possible.

Thank you for reporting this issue!

github-actions[bot] · Nov 23 '25 14:11

@OEvortex By the way, the provider 'ChatGLM' always returns 'failed to get non-stream response: Request failed (CurlError) HTTP Error 400 Bad Request'. This occurs even when sending stream: true.

BuiltDavid · Nov 23 '25 14:11