Stream as Parameter in Tab Autocomplete
Validations
- [X] I believe this is a way to improve. I'll try to join the Continue Discord for questions
- [X] I'm not able to find an open issue that requests the same enhancement
Problem
When using, for example, ollama or openai as the provider for tab autocomplete, Continue sends "stream": true by default. The issue is that the surrounding architecture does not always permit a streaming connection, and in that case tab autocomplete does not work at all.
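For reference, the default request body sent to an OpenAI-compatible completions endpoint looks roughly like this (model name, prompt, and token limit are placeholders for illustration):

```json
{
  "model": "codellama:7b",
  "prompt": "def fibonacci(n):",
  "stream": true,
  "max_tokens": 256
}
```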
Solution
This is why a simple "stream" parameter in the config would be useful, so that it can be manually set to false.
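A minimal sketch of how this could look in config.json, assuming the proposed option lives under the tab-autocomplete model's completionOptions (the "stream" field here is the hypothetical addition being requested, not an existing option):

```json
{
  "tabAutocompleteModel": {
    "title": "Local Completion",
    "provider": "ollama",
    "model": "codellama:7b",
    "completionOptions": {
      "stream": false
    }
  }
}
```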
@ruizcrp thanks for bringing this up, it sounds pretty critical, and we can definitely add the option (though without streaming, responses may come in quite a bit slower).
What is the model that doesn't currently support streaming?
Thanks for this awesome extension! I would add as a requirement the case where ollama/llama.cpp sits behind a proxy. So is it possible for "stream" to be a toggle/option at both the config and chat box level?
Thanks!
Hi @sestinj, thank you for your quick reply! What @aiseei just wrote is more or less what I meant by "corresponding architecture": the system architecture in the sense that there could be a proxy or something similar sitting in front of the API, and streaming then does not work in some cases.
I would find this addition beneficial as well. To add a concrete example: I'm using llama.cpp server behind a LiteLLM proxy, and setting stream: true causes a response from llama.cpp that triggers a crash in the LiteLLM proxy.
Furthermore, I'm not sure streaming a response for autocomplete is particularly useful in the first place. Aren't completions going to be short enough that it doesn't matter whether they are streamed or not?
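For context, the two response shapes from an OpenAI-compatible server differ quite a bit on the wire, which is plausibly what trips up intermediate proxies (exact fields vary by server; this is only an illustration):

```text
stream: true   -> Content-Type: text/event-stream, many chunked events
data: {"choices":[{"text":"def"}]}
data: {"choices":[{"text":" fib"}]}
data: [DONE]

stream: false  -> Content-Type: application/json, a single body
{"choices":[{"text":"def fib"}]}
```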
This issue hasn't been updated in 90 days and will be closed after an additional 10 days without activity. If it's still important, please leave a comment and share any new information that would help us address the issue.
This issue was closed because it wasn't updated for 10 days after being marked stale. If it's still important, please reopen + comment and we'll gladly take another look!
Hi @sestinj, are there any updates on stream: false option support in autocompletion?
I'm using my own OpenAI-like proxy and it doesn't support streaming because of Kafka (used to send requests to the LLM), so it's hard to use streaming.