
Stream as Parameter in Tab Autocomplete

ruizcrp opened this issue 1 year ago • 4 comments

Validations

  • [X] I believe this is a way to improve. I'll try to join the Continue Discord for questions
  • [X] I'm not able to find an open issue that requests the same enhancement

Problem

When using, for example, ollama or openai as the provider for tab autocomplete, Continue sends "stream": true by default. The problem is that the surrounding architecture does not always permit a streaming connection, and in that case Continue's tab autocomplete does not work at all.

Solution

This is why a simple "stream" parameter in the config would be useful, so that it can be set manually to false.

ruizcrp avatar May 31 '24 15:05 ruizcrp

@ruizcrp thanks for bringing this up, it sounds pretty critical, and we can definitely add the option (though without streaming, responses may come in quite a bit slower).

What is the model that doesn't currently support streaming?

sestinj avatar May 31 '24 18:05 sestinj

Thanks for this awesome extension! I would add as a requirement the setup where ollama/llama.cpp sits behind a proxy, so is it possible to make "stream" a toggle/option at both the config and chat-box level?

Thanks!

commit4ever avatar Jun 01 '24 06:06 commit4ever

Hi @sestinj, thank you for your quick reply! What @aiseei just wrote is more or less what I meant by "corresponding architecture": the system architecture in the sense that there could be a proxy or something similar in front of the API, and streaming then does not work in some cases.

ruizcrp avatar Jun 02 '24 08:06 ruizcrp

I would find this addition beneficial as well. To add a concrete example: I'm using llama.cpp server behind a LiteLLM proxy, and setting stream: true causes a response from llama.cpp that triggers a crash in the LiteLLM proxy.

Furthermore, I'm not sure streaming a response for autocomplete is particularly useful in the first place. Aren't completions short enough that it doesn't matter whether they are streamed or not?

liffiton avatar Oct 02 '24 16:10 liffiton

This issue hasn't been updated in 90 days and will be closed after an additional 10 days without activity. If it's still important, please leave a comment and share any new information that would help us address the issue.

github-actions[bot] avatar Mar 03 '25 04:03 github-actions[bot]

This issue was closed because it wasn't updated for 10 days after being marked stale. If it's still important, please reopen + comment and we'll gladly take another look!

github-actions[bot] avatar Mar 14 '25 02:03 github-actions[bot]

Hi @sestinj, are there any updates on stream: false option support in autocompletion?

I'm using my own OpenAI-like proxy, and it doesn't support streaming because of Kafka (used to send requests to the LLM), so it's hard to use streaming.

TristeMq avatar May 20 '25 11:05 TristeMq