Excessive bandwidth usage when using a remote SSH server
Before submitting your bug report
- [X] I'm not able to find an open issue that reports the same bug
- [X] I believe this is a bug. I'll try to join the Continue Discord for questions
- [X] I've seen the troubleshooting guide on the Continue Docs
Relevant environment info
- Local OS: Windows 10 22H2, build 19045.4529
- Remote OS: Rocky Linux 8.10
- Continue: v0.8.43
- IDE: VS Code 1.91.1
- config.json:
{
  "models": [
    {
      "model": "Qwen/CodeQwen1.5-7B-Chat",
      "title": "CodeQwen",
      "apiBase": "http://localhost:4004/v1/",
      "completionOptions": {
        "temperature": 0.5
      },
      "provider": "openai",
      "apiKey": "EMPTY"
    }
  ],
  "customCommands": [
    {
      "name": "test",
      "prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Tab Autocomplete Model",
    "provider": "openai",
    "model": "Qwen/CodeQwen1.5-7B-Chat",
    "apiBase": "http://localhost:4004/v1/",
    "apiKey": "EMPTY"
  },
  "tabAutocompleteOptions": {
    "useCopyBuffer": false,
    "maxPromptTokens": 400,
    "prefixPercentage": 0.5,
    "multilineCompletions": "always",
    "contextLength": 8192
  },
  "allowAnonymousTelemetry": false,
  "disableIndexing": true
}
Description
I've encountered a critical issue when using the Continue.dev extension on a remote SSH server. Problem: after enabling Continue.dev and configuring it in config.json, the extension consumes excessive bandwidth, leading to VS Code disconnecting from the remote server.
To reproduce
- Connect to a remote server via SSH in VS Code
- From the File > Open Folder menu, open a folder that is not your home directory
- Enable the Continue.dev extension
- Add the necessary configuration to ~/.continue/config.json
- Observe bandwidth usage and connection stability
Log output
INFO UNRESPONSIVE extension host: starting to profile NOW
log.ts:429 WARN UNRESPONSIVE extension host: 'continue.continue' took 98.12424056655655% of 4915.525ms, saved PROFILE here: 'file:///c%3A/Users/m276924/AppData/Local/Temp/exthost-9b8ceb.cpuprofile'
log.ts:419 INFO Extension host (LocalProcess pid: 27628) is responsive.
log.ts:419 INFO Extension host (Remote) is responsive.
log.ts:419 INFO Extension host (LocalProcess pid: 27628) is unresponsive.
localProcessExtensionHost.ts:275 Extension Host
localProcessExtensionHost.ts:276 Debugger attached.
log.ts:419 INFO UNRESPONSIVE extension host: starting to profile NOW
log.ts:429 WARN UNRESPONSIVE extension host: 'continue.continue' took 98.83354166982429% of 4955.834ms, saved PROFILE here: 'file:///c%3A/Users/m276924/AppData/Local/Temp/exthost-2b27fc.cpuprofile'
log.ts:419 INFO Extension host (Remote) is unresponsive.
log.ts:419 INFO [remote-connection][Management ][6c74b…][reconnect] received socket timeout event (unacknowledgedMsgCount: 960, timeSinceOldestUnacknowledgedMsg: 30638, timeSinceLastReceivedSomeData: 20001).
log.ts:419 INFO [remote-connection][Management ][6c74b…][reconnect] starting reconnecting loop. You can get more information with the trace log level.
log.ts:419 INFO [remote-connection][Management ][6c74b…][reconnect] resolving connection...
log.ts:419 INFO Invoking resolveAuthority(ssh-remote)...
log.ts:419 INFO [LocalProcess0][resolveAuthority(ssh-remote,2)][0ms] obtaining proxy...
log.ts:419 INFO [LocalProcess0][resolveAuthority(ssh-remote,2)][0ms] invoking...
log.ts:419 INFO [LocalProcess0][resolveAuthority(ssh-remote,2)][1002ms] waiting...
Same fundamental issue as #1705. This was introduced in 0.8.43; if you revert to 0.8.42 for the time being, you should be good until this gets fixed.
Thanks for the +1 on this. We are aware and working on solving it this morning! I'll hopefully have an update for you very soon
Update:
In 0.8.43, no configuration change solved the problem. I assumed that setting "disableIndexing": true would reduce the bandwidth usage, but neither true nor false helped.
After switching to 0.8.42, bandwidth usage improved by about 30%, but there were still frequent disconnections. I then set "disableIndexing": true again to force Continue to stop indexing, and finally everything worked normally.
@magnooj thank you for this extra information, that's very good to know. One other question that comes to mind: is your VS Code workspace the root of a git repository, or is it a subdirectory of a git repository?
I ask because there's some chance that we wouldn't be looking for .gitignore files in the directory above, which could cause too many file reads.
It is a subdirectory on a remote server without any .gitignore file, but the number of files is not large. Is it possible to limit indexing to a certain depth, for example only 2 levels?
@magnooj So even if you include all of the files that are ignored by git, like a .git folder, any build folders, or anything else, this is still a very small number of total files? Also, a couple of additional questions that might help:
- are there any files that are particularly large?
- does anything change if you remove the "folder" context provider from config.json? (see the sketch below)
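For reference, explicitly listing context providers without "folder" in config.json would look roughly like the fragment below. This is only a sketch: the provider names are taken from the config posted later in this thread, not an exhaustive or default set.
{
  "contextProviders": [
    { "name": "code", "params": {} },
    { "name": "diff", "params": {} },
    { "name": "terminal", "params": {} },
    { "name": "codebase", "params": {} }
  ]
}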
Mine is a Perforce (not git) client. Some examples have tons of files; one example has a small-to-medium amount.
The large client:
find . -type f | wc -l
1240363
The smaller client:
find . -type f | wc -l
1758
It mostly happened in the larger client.
It would be nice if Continue used the same ignore list as the VS Code File Watcher excludes (set per remote server in settings.json). I don't want to have to specify the same list per Perforce client, since the clients are ephemeral.
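For reference, those watcher excludes live under the files.watcherExclude setting in the remote's settings.json; a minimal sketch, with purely illustrative glob patterns:
{
  "files.watcherExclude": {
    "**/.git/**": true,
    "**/node_modules/**": true,
    "**/.env/**": true
  }
}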
Good to know. @magnooj you aren't by chance using Perforce too, are you?
To be clear: in my examples I'm not actively using Continue at all; I'm just browsing / editing code. So it is a persistent issue that I and a number of others at work are hitting even when not interacting with Continue at all.
@sestinj I think I found what is causing the problem: hidden folders! It is a common practice in my company to create local environments for each project by calling conda create -p .env. Also, we don't use git much. Therefore, there is no .gitignore most of the time.
I tried adding a .gitignore with the hidden folders in it, together with "disableIndexing": false (an example .gitignore is sketched after this list):
- in 0.8.43, it finally started to work! The remote connection stays alive but is slow, since the extension still uses a lot of bandwidth. Large files were also excluded.
- in 0.8.42, it works great! There are some large Excel/CSV files that also cause instability; after adding them to the .gitignore, it has become much better.
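A .gitignore along these lines would cover the hidden env folders and the large Excel/CSV files mentioned above; the exact patterns are illustrative and depend on the project layout:
# hidden conda env folders created with `conda create -p .env`
.env/
# large data files that destabilized the connection
*.xlsx
*.csv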
Removing the "folder" context provider from config.json didn't help. I am not using Perforce either. I work on our dedicated Linux (Rocky Linux 8.10) servers, and my job is to go through each project, activate the local env, and review the code.
I also updated the config.json file; here is the latest one, which works better:
{
  "models": [
    {
      "model": "codestral",
      "title": "codestral",
      "apiBase": "http://localhost:11434",
      "provider": "ollama",
      "completionOptions": {
        "temperature": 0.5
      }
    }
  ],
  "customCommands": [
    {
      "name": "test",
      "prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Tab Autocomplete Model",
    "provider": "ollama",
    "apiBase": "http://localhost:11434",
    "model": "codestral"
  },
  "tabAutocompleteOptions": {
    "useCopyBuffer": false,
    "maxPromptTokens": 400,
    "prefixPercentage": 0.5,
    "multilineCompletions": "always",
    "contextLength": 8192,
    "useOtherFiles": true,
    "debounceDelay": 100
  },
  "allowAnonymousTelemetry": false,
  "embeddingsProvider": {
    "provider": "ollama",
    "apiBase": "http://localhost:11434",
    "model": "nomic-embed-text"
  },
  "contextProviders": [
    {
      "name": "codebase",
      "params": {
        "nRetrieve": 25,
        "nFinal": 5,
        "useReranking": true
      }
    }
  ],
  "disableIndexing": false
}
@sestinj A new problem arose with 0.8.42: it kills my Jupyter kernel when I am using a notebook in VS Code. Removing the nomic embeddings provider, removing the context provider, and disabling indexing did not solve it.
Here is the log:
[Continue.continue]Parsing failed
Error: Parsing failed
at _Parser.parse (~\.vscode\extensions\continue.continue-0.8.42-win32-x64\out\extension.js:94928:81)
at _ImportDefinitionsService._getFileInfo (~\.vscode\extensions\continue.continue-0.8.42-win32-x64\out\extension.js:95825:28)
at async PrecalculatedLruCache.initKey (~\.vscode\extensions\continue.continue-0.8.42-win32-x64\out\extension.js:93784:25)
[Extension Host] warn 14:30:29.938: Jupyter Extension: Cancel all remaining cells due to dead kernel
is this still an issue?
This issue hasn't been updated in 90 days and will be closed after an additional 10 days without activity. If it's still important, please leave a comment and share any new information that would help us address the issue.
> is this still an issue?
Yes, I ran into this same issue just this weekend. VS Code (version below) working over SSH was fine until I installed the Continue.dev extension (versions below), at which point SSH connections started dropping, retrying, "scanning", etc. I had to disable it and am unable to use it.
- OS: Fedora-41 / Linux
- Continue version: v1.0.3 (release version) and v1.1.7 (pre-release version)
- IDE version: v1.98.0 (VS Code)
- Model: All models (cloud-hosted and ollama-self-hosted)
- config:
{
  "models": [
    {
      "title": "qwen2.5-coder:32b",
      "model": "qwen2.5-coder:32b",
      "provider": "ollama",
      "apiBase": "http://192.168.0.12:11434"
    },
    {
      "title": "qwen2.5:32b-instruct",
      "model": "qwen2.5:32b-instruct",
      "provider": "ollama",
      "apiBase": "http://192.168.0.12:11434"
    },
    {
      "title": "gemma2:27b",
      "model": "gemma2:27b",
      "provider": "ollama",
      "apiBase": "http://192.168.0.12:11434"
    }
  ],
  "embeddingsProvider": {
    "title": "nomic-embed-text",
    "model": "nomic-embed-text",
    "provider": "ollama",
    "apiBase": "http://192.168.0.12:11434"
  },
  "tabAutocompleteModel": {
    "title": "qwen2.5-coder:32b",
    "model": "qwen2.5-coder:32b",
    "provider": "ollama",
    "apiBase": "http://192.168.0.12:11434"
  },
  "contextProviders": [
    {
      "name": "code",
      "params": {}
    },
    {
      "name": "docs",
      "params": {}
    },
    {
      "name": "diff",
      "params": {}
    },
    {
      "name": "terminal",
      "params": {}
    },
    {
      "name": "problems",
      "params": {}
    },
    {
      "name": "folder",
      "params": {}
    },
    {
      "name": "codebase",
      "params": {}
    }
  ],
  "slashCommands": [
    {
      "name": "share",
      "description": "Export the current chat session to markdown"
    },
    {
      "name": "cmd",
      "description": "Generate a shell command"
    },
    {
      "name": "commit",
      "description": "Generate a git commit message"
    }
  ],
  "data": []
}
Not sure if this is still active, but I'm also seeing SSH instability with the Continue plugin on VS Code (Mac and Linux). Continue is hooked up to an Ollama server which is otherwise perfectly solid with OpenWeb UI and the local/remote console. Continue works with Ollama and generates valid responses to my prompts, and I have only one app talking to Ollama at a time. I saw the same behavior a year ago when I first played with Continue.
My VS Code runs on Mac or Ubuntu, and I remote into a Linux VM to edit, debug, etc. In the notification bar, after a few minutes or half an hour (it varies), I see Continue trying to connect, and then the SSH connection keeps trying to reload. As soon as I disable Continue and reload VS Code, the SSH connection is restored and stable for hours.
I can share my config if needed. Lovely plugin, but I can't use it because I'm always remoting via VS Code. Thanks for your attention.
This issue hasn't been updated in 90 days and will be closed after an additional 10 days without activity. If it's still important, please leave a comment and share any new information that would help us address the issue.
This issue was closed because it wasn't updated for 10 days after being marked stale. If it's still important, please reopen + comment and we'll gladly take another look!