tabAutocompleteModel: after setting it to openai, a 404 Not Found error is displayed
Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the Continue Discord for questions
- [X] I'm not able to find an open issue that reports the same bug
- [X] I've seen the troubleshooting guide on the Continue Docs
Relevant environment info
- OS: MacOS 14.5
- Continue: v0.8.43
- IDE: Visual Studio Code v1.9.11
- Model: gpt-4o-mini
- config.json:
```json
{
  "models": [
    {
      "title": "OpenAI",
      "provider": "openai",
      "model": "gpt-4o-mini",
      "apiKey": "xxx",
      "apiBase": "https://api.openai.com/v1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "OpenAI",
    "provider": "openai",
    "model": "gpt-4o-mini",
    "apiKey": "xxx",
    "apiBase": "https://api.openai.com/v1/completions"
  },
  "useLegacyCompletionsEndpoint": false,
  "allowAnonymousTelemetry": false,
  "embeddingsProvider": {
    "provider": "transformers.js"
  }
}
```
Description
Chat works normally, but CMD + I and code autocomplete both fail with: `Error streaming diff: Error: HTTP 404 Not Found from https://api.openai.com/v1/completions You may need to add pre-paid credits before using the OpenAI API.`
Actually, I have pre-paid credits.
To reproduce
No response
Log output
No response
I think this only works in the pre-release version currently.
I've encountered this error too, apparently only for OpenAI models. If you switch the model (e.g. Gemini) before clicking apply, it works.
With the pre-release extension I still get the same 404 error if I use AUTODETECT:
"tabAutocompleteModel": {
"model": "AUTODETECT",
"title": "OpenAI",
"apiKey": "sk-proj-[REDACTED]",
"provider": "openai"
},
If I'm using the pre-release and use gpt-4o-mini for the model, it does work, though.
So it looks like AUTODETECT is still broken.
I tested with both of the following in my "models" array in config.json:
```json
{
  "title": "OpenAI",
  "model": "AUTODETECT",
  "provider": "openai",
  "apiKey": "sk-xxx",
  "apiBase": "https://api.openai.com/v1"
},
```
and
```json
{
  "title": "OpenAI",
  "model": "gpt-4o-mini",
  "provider": "openai",
  "apiKey": "sk-xxx",
  "apiBase": "https://api.openai.com/v1"
},
```
and both worked for CMD + I. This was on both the latest release and the pre-release. If anyone is still having the issue with CMD + I, I might need more information to reproduce it.
@0x2F4D2 for the autocomplete configuration that you shared, there are a few things to note:
- `useLegacyCompletionsEndpoint` belongs inside the "tabAutocompleteModel" object. You've correctly identified that this is what will force Continue to call the /completions endpoint rather than /chat/completions
- The apiBase should be "https://api.openai.com/v1", not "https://api.openai.com/v1/completions". Continue will add the /completions on its own
- GPT-4 is really bad at autocomplete. If you're willing to use OpenAI, I'd guess you may also be willing to use Mistral. I would highly recommend Codestral, which is much cheaper and gives far better results
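Putting those notes together, a corrected `tabAutocompleteModel` block might look like the sketch below. This is an assumption based on the points above, not an official sample: the apiKey is a placeholder, and `useLegacyCompletionsEndpoint` is set to false here on the assumption that a chat-capable model like gpt-4o-mini should be reached via /chat/completions rather than the legacy /completions endpoint.

```json
"tabAutocompleteModel": {
  "title": "OpenAI",
  "provider": "openai",
  "model": "gpt-4o-mini",
  "apiKey": "xxx",
  "apiBase": "https://api.openai.com/v1",
  "useLegacyCompletionsEndpoint": false
},
```

Note the two changes from the original report: the endpoint-selection flag moved inside the tabAutocompleteModel object, and the apiBase trimmed back to the bare "/v1" base so Continue can append the path itself.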
This error occurred to me as well. My config file looks normal.
This issue hasn't been updated in 90 days and will be closed after an additional 10 days without activity. If it's still important, please leave a comment and share any new information that would help us address the issue.
This issue was closed because it wasn't updated for 10 days after being marked stale. If it's still important, please reopen + comment and we'll gladly take another look!