What is the plan moving forward for config.ts file?
Validations
- [x] I believe this is a way to improve. I'll try to join the Continue Discord for questions
- [x] I'm not able to find an open issue that requests the same enhancement
Problem
Hi,
I rely heavily on the config.ts file for very custom configurations. However, I noticed it has been deprecated along with config.json. Can I check what the plan is for the config.ts file moving forward?
E.g. will there be a replacement for it? Or, if it will be totally deprecated, when is that expected to happen?
Solution
No response
Hi @foongzy. Can you elaborate on the custom configurations you have in your config.ts file? This will help us understand whether we have a plan for your use cases
Hello @TyDunn. We rely on config.ts too, and that is why we still can't move from the JSON version to config.yaml. We use it to provide access to our model API; unfortunately, it doesn't have an OpenAI-compatible format, and it doesn't currently support the system role.
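For context on the system-role gap: one common workaround when a backend rejects the `system` role is to fold the system prompt into the first user message before forwarding the request. A minimal sketch, assuming OpenAI-style `role`/`content` message objects (illustrative only, not Continue's internal API):

```typescript
// Illustrative helper: merge OpenAI-style system messages into the first
// user message, for backends that do not accept the "system" role.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function foldSystemIntoUser(messages: ChatMessage[]): ChatMessage[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  if (system.length === 0) return rest;

  const preamble = system.map((m) => m.content).join("\n");
  const firstUser = rest.findIndex((m) => m.role === "user");
  if (firstUser === -1) {
    // No user message to merge into: emit the system text as a user turn.
    return [{ role: "user", content: preamble }, ...rest];
  }
  return rest.map((m, i) =>
    i === firstUser ? { ...m, content: `${preamble}\n\n${m.content}` } : m
  );
}
```

A config.ts hook (or a small proxy) could apply this transform to every request before it reaches the non-compatible endpoint.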
Hi @TyDunn, we created our own backend and logic that does custom processing of queries, adding any relevant context before sending them to an LLM endpoint. We then stream the response back to the IDE. So in config.ts, I make a custom streaming API call (fetch) to my backend, and my backend returns the response.
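The pattern described above can be sketched roughly as follows. The endpoint URL, payload shape, and SSE framing are all assumptions for illustration; this is not a real Continue API, just the kind of function config.ts users wired into a custom provider:

```typescript
// Parse server-sent-event lines ("data: ...") out of a raw text chunk.
// Real code would also buffer partial lines split across network reads.
function parseSSEChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((data) => data !== "[DONE]");
}

// Stream tokens from a hypothetical internal backend. The URL and request
// body are placeholders, not part of Continue itself.
async function* streamFromBackend(prompt: string): AsyncGenerator<string> {
  const res = await fetch("https://internal.example.com/llm/stream", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok || !res.body) throw new Error(`backend error: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    yield* parseSSEChunk(decoder.decode(value, { stream: true }));
  }
}
```

In a config.ts setup, a generator like this fed the streamed tokens back to the IDE as they arrived.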
While I do not use config.ts for that, I'd like to have this feature. Right now we use a local proxy to do this mapping, but it requires extra work, as people need to set it up and run it too. I'd like to have a way to build my private provider without having to fork the repo.
We also use a proxy, LiteLLM, for all of our applications, to monitor team usage and provide routing and fallbacks. With config.ts, I could provide a custom endpoint and custom API key for LiteLLM and just define which models we wanted to use in continue.dev. I now see that GitHub Copilot allows for this functionality, but I would prefer not to have to switch others over to it. Please allow for customization rather than relying exclusively on pre-defined endpoints.
We manually rolled-back to 1.0.5 and everything works as expected.
Hi @TyDunn, if you can provide an update, that would be great. Seems like many are still relying heavily on the config.ts file for customisation.
I also rely solely on config.ts, for example adding a custom command to use OpenAI Assistants. When I upgraded to the latest version, it stopped working, hence I'm using a lower version, i.e. 1.0.4. However, I want to upgrade to the latest version of Continue without having to give up the JSON and TS files.
I talked through this with the team, and here is our latest thinking:
- We have deprecated `config.json`, so everyone will need to migrate to `config.yaml` eventually
- We plan to no longer support `config.ts` because it is difficult for us to maintain, and we want to help our users follow security best practices (e.g. not running arbitrary code). In addition, most of our users achieve a lot of what they used to do with `config.ts` using MCP and/or OpenAI-compatible APIs, which didn't exist two years ago when we originally designed `config.ts`
- @migs911 @foongzy Our recommendation would be to either make your model provider OpenAI-compatible or open a pull request with the format you need, assuming that others in the community would benefit from it
- @jpimentel-ciandt Can you share more about what you mean by "mapping"? This will help us potentially suggest alternative solutions
- @Otts86 LiteLLM is OpenAI-compatible. If you set the `apiBase` for the OpenAI model provider to the URL of your proxy, it should work. Continue is, and has always been, designed to allow you to use any mixture of models locally, behind enterprise firewalls, within secure VPCs, from your cloud provider, from SaaS providers, etc. This "distributed intelligence" is the way of the future in our opinion
- @SourabhRn2010 Continue is not designed to work with OpenAI Assistants. We think it's unlikely that OpenAI moves them from beta to general availability. With the combination of Agent Mode, MCP, and custom AI code assistants, we think that users will have a more reliable and powerful experience
Please let me know what y'all think. We want to enable all of you to have the customization you need in a way that is both secure and sustainable for us as we maintain Continue 👍
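For the LiteLLM case, the `apiBase` suggestion above might look something like the following `config.yaml` fragment. This is a hedged sketch: the proxy URL and key are placeholders, and the exact field names should be checked against the current config.yaml reference:

```yaml
# Sketch: pointing Continue's OpenAI provider at a LiteLLM proxy.
# URL and apiKey are placeholders; verify field names against the config.yaml docs.
models:
  - name: team-gpt-4o
    provider: openai
    model: gpt-4o
    apiBase: http://litellm.internal:4000/v1   # your LiteLLM proxy
    apiKey: sk-litellm-placeholder
```

Because LiteLLM exposes an OpenAI-compatible endpoint, no custom code should be needed for this setup.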
This issue hasn't been updated in 90 days and will be closed after an additional 10 days without activity. If it's still important, please leave a comment and share any new information that would help us address the issue.