
What is the plan moving forward for config.ts file?

foongzy opened this issue 9 months ago • 6 comments

Validations

  • [x] I believe this is a way to improve. I'll try to join the Continue Discord for questions
  • [x] I'm not able to find an open issue that requests the same enhancement

Problem

Hi,

I rely heavily on the config.ts file for very custom configurations. However, I noticed it has been deprecated along with config.json. Can I check what the plan is for the config.ts file moving forward?

E.g. will there be a replacement for it? Or, if it will be removed entirely, when is that expected to happen?

Solution

No response

foongzy avatar Apr 21 '25 06:04 foongzy

Hi @foongzy. Can you elaborate on the custom configurations you have in your config.ts file? This will help us understand whether we have a plan for your use cases.

TyDunn avatar Apr 21 '25 17:04 TyDunn

Hello @TyDunn. We rely on config.ts too, which is why we still can't move from the JSON version to config.yaml. We use it to provide access to our model API; unfortunately, it doesn't have an OpenAI-compatible format and doesn't currently support the system role.

migs911 avatar Apr 21 '25 21:04 migs911

Hi @TyDunn, we built our own backend with logic that does custom processing of queries and adds any relevant context before sending them to the LLM endpoint. We then stream the response back to the IDE. So in config.ts, I make a custom streaming API call (fetch) to my backend, and the backend returns the response.
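
Roughly, the relevant part of my config.ts looks like this. It's a simplified sketch: the backend URL, request body, and streaming format are placeholders, and the `Config`/`CompletionOptions` types come from Continue's config.ts environment, whose exact shape may vary between versions:

```ts
export function modifyConfig(config: Config): Config {
  config.models.push({
    options: {
      title: "Internal Assistant",
      model: "internal-backend",
    },
    // Stream completions from our own backend instead of calling a model API directly
    streamCompletion: async function* (prompt: string, options: CompletionOptions) {
      // Placeholder endpoint and request shape for our custom backend
      const response = await fetch("https://llm-gateway.internal.example/v1/stream", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ prompt }),
      });
      // The backend streams plain-text chunks; forward them to the IDE as they arrive
      const reader = response.body!.getReader();
      const decoder = new TextDecoder();
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        yield decoder.decode(value, { stream: true });
      }
    },
  });
  return config;
}
```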

foongzy avatar Apr 21 '25 23:04 foongzy

While I do not use config.ts for that, I'd like to have this feature. Right now we use a local proxy to do this mapping, but it requires extra work since people need to set it up and run it too. I'd like a way to build my private provider without having to fork the repo.

jpimentel-ciandt avatar Apr 22 '25 12:04 jpimentel-ciandt

We also use a proxy, LiteLLM, for all of our applications, to monitor team usage and provide routing and fallbacks. With config.ts, I could provide a custom endpoint and a custom API key for LiteLLM and just define which models we wished to use in continue.dev. I now see that GitHub Copilot allows for this functionality, but I would prefer not to have to switch others over to it. Please allow for customization rather than relying exclusively on pre-defined endpoints.
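
For reference, what we had was roughly along these lines (a simplified sketch; the URL, key, and model name are placeholders, and the exact Config typing may differ between versions):

```ts
export function modifyConfig(config: Config): Config {
  // Point the model at our LiteLLM proxy instead of the provider's public endpoint
  config.models.push({
    title: "GPT-4o (LiteLLM)",
    provider: "openai",
    model: "gpt-4o",
    apiBase: "https://litellm.internal.example.com/v1",
    apiKey: "sk-litellm-placeholder",
  });
  return config;
}
```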

We manually rolled back to 1.0.5 and everything works as expected.

Otts86 avatar Apr 22 '25 19:04 Otts86

Hi @TyDunn, if you can provide an update, that would be great. Seems like many are still relying heavily on the config.ts file for customisation.

foongzy avatar Apr 28 '25 04:04 foongzy

I also rely solely on config.ts, for example to add a custom command that uses OpenAI Assistants. When I upgraded to the latest version, it stopped working, so I'm staying on an older version, i.e. 1.0.4. However, I want to upgrade to the latest version of Continue without having to give up the JSON and ts files.

SourabhRn2010 avatar May 06 '25 14:05 SourabhRn2010

I talked through this with the team, and here is our latest thinking:

  • We have deprecated config.json, so everyone will need to migrate to config.yaml eventually

  • We plan to no longer support config.ts because it is difficult for us to maintain, and we want to help our users follow security best practices (e.g. not running arbitrary code). In addition, most of our users achieve a lot of what they used to do with config.ts using MCP and/or OpenAI-compatible APIs, which didn't exist two years ago when we originally designed config.ts.

  • @migs911 @foongzy Our recommendation would be to either make your model provider OpenAI-compatible or open a pull request with the format you need, assuming that others in the community would benefit from it.

  • @jpimentel-ciandt Can you share more about what you mean by "mapping"? This will help us potentially suggest alternative solutions.

  • @Otts86 LiteLLM is OpenAI-compatible. If you set the apiBase for the OpenAI model provider to the URL of your proxy, it should work; a minimal example is sketched after this list. Continue is and has always been designed to allow you to use any mixture of models locally, behind enterprise firewalls, within secure VPCs, from your cloud provider, from SaaS providers, etc. This "distributed intelligence" is the way of the future in our opinion.

  • @SourabhRn2010 Continue is not designed to work with OpenAI Assistants. We think it's unlikely that OpenAI will move them from beta to general availability. With the combination of Agent Mode, MCP, and custom AI code assistants, we think users will have a more reliable and powerful experience.
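
For the LiteLLM case, a minimal config.yaml sketch of that setup might look like this (the URL and key are placeholders; adjust the model list to whatever your proxy routes):

```yaml
name: team-assistant
version: 0.0.1
schema: v1
models:
  - name: GPT-4o (via LiteLLM)
    provider: openai
    model: gpt-4o
    apiBase: https://litellm.internal.example.com/v1
    apiKey: <your LiteLLM virtual key>
```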

Please let me know what y'all think. We want to enable all of you to have the customization you need in a way that is both secure and sustainable for us to maintain Continue 👍

TyDunn avatar May 06 '25 20:05 TyDunn

This issue hasn't been updated in 90 days and will be closed after an additional 10 days without activity. If it's still important, please leave a comment and share any new information that would help us address the issue.

github-actions[bot] avatar Aug 05 '25 02:08 github-actions[bot]