context provider for @selection that only provides currently selected code under cursor?
Validations
- [X] I believe this is a way to improve. I'll try to join the Continue Discord for questions
- [X] I'm not able to find an open issue that requests the same enhancement
Problem
The @code context requires selecting files
I was thinking of having a @selection context that would provide the currently selected code
Solution
No response
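No solution was proposed in the issue body. Purely as an illustration, capturing the current selection on the VS Code side could look like the following sketch; the context-item shape here is a hypothetical stand-in, not Continue's actual provider interface:

```typescript
import * as vscode from "vscode";

// Capture the active editor's current selection as a context item.
// Returns undefined when nothing is selected.
function getSelectionContext(): { name: string; content: string } | undefined {
  const editor = vscode.window.activeTextEditor;
  if (!editor || editor.selection.isEmpty) {
    return undefined;
  }
  const doc = editor.document;
  const sel = editor.selection;
  // 1-based line numbers give a human-readable label like "file1.py: 25~50".
  const name = `${doc.fileName}: ${sel.start.line + 1}~${sel.end.line + 1}`;
  return { name, content: doc.getText(sel) };
}
```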
When code is selected, would it be better to automatically add the selection to the chat context during the session?
@championswimmer curious to understand why this over ctrl+L? I can see the scenario perhaps where it reduces the necessary number of keystrokes/mouse movements
@sestinj This might be a habit issue. Tools like GitHub Copilot and Cody typically only require selecting the code. So, could Continue automatically add the selected code to the context when code is selected?
Quite some time ago we automatically included the selection as context, but overwhelmingly got feedback that this made it difficult to select more than one range, and that many users frequently highlight code as they read it, which caused a lot of unnecessary flashing in the input box. Right now we're going to stick with cmd+L, but if we hear enough feedback that there's a clear improvement to make, we'll definitely be willing to change!
In the meantime I do think the @selection idea is a good one
Hello!
I missed the same feature (including the currently selected content in chat) in Continue. It lets us provide exactly the most relevant code to the LLM, which I am more confident in than letting the LLM pick it out of a whole provided file. In my use case a large percentage of requests follow this pattern: I select the relevant code and ask my question without any instruction mark, and the plugin I mostly use (Tongyi Lingma, which is based on the Qwen LLM) automatically inserts the selected code below my question. I find this very convenient; in a few cases I do wrongly select irrelevant code, but that causes less trouble than the convenience it brings. So I think it's worth supporting implicit insertion of the selected code into chat (without an explicit @selection mark), or maybe adding an option to let the user choose?
Regarding providing multiple selected code ranges, I think this would be a useful feature, though maybe not a frequently used one. It can solve problems that would otherwise be hard or even impossible to get the LLM to do right (because you would be providing too little or too much information). I think multiple selection is most useful when (see the sketch after this list):
- working on a larger source file, where the relevant code may not sit together, so you cannot include it in one selection range, and selecting it all in one range would pull in too much irrelevant code, which may confuse the LLM;
- or working in a well-structured code base that cuts code into single-responsibility pieces, where resolving an actual problem requires chaining pieces together from different places.
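On the editor side, collecting several ranges at once is straightforward, since VS Code already exposes multi-cursor selections as an array. A minimal sketch, assuming the plain vscode API and the same hypothetical context-item shape as above (the provider plumbing around it is not Continue's actual code):

```typescript
import * as vscode from "vscode";

// Collect every non-empty selection in the active editor, one item per range.
// With multi-cursor selections, editor.selections holds all of them.
function getAllSelectionContexts(): { name: string; content: string }[] {
  const editor = vscode.window.activeTextEditor;
  if (!editor) {
    return [];
  }
  return editor.selections
    .filter((sel) => !sel.isEmpty)
    .map((sel) => ({
      name: `${editor.document.fileName}: ${sel.start.line + 1}~${sel.end.line + 1}`,
      content: editor.document.getText(sel),
    }));
}
```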
Regarding how to add one or more selected code ranges to the chat, I agree @selection is a good way. When @selection is typed in the chat window, the plugin could record the currently selected code range inline, like this:
I defined XXX in the following code:
@selection file1.py: 25~50, but it reports a YYY problem in the following code: @selection file2.py: 100~120. Help me check what's wrong.
Here, file1.py: 25~50 and file2.py: 100~120 are automatically extracted from the selection ranges at the moment each @selection is typed into the chat box. When the chat content is sent to the LLM, the plugin automatically replaces the @selection marks with the corresponding code.
It may seem a little cumbersome, but I think it's useful when needed, much like cherry-picking in git.
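To make that send-time replacement step concrete, here is a small sketch; the marker format (@selection file: start~end) and the snippets map are assumptions for illustration, not an existing Continue feature:

```typescript
// Map from a recorded label like "file1.py: 25~50" to the code that was
// captured at the moment @selection was typed (both are hypothetical).
type SelectionSnippets = Map<string, string>;

// Replace each "@selection <file>: <start>~<end>" marker in the outgoing
// message with the recorded snippet before sending to the LLM.
function expandSelectionMarks(
  message: string,
  snippets: SelectionSnippets,
): string {
  return message.replace(
    /@selection\s*(\S+?:\s*\d+~\d+)/g,
    (match, label: string) => {
      // Normalize whitespace so lookups match how keys were stored.
      const code = snippets.get(label.replace(/\s+/g, " ").trim());
      // Leave the marker as-is if no snippet was recorded for it.
      return code !== undefined ? `\n${code}\n` : match;
    },
  );
}
```

Recording each snippet at the moment its marker is typed matches the cherry-pick analogy: the text selected then is what gets sent, even if the editor selection changes before the message is submitted.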
This issue hasn't been updated in 90 days and will be closed after an additional 10 days without activity. If it's still important, please leave a comment and share any new information that would help us address the issue.
This issue was closed because it wasn't updated for 10 days after being marked stale. If it's still important, please reopen + comment and we'll gladly take another look!