
[Bug] Agent read_file tool returns truncated file content

Open peteloggin opened this issue 7 months ago • 6 comments

  1. VSCode Version: 1.99.30039 (system setup)
     Void Version: 1.4.4
     Commit: da425ab0fe34edfc3c082367edff7e05e0719b2d
     Date: 2025-06-10T00:36:08.103Z
     Electron: 34.3.2
     ElectronBuildId: undefined
     Chromium: 132.0.6834.210
     Node.js: 20.18.3
     V8: 13.2.152.41-electron.0
     OS: Windows_NT x64 10.0.26100

I'm reading and writing from a local directory (one folder on the desktop, with a single file, sudoku_gui.py). I'm using Qwen3-32B-Q4_K_M.gguf served from an LM Studio server on my network.

  2. I finally realized that when models tried to use the read_file tool, they weren't getting the full file.

Excerpt from Qwen3 when I asked it to read and explain my file:

First, I need to make sure I have the full content of the file. The last tool call used read_file on sudoku_gui.py, but the result seems cut off with "def __in...". That's probably because the file is large or there was a truncation.

The file is not large.

I've also had similar issues when asking for simple edits like renaming variables: the LLM was unable to use the search/replace tool because it didn't seem to know the original code.

peteloggin avatar Jun 13 '25 20:06 peteloggin

You need to edit MAX_FILE_CHARS_PAGE and maxCharsPerFile in the file prompts.ts so that the agent can read the full file. The default values are very restrictive. Here are my configs (a sketch of the edited constants follows the list):

  • MAX_FILE_CHARS_PAGE : 750,000 characters per file view
  • maxCharsPerFile : 2,000,000 characters maximum
  • MAX_TERMINAL_CHARS : 100,000 characters for terminal output
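
For reference, a minimal sketch of what those edited constants might look like in prompts.ts. Only the names and values come from this thread; the exact declaration form in Void's source is an assumption:

```ts
// In src\vs\workbench\contrib\void\common\prompt\prompts.ts
// (illustrative sketch; the surrounding code in Void may differ)

export const MAX_FILE_CHARS_PAGE = 750_000;  // chars per file view (page)
export const maxCharsPerFile = 2_000_000;    // hard cap on chars read from one file
export const MAX_TERMINAL_CHARS = 100_000;   // cap on captured terminal output
```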

To ensure agentic behavior, consider:

  1. Starting fresh conversations for complex tasks
  2. Keeping individual requests concise

adyanthm avatar Jun 21 '25 08:06 adyanthm

The file is located at src\vs\workbench\contrib\void\common\prompt\prompts.ts.

adyanthm avatar Jun 21 '25 08:06 adyanthm

Thanks for sharing your current configs. We didn't want to overflow the context with a huge file read that potentially doesn't mean a lot to an LLM (e.g. if the LLM is just "curious" what the file does, and goes to read it, but it's 100GB or something), but we should probably increase these numbers for Normal Chat mode!
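
For illustration, a char-capped, paginated read along these lines might look like the following Node/TypeScript sketch. readFilePage and the truncation marker are assumptions for this example, not Void's actual implementation:

```ts
import { promises as fs } from 'fs';

const MAX_FILE_CHARS_PAGE = 750_000;

// Return one "page" of a file, capped at MAX_FILE_CHARS_PAGE characters.
async function readFilePage(path: string, page = 1): Promise<string> {
  const contents = await fs.readFile(path, 'utf8');
  const start = (page - 1) * MAX_FILE_CHARS_PAGE;
  const chunk = contents.slice(start, start + MAX_FILE_CHARS_PAGE);
  const hasMore = start + chunk.length < contents.length;
  // Explicitly telling the model the result is truncated lets it request the
  // next page instead of guessing at the rest of the file.
  return hasMore
    ? chunk + `\n[truncated: call read_file again with page=${page + 1}]`
    : chunk;
}
```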

andrewpareles avatar Jun 29 '25 05:06 andrewpareles

Hey @andrewpareles, I think it would be a good feature if we could configure these values in the settings UI itself. I wouldn't have had to build from source if that option existed!

adyanthm avatar Jun 29 '25 07:06 adyanthm

Definitely, adding this to the roadmap.

andrewpareles avatar Jun 29 '25 22:06 andrewpareles

Hey, I decided to address this issue (along with #728). Let me know if there are any issues!

  • #867

rohitrrane avatar Aug 08 '25 15:08 rohitrrane