Update system prompt
What I Did
Referred to the leaked system prompts of tools like Cursor, Windsurf, Trae, and other agentic IDEs, and crafted a specialized system prompt. This new prompt is designed to make the model strictly adhere to both agentic-mode instructions and the user's intent, improving the overall developer experience.
What It Changes
The default system prompt isn't strict or directive enough for smaller, local, or non-agent-aware models. Such models often ask multiple clarifying questions (5–6 on average) before beginning the task.
This new prompt was designed and tuned specifically for:
- OpenAI's o-series models (e.g., GPT-4o)
- Locally hosted models
With this change, the model is capable of initiating full project generation from a single, high-level prompt, rather than requesting further clarification.
Key Improvements
- Enables models to actually modify files instead of only generating code blocks in responses.
- Makes the system prompt more actionable, reducing friction for developers and enabling real-time coding experiences.
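The first point is the key behavioral difference. As a rough sketch (the tool name, XML shape, and helper function below are illustrative assumptions, not Void's actual protocol), an agent-aware system prompt pushes the model to emit a structured tool call the client can parse and apply, rather than prose with a fenced code block that goes nowhere:

```typescript
// Hypothetical sketch -- parseToolCall and the <edit_file> tag are
// illustrative, not Void's real API. The point: a strict agent prompt
// makes the model's output machine-actionable.
type ToolCall = { tool: string; path: string; contents: string };

function parseToolCall(modelOutput: string): ToolCall | null {
	// Expect an XML-style block like:
	// <edit_file path="src/app.ts">...</edit_file>
	const m = modelOutput.match(/<edit_file path="([^"]+)">([\s\S]*?)<\/edit_file>/);
	if (!m) return null; // plain prose or a fenced code block: nothing to apply
	return { tool: 'edit_file', path: m[1], contents: m[2].trim() };
}

// A chat-style answer yields nothing to apply:
console.log(parseToolCall('Here is the code:\n```ts\nconsole.log(1)\n```'));
// -> null
// A tool call yields an edit the editor can perform:
console.log(parseToolCall('<edit_file path="src/app.ts">console.log(1)</edit_file>'));
```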
My Opinion
Compared to the default system prompt, this enhanced version is a major step forward — especially for locally hosted or prompt-sensitive models. While it still isn’t on par with Cursor or Windsurf’s prompt-engineering depth, it’s a substantial improvement.
👉 Please review and consider integrating this updated system prompt in the next release.
@adyanthm Can I ask one thing about this change?
As I read the PR, many of the prompts were updated for agent mode, ultimately agentSystemMessage().
I can see that chat_systemMessage() is used for chat mode, but I couldn't find where agentSystemMessage() is called.
Where is agentSystemMessage() used?
https://github.com/voideditor/void/blob/a7eb54a09bd264360e91690548ecc887ae4655a9/src/vs/workbench/contrib/void/browser/convertToLLMMessageService.ts#L596
```ts
// system message
private _generateChatMessagesSystemMessage = async (chatMode: ChatMode, specialToolFormat: 'openai-style' | 'anthropic-style' | 'gemini-style' | undefined) => {
	...
	const systemMessage = chat_systemMessage({ workspaceFolders, openedURIs, directoryStr, activeURI, persistentTerminalIDs, chatMode, mcpTools, includeXMLToolDefinitions })
	return systemMessage
}
```
Should agentSystemMessage() be called on that line, by checking whether the mode is agent?
```ts
const systemMessage = chatMode === 'agent'
	? agentSystemMessage({ workspaceFolders, openedURIs, directoryStr, activeURI, persistentTerminalIDs, mcpTools, includeXMLToolDefinitions })
	: chat_systemMessage({ workspaceFolders, openedURIs, directoryStr, activeURI, persistentTerminalIDs, chatMode, mcpTools, includeXMLToolDefinitions })
```
If I am wrong, please let me know.
Oh yes, sorry! I should have mentioned that my prompt is specifically modded for agent mode. It's from my modded fork, which only has agent mode, so I just set the system prompt without checking for agent mode. Can you fix it?
It seems I couldn't add a commit to your PR, so I'm just sharing the change here. As mentioned, the following change uses the updated prompt in agent mode and keeps the legacy prompt for the other modes (chat and gather).
https://github.com/voideditor/void/blob/a7eb54a09bd264360e91690548ecc887ae4655a9/src/vs/workbench/contrib/void/browser/convertToLLMMessageService.ts#L596
```ts
// before
const systemMessage = chat_systemMessage({ workspaceFolders, openedURIs, directoryStr, activeURI, persistentTerminalIDs, chatMode, mcpTools, includeXMLToolDefinitions })
```

```ts
// after
import { ..., agentSystemMessage } from "../common/prompt/prompts.js";
...
const systemMessage = chatMode === 'agent'
	? agentSystemMessage({ workspaceFolders, openedURIs, directoryStr, activeURI, persistentTerminalIDs, mcpTools, includeXMLToolDefinitions })
	: chat_systemMessage({ workspaceFolders, openedURIs, directoryStr, activeURI, persistentTerminalIDs, chatMode, mcpTools, includeXMLToolDefinitions })
```
You can check systemMessage with console.log(systemMessage) in chat, gather, and agent modes.
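For illustration, the same dispatch could also be written as a mode-to-builder map. The types, mode names, and parameter shapes below are simplified stand-ins, not Void's actual signatures; the idea is that a map keeps the dispatch in one place if more modes ever get dedicated prompts, instead of growing the ternary:

```typescript
// Hypothetical sketch -- mode names and parameter shapes are simplified
// stand-ins for the real builders in prompts.js.
type ChatMode = 'chat' | 'gather' | 'agent';
type PromptParams = { workspaceFolders: string[]; chatMode: ChatMode };

// Stand-ins for chat_systemMessage / agentSystemMessage:
const chat_systemMessage = (p: PromptParams) => `chat prompt (${p.chatMode})`;
const agentSystemMessage = (_p: { workspaceFolders: string[] }) => 'agent prompt';

// One entry per mode; chat and gather share the legacy prompt.
const promptBuilders: Record<ChatMode, (p: PromptParams) => string> = {
	chat: chat_systemMessage,
	gather: chat_systemMessage,
	agent: (p) => agentSystemMessage({ workspaceFolders: p.workspaceFolders }),
};

console.log(promptBuilders['agent']({ workspaceFolders: [], chatMode: 'agent' }));
// -> "agent prompt"
```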
I am busy with another project at the moment. I gave you collab access to my forked repo; please commit the changes there if you can.
Applied it.
@andrewpareles Look into this and merge this asap.