# Gemini CLI hangs with `query` command when using stdio-based MCP servers
### What happened?
Gemini CLI hangs indefinitely when using the `query` command with stdio-based MCP (Model Context Protocol) servers. The server initializes correctly and responds to `tools/list`, but Gemini CLI never sends the subsequent `tools/call` request, causing the process to hang.
### What did you expect to happen?

The MCP server should:

- Receive `initialize` request
- Receive `notifications/initialized` notification
- Receive `tools/list` request
- Receive `tools/call` request with the tool name and arguments
- Process the tool call and return the result
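The expected sequence above can be sketched as the newline-delimited JSON-RPC 2.0 messages a stdio MCP client writes to the server's stdin (a minimal illustration, not Gemini CLI's actual traffic — the `protocolVersion`, `clientInfo`, tool name, and arguments shown are assumptions based on the reference sequential-thinking server):

```python
import json

# The four client-to-server messages, in the order the server should see them.
# "notifications/initialized" carries no "id" because it is a notification,
# not a request, so no response is expected for it.
handshake = [
    {"jsonrpc": "2.0", "id": 1, "method": "initialize",
     "params": {"protocolVersion": "2024-11-05", "capabilities": {},
                "clientInfo": {"name": "example-client", "version": "0.1.0"}}},
    {"jsonrpc": "2.0", "method": "notifications/initialized"},
    {"jsonrpc": "2.0", "id": 2, "method": "tools/list"},
    # This is the request that, per this report, Gemini CLI never sends:
    {"jsonrpc": "2.0", "id": 3, "method": "tools/call",
     "params": {"name": "sequentialthinking",
                "arguments": {"thought": "example", "thoughtNumber": 1,
                              "totalThoughts": 1, "nextThoughtNeeded": False}}},
]

# stdio transport frames each message as one JSON object per line.
wire = "\n".join(json.dumps(m) for m in handshake) + "\n"
print(wire)
```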
### Client information

```
╭───────────────────────────────────────────────────────────────────────────╮
│                                                                           │
│  About Gemini CLI                                                         │
│                                                                           │
│  CLI Version     0.9.0                                                    │
│  Git Commit      a93d92a3                                                 │
│  Model           gemini-2.5-pro                                           │
│  Sandbox         no sandbox                                               │
│  OS              linux                                                    │
│  Auth Method     OAuth                                                    │
│  GCP Project     gemini-code-####                                         │
│                                                                           │
╰───────────────────────────────────────────────────────────────────────────╯
```
### Login information

No response
### Anything else we need to know?

#### Environment

- Gemini CLI version: 0.9.0 (latest)
- OS: Linux (WSL2)
- Node.js: v22.20.0
#### Steps to Reproduce

1. Install the reference sequential-thinking MCP server:

   ```
   gemini mcp add sequential-thinking npx -y @modelcontextprotocol/server-sequential-thinking
   ```

2. Run with the `query` command:

   ```
   gemini query "add 1 thought to our sequential-thinking mcp"
   ```

3. Observe that the command hangs indefinitely after the MCP server initializes.
#### Debug Logs

```
[DEBUG] CLI: Delegating hierarchical memory load to server for CWD: /home/local/projects/geminicli-usage (memoryImportFormat: tree)
[DEBUG] [MemoryDiscovery] Loading server hierarchical memory load to server for CWD: /home/local/projects/geminicli-usage (importFormat: tree)
[DEBUG] [MemoryDiscovery] Searching for GEMINI.md starting from CWD: /home/local/projects/geminicli-usage
[DEBUG] [MemoryDiscovery] Determined project root: /home/local/projects/geminicli-usage
[DEBUG] [BfsFileSearch] Scanning [1/200]: batch of 1
[DEBUG] [BfsFileSearch] Scanning [8/200]: batch of 7
[DEBUG] [BfsFileSearch] Scanning [16/200]: batch of 8
[DEBUG] [BfsFileSearch] Scanning [31/200]: batch of 15
[DEBUG] [BfsFileSearch] Scanning [46/200]: batch of 15
[DEBUG] [BfsFileSearch] Scanning [61/200]: batch of 15
[DEBUG] [BfsFileSearch] Scanning [76/200]: batch of 15
[DEBUG] [BfsFileSearch] Scanning [91/200]: batch of 15
[DEBUG] [BfsFileSearch] Scanning [104/200]: batch of 13
[DEBUG] [BfsFileSearch] Scanning [106/200]: batch of 2
[DEBUG] [MemoryDiscovery] Final ordered GEMINI.md paths to read: []
[DEBUG] [MemoryDiscovery] No GEMINI.md files found in hierarchy of the workspace.
Loaded cached credentials.
Flushing log events to Clearcut.
[AgentRegistry] Initialized with 1 agents.
[DEBUG] [MCP STDERR (sequential-thinking)]: Sequential Thinking MCP Server running on stdio
Flushing log events to Clearcut.
Session ID: e6d5f67e-b5a7-4c12-bc6f-f1accc709771
```
#### Workarounds

1. Interactive mode works correctly:

   ```
   gemini
   > add 1 thought to our sequential-thinking mcp
   ```

2. HTTP-based MCP servers work with the `query` command: servers using `mcp-remote` as a proxy/bridge (which converts stdio to HTTP transport) work correctly with the `query` command.
#### Root Cause Analysis

The issue is specific to stdio-based MCP servers. Since HTTP-based servers (accessed via `mcp-remote`) work correctly with the `query` command, the problem appears to lie in how Gemini CLI handles stdio communication with MCP servers in that mode rather than in the servers themselves.
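To help isolate the client side of the hang, the full stdio handshake can be driven by hand. The sketch below spawns a toy stdio JSON-RPC responder (an inline stand-in, not the real sequential-thinking server) and walks `initialize` → `notifications/initialized` → `tools/list` → `tools/call`, reading each reply before sending the next message. Swapping the command for the real server (e.g. `npx -y @modelcontextprotocol/server-sequential-thinking`) would show whether the server completes the same sequence when a client actually continues it; everything here is an illustrative assumption, not Gemini CLI's internals:

```python
import json
import subprocess
import sys

# Toy stdio "server": replies to any JSON-RPC request on stdin, echoing the
# method name back; notifications (no "id") get no reply, matching JSON-RPC.
TOY_SERVER = r'''
import json, sys
for line in sys.stdin:
    msg = json.loads(line)
    if "id" in msg:
        reply = {"jsonrpc": "2.0", "id": msg["id"],
                 "result": {"echoedMethod": msg["method"]}}
        print(json.dumps(reply), flush=True)
'''

proc = subprocess.Popen([sys.executable, "-c", TOY_SERVER],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

def rpc(method, msg_id=None, params=None):
    """Write one newline-delimited message; read a reply only for requests."""
    msg = {"jsonrpc": "2.0", "method": method}
    if msg_id is not None:
        msg["id"] = msg_id
    if params is not None:
        msg["params"] = params
    proc.stdin.write(json.dumps(msg) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline()) if msg_id is not None else None

init = rpc("initialize", 1, {"protocolVersion": "2024-11-05", "capabilities": {},
                             "clientInfo": {"name": "probe", "version": "0.0.1"}})
rpc("notifications/initialized")  # notification: fire and forget
listed = rpc("tools/list", 2)
called = rpc("tools/call", 3, {"name": "sequentialthinking", "arguments": {}})
proc.stdin.close()
proc.wait()
print(called["result"]["echoedMethod"])
```

If the real server completes this sequence under such a probe while Gemini CLI's `query` command stalls after `tools/list`, that would be consistent with the client never issuing `tools/call`, as the debug logs suggest.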