
MCP Client Implementation using Python, LangGraph and Gemini

🚀 MCP Client with Gemini AI

📢 Subscribe to The AI Language on YouTube!

Welcome! This project provides multiple MCP clients integrated with Google Gemini AI to execute tasks via the Model Context Protocol (MCP), with and without LangChain.

Happy building, and don't forget to subscribe!

MCP Client Options

This repository includes four MCP client options for different use cases:

| Option | Client Script | LangChain | Config Support | Transport | Tutorial |
|--------|---------------|-----------|----------------|-----------|----------|
| 1 | client.py | ❌ | ❌ | STDIO | Legacy Client |
| 2 | langchain_mcp_client.py | ✅ | ❌ | STDIO | LangChain Client |
| 3 | langchain_mcp_client_wconfig.py | ✅ | ✅ | STDIO | Multi-Server |
| 4 | client_sse.py | ❌ | ❌ | SSE (Local & Web) | SSE Client |

If you want to add or reuse MCP Servers, check out the MCP Servers repo.


✪ Features

✅ Connects to an MCP server (STDIO or SSE)
✅ Uses Google Gemini AI to interpret user prompts
✅ Allows Gemini to call MCP tools via the server
✅ Executes tool commands and returns results
✅ (Upcoming) Maintains context and history for conversations


Running the MCP Client

Choose the appropriate command for your preferred client:

  • Legacy STDIO: `uv run client.py path/to/server.py`
  • LangChain STDIO: `uv run langchain_mcp_client.py path/to/server.py`
  • LangChain Multi-Server STDIO: `uv run langchain_mcp_client_wconfig.py path/to/config.json`
  • SSE Client: `uv run client_sse.py sse_server_url`
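
For the multi-server client, config.json lists the MCP servers to launch. The exact schema depends on this client's implementation; the sketch below follows the `mcpServers` shape used by many MCP hosts, and the server name and path are placeholders:

```json
{
  "mcpServers": {
    "terminal_server": {
      "command": "python",
      "args": ["path/to/server.py"]
    }
  }
}
```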

Project Structure

mcp-client-gemini/
├── client.py                        # Basic client (STDIO)
├── langchain_mcp_client.py          # LangChain + Gemini
├── langchain_mcp_client_wconfig.py  # LangChain + config.json (multi-server)
├── client_sse.py                    # SSE transport client (local or remote)
├── .env                             # API key environment file
├── README.md                        # Project documentation
├── requirements.txt                 # Dependency list
├── .gitignore                       # Git ignore rules
└── LICENSE                          # License information

How It Works

  1. You send a prompt:

    Create a file named test.txt

  2. The prompt is sent to Google Gemini AI.
  3. Gemini uses available MCP tools to determine a response.
  4. The selected tool is executed on the connected server.
  5. The AI returns results and maintains conversation context (if supported).
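
The loop above can be sketched in plain Python. The helper names below (`call_gemini`, `mcp_call_tool`) are hypothetical stand-ins for the real Gemini SDK call and the MCP session's tool invocation; an actual client passes the server's tool schemas to Gemini and executes whichever tool call the model returns:

```python
# Hedged sketch of the prompt -> tool-call -> result loop.
# call_gemini and mcp_call_tool are stand-ins, not real SDK functions.

def call_gemini(prompt, tools):
    # Stand-in for the Gemini API: pretend the model always picks
    # the first available tool and forwards the prompt as its argument.
    return {"tool": tools[0]["name"], "arguments": {"text": prompt}}

def mcp_call_tool(name, arguments):
    # Stand-in for session.call_tool(name, arguments) on an MCP session.
    if name == "write_file":
        return f"created {arguments['text']}"
    raise ValueError(f"unknown tool: {name}")

def handle_prompt(prompt, tools):
    decision = call_gemini(prompt, tools)   # steps 2-3: model selects a tool
    return mcp_call_tool(decision["tool"], decision["arguments"])  # steps 4-5

tools = [{"name": "write_file", "description": "Create a file"}]
print(handle_prompt("test.txt", tools))  # -> created test.txt
```

A real implementation replaces both stand-ins: the Gemini response is parsed for function calls, and each one is dispatched to the connected MCP server.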

🤝 Contributing

At this time, this project does not accept external code contributions.

This is to keep licensing simple and avoid any shared copyright.

You're very welcome to:
✅ Report bugs or request features (via GitHub Issues)
✅ Fork the repo and build your own version
✅ Suggest documentation improvements

If you'd like to collaborate in another way, feel free to open a discussion!