
MCP Practical Guide with STDIO Transport

Written by Kiruthika
Reviewed by Ajay Patel
Sep 29, 2025
6 Min Read

What if you could teach an AI to search the internet or solve math problems on its own? This guide shows you how to use MCP (Model Context Protocol) to give AI models new abilities. We'll walk step by step through building connections between an AI model and your own tools using standard input/output (stdio).

You'll learn how to make AI models like Claude use tools you create, all in real time! Whether you want the AI to look things up online or crunch numbers, this guide has everything you need. By the end, you'll be able to make an AI model do far more than it can out of the box.

Let's dive in and start building!

Installation and Setup

1. Install Required Packages

Create a virtual environment, activate it, and install the required dependencies:

python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install "mcp[cli]" anthropic python-dotenv requests
  • "mcp[cli]" – MCP client-server communication.
  • anthropic – API client for interacting with Claude models.
  • python-dotenv – Manages environment variables.
  • requests – To handle API requests.

2. Setting Up the .env File

Create a .env file and add your API keys:

SERPER_API_KEY=your_serper_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here

This keeps sensitive credentials out of your source code.
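If you want to confirm the keys load correctly before wiring anything up, a quick optional sanity check looks like this:

from dotenv import load_dotenv
import os

load_dotenv()
assert os.getenv("SERPER_API_KEY"), "SERPER_API_KEY missing from .env"
assert os.getenv("ANTHROPIC_API_KEY"), "ANTHROPIC_API_KEY missing from .env"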

Building the MCP Server

Let's begin by creating an MCP server that exposes two tools:

  1. A web search tool using the Serper API.
  2. A basic addition function.

Server Code Breakdown

1. Import Required Modules

from mcp.server.fastmcp import FastMCP
import requests
import os
import sys  # for stderr logging; stdout is reserved for the stdio transport
from dotenv import load_dotenv

load_dotenv()

mcp = FastMCP()
  • FastMCP: Initializes the MCP server.
  • dotenv: Loads API keys from the .env file.
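FastMCP also accepts an optional human-readable server name that clients see when they inspect the server. The name below is purely illustrative:

mcp = FastMCP("serper-demo")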

2. Web Search Tool Using Serper API

Configuring Tools in MCP

  • In MCP, any function decorated with @mcp.tool() is treated as a tool. Each tool includes a description and input schema, which the LLM uses to determine the best match for a given user query. 
  • For example, if a user asks, “What is Model Context Protocol?”, the LLM identifies this as a search-related request and automatically selects the appropriate serper_search tool.

For example:

API_KEY = os.getenv("SERPER_API_KEY")
API_URL = "https://google.serper.dev/search"

@mcp.tool()
def serper_search(query: str) -> dict:
    """Search the web using Serper API for user queries"""
    headers = {"X-API-KEY": API_KEY, "Content-Type": "application/json"}
    data = {"q": query}
    try:
        response = requests.post(API_URL, json=data, headers=headers, timeout=10)
        response.raise_for_status()
        result = response.json()
        # Log to stderr: with stdio transport, stdout carries the MCP protocol
        print(f"Search result for '{query}': {result}", file=sys.stderr)
        return result
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}", file=sys.stderr)
        return {"error": str(e)}
  • Takes user queries and fetches search results from the Serper API.
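Under the hood, FastMCP builds the tool's description from the docstring and its input schema from the type hints. Conceptually (simplified, and matching the shape the client later passes to Claude), the LLM sees something like:

{
    "name": "serper_search",
    "description": "Search the web using Serper API for user queries",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"]
    }
}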

3. Basic Arithmetic Tool

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    print(f"Adding {a} and {b}", file=sys.stderr)  # again, log to stderr, not stdout
    return a + b
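For a query like "What is 5 + 7?", Claude picks this tool and responds with a tool_use content block along these lines (the id value is illustrative):

{
    "type": "tool_use",
    "id": "toolu_01...",
    "name": "add",
    "input": {"a": 5, "b": 7}
}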

4. Running the MCP Server with stdio transport

  • The stdio transport facilitates communication via standard input and output streams. It's especially effective for local integrations and command-line applications.
if __name__ == "__main__":
    mcp.run(transport="stdio")
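Before writing a client, you can optionally sanity-check the server with the MCP Inspector bundled with the CLI extra (assuming the file is saved as serper_server.py, as in the run command later):

mcp dev serper_server.py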

Using MCP Client with Standard Input/Output (stdio) Transport

  • With the stdio transport, you don’t need to run the server separately. You can start the client by providing the server script's path directly via the command line. This makes it convenient for testing and quick deployments.
  • Create a client.py file and add the following code.

Code Walkthrough

1. Importing Required Libraries

import asyncio
import sys
from typing import Optional
from contextlib import AsyncExitStack
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()
  • AsyncExitStack: Ensures proper cleanup of async resources.
  • ClientSession: Manages the connection to the server.
  • stdio_client: Sets up the stdio transport.
  • Anthropic: API client for the Claude model; it reads ANTHROPIC_API_KEY from the environment.
  • load_dotenv(): Loads the environment variables defined in .env.

2. Defining the MCP Client Class

class MCPClient:
    def __init__(self):
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.anthropic = Anthropic()
  • Purpose: Initializes the client session and sets up necessary components.
  • AsyncExitStack: Manages async resources, ensuring proper cleanup.
  • Anthropic: Integrates the LLM (Claude model in this case).

3. Connecting to the MCP Server

async def connect_to_server(self, server_script_path: str):
    """Connect to an MCP server
    
    Args:
        server_script_path: Path to the server script (.py or .js)
    """
    # Determine script type
    is_python = server_script_path.endswith('.py')
    is_js = server_script_path.endswith('.js')
    
    if not (is_python or is_js):
        raise ValueError("Server script must be a .py or .js file")
    
    # Choose command based on file type
    command = "python" if is_python else "node"
    
    # Set up Stdio transport parameters
    server_params = StdioServerParameters(
        command=command,
        args=[server_script_path],
        env=None
    )
    
    # Establish stdio transport
    stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
    self.stdio, self.write = stdio_transport
    self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
    
    await self.session.initialize()
    
    # List available tools
    response = await self.session.list_tools()
    tools = response.tools
    print("\nConnected to server with tools:", [tool.name for tool in tools])
  • server_script_path: Path to the server script provided via the command line.
  • StdioServerParameters: Sets up how the client communicates with the server.
  • stdio_client(): Creates a standard I/O transport client.
  • list_tools(): Lists available tools on the server.
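With the server from this guide, a successful connection prints something like:

Connected to server with tools: ['serper_search', 'add']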

4. Processing Queries

async def process_query(self, query: str) -> str:
    """Process a query using Claude and available tools"""
    messages = [{"role": "user", "content": query}]
    
    # Get available tools from the server
    response = await self.session.list_tools()
    available_tools = [
        {
            "name": tool.name,
            "description": tool.description,
            "input_schema": tool.inputSchema
        } 
        for tool in response.tools
    ]
    
    # Generate response using Claude
    response = self.anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1000,
        messages=messages,
        tools=available_tools
    )
    
    tool_results = []
    final_text = []

    # Process each content piece in response
    for content in response.content:
        if content.type == 'text':
            final_text.append(content.text)
        
        elif content.type == 'tool_use':
            tool_name = content.name
            tool_args = content.input
            
            # Call the tool and get results
            result = await self.session.call_tool(tool_name, tool_args)
            tool_results.append({"call": tool_name, "result": result})
            final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
            
            # Update conversation history
            if hasattr(content, 'text') and content.text:
                messages.append({"role": "assistant", "content": content.text})
            messages.append({"role": "user", "content": result.content})
            
            # Generate follow-up response
            response = self.anthropic.messages.create(
                model="claude-3-5-sonnet-20241022",
                max_tokens=1000,
                messages=messages,
            )
            final_text.append(response.content[0].text)
    
    return "\n".join(final_text)
  • Dynamic Tool Selection: Tools are selected based on the user query and tool description.
  • Asynchronous Calls: Each tool is invoked without blocking the main loop.
  • Conversation History: Maintains context for better interaction.
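To make the flow concrete, here is roughly how the conversation history evolves during a single tool call in this (deliberately simplified) loop:

# 1. The user's query starts the conversation
messages = [{"role": "user", "content": "What is Model Context Protocol?"}]
# 2. Claude answers with a tool_use block; the client runs the tool via
#    session.call_tool() and appends the raw output as the next user turn
messages.append({"role": "user", "content": result.content})
# 3. A second messages.create() call turns that raw tool output
#    into the final natural-language answer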

5. Interactive Chat Loop

async def chat_loop(self):
    """Run an interactive chat loop"""
    print("\nMCP Client Started!")
    print("Type your queries or 'quit' to exit.")
    
    while True:
        try:
            query = input("\nQuery: ").strip()
            
            if query.lower() == 'quit':
                break
                
            response = await self.process_query(query)
            print("\n" + response)
                
        except Exception as e:
            print(f"\nError: {str(e)}")
  • Interactive Mode: Keeps running until the user types quit.
  • Error Handling: Catches and prints exceptions so one failed query doesn't end the session.

6. Cleaning Up Resources

async def cleanup(self):
    """Clean up resources"""
    await self.exit_stack.aclose()
  • Purpose: Ensures all resources are closed properly.
  • AsyncExitStack: Closes the session and the server subprocess in the reverse order they were opened.
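Because AsyncExitStack is itself an async context manager, an equivalent pattern (just a sketch of the same idea) is to let an async with block handle the cleanup:

async with AsyncExitStack() as stack:
    stdio, write = await stack.enter_async_context(stdio_client(server_params))
    session = await stack.enter_async_context(ClientSession(stdio, write))
    # ... use the session; everything closes automatically when the block exits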

7. Main Function

async def main():
    if len(sys.argv) < 2:
        print("Usage: python client.py <path_to_server_script>")
        sys.exit(1)
        
    client = MCPClient()
    try:
        await client.connect_to_server(sys.argv[1])
        await client.chat_loop()
    finally:
        await client.cleanup()

if __name__ == "__main__":
    asyncio.run(main())
  • Command Line Argument: Expects the path to the server script.
  • asyncio.run(): Runs the async main function.

Why Use Stdio Transport?

  • No Need to Start Server Separately: The server is launched as a subprocess.
  • Simple & Efficient: Uses standard input/output for communication.
  • Quick Debugging: Ideal for development and testing.
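Under the hood, the client and the spawned server process exchange newline-delimited JSON-RPC 2.0 messages over the pipe. Schematically (simplified; real messages carry more fields), a tool invocation looks like:

{"jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": {"name": "add", "arguments": {"a": 5, "b": 7}}}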

Running the MCP Client

python client.py /path/to/serper_server.py
  • client.py: The MCP client script.
  • /path/to/serper_server.py: The MCP server script.

Type your queries and press Enter in the terminal:

Query: What is Model Context Protocol?

To exit, type:

Query: quit
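Based on the client's print statements, a session looks roughly like this (the final answer text is illustrative):

MCP Client Started!
Type your queries or 'quit' to exit.

Query: What is Model Context Protocol?

[Calling tool serper_search with args {'query': 'What is Model Context Protocol?'}]
Model Context Protocol (MCP) is an open standard for connecting AI models to external tools and data sources...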
Suggested Read: MCP Practical Guide with SSE Transport

Conclusion

I hope this guide has equipped you with the practical knowledge to implement MCP with stdio transport! You've learned how to create a seamless integration between AI models and custom tools using a straightforward communication approach, and you can now bring MCP into your own AI-driven applications. This tutorial demonstrated:

  • Setting up MCP Server & Client with Standard Input/Output (stdio)
  • Building AI-assisted tools
  • Using Claude & real-time tool execution

Need Expert Help?

Want to go beyond this tutorial and actually wire up MCP-powered tools to your own products? Many teams that hire AI developers come to us when they need to integrate custom servers, real-time model calls, or advanced transports like STDIO into production. Our specialists can help you design the MCP server/client architecture, connect it to models such as Claude, and build reliable tool-use pipelines so your AI can search the web, solve problems, and execute workflows on its own, safely and at scale.

Kiruthika

I'm an AI/ML engineer passionate about developing cutting-edge solutions. I specialize in machine learning techniques to solve complex problems and drive innovation through data-driven insights.
