
How to Use MCP with SSE Transport (Practical Guide)

Written by Kiruthika
Reviewed by Ajay Patel
Apr 21, 2026
5 Min Read

Integrating external data sources with AI models often turns into custom glue code that is hard to maintain and difficult to standardize. I put this guide together to show how the Model Context Protocol (MCP) reduces that complexity by defining a consistent, transport-agnostic way for AI systems to interact with tools and services.

In this guide, we walk through building an MCP server and MCP client using Server-Sent Events (SSE), with clear, step-by-step instructions to set up and run both in a practical, real-world configuration.

What is MCP?

MCP is a standardized protocol that allows AI tools to interact with content repositories, business platforms, and development environments through a unified interface. By defining a common framework for these interactions, MCP improves the relevance, reliability, and context-awareness of AI applications.

It enables developers to build modular, secure, and flexible integrations without creating separate connectors for each data source.

With MCP, developers can:

  • Standardize AI interactions across tools and APIs
  • Maintain context while switching between applications
  • Improve modularity and composability in AI-powered systems

How to Install and Set Up MCP with SSE Transport

Before running MCP with SSE transport, we need to install the required dependencies and set up environment variables.

1. Install Required Packages

Create a virtual environment, activate it, and run the following commands to install the required dependencies:

python -m venv venv
source venv/bin/activate
pip install "mcp[cli]" anthropic python-dotenv requests
  • "mcp[cli]" – The MCP Python SDK plus its CLI tooling, used for client-server communication.
  • anthropic – API client for interacting with Claude models.
  • python-dotenv – To manage environment variables.
  • requests – To handle API requests.

2. Setting Up the .env File

Create a .env file in the project directory and add your API keys:

SERPER_API_KEY=your_serper_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here

This keeps sensitive credentials out of your source code and version control.
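python-dotenv reads this file and exports each key into the process environment. As a rough illustration of what `load_dotenv()` does, here is a simplified stdlib-only sketch (not the library's actual implementation):

```python
import os

def load_env_file(path: str) -> None:
    """Parse simple KEY=VALUE lines into os.environ.

    Comments, blank lines, and lines without '=' are skipped;
    existing environment variables are not overwritten.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# After loading, keys are available the usual way:
# load_env_file(".env")
# api_key = os.getenv("SERPER_API_KEY")
```

In the real project you simply call `load_dotenv()`; the sketch just shows why `os.getenv(...)` works afterwards.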

Building the MCP Server

Let's begin by creating an MCP server that provides two functionalities:

  1. A web search tool using the Serper API.
  2. A basic addition function.

Server Code Breakdown

1. Import Required Modules

from mcp.server.fastmcp import FastMCP
import requests
import os
from dotenv import load_dotenv

load_dotenv()

mcp = FastMCP()
  • FastMCP: Initialises the MCP server.
  • dotenv: Loads API keys from the .env file.

2. Web Search Tool Using Serper API

Configuring Tools in MCP

In MCP, each function wrapped with the @mcp.tool() decorator is considered a tool. This makes it easy to modularise functionalities. The description and input schema of the tool help the LLM decide which tool to use based on the user’s query.

  • The LLM inspects the tool descriptions and input schema to match the appropriate tool to the user’s query.
  • For instance, if a user asks “Add 5 and 10”, the LLM recognises this as a math operation and selects the add tool automatically.
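To make this concrete, here is a plain-Python sketch (no MCP or LLM involved) of the kind of name / description / input-schema metadata that `@mcp.tool()` publishes for each tool, with a toy keyword matcher standing in for the model's tool choice:

```python
# Toy registry mimicking the metadata @mcp.tool() exposes for each tool.
TOOLS = [
    {
        "name": "serper_search",
        "description": "Search the web using Serper API for user queries",
        "input_schema": {"type": "object", "properties": {"query": {"type": "string"}}},
    },
    {
        "name": "add",
        "description": "Add two numbers",
        "input_schema": {
            "type": "object",
            "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
        },
    },
]

def pick_tool(query: str) -> str:
    """Naive stand-in for the LLM: match query words against tool descriptions."""
    words = set(query.lower().split())
    best = max(TOOLS, key=lambda t: len(words & set(t["description"].lower().split())))
    return best["name"]
```

A real LLM reasons over the descriptions far more flexibly, but the inputs it works from are exactly this structure, which is why clear descriptions matter.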

For example:

API_KEY = os.getenv("SERPER_API_KEY")
API_URL = "https://google.serper.dev/search"

@mcp.tool()
def serper_search(query: str) -> dict:
    """Search the web using Serper API for user queries"""
    headers = {"X-API-KEY": API_KEY, "Content-Type": "application/json"}
    data = {"q": query}
    try:
        response = requests.post(API_URL, json=data, headers=headers, timeout=10)
        response.raise_for_status()
        result = response.json()
        print(f"Search result for '{query}': {result}")
        return result
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return {"error": str(e)}
  • Takes user queries and fetches search results from the Serper API.
  • Handles API errors gracefully and returns structured results.

3. Basic Arithmetic Tool

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    print(f"Adding {a} and {b}")
    return a + b
  • Simple function to demonstrate adding tools to MCP.

4. Running the MCP Server with SSE Transport

  • SSE transport streams messages from the server to the client over a persistent HTTP connection, while client-to-server messages are sent as HTTP POST requests.
if __name__ == "__main__":
    print("MCP server is running on port 8000")
    mcp.run(transport="sse")

Building the MCP Client with SSE Transport

With SSE transport, the server pushes messages to the client over a long-lived HTTP connection, while the client sends its messages to the server via HTTP POST.
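On the wire, SSE is simply a long-lived HTTP response whose body is a stream of newline-delimited `data:` fields, with events separated by blank lines. A minimal stdlib parser sketch, independent of the MCP libraries (which handle all of this for you):

```python
def parse_sse_events(raw: str) -> list[str]:
    """Split an SSE stream into the payloads of its `data:` fields.

    Events are separated by a blank line; an event may carry
    multiple data lines, which are rejoined with newlines.
    """
    events = []
    for block in raw.split("\n\n"):
        data_lines = [
            line[len("data:"):].lstrip()
            for line in block.splitlines()
            if line.startswith("data:")
        ]
        if data_lines:
            events.append("\n".join(data_lines))
    return events
```

In MCP's case each payload is a JSON-RPC message, but the framing is just this simple `data:`-line format.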

The client will:

  • Connect to the MCP server.
  • Use the Claude API to process natural language queries.
  • Identify and call appropriate tools dynamically.

MCP Client Code Breakdown for SSE Transport

1. Import Required Modules

Create a file named client.py and save the following code.

import asyncio
from typing import Optional
from contextlib import AsyncExitStack

from mcp import ClientSession
from mcp.client.sse import sse_client
from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()
  • asyncio: Handles asynchronous tasks.
  • mcp.ClientSession: Manages client-server interactions.
  • anthropic.Anthropic: Enables LLM-based processing.

2. Define MCP Client Class

MCP_SERVER_URL = "http://localhost:8000/sse"

class MCPClient:
    def __init__(self):
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.anthropic = Anthropic()
  • Handles connection lifecycle for interacting with the MCP server.
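The AsyncExitStack keeps the SSE stream and the session open for the client's lifetime and tears them down in reverse order when the stack closes. A small stdlib-only demonstration of that lifecycle (the resource names are illustrative, nothing MCP-specific):

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

log = []

@asynccontextmanager
async def resource(name: str):
    # Record when each resource opens and closes.
    log.append(f"open {name}")
    try:
        yield name
    finally:
        log.append(f"close {name}")

async def main():
    stack = AsyncExitStack()
    await stack.enter_async_context(resource("sse-stream"))
    await stack.enter_async_context(resource("session"))
    # Closing the stack unwinds in LIFO order: session first, then sse-stream.
    await stack.aclose()

asyncio.run(main())
```

This LIFO teardown is exactly what the client needs: the session is closed before the underlying SSE stream it depends on.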

3. Connect to the MCP Server

async def connect_to_server(self, url: str):
    """Connect to an MCP SSE server"""
    streams = await self.exit_stack.enter_async_context(sse_client(url=url))
    self.session = await self.exit_stack.enter_async_context(ClientSession(*streams))

    await self.session.initialize()

    response = await self.session.list_tools()
    tools = response.tools
    print("\nConnected to server with tools:", [tool.name for tool in tools])
  • Establishes connection and retrieves available tools.

4. Process Queries Using Claude & MCP Tools

async def process_query(self, query: str) -> str:
    messages = [{"role": "user", "content": query}]

    response = await self.session.list_tools()
    available_tools = [
        {"name": tool.name, "description": tool.description, "input_schema": tool.inputSchema} 
        for tool in response.tools
    ]

    response = self.anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1000,
        messages=messages,
        tools=available_tools
    )

    tool_results = []
    final_text = []

    for content in response.content:
        if content.type == "text":
            final_text.append(content.text)
        elif content.type == "tool_use":
            tool_name = content.name
            tool_args = content.input

            result = await self.session.call_tool(tool_name, tool_args)
            tool_results.append({"call": tool_name, "result": result})
            final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")

            messages.append({"role": "user", "content": result.content})

            response = self.anthropic.messages.create(
                model="claude-3-5-sonnet-20241022",
                max_tokens=1000,
                messages=messages,
            )
            final_text.append(response.content[0].text)

    return "\n".join(final_text)
  • Checks available tools, calls them dynamically, and interacts with Claude.
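The branching on `content.type` is the heart of this loop. Here is a simplified stand-alone sketch of the same dispatch, with plain dicts in place of the Anthropic response objects and a local tool table standing in for `session.call_tool` (the names here are illustrative, not the real SDK types):

```python
# Stand-ins for response.content blocks returned by the model.
content_blocks = [
    {"type": "text", "text": "I'll add those numbers."},
    {"type": "tool_use", "name": "add", "input": {"a": 9, "b": 11}},
]

# Stand-in for the tools the MCP server would execute.
local_tools = {"add": lambda a, b: a + b}

final_text = []
for block in content_blocks:
    if block["type"] == "text":
        # Plain text from the model is collected as-is.
        final_text.append(block["text"])
    elif block["type"] == "tool_use":
        # Tool calls are dispatched by name with the model-supplied arguments.
        result = local_tools[block["name"]](**block["input"])
        final_text.append(f"[Calling tool {block['name']} -> {result}]")
```

In the real client, the tool result is also appended to `messages` and sent back to Claude so it can produce a final natural-language answer.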

5. Interactive Chat Loop

async def chat_loop(self):
    print("\nMCP SSE Client Started!")
    print("Type your queries or 'quit' to exit.")

    while True:
        query = input("\nQuery: ").strip()
        if query.lower() == "quit":
            break
        response = await self.process_query(query)
        print("\n" + response)
  • Runs an interactive CLI where users can ask queries.

Running the MCP Client

The methods shown above all belong to the MCPClient class, so client.py still needs a small entry point that connects to the server and hands control to the chat loop:

async def main():
    client = MCPClient()
    try:
        await client.connect_to_server(MCP_SERVER_URL)
        await client.chat_loop()
    finally:
        await client.exit_stack.aclose()

if __name__ == "__main__":
    asyncio.run(main())

Once the server is running, start the client:

python client.py

Type queries like:

Query: Add 9 and 11

To exit, type:

Query: quit

Conclusion

This walkthrough showed how MCP with SSE transport can be used to build AI systems that stream data, invoke tools dynamically, and stay context-aware without tightly coupling logic across services. By separating tool execution from model reasoning, MCP makes these integrations easier to extend and safer to operate.

Whether you use SSE or explore alternatives like STDIO transport, the core value of MCP remains the same: a standardized, modular approach to connecting AI models with real-world systems in a maintainable way.

Frequently Asked Questions

1. What is MCP with SSE transport?

MCP with SSE transport uses Server-Sent Events to stream data between AI clients and MCP servers in real time over HTTP.

2. When should I use SSE transport in MCP?

Use MCP with SSE transport when you need real-time updates, remote connections, browser compatibility, or web-based AI applications.

3. What are the benefits of MCP SSE transport?

It offers persistent streaming, simpler real-time communication, easier web integration, and efficient server-to-client updates.

4. Is SSE better than STDIO for MCP?

SSE is often better for remote or web deployments, while STDIO is simpler for local development and CLI-based workflows.

5. Can Claude work with MCP SSE transport?

Yes. Claude can connect through MCP clients and use tools or external systems over SSE transport.

6. Is MCP SSE transport production-ready?

Yes, SSE is commonly used for production systems that need streaming responses and stable HTTP-based integrations.

Kiruthika

I'm an AI/ML engineer passionate about developing cutting-edge solutions. I specialize in machine learning techniques to solve complex problems and drive innovation through data-driven insights.

Share this article

Phone

Next for you

Active vs Total Parameters: What’s the Difference? Cover

AI

Apr 10, 20264 min read

Active vs Total Parameters: What’s the Difference?

Every time a new AI model is released, the headlines sound familiar. “GPT-4 has over a trillion parameters.” “Gemini Ultra is one of the largest models ever trained.” And most people, even in tech, nod along without really knowing what that number actually means. I used to do the same. Here’s a simple way to think about it: parameters are like knobs on a mixing board. When you train a neural network, you're adjusting millions (or billions) of these knobs so the output starts to make sense. M

Cost to Build a ChatGPT-Like App ($50K–$500K+) Cover

AI

Apr 7, 202610 min read

Cost to Build a ChatGPT-Like App ($50K–$500K+)

Building a chatbot app like ChatGPT is no longer experimental; it’s becoming a core part of how products deliver support, automate workflows, and improve user experience. The mobile app development cost to develop a ChatGPT-like app typically ranges from $50,000 to $500,000+, depending on the model used, infrastructure, real-time performance, and how the system handles scale. Most guides focus on features, but that’s not what actually drives cost here. The real complexity comes from running la

How to Build an AI MVP for Your Product Cover

AI

Apr 16, 202613 min read

How to Build an AI MVP for Your Product

I’ve noticed something while building AI products: speed is no longer the problem, clarity is. Most MVPs fail not because they’re slow, but because they solve the wrong problem. In fact, around 42% of startups fail due to a lack of market need. Building an AI MVP is not just about testing features; it’s about validating whether AI actually adds value. Can it automate something meaningful? Can it improve decisions or user experience in a way a simple system can’t? That’s where most teams get it