
MCP Practical Guide with SSE Transport

Written by Kiruthika
Reviewed by Ajay Patel
Sep 11, 2025
5 Min Read

Integrating external data sources with AI models often requires complex, time-consuming custom code. The Model Context Protocol (MCP) simplifies this by offering a standardized framework for seamless interaction.

In this guide, we will walk through building an MCP server and MCP client with Server-Sent Events (SSE), providing step-by-step instructions to set up and run both.

What is MCP?

MCP serves as a universal interface that allows AI tools to interact seamlessly with content repositories, business platforms, and development environments. By providing a standardized framework, MCP enhances the relevance and context-awareness of AI applications. 

It enables developers to build modular, secure, and flexible integrations without the need for separate connectors for each data source.

With MCP, developers can:

  • Standardize AI interactions with various tools and APIs.
  • Maintain context while switching between applications.
  • Improve modularity and composability in AI-powered systems.

Installation and Setup

Before running the MCP server and client, we need to install the required dependencies and set up environment variables.

1. Install Required Packages

Create a virtual environment, activate it, and run the following command to install the required dependencies:

python -m venv venv
source venv/bin/activate
pip install "mcp[cli]" anthropic python-dotenv requests
  • "mcp[cli]" – MCP client-server communication.
  • anthropic – API client for interacting with Claude models.
  • python-dotenv – To manage environment variables.
  • requests – To handle API requests.

2. Setting Up the .env File

Create a .env file in the project directory and add your API keys:

SERPER_API_KEY=your_serper_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here

This ensures sensitive credentials remain secure.
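To confirm both keys are picked up before you start building, here is a minimal sanity check (a throwaway sketch, assuming the .env file sits in the project root):

import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the process environment

# Fail fast if a key is missing, rather than at the first API call
for key in ("SERPER_API_KEY", "ANTHROPIC_API_KEY"):
    if not os.getenv(key):
        raise RuntimeError(f"{key} is not set - check your .env file")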

Building the MCP Server

Let's begin by creating an MCP server that provides two functionalities:

  1. A web search tool using the Serper API.
  2. A basic addition function.

Server Code Breakdown

1. Import Required Modules

from mcp.server.fastmcp import FastMCP
import requests
import os
from dotenv import load_dotenv

load_dotenv()

mcp = FastMCP()
  • FastMCP: Initializes the MCP server.
  • dotenv: Loads API keys from the .env file.

2. Web Search Tool Using Serper API

Configuring Tools in MCP

In MCP, each function wrapped with the @mcp.tool() decorator is registered as a tool, which makes it easy to modularize functionality.

  • The LLM inspects each tool's description and input schema to match the appropriate tool to the user's query.
  • For instance, if a user asks "Add 5 and 10?", the LLM recognizes this as a math operation and selects the add tool automatically.

For example:

API_KEY = os.getenv("SERPER_API_KEY")
API_URL = "https://google.serper.dev/search"

@mcp.tool()
def serper_search(query: str) -> dict:
    """Search the web using Serper API for user queries"""
    headers = {"X-API-KEY": API_KEY, "Content-Type": "application/json"}
    data = {"q": query}
    try:
        response = requests.post(API_URL, json=data, headers=headers)
        response.raise_for_status()
        result = response.json()
        print(f"Search result for '{query}': {result}")
        return result
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return {"error": str(e)}
  • Takes user queries and fetches search results from the Serper API.
  • Handles API errors gracefully and returns structured results.
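Before wiring the tool into a running server, you can sanity-check it with a direct call, since serper_search is still a plain Python function (a throwaway snippet, assuming a valid SERPER_API_KEY in your .env):

# Temporary check - call the function directly, bypassing MCP
result = serper_search("model context protocol")
print(result.get("organic", [])[:2])  # first two organic results, if present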

3. Basic Arithmetic Tool

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    print(f"Adding {a} and {b}")
    return a + b
  • Simple function to demonstrate adding tools to MCP.

4. Running the MCP Server with SSE Transport

  • SSE transport streams messages from the server to the client, while the client sends its messages back to the server over HTTP POST.

if __name__ == "__main__":
    print("MCP server is running on port 8000")
    mcp.run(transport="sse")
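FastMCP serves SSE on port 8000 by default. If you need a different host or port, recent versions of the Python SDK accept them as settings on the constructor (a sketch; check the signature in your installed version):

# Assumes your installed mcp SDK accepts host/port settings on the constructor
mcp = FastMCP("demo-server", host="0.0.0.0", port=9000)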

Building the MCP Client with Server-Sent Events (SSE) Transport

As noted above, SSE gives the client a persistent stream for server messages, while the client's own requests travel to the server over HTTP POST.

The client will:

  • Connect to the MCP server.
  • Use the Claude API to process natural language queries.
  • Identify and call appropriate tools dynamically.

Client Code Breakdown

1. Import Required Modules

Create a file named client.py and save the following code.

import asyncio
from typing import Optional
from contextlib import AsyncExitStack

from mcp import ClientSession
from mcp.client.sse import sse_client
from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()
  • asyncio: Handles asynchronous tasks.
  • mcp.ClientSession: Manages client-server interactions.
  • anthropic.Anthropic: Enables LLM-based processing.

2. Define MCP Client Class

MCP_SERVER_URL = "http://localhost:8000/sse"

class MCPClient:
    def __init__(self):
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.anthropic = Anthropic()
  • Handles connection lifecycle for interacting with the MCP server.
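Because the class opens the SSE stream and session through AsyncExitStack, it's worth adding a small cleanup method (a minimal addition, used by the entry point shown later) so those resources close cleanly:

async def cleanup(self):
    """Close the client session and the SSE stream"""
    await self.exit_stack.aclose()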

3. Connect to the MCP Server

async def connect_to_server(self, url: str):
    """Connect to an MCP SSE server"""
    # Open the SSE connection and keep it alive on the exit stack
    streams = await self.exit_stack.enter_async_context(sse_client(url=url))
    self.session = await self.exit_stack.enter_async_context(ClientSession(*streams))

    await self.session.initialize()

    # List the tools the server exposes
    response = await self.session.list_tools()
    tools = response.tools
    print("\nConnected to server with tools:", [tool.name for tool in tools])
  • Establishes connection and retrieves available tools.

4. Process Queries Using Claude & MCP Tools

async def process_query(self, query: str) -> str:
    messages = [{"role": "user", "content": query}]

    # Fetch the server's tool list and convert it to Anthropic's tool format
    response = await self.session.list_tools()
    available_tools = [
        {"name": tool.name, "description": tool.description, "input_schema": tool.inputSchema}
        for tool in response.tools
    ]

    # First pass: Claude either answers directly or requests a tool call
    response = self.anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1000,
        messages=messages,
        tools=available_tools
    )

    tool_results = []
    final_text = []

    for content in response.content:
        if content.type == "text":
            final_text.append(content.text)
        elif content.type == "tool_use":
            tool_name = content.name
            tool_args = content.input

            # Execute the requested tool on the MCP server
            result = await self.session.call_tool(tool_name, tool_args)
            tool_results.append({"call": tool_name, "result": result})
            final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")

            # Feed the tool output back to Claude for a final answer
            messages.append({"role": "user", "content": result.content})

            response = self.anthropic.messages.create(
                model="claude-3-5-sonnet-20241022",
                max_tokens=1000,
                messages=messages,
            )
            final_text.append(response.content[0].text)

    return "\n".join(final_text)
  • Checks available tools, calls them dynamically, and interacts with Claude.

5. Interactive Chat Loop

async def chat_loop(self):
    print("\nMCP SSE Client Started!")
    print("Type your queries or 'quit' to exit.")

    while True:
        query = input("\nQuery: ").strip()
        if query.lower() == "quit":
            break
        response = await self.process_query(query)
        print("\n" + response)
  • Runs an interactive CLI where users can ask queries.
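Finally, client.py needs an entry point that wires these methods together. A minimal sketch (using the cleanup method added earlier and the MCP_SERVER_URL constant defined above):

async def main():
    client = MCPClient()
    try:
        await client.connect_to_server(MCP_SERVER_URL)
        await client.chat_loop()
    finally:
        # Always close the SSE stream and session, even on errors
        await client.cleanup()

if __name__ == "__main__":
    asyncio.run(main())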

Running the MCP Client

Once the server is running, start the client:

python client.py 

Type queries like:

Query: Add 9 and 11

To exit, type:

Query: quit

Conclusion

Congratulations! You've just unlocked a powerful way to build AI systems that communicate via server-sent events. By implementing MCP with SSE transport, you've gained the ability to create real-time, streaming connections between your AI models and external tools. This tutorial demonstrated: 

  • Setting up MCP Server & Client with Server-Sent Events (SSE)
  • Building AI-assisted tools
  • Using Claude & real-time tool execution

MCP is a powerful way to standardize, modularize, and secure AI interactions, and you can also explore how STDIO transport in MCP works if you’re interested in another transport mechanism.

Kiruthika

I'm an AI/ML engineer passionate about developing cutting-edge solutions. I specialize in machine learning techniques to solve complex problems and drive innovation through data-driven insights.
