
MCP Practical Guide with SSE Transport

May 9, 2025 · 5 Min Read
Written by Kiruthika
Reviewed by Ajay Patel

Integrating external data sources with AI models often requires complex and time-consuming custom coding. The Model Context Protocol (MCP) simplifies this by offering a standardised framework for seamless interaction.

In this guide, we will walk through building an MCP server and MCP client with Server-Sent Events (SSE), providing step-by-step instructions to set up and run both.

What is MCP?

MCP serves as a universal interface that allows AI tools to interact seamlessly with content repositories, business platforms, and development environments. By providing a standardized framework, MCP enhances the relevance and context-awareness of AI applications. 

It enables developers to build modular, secure, and flexible integrations without the need for separate connectors for each data source.

With MCP, developers can:

  • Standardize AI interactions with various tools and APIs.
  • Maintain context while switching between applications.
  • Improve modularity and composability in AI-powered systems.

Installation and Setup

Before running the MCP server and client, we need to install the required dependencies and set up environment variables.

1. Install Required Packages

Create and activate a virtual environment, then run the following command to install the required dependencies:

python -m venv venv
source venv/bin/activate
pip install "mcp[cli]" anthropic python-dotenv requests
  • "mcp[cli]" – the MCP Python SDK (with CLI extras) for client-server communication.
  • anthropic – API client for interacting with Claude models.
  • python-dotenv – loads environment variables from a .env file.
  • requests – handles HTTP requests to the Serper API.

2. Setting Up the .env File

Create a .env file in the project directory and add your API keys:

SERPER_API_KEY=your_serper_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here

This keeps sensitive credentials out of your source code; add .env to .gitignore so the keys are never committed.
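To confirm the keys are picked up, here is a quick sanity check you can run once (an illustrative snippet, not part of the server or client code):

import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the process environment

# Fail fast if either key is missing
for key in ("SERPER_API_KEY", "ANTHROPIC_API_KEY"):
    if not os.getenv(key):
        raise RuntimeError(f"{key} is not set - check your .env file")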

Building the MCP Server

Let's begin by creating an MCP server that exposes two tools:

  1. A web search tool using the Serper API.
  2. A basic addition function.

Server Code Breakdown

1. Import Required Modules

from mcp.server.fastmcp import FastMCP
import requests
import os
from dotenv import load_dotenv

load_dotenv()

mcp = FastMCP()
  • FastMCP: Initialises the MCP server.
  • dotenv: Loads API keys from the .env file.


2. Web Search Tool Using Serper API

Configuring Tools in MCP

In MCP, each function wrapped with the @mcp.tool() decorator is considered a tool. This makes it easy to modularise functionalities. The description and input schema of the tool help the LLM decide which tool to use based on the user’s query.

  • The LLM inspects the tool descriptions and input schema to match the appropriate tool to the user’s query.
  • For instance, if a user asks "Add 5 and 10?", the LLM recognises this as a math operation and selects the add tool automatically.

For example:

API_KEY = os.getenv("SERPER_API_KEY")
API_URL = "https://google.serper.dev/search"

@mcp.tool()
def serper_search(query: str) -> dict:
    """Search the web using Serper API for user queries"""
    headers = {"X-API-KEY": API_KEY, "Content-Type": "application/json"}
    data = {"q": query}
    try:
        response = requests.post(API_URL, json=data, headers=headers, timeout=10)  # timeout guards against a hung request
        response.raise_for_status()
        result = response.json()
        print(f"Search result for '{query}': {result}")
        return result
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return {"error": str(e)}
  • Takes user queries and fetches search results from the Serper API.
  • Handles API errors gracefully and returns structured results.
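In the official Python SDK, the @mcp.tool() decorator registers the function and returns it, so you can still call it directly for a quick test before involving any client. An illustrative check, assuming SERPER_API_KEY is set:

# Quick manual test - run inside the server file or a REPL with the same imports
result = serper_search("model context protocol")
print(result.get("organic", [])[:1])  # Serper lists web results under the "organic" key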

3. Basic Arithmetic Tool

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    print(f"Adding {a} and {b}")
    return a + b
  • Simple function to demonstrate adding tools to MCP.
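For illustration, here is roughly how this tool appears to a client via list_tools(). FastMCP generates the schema from the type hints, so the exact output may differ slightly:

# Approximate tool description generated from the signature above
{
    "name": "add",
    "description": "Add two numbers",
    "inputSchema": {
        "type": "object",
        "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
        "required": ["a", "b"],
    },
}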

4. Running the MCP Server with SSE Transport

  • Save the server code in a single file (e.g. server.py) and add the entry point below. Passing transport="sse" starts FastMCP's built-in SSE server, which listens on port 8000 by default.
if __name__ == "__main__":
    print("MCP server is running on port 8000")
    mcp.run(transport="sse")

Building the MCP Client with Server-Sent Events (SSE) Transport

SSE transport enables server-to-client streaming with HTTP POST requests for client-to-server communication. 
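To see this handshake concretely, here is a small probe (illustrative only, and assuming the server from the previous section is running locally on port 8000): the first SSE event tells the client which URL to POST its messages to.

import requests

# Open the long-lived SSE stream that the server pushes events over
with requests.get("http://localhost:8000/sse", stream=True, timeout=10) as resp:
    for line in resp.iter_lines():
        if line:
            print(line.decode())  # expect "event: endpoint", then a "data: ..." line
            if line.startswith(b"data:"):
                break  # stop after the endpoint announcement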

The client will:

  • Connect to the MCP server.
  • Use the Claude API to process natural language queries.
  • Identify and call appropriate tools dynamically.

Client Code Breakdown

1. Import Required Modules

Create a file named client.py and save the following code.

import asyncio
from typing import Optional
from contextlib import AsyncExitStack

from mcp import ClientSession
from mcp.client.sse import sse_client
from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()
  • asyncio: Handles asynchronous tasks.
  • mcp.ClientSession: Manages client-server interactions.
  • anthropic.Anthropic: Enables LLM-based processing.

2. Define MCP Client Class

MCP_SERVER_URL = "http://localhost:8000/sse"

class MCPClient:
    def __init__(self):
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.anthropic = Anthropic()
  • Handles connection lifecycle for interacting with the MCP server.

3. Connect to the MCP Server

async def connect_to_server(self, url: str):
    """Connect to an MCP SSE server"""
    streams = await self.exit_stack.enter_async_context(sse_client(url=url))
    self.session = await self.exit_stack.enter_async_context(ClientSession(*streams))

    await self.session.initialize()

    response = await self.session.list_tools()
    tools = response.tools
    print("\nConnected to server with tools:", [tool.name for tool in tools])
  • Establishes connection and retrieves available tools.


4. Process Queries Using Claude & MCP Tools

async def process_query(self, query: str) -> str:
    messages = [{"role": "user", "content": query}]

    response = await self.session.list_tools()
    available_tools = [
        {"name": tool.name, "description": tool.description, "input_schema": tool.inputSchema} 
        for tool in response.tools
    ]

    response = self.anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1000,
        messages=messages,
        tools=available_tools
    )

    tool_results = []
    final_text = []

    for content in response.content:
        if content.type == "text":
            final_text.append(content.text)
        elif content.type == "tool_use":
            tool_name = content.name
            tool_args = content.input

            result = await self.session.call_tool(tool_name, tool_args)
            tool_results.append({"call": tool_name, "result": result})
            final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")

            messages.append({"role": "user", "content": result.content})

            response = self.anthropic.messages.create(
                model="claude-3-5-sonnet-20241022",
                max_tokens=1000,
                messages=messages,
            )
            final_text.append(response.content[0].text)

    return "\n".join(final_text)
  • Checks available tools, calls them dynamically, and interacts with Claude.
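For context, when Claude decides to use a tool, the corresponding content block carries the tool name and its arguments. Its shape, per the Anthropic Messages API (the id value here is illustrative):

# Example tool_use content block for the add tool
{
    "type": "tool_use",
    "id": "toolu_01A...",  # illustrative id
    "name": "add",
    "input": {"a": 9, "b": 11},
}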

5. Interactive Chat Loop

async def chat_loop(self):
    print("\nMCP SSE Client Started!")
    print("Type your queries or 'quit' to exit.")

    while True:
        query = input("\nQuery: ").strip()
        if query.lower() == "quit":
            break
        response = await self.process_query(query)
        print("\n" + response)
  • Runs an interactive CLI where users can ask queries.
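To tie these methods together, here is a minimal entry point sketch (assuming the methods above are defined inside the MCPClient class):

async def main():
    client = MCPClient()
    try:
        await client.connect_to_server(MCP_SERVER_URL)
        await client.chat_loop()
    finally:
        await client.exit_stack.aclose()  # closes the session and SSE stream cleanly

if __name__ == "__main__":
    asyncio.run(main())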

Running the MCP Client

With the server running in one terminal (python server.py), start the client in another:

python client.py 

Type queries like:

Query: Add 9 and 11

To exit, type:

Query: quit

Conclusion

Congratulations! You've just unlocked a powerful way to build AI systems that communicate via server-sent events. By implementing MCP with SSE transport, you've gained the ability to create real-time, streaming connections between your AI models and external tools. This tutorial demonstrated: 

  • Setting up MCP Server & Client with Server-Sent Events (SSE)
  • Building AI-assisted tools
  • Using Claude & real-time tool execution

MCP is a powerful way to standardize, modularize, and secure AI interactions.

Kiruthika

I'm an AI/ML engineer passionate about developing cutting-edge solutions. I specialize in machine learning techniques to solve complex problems and drive innovation through data-driven insights.
