
Integrating external data sources with AI models often turns into custom glue code that is hard to maintain and difficult to standardize. I put this guide together to show how the Model Context Protocol (MCP) reduces that complexity by defining a consistent, transport-agnostic way for AI systems to interact with tools and services.
In this guide, we walk through building an MCP server and MCP client using Server-Sent Events (SSE), with clear, step-by-step instructions to set up and run both in a practical, real-world configuration.
MCP is a standardized protocol that allows AI tools to interact with content repositories, business platforms, and development environments through a unified interface. By defining a common framework for these interactions, MCP improves the relevance, reliability, and context-awareness of AI applications.
It enables developers to build modular, secure, and flexible integrations without creating separate connectors for each data source.
With MCP, developers can:
- Connect AI models to content repositories, business platforms, and development tools through one consistent interface
- Build modular integrations without writing a separate connector for each data source
- Keep tool execution separate from model reasoning, making systems easier to extend and safer to operate
Before running the MCP server and client, we need to install the required dependencies and set up environment variables.
Create a virtual environment and run the following commands to install the required dependencies:
python -m venv venv
source venv/bin/activate
pip install "mcp[cli]" anthropic python-dotenv requests
Create a .env file in the project directory and add your API keys:
SERPER_API_KEY=your_serper_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
This ensures sensitive credentials remain secure.
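As a quick sanity check, you can verify that both keys are visible to the process before starting either script. This is a minimal sketch; the missing_keys helper is hypothetical and not part of MCP:

```python
import os

REQUIRED_KEYS = ("SERPER_API_KEY", "ANTHROPIC_API_KEY")

def missing_keys(env=os.environ) -> list:
    """Return the names of required keys that are absent or empty.

    In the actual scripts, load_dotenv() populates os.environ from the
    .env file before a check like this would run.
    """
    return [k for k in REQUIRED_KEYS if not env.get(k)]

print(missing_keys({"SERPER_API_KEY": "x", "ANTHROPIC_API_KEY": "y"}))  # []
```

Running this with an empty environment would instead report both key names as missing.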
Let's begin by creating an MCP server that provides two functionalities: a web search tool backed by the Serper API, and a simple addition tool.
from mcp.server.fastmcp import FastMCP
import requests
import os
from dotenv import load_dotenv
load_dotenv()
mcp = FastMCP()
Configuring Tools in MCP
In MCP, each function wrapped with the @mcp.tool() decorator is considered a tool. This makes it easy to modularise functionalities. The description and input schema of the tool help the LLM decide which tool to use based on the user’s query.
For example:
API_KEY = os.getenv("SERPER_API_KEY")
API_URL = "https://google.serper.dev/search"
@mcp.tool()
def serper_search(query: str) -> dict:
    """Search the web using Serper API for user queries"""
    headers = {"X-API-KEY": API_KEY, "Content-Type": "application/json"}
    data = {"q": query}
    try:
        response = requests.post(API_URL, json=data, headers=headers)
        response.raise_for_status()
        result = response.json()
        print(f"Search result for '{query}': {result}")
        return result
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")
        return {"error": str(e)}

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    print(f"Adding {a} and {b}")
    return a + b

if __name__ == "__main__":
    print("MCP server is running on port 8000")
    mcp.run(transport="sse")
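To make the decorator's role more concrete, here is a rough, hypothetical sketch of how a framework like FastMCP might derive a tool's description and input schema from the function itself. This is illustrative only, not FastMCP's actual implementation:

```python
import inspect

def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

# Map a few Python annotations to JSON Schema types (simplified).
TYPE_MAP = {int: "integer", str: "string", float: "number", bool: "boolean"}

def describe_tool(fn) -> dict:
    """Build tool metadata from a function's name, docstring, and type hints."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "input_schema": {
            "type": "object",
            "properties": {
                name: {"type": TYPE_MAP.get(p.annotation, "string")}
                for name, p in sig.parameters.items()
            },
            "required": list(sig.parameters),
        },
    }

print(describe_tool(add)["input_schema"]["properties"])
# {'a': {'type': 'integer'}, 'b': {'type': 'integer'}}
```

Metadata shaped like this is what the LLM consults when choosing which tool fits a user's query.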
SSE transport enables server-to-client streaming with HTTP POST requests for client-to-server communication.
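To build intuition for the wire format, here is a small illustrative parser for a single SSE frame. This is for understanding only; the mcp library handles SSE parsing for you:

```python
def parse_sse_event(raw: str) -> dict:
    """Parse one SSE frame ("field: value" lines) into a dict (simplified)."""
    event = {}
    for line in raw.strip().splitlines():
        field, _, value = line.partition(":")
        event[field.strip()] = value.lstrip()
    return event

frame = 'event: message\ndata: {"jsonrpc": "2.0", "method": "ping"}'
print(parse_sse_event(frame))
# {'event': 'message', 'data': '{"jsonrpc": "2.0", "method": "ping"}'}
```

Each MCP message arrives as the `data` field of a frame like this, which is why SSE pairs naturally with JSON-RPC-style payloads.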
The client will:
- Connect to the MCP server over SSE and initialize a session
- List the tools the server exposes and pass their schemas to Claude
- Execute any tool calls Claude requests and feed the results back to the model
- Run an interactive chat loop until you type 'quit'
Create a file named client.py and save the following code.
import asyncio
from typing import Optional
from contextlib import AsyncExitStack
from mcp import ClientSession
from mcp.client.sse import sse_client
from anthropic import Anthropic
from dotenv import load_dotenv
load_dotenv()
MCP_SERVER_URL = "http://localhost:8000/sse"
class MCPClient:
    def __init__(self):
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.anthropic = Anthropic()

    async def connect_to_server(self, url: str):
        """Connect to an MCP SSE server"""
        streams = await self.exit_stack.enter_async_context(sse_client(url=url))
        self.session = await self.exit_stack.enter_async_context(ClientSession(*streams))
        await self.session.initialize()
        response = await self.session.list_tools()
        tools = response.tools
        print("\nConnected to server with tools:", [tool.name for tool in tools])
    async def process_query(self, query: str) -> str:
        messages = [{"role": "user", "content": query}]
        response = await self.session.list_tools()
        available_tools = [
            {"name": tool.name, "description": tool.description, "input_schema": tool.inputSchema}
            for tool in response.tools
        ]
        response = self.anthropic.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1000,
            messages=messages,
            tools=available_tools,
        )
        tool_results = []
        final_text = []
        for content in response.content:
            if content.type == "text":
                final_text.append(content.text)
            elif content.type == "tool_use":
                tool_name = content.name
                tool_args = content.input
                result = await self.session.call_tool(tool_name, tool_args)
                tool_results.append({"call": tool_name, "result": result})
                final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
                # Record the assistant's tool call, then return the result as a
                # tool_result block so the follow-up request is a valid
                # tool-use conversation.
                messages.append({"role": "assistant", "content": response.content})
                messages.append({
                    "role": "user",
                    "content": [{
                        "type": "tool_result",
                        "tool_use_id": content.id,
                        "content": result.content,
                    }],
                })
                response = self.anthropic.messages.create(
                    model="claude-3-5-sonnet-20241022",
                    max_tokens=1000,
                    messages=messages,
                    tools=available_tools,
                )
                final_text.append(response.content[0].text)
        return "\n".join(final_text)
    async def chat_loop(self):
        print("\nMCP SSE Client Started!")
        print("Type your queries or 'quit' to exit.")
        while True:
            query = input("\nQuery: ").strip()
            if query.lower() == "quit":
                break
            response = await self.process_query(query)
            print("\n" + response)

async def main():
    client = MCPClient()
    try:
        await client.connect_to_server(MCP_SERVER_URL)
        await client.chat_loop()
    finally:
        await client.exit_stack.aclose()

if __name__ == "__main__":
    asyncio.run(main())
Once the server is running, start the client:
python client.py

Type queries like:

Query: Add 9 and 11

To exit, type:

Query: quit

This walkthrough showed how MCP with SSE transport can be used to build AI systems that stream data, invoke tools dynamically, and stay context-aware without tightly coupling logic across services. By separating tool execution from model reasoning, MCP makes these integrations easier to extend and safer to operate.
Whether you use SSE or explore alternatives like STDIO transport, the core value of MCP remains the same: a standardized, modular approach to connecting AI models with real-world systems in a maintainable way.