Unlocking AI: How the Model Context Protocol Transforms Tool Integration

Norbert Aberor
14th January 2026


Imagine what the world would be like if AI models could simply talk to each other, without any complications or administrative overhead. Actually, you don’t have to imagine, because they can already, although not very well yet.

All you need to do is take two smartphones, each with a different current AI model. Trigger one to say “hello,” and the other will answer with a polite “What can I do for you today?” In principle, that’s all you need to set up long, deep and poignant relationships between the two AIs. In practice, you’ll be lucky to get more than platitudes on autorepeat, though there’s a distant chance you’ll discover hidden secrets of the universe.

So while this setup might be an amusing party piece, it is not exactly reliable. Nor is it a strictly accurate analogy, because what MCP allows AI models to do is set up conversations with data sources and tools, like AWS, a SQL database, Jira or Slack. It’s not explicitly for AI models to talk about the weather together. It is not yet a universal language for an AI to instantly understand what sources and tools it has access to, regardless of who built them.

But it is a solid and credible step in this direction. Let’s look at how it works.

First, an AI agent needs to identify precisely who it is talking to. Not all data sources have the same capabilities, so we need something general enough to encompass a wide range of possibilities and that will also track changes in real time.

A good place to start might be for an AI agent to ask the data source what it can do. There’s a formal name for this: Discovery. It’s not new: Bluetooth has had it from day one.

Discovery lets two broadly compatible systems introduce themselves to each other, swapping details about their capabilities. Before AI, the only way to make this work was to restrict the scope of the discovery process to capabilities that were “expected” by both parties. Within that scope, Bluetooth - originally used almost exclusively for hands-free earpieces and headphones - could swap parameters like audio sample rate, bit depth, compression type and so on. What you definitely wouldn’t get is a conversation like, “What kind of earpiece are you”? “Oh, I’m actually a washing machine”.

With AI, the introduction might be more like “What’s your speciality: 19th Century Austrian Philosophers or Motorcycle Maintenance”?

Once the discovery process is completed, the two parties can converse more fluidly, with greater accuracy and less time wasted on ambiguity and error correction.

MCP (Model Context Protocol) is a promising new standard for establishing transactions between diverse software entities. It is built on JSON-RPC, a well-known text-based protocol that is versatile, easy to code for, and flexible.

MCP, which is sometimes called “The USB-C of AI,” caters for real-time interactions and data flows: a session could be used to respond to a continuously updated data stream. MCP’s capabilities are rapidly evolving, and it may not even become the dominant method, but it is spearheading the roll-out of Agentic AI and unattended complex data automation. Whatever the outcome, it is likely to be transformative.

Want to know more? Here’s our more technical deep dive.

The Model Context Protocol (MCP) is a simple standard that helps AI apps use tools. Think of it like a universal plug. If both sides follow the same standard, they can connect, even if they were built by different teams.

With MCP, an AI client can ask a server, “what can you do?”, then call those capabilities in a structured way. The client does not need to know how the server works internally. It only needs to speak the protocol.
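Concretely, that question is a JSON-RPC round trip. As a sketch (the fetch_weather entry anticipates the server we build below), the client sends a tools/list request:

{
	"jsonrpc": "2.0",
	"id": 1,
	"method": "tools/list"
}

and the server replies with its tool catalogue, including a JSON Schema describing each tool's arguments:

{
	"jsonrpc": "2.0",
	"id": 1,
	"result": {
		"tools": [
			{
				"name": "fetch_weather",
				"description": "Fetch current weather data for a location",
				"inputSchema": {
					"type": "object",
					"properties": {
						"lat": { "type": "string" },
						"long": { "type": "string" }
					},
					"required": ["lat", "long"]
				}
			}
		]
	}
}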

This is useful because it keeps things flexible. You can change the model, change the client app, or change the tools behind the server, and as long as they still speak MCP, they keep working together. That is what makes it possible for independent systems to interoperate without knowing much about each other.

This article walks you through the basics of setting up an MCP server, adding tools, prompts, and files, and connecting clients like Claude Desktop and custom HTTP clients to it.

Before diving into the actual server, let's make sure you have the basics ready.

  1. Python 3.10+
  2. The official MCP Python SDK (the mcp package)
  3. A working virtual environment (recommended)
  4. Basic knowledge of HTTP

To start, create a virtual environment and install the MCP SDK together with httpx:

pip install mcp
pip install httpx

MCP Server setup

Let's start with a minimal server with a single tool. This will help you understand the basic server structure. Create a file server.py with the code below:

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("MCP Server")

@mcp.tool()
async def fetch_weather(lat: str, long: str) -> str:
	"""Fetch current weather forecast data for a specific location using the Open-Meteo API.

	Args:
		lat (str): Latitude coordinate of the location
		long (str): Longitude coordinate of the location

	Returns:
		str: JSON string containing weather forecast data including temperature,
		precipitation, wind speed, and other meteorological parameters

	Note:
		Uses the free Open-Meteo API (https://open-meteo.com) for weather data
	"""
	async with httpx.AsyncClient() as client:
		response = await client.get(
			"https://api.open-meteo.com/v1/forecast"
			f"?latitude={lat}&longitude={long}"
			"&current=temperature_2m,wind_speed_10m"
			"&hourly=temperature_2m,relative_humidity_2m,wind_speed_10m"
		)
		return response.text

if __name__ == "__main__":
	# Runs over standard input/output (stdio) by default
	mcp.run()

In our code above, we created a simple MCP server that exposes a single weather-fetching tool. Let's break down what's happening:

  1. We import the required libraries: httpx for making HTTP requests, and FastMCP from the MCP SDK
  2. We create a FastMCP server instance named "MCP Server"
  3. We define an async function fetch_weather decorated with @mcp.tool() that:
  • Takes latitude and longitude as string parameters
  • Makes an API call to Open-Meteo's weather API
  • Returns the weather data as a JSON string
  • Includes comprehensive documentation via its docstring
  4. The API call fetches:
  • Current temperature and wind speed
  • Hourly forecasts for temperature, humidity, and wind speed
  5. Finally, we run the server using mcp.run()

This creates a minimal yet functional MCP server that responds to weather data requests for any location specified by coordinates. We can run the server and check that it starts without errors:

python server.py

We now have a minimal MCP server running. Next, let's connect clients to interact with it. In this tutorial we will try two approaches: Claude Desktop, and a custom HTTP client.
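If you installed the SDK's CLI extra (pip install "mcp[cli]"), you can also exercise the server interactively with the MCP Inspector before wiring up a real client:

mcp dev server.py

This opens a local inspector UI where you can call fetch_weather by hand and see the raw responses.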

Claude Desktop

Steps to Connect MCP with Claude Desktop

  1. Install Claude Desktop and run the app
  2. Open up the Claude menu on your computer and select “Settings…”
  3. Add the Filesystem MCP Server:

Click on “Developer” in the left-hand bar of the Settings pane, and then click on “Edit Config”. This will create a configuration file at:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

if you don’t already have one, and will display the file in your file system.

  4. Configure Your MCP Server: Add your MCP server configuration to the JSON file. Here's the basic structure:
{
	"mcpServers": {
		"server-name": {
			"command": "path-to-server-executable",
			"args": ["any", "arguments"],
			"env": {
				"ENVIRONMENT_VARIABLE": "value"
			}
		}
	}
}

Here's an example config file below:

{
	"mcpServers": {
		"filesystem": {
			"command": "npx",
			"args": [
				"-y",
				"@modelcontextprotocol/server-filesystem",
				"/path/to/allowed/directory"
			]
		}
	}
}
  5. Restart Claude Desktop: Close and reopen Claude Desktop completely for the configuration to take effect.
  6. Verify Connection: Start a new conversation in Claude Desktop. If the MCP server is connected properly, Claude will have access to the tools and resources provided by that server.
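For the weather server we built earlier, an entry along these lines should work. This is a sketch: the "weather" name is our own choice, and you'll need the absolute path to your server.py and a python interpreter that has mcp and httpx installed:

{
	"mcpServers": {
		"weather": {
			"command": "python",
			"args": ["/absolute/path/to/server.py"]
		}
	}
}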

Now ask "What is the weather in Accra today?" and Claude should use the weather tool in our MCP server to fetch the current weather for that location and return the appropriate information. The client will identify that our tool needs latitude and longitude, and will pass these to our server.
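Under the hood, the client invokes the tool with a JSON-RPC tools/call request. Roughly, the message looks like this (the coordinates are illustrative values for Accra):

{
	"jsonrpc": "2.0",
	"id": 2,
	"method": "tools/call",
	"params": {
		"name": "fetch_weather",
		"arguments": { "lat": "5.6037", "long": "-0.1870" }
	}
}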

Files and prompts

In addition to tools, MCP servers can work with files and prompts to enhance their capabilities.

Files let you provide documents, code, or other content the model can reference and work with. Prompts help shape the model's behaviour and responses by providing system-level instructions or personality traits. Together, these features enable richer context-aware interactions beyond just tool execution.

Add Files

Files allow models to reference large contexts like code, documents, or datasets; in the MCP spec these are exposed as resources. One use case is providing code review context:

from mcp.server.fastmcp import FastMCP

# ... other code

# Register a resource: a sample source file the model can read for review
@mcp.resource("file:///main.py")
def main_py() -> str:
	return """
def buggy_function():
	return 1 / 0  # Division by zero
"""

This resource can now be read by clients and referenced by tools that analyse or debug code.
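Clients fetch this content over the protocol with the resources/list and resources/read methods. A sketch of a read request for the file we just registered:

{
	"jsonrpc": "2.0",
	"id": 3,
	"method": "resources/read",
	"params": { "uri": "file:///main.py" }
}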

Add Prompts

Prompts can shape the personality or instructions of the assistant. A use case for this is an assistant personality:

from mcp.server.fastmcp import FastMCP

# ... other code

# Register a reusable system-style prompt exposed by the server
@mcp.prompt()
def code_reviewer() -> str:
	return "You are an empathetic and precise code reviewer who always explains suggestions with examples."

Explanation

The files example demonstrates how to add code context to your MCP server:

  1. We use the @mcp.resource() decorator to register a new resource with the server
  2. The decorator takes the URI (here file:///main.py) that clients will use to reference this content
  3. The decorated function returns the actual content of the file as a string
  4. In this example, we're adding a Python file containing a buggy function
  5. This resource can now be referenced by any tools or prompts that need to analyse or debug code

The prompt example shows how to shape the assistant's behaviour. Let's analyse its structure:

  1. We use the @mcp.prompt() decorator to register a new prompt with the server
  2. The decorated function returns the prompt text that defines the assistant's behaviour; clients typically apply it as a system-level instruction
  3. In this example, we're creating a code review assistant that:
  • Is empathetic in its communication
  • Provides precise feedback
  • Always includes examples with its suggestions

These prompts help guide the model's responses and ensure consistent behaviour across interactions. You can add multiple prompts to create complex personalities or specialised assistants.
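Prompts can also take arguments, which turns them into reusable templates. A minimal sketch (the review_code name and its parameter are our own, not part of the earlier example):

@mcp.prompt()
def review_code(code: str) -> str:
	"""Prompt template asking for a review of the supplied code."""
	return f"Please review this code and explain each suggestion with an example:\n\n{code}"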

Custom HTTP SSE Client

In this section of the tutorial we want to create our own MCP client that interacts with the MCP server we created earlier.

We can connect via HTTP + Server-Sent Events (SSE):

For this to work, we first have to update the run method in our MCP server so it serves over SSE. Update the section as below and run the server:

# Uses SSE
mcp.run(transport="sse")
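By default the SSE transport listens on 127.0.0.1:8000. Assuming current SDK behaviour, the host and port can be overridden as settings when constructing the server:

mcp = FastMCP("MCP Server", host="127.0.0.1", port=8000)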

Now we can continue with our client code. Here is a basic example using pydantic-ai's Agent with its MCPServerSSE client (plus python-dotenv for loading the API key):

import asyncio
from dotenv import load_dotenv
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerSSE

# create a .env file containing OPENAI_API_KEY="YOUR-API-KEY", replacing YOUR-API-KEY with your OpenAI API key

load_dotenv(verbose=True)
mcp_http_server = MCPServerSSE(
	url="http://127.0.0.1:8000/sse",
)
client_agent = Agent(
	"openai:gpt-4o-mini",
	mcp_servers=[mcp_http_server],
	system_prompt="You are a helpful customer service AI assistant"
)

async def main_client():
	async with client_agent.run_mcp_servers():
		prompt = "What is the weather in Accra today"
		result = await client_agent.run(prompt)
		print("Final response: ")
		print(result.output)

if __name__ == "__main__":
	asyncio.run(main_client())

In the code above we created a simple client that connects to an MCP server using HTTP and Server-Sent Events (SSE). Here's a breakdown of what each part does:

Imports and Setup: We imported the necessary modules, loaded environment variables, and set up the MCP HTTP server endpoint using MCPServerSSE.

Agent Initialisation: An Agent is created with a specified model (openai:gpt-4o-mini), the MCP server, and a system prompt to guide the assistant's behaviour.

Client Function: The main_client async function manages the connection to the MCP server, sends a prompt ("What is the weather in Accra today"), and prints the final response from the AI.

Running the Client: The script runs the async client function when executed as the main module. This mimics how clients like Claude interact, sending user input and receiving streamed results via SSE.
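To try it, start the SSE server in one terminal and run the client in another (assuming the client code above is saved as client.py):

python server.py
python client.py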

In conclusion

MCP Servers make it easy to provide AI agents with powerful tools, deep context, and structured interactions. With practice, you can use MCP to build things like dev assistants, research copilots, or code reviewers. It is a powerful technique, and the combination of tools, files, and prompts can give you more control and boost your productivity.

If you've found this interesting, you can explore the open-source ecosystem:

MCP SDK on GitHub
Claude Desktop
Tool Templates and Examples
