How to Write Your MCP Server in Python
- Adrian Araya
- May 12
- 7 min read

The Model Context Protocol (MCP) is an open standard proposed by Anthropic for connecting AI models (like large language models) to external tools and data. Think of MCP as a USB-C port for AI applications: it provides a universal way to plug various data sources and capabilities into your AI application. In this post, we’ll explain what MCP is, walk through building a simple MCP server (a TODO list manager) in Python, and finally show how to connect it to an MCP client (using Claude for Desktop as an example).
What is MCP?
MCP follows a client-server architecture where an AI application can connect to multiple backend servers. In MCP terminology:

Host: The application that hosts the language model, like your IDE or chat window.
MCP Client: A component (usually inside the AI app or “host”) that maintains a 1:1 connection to an MCP server. It acts as the bridge between the AI model and the server.
MCP Server: A lightweight program that exposes specific capabilities (tools, data, or prompts) through the standardized MCP interface. Each server typically provides one domain of functionality (e.g. a weather service, a todo list, a file system interface, etc.).
Transport: The communication layer between client and server, such as local stdio or HTTP, allowing integration of both local and remote tools without direct model access.
MCP provides a unified, standardized interface for connecting AI models to external tools, making those tools easy to integrate and reuse. In this post we’ll write a TODO list MCP server that lets an LLM manage our pending tasks, using Claude for Desktop as our MCP client. Building a custom MCP client is also possible, although that is out of scope for this post.
> What is the difference between function calling and MCP? Function calling is just one component of the MCP standard. MCP defines not only how functions should be described and invoked, but also many other aspects such as transports, streaming, error handling, and signaling.
Building the MCP Server
Now that we know the basics, let’s build our own MCP server! As an example, we’ll create a simple TODO list manager that the AI can interact with in conversation. Our server will keep track of tasks and expose three tools to the AI model:
list_items: list all pending TODO items.
new_item: add a new task to the TODO list.
complete_item: mark a task as completed (remove it from the list).
When connected to a language model, these tools will let a user manage their to-do list through natural conversation. For instance, imagine the following dialog between a user and an AI assistant, with our MCP server handling the TODO list in the background:
User: Hey! What do I have on my todo list today?
Assistant: Let me check (calls list_items). You only have "Buy groceries" pending.
User: Oh right. Add a reminder to book a dentist appointment.
Assistant: Sure thing! (calls new_item). I've added that to your TODO list.
User: Thanks. I already bought the groceries—can you mark that as done?
Assistant: (calls complete_item) Got it! I've marked "Buy groceries" as completed.
User: Perfect. What's left now?
Assistant: (calls list_items) Just "Book a dentist appointment."
In this conversation, the AI uses our MCP server’s tools to fetch the list, add a new item, and complete an item. Next, we’ll see how to implement this server in Python.
Example: MCP TODO List Server
In this example, we'll build a simple TODO list server using FastMCP, the high-level server interface included in the official MCP Python SDK. FastMCP simplifies the protocol by letting you define tools as regular Python functions, using type hints and docstrings to generate the necessary MCP metadata automatically.
Set Up Your Environment
To keep dependencies isolated and avoid conflicts, it's recommended to work inside a Python virtual environment, for example one managed with pyenv.
Create a new directory for our project:
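For example (the directory name is just a suggestion):

```shell
mkdir todo-mcp
cd todo-mcp
```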
If you use pyenv, create and activate your virtual environment:
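Assuming pyenv with the pyenv-virtualenv plugin is installed, something like the following creates and activates an environment (the Python version shown is illustrative):

```shell
pyenv install 3.12             # install a Python version, if you don't have one
pyenv virtualenv 3.12 todo-mcp # create a virtual environment named "todo-mcp"
pyenv local todo-mcp           # activate it for this directory
```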
Install mcp via pip:
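For example (the `[cli]` extra is optional; it adds the `mcp` command-line tool, which is handy for local testing):

```shell
pip install "mcp[cli]"
```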
With the environment ready, we can now implement our TODO list MCP server.
Writing the Server
Create a new file todo_server.py and add the following code:
Let’s break down this code:
We import FastMCP from the mcp.server.fastmcp module and create an mcp server instance with the name "todo". The name is used by clients to identify this server.
We define a simple list tasks to hold our to-do items in memory. For demonstration, we pre-populate it with one task (so that list_items has something to show initially).
We create three functions list_items, new_item, and complete_item. Each is decorated with @mcp.tool(), which tells FastMCP that this function should be exposed as a tool to the AI.
Docstrings and type hints: Notice each function has a clear docstring explaining what it does, its arguments, and return value. For example, new_item’s docstring describes the purpose and its title parameter. We also use type hints (title: str and the return type -> dict). These are not just for our own documentation – FastMCP reads them to auto-generate the tool’s specification and description. In other words, the protocol will know the function’s name, what it expects, and what it does, without extra coding. This metadata helps the AI decide when and how to call the function.
The functions implement simple logic:
list_items returns the current list of pending tasks as a list of dictionaries, each containing an id and a task item.
new_item adds a new task with an auto-incremented id and returns the newly created task.
complete_item searches for a task by its id and removes it from the list if found, returning a status message indicating success or failure.
Finally, we run the server with mcp.run(transport="stdio"). This starts the server and listens for a client connection using the standard I/O transport (meaning the server will communicate via its stdin/stdout).
That’s the entire MCP server! It’s a self-contained program that, when running, will await requests from an MCP client. Thanks to our descriptive docstrings and clear function signatures, any connected language model will know these tools are available and how to use them in conversation.
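Before wiring it into a client, you can launch the server manually to confirm it starts (it will simply sit waiting for a client on stdin; press Ctrl+C to stop). If the optional mcp[cli] extra is installed, the SDK's mcp dev command also opens the MCP Inspector for interactive testing:

```shell
python todo_server.py      # waits for an MCP client on stdin/stdout
# or, with the mcp[cli] extra installed:
mcp dev todo_server.py     # launch the MCP Inspector for manual testing
```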
Connecting to the MCP Server with an MCP Client (Claude for Desktop)
With our todo server written, we need an MCP client to run and use it. One convenient option is Claude for Desktop, Anthropic’s desktop chat application for Claude. Claude Desktop includes built-in support for MCP integrations, allowing you to connect local MCP servers like ours and chat with Claude using those new tools.
Using Claude for Desktop: First, ensure you have Claude for Desktop installed (available for macOS and Windows from Anthropic’s website) and updated to the latest version. Unfortunately, Claude Desktop is not officially available on Linux at the time of writing. (Linux users can either build their own MCP client or try an unofficial community port like the Claude Desktop for Debian project.)
To connect our MCP server, we’ll need to let Claude Desktop know about it. This involves adding an entry to Claude Desktop’s configuration file so it can launch and interface with the server:
Locate the config file: Open the file claude_desktop_config.json in the Claude app’s data directory. Create the file if it doesn’t exist.
On macOS the path is:
~/Library/Application Support/Claude/claude_desktop_config.json
On Windows the path is:
$env:AppData\Claude\claude_desktop_config.json
Add your server entry: Inside this JSON file, there should be a section for "mcpServers". You need to add a new entry with the name of your server (in our case "todo") and instructions on how to start it. For our example, we can tell Claude to run the server with the Python interpreter:
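The paths below are illustrative; substitute the interpreter and script locations from your own machine:

```json
{
  "mcpServers": {
    "todo": {
      "command": "/Users/you/.pyenv/versions/todo-mcp/bin/python",
      "args": ["/Users/you/todo-mcp/todo_server.py"]
    }
  }
}
```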
⚠️ Important: Make sure the command points to the exact Python interpreter where you installed mcp using pip. If Claude Desktop launches a different interpreter without the mcp package, it will fail to connect to the server. If you are using pyenv you can find the correct path using pyenv which python inside your virtual environment. Also make sure to use the full absolute path to the script.
Save and restart Claude: After adding the config, save the file and restart Claude for Desktop. When it reopens, it will see the configured MCP server. Claude’s interface will now load the “todo” server on startup.
(Screenshot: the “todo” tools available in the Claude Desktop interface.)
With Claude for Desktop up and our server configured, you can start a conversation and use the TODO list tools naturally. When you ask “What’s on my todo list?”, Claude will recognize it should call the list_items tool on your server, get the result, and reply with the list of tasks. If you say “Add a reminder to book a dentist appointment.” Claude will call new_item(title="book a dentist appointment") behind the scenes, then confirm the addition. All of this happens seamlessly in the chat flow.

Tip: In Claude for Desktop, you may see indications or logs of tool calls (for transparency, it shows when a function is being called). The first time a tool is used, Claude will typically ask for user approval. This is a safety feature: it might prompt “Allow Claude to use tool new_item?”, and you can approve it. After approval, the tool executes and the conversation continues with the results.

Congratulations! 🎉 You’ve written an MCP server and connected it to a real AI assistant. You can now chat with Claude and manage your to-do list through natural language. This basic example can be expanded or modified for other purposes: you could build MCP servers for reading files, querying databases, controlling IoT devices, or anything else you can code. By following MCP’s standards, your tools become plug-and-play for any AI client that speaks the protocol.
🚀 Ready to Extend Your AI with Custom Tools? Let’s Talk
If you're building AI-powered applications and want to integrate custom tools, workflows, or data into your assistant experience, we’d love to hear from you. MCP makes it easy to connect language models to real-world systems through secure, flexible interfaces.
Whether you’re experimenting with internal automations or scaling production-ready AI integrations, we can help you design and deploy an MCP-based architecture tailored to your needs.
Reach out at support@ridgerun.ai — let’s build something smarter, together.