Based on a tutorial by Eduardo from Dylibso
Have you been hearing about MCP (Model Context Protocol) and struggling to understand what all the hype is about? You’re not alone. With so many AI-related acronyms flying around, it can be challenging to grasp what MCP actually does and why it matters.
In this comprehensive guide, I’ll break down this powerful protocol that’s changing how AI models interact with external tools and services. Whether you’re a cloud developer, AI enthusiast, or just curious about the technology, this guide will help you understand MCP from the ground up.
Quick Navigation
- What is the Model Context Protocol? (00:00-15:30)
- MCP in Action: Practical Examples (15:31-32:45)
- Understanding MCP.Run and WebAssembly (32:46-45:30)
- Building Your Own Servlet (45:31-58:00)
- Integrating with Kubernetes (58:01-1:12:30)
- Creating Automated Tasks with MCP (1:12:31-1:25:45)
- Building MCP Client Applications (1:25:46-End)
What is the Model Context Protocol? (00:00-15:30)
At its core, the Model Context Protocol (MCP) is a simple but powerful protocol that sits on top of the function calling capabilities of large language models (LLMs). To understand MCP, we first need to understand how these models actually work.
Key Points:
- LLMs are essentially text generators – they predict and complete text based on the context given to them
- Modern LLMs output text in structured formats like JSON to make interaction more organized
- LLMs can’t directly interact with external systems – they can only generate text
- Function/tool calling is a way for LLMs to request actions by generating specifically formatted text
- MCP is a JSON-RPC protocol that standardizes how LLMs can discover and call external functions
Eduardo explains that MCP is conceptually similar to the Language Server Protocol (LSP) used in code editors like VS Code. Just as LSP allows text editors to interact with language-specific servers for features like code completion and definition jumping, MCP enables AI models to interact with servers that expose functionality to the outside world.
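To make this concrete, here is an abridged sketch of the two core exchanges. The message shapes follow the MCP specification but are trimmed for brevity, and the “fetch” tool shown is just an illustrative example: the client first asks a server which tools it offers, then invokes one of them on the model’s behalf.
# Tool discovery: client -> server
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
# Server response (abridged): each tool advertises a name, description, and input schema
{"jsonrpc": "2.0", "id": 1, "result": {"tools": [{"name": "fetch", "description": "Fetch a URL and return its contents", "inputSchema": {"type": "object", "properties": {"url": {"type": "string"}}, "required": ["url"]}}]}}
# Tool invocation: client -> server
{"jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": {"name": "fetch", "arguments": {"url": "https://example.com"}}}
# Server response with the tool output as content
{"jsonrpc": "2.0", "id": 2, "result": {"content": [{"type": "text", "text": "<page text>"}]}}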
My Take:
The beauty of MCP is its simplicity – it’s essentially a standardized way for AI models to ask “what functions do you have available?” and then call them in a structured manner. This deceptively simple concept opens up enormous possibilities for extending AI capabilities while maintaining a clean separation of concerns.
MCP in Action: Practical Examples (15:31-32:45)
To demonstrate MCP’s capabilities, Eduardo walks through several examples using Claude (an AI assistant from Anthropic) and various MCP servers that provide different functionalities.
Key Points:
- With a fetch MCP server, Claude can access and read content from websites
- Adding a Brave search server allows Claude to search the web for current information
- Google Maps server enables Claude to find locations, get directions, and find nearby places
- Without these tools, Claude (like other LLMs) is limited to the information it was trained on
- Each tool must ask for permission before executing, providing a security layer
Throughout the demonstrations, we see how Claude transitions from being unable to answer a question about current events to being able to search for information, retrieve web content, find restaurant recommendations, and even provide walking directions between locations.
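For readers who want to try this themselves, servers like these are typically wired up through Claude Desktop’s claude_desktop_config.json. The sketch below is a minimal example, with package names taken from the modelcontextprotocol reference servers; your commands and API key will differ.
# Sketch of a claude_desktop_config.json adding a fetch server and a Brave search server
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "your-brave-api-key" }
    }
  }
}
Note that the Brave API key sits in this file in plaintext, which is exactly the kind of drawback the next section addresses.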
My Take:
These examples clearly illustrate the transformative power of MCP. What’s particularly impressive is how seamlessly the AI assistant integrates these external capabilities into its conversational flow. The user doesn’t need to know the underlying mechanisms – they simply ask questions naturally, and the AI figures out which tools to use.
Understanding MCP.Run and WebAssembly (32:46-45:30)
After exploring basic MCP implementations, Eduardo introduces MCP.Run, a platform created by Dylibso for hosting and managing MCP servers (called “servlets”) more securely and efficiently.
Key Points:
- Traditional MCP server implementations have several drawbacks:
  - API keys stored in plaintext configuration files
  - Tools have unrestricted access to your machine
  - Docker-based solutions are resource-heavy
- MCP.Run addresses these issues by:
  - Using WebAssembly for sandboxing servlets
  - Securely encrypting and storing API keys
  - Allowing fine-grained permissions for network and file system access
  - Supporting profiles to manage different sets of tools
WebAssembly (WASM) plays a crucial role in MCP.Run’s architecture. Unlike containers, WASM is a portable binary format that can run across platforms without virtualization overhead. It provides natural sandboxing and only gives access to resources explicitly granted.
# Example of configuring MCP.Run in Claude Desktop: the integration runs the
# mcpx bridge via npx (the mcp.run setup flow generates the exact config entry,
# including your session credentials)
npx @dylibso/mcpx@latest
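The corresponding entry in claude_desktop_config.json is a single gateway server, roughly like the sketch below; the real entry also carries the session credentials that mcp.run generates when you link Claude, omitted here.
{
  "mcpServers": {
    "mcp-run": {
      "command": "npx",
      "args": ["@dylibso/mcpx@latest"]
    }
  }
}
From then on, which tools Claude actually sees is governed by the servlets installed in your mcp.run profile rather than by editing this file.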
My Take:
The security concerns Eduardo addresses are critical for widespread MCP adoption. By leveraging WebAssembly, MCP.Run provides a clever solution that balances functionality, security, and performance. The ability to have fine-grained control over what each servlet can access is particularly valuable in production environments.
Building Your Own Servlet (45:31-58:00)
The tutorial demonstrates how to create a custom servlet for MCP.Run using XTP (a tool for building and managing WebAssembly plugins).
Key Points:
- Creating a new servlet involves registering it on the MCP.Run platform
- The XTP command-line tool handles building and publishing servlets
- Servlets must implement two key functions:
- describe() – returns schemas for all available functions
- call() – implements the actual functionality for each tool
- Once published, servlets can be immediately installed in any profile
Eduardo walks through creating a simple “Hello World” servlet that implements a “greet” function, then demonstrates how Claude can interact with it once it’s installed.
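As a rough illustration of what that involves, here is a sketch in plain TypeScript. The type names are simplified stand-ins for the bindings the XTP scaffold generates, but the shape matches the two functions listed above: describe() advertises the greet tool’s schema, and call() handles an invocation.
// Illustrative "greet" servlet: describe() advertises the tool,
// call() executes it. Types are simplified stand-ins for the
// XTP-generated bindings.
interface ToolCall {
  tool: string;
  arguments: Record<string, unknown>;
}

export function describe() {
  return {
    tools: [{
      name: "greet",
      description: "Greet a person by name",
      inputSchema: {
        type: "object",
        properties: { name: { type: "string", description: "Who to greet" } },
        required: ["name"],
      },
    }],
  };
}

export function call(input: ToolCall) {
  if (input.tool !== "greet") {
    return { isError: true, content: [{ type: "text", text: `Unknown tool: ${input.tool}` }] };
  }
  const who = String(input.arguments["name"] ?? "world");
  return { content: [{ type: "text", text: `Hello, ${who}!` }] };
}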
# Creating a new servlet with XTP
xtp plugin create hello-world
# Building and publishing the servlet
xtp plugin build
xtp plugin push
My Take:
The servlet creation process is surprisingly straightforward, especially considering the power it unlocks. While the demo shows a simple example, the same pattern could be used to create servlets that interact with databases, APIs, or complex business systems. This extensibility is what makes MCP truly powerful.
Integrating with Kubernetes (58:01-1:12:30)
One of the most powerful applications of MCP is cloud-native integration. Eduardo demonstrates how to create and use servlets that interact with Kubernetes clusters and deployed applications.
Key Points:
- Special Kubernetes servlets allow querying deployments, pods, and services
- Custom application servlets can interact with APIs running in the cluster
- The demo shows Claude diagnosing a failing deployment and suggesting fixes
- Currently uses kubectl proxy for cluster access, with certificate-based auth coming soon
- Servlet permissions can be limited to specific base URLs for security
The demonstration includes deploying a Star Wars API service, interacting with it through a custom serlet, deliberately breaking the deployment, and then having Claude diagnose and fix the issue – all through natural language interaction.
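As a rough sketch of the access pattern, assuming the default proxy port: kubectl proxy exposes the cluster API on localhost, and the servlet’s network permission is then restricted to that base URL.
# Expose the Kubernetes API on localhost; the servlet's network access
# can be limited to this base URL instead of the whole machine
kubectl proxy --port=8001
# The servlet can then query, for example:
#   http://127.0.0.1:8001/apis/apps/v1/namespaces/default/deployments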
My Take:
The Kubernetes integration showcases the potential for AI-powered DevOps assistance. Imagine being able to ask in plain English about the status of your deployments, diagnose issues, or even automate remediation – without needing to remember complex kubectl syntax. This could dramatically change how we manage cloud infrastructure.
Creating Automated Tasks with MCP (1:12:31-1:25:45)
Beyond interactive use, MCP.Run supports automated tasks that can be triggered by schedules or webhooks, enabling integration with external systems.
Key Points:
- Tasks run servlets and LLMs with defined prompts in the background
- They can be scheduled or triggered via webhook URLs
- Demo shows creating a news summarization task
- Webhook integration with GitHub automatically summarizes pull requests
- Telegram bot integration demonstrates conversational interface with MCP tools
Eduardo demonstrates a particularly useful example: a GitHub integration that automatically analyzes pull requests and provides summaries as comments, even asking clarifying questions to the PR author when details are missing.
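Triggering such a task from an external system is just an HTTP call to its webhook URL. Below is a minimal sketch with a hypothetical placeholder URL and payload; mcp.run shows the real webhook URL for each task, and the payload it expects depends on how the task’s prompt is written.
# Hypothetical example: trigger the news summarization task via its webhook URL
curl -X POST "https://<your-task-webhook-url>" \
  -H "Content-Type: application/json" \
  -d '{"topic": "kubernetes"}'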
My Take:
The automated tasks feature takes MCP beyond just interactive chat interfaces and into the realm of AI agents that can work autonomously. The GitHub PR summary example is brilliant – it solves a real pain point (PRs with insufficient descriptions) while demonstrating how AI can be embedded into existing workflows.
Building MCP Client Applications (1:25:46-End)
In the final section, Eduardo explores how developers can integrate MCP into their own applications using various client libraries.
Key Points:
- MCPX libraries are available for different languages (Java, TypeScript, Python)
- Spring AI demo shows integrating MCP with Java applications
- Android demo with Gemini demonstrates mobile integration
- Multi-modal capabilities allow combining text and image inputs
- A pure-Java WebAssembly runtime (Chicory) enables servlets to run on any Java platform
The most impressive demonstration shows an Android application using Gemini (Google’s multimodal LLM) with MCP servlets. The app can analyze images (recognizing the Eiffel Tower and Mona Lisa), then use the Maps servlet to provide directions between them.
// Example of using MCPX4J with Spring AI
// (library class names follow the snippet shown in the talk; see the
// mcpx4j project for the current API)
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class McpRunConfiguration {

    @Bean
    public McpxForJ mcpxForJ() {
        // API key and profile ID come from your mcp.run account
        return new McpxForJ(
                McpxConfig.builder()
                        .apiKey("your-mcp-run-api-key")
                        .profileId("your-profile-id")
                        .build()
        );
    }
}
My Take:
The client libraries make MCP accessible to developers beyond just chat interfaces. This opens the door to embedding AI capabilities with tool access into practically any application – from mobile apps to enterprise systems. The multi-modal capabilities are particularly exciting, as they allow combining visual understanding with tool execution.
This article summarizes the excellent tutorial created by Eduardo from Dylibso. If you found this summary helpful, please support the creator by watching the full video and subscribing to the KubeSimplify channel.