Model Context Protocol
A standardized, open protocol for connecting LLMs to external tools and data sources. Tool providers implement an MCP server; agent frameworks implement an MCP client. The agent discovers available tools at runtime, invokes them through a standardized calling convention, and receives structured results.
Often described as "USB-C for AI tools" — one protocol, any tool, any agent.
Structure
The client connects to multiple MCP servers. Each server exposes a set of tools with schemas. The agent discovers available tools at runtime — it doesn't need to know them in advance.
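This topology can be sketched with a stdlib-only toy model: one client, several servers, each advertising tool schemas the agent only learns about at runtime. The class names, server names ("files", "web"), and tools here are illustrative, not part of any real SDK.

```python
class ToyServer:
    """Stands in for an MCP server: holds named tools with schemas."""
    def __init__(self, name, tools):
        self.name = name
        self._tools = tools  # {tool_name: {"description": ..., "inputSchema": ...}}

    def list_tools(self):
        return [{"name": n, **meta} for n, meta in self._tools.items()]


class ToyClient:
    """Stands in for an MCP client: connects to servers, aggregates tools."""
    def __init__(self):
        self.servers = []

    def connect(self, server):
        self.servers.append(server)

    def discover(self):
        # Nothing is hardcoded: the client asks each server at runtime
        # and namespaces tools by server to avoid name collisions.
        catalog = {}
        for server in self.servers:
            for tool in server.list_tools():
                catalog[f"{server.name}/{tool['name']}"] = tool
        return catalog


client = ToyClient()
client.connect(ToyServer("files", {
    "read_file": {"description": "Read a file",
                  "inputSchema": {"type": "object"}}}))
client.connect(ToyServer("web", {
    "fetch": {"description": "Fetch a URL",
              "inputSchema": {"type": "object"}}}))

tools = client.discover()
print(sorted(tools))  # ['files/read_file', 'web/fetch']
```

The namespacing in `discover` is one design choice among several; real clients must likewise disambiguate when two servers expose tools with the same name.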
How It Works
- Connect — MCP client connects to one or more MCP servers (local or remote)
- Discover — client requests the list of available tools from each server
- Expose — discovered tools are presented to the agent as callable functions
- Invoke — agent calls a tool; client routes the request to the appropriate server
- Return — server executes the operation and returns structured results
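On the wire, the steps above map to JSON-RPC 2.0 messages. The method names (`initialize`, `tools/list`, `tools/call`) come from the MCP specification; the payload details below (tool name, arguments, version string) are trimmed illustrative values, not a complete handshake.

```python
import json

def request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request string."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Connect: the client opens the session with an initialize handshake.
init = request(1, "initialize", {
    "protocolVersion": "2025-03-26",  # illustrative spec revision
    "capabilities": {},
    "clientInfo": {"name": "demo-client", "version": "0.1"},
})

# Discover: the client asks the server which tools it offers.
discover = request(2, "tools/list")

# Invoke: the agent picks a tool; the client routes the call by name.
invoke = request(3, "tools/call", {
    "name": "search",                  # hypothetical tool
    "arguments": {"query": "mcp"},
})

for msg in (init, discover, invoke):
    print(msg)
```

The server answers each request with a JSON-RPC response carrying the same `id`, which is how the client matches structured results back to in-flight calls.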
MCP servers can expose:
- Tools — callable functions with typed parameters and return values
- Resources — data sources the agent can read (files, database records, API responses)
- Prompts — pre-built prompt templates for common tasks
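The three primitives differ in how the agent uses them: tools are invoked, resources are read, prompts are filled in. A stdlib-only sketch of their shapes (real servers declare these through an SDK; every name and payload below is made up for illustration):

```python
registry = {
    "tools": {
        # Callable function with a typed (JSON Schema) input.
        "search_docs": {
            "inputSchema": {"type": "object",
                            "properties": {"query": {"type": "string"}}},
            "handler": lambda query: f"results for {query!r}",
        },
    },
    "resources": {
        # Read-only data the agent can pull into context, keyed by URI.
        "file:///README.md": lambda: "# Project readme",
    },
    "prompts": {
        # Pre-built template the agent instantiates with arguments.
        "summarize": lambda text: f"Summarize the following:\n{text}",
    },
}

print(registry["tools"]["search_docs"]["handler"]("mcp"))
print(registry["resources"]["file:///README.md"]())
print(registry["prompts"]["summarize"]("hello world"))
```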
Key Characteristics
- Dynamic discovery — agent finds tools at runtime, not hardcoded at design time
- Standardized — one protocol adopted across major providers and frameworks (Anthropic, OpenAI, Google, LangChain)
- Composable — connect multiple servers to give the agent a wide toolkit
- Ecosystem — 1,000+ MCP servers available as of 2025
- Overhead — protocol negotiation and server management add complexity
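The "standardized" point is concrete at the wire level: tool results come back in the same envelope regardless of which provider built the server. The content-block shape below (`content` list, `isError` flag) follows the MCP spec; the payload text is invented for illustration.

```python
import json

# A hypothetical tools/call response as the client would receive it.
raw = json.dumps({
    "jsonrpc": "2.0", "id": 3,
    "result": {
        "content": [{"type": "text", "text": "2 files found"}],
        "isError": False,
    },
})

reply = json.loads(raw)
result = reply["result"]
if result.get("isError"):
    raise RuntimeError("tool call failed")

# Keep only the text blocks; spec also allows other content types.
texts = [b["text"] for b in result["content"] if b["type"] == "text"]
print(texts[0])  # 2 files found
```

Because every server returns this shape, a client can parse results generically instead of writing per-integration response handling.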
When to Use
- You want to connect an agent to multiple external services without custom integration code
- Tools should be discoverable at runtime, not hardcoded
- You're building an agent that needs to work with different tools depending on the user's setup
- The MCP ecosystem already has servers for your target services
- You want to standardize tool integration across your agent fleet