Model Context Protocol (MCP)
In one line
An open protocol for connecting LLM applications to tools, data sources, and prompts in a standard way — like USB for AI tool integrations.
What it actually means
MCP defines a JSON-RPC protocol between MCP clients (an LLM-powered host like Claude Desktop, an IDE, or your own agent runtime) and MCP servers (small processes that expose tools, resources, and prompt templates). A server might wrap a database, a filesystem, a SaaS API, or anything else. The client discovers what each server offers, hands the available tools to the LLM, and routes tool calls to the right server. Auth, capability negotiation, and streaming are all part of the spec.
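The discovery-and-routing flow above can be sketched as JSON-RPC 2.0 messages. A minimal sketch: the method names `tools/list` and `tools/call` come from the MCP spec, while the tool name `query_db` and its arguments are hypothetical stand-ins for whatever a server exposes.

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request of the shape an MCP client sends."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# 1. The client asks a server which tools it exposes.
list_tools = make_request(1, "tools/list")

# 2. When the LLM emits a tool call, the client routes it to the
#    owning server. "query_db" and its arguments are hypothetical.
call_tool = make_request(2, "tools/call", {
    "name": "query_db",
    "arguments": {"sql": "SELECT 1"},
})

print(json.dumps(call_tool, indent=2))
```

The server replies with a matching JSON-RPC response keyed by `id`, which is how the client pairs results back to in-flight calls.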
Why it matters
Before MCP, every agent framework reinvented its own tool plumbing, and integrating a new system meant writing glue inside every framework. MCP inverts that: write one server, and every MCP-aware host can use it. For practitioners, it's the easiest way to give an agent access to your own systems without coupling it to a specific SDK.
Example
claude-desktop ──MCP──▶ filesystem-server (read/write files in ~/Documents)
               ──MCP──▶ postgres-server   (query a read-only replica)
               ──MCP──▶ github-server     (list issues, open PRs)
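In a client like Claude Desktop, each arrow above corresponds to an entry in its MCP configuration file (`claude_desktop_config.json`). A sketch of one such entry, assuming the published `@modelcontextprotocol/server-filesystem` package; the other two servers in the diagram would get analogous entries with their own commands:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "~/Documents"]
    }
  }
}
```

On startup the client launches each configured server as a subprocess and speaks MCP to it over stdio.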
You’ll hear it when
- Setting up Claude Desktop, Cursor, or another MCP-aware client.
- Wrapping an internal API for use by an agent.
- Comparing MCP to OpenAI tool calling or LangChain tools.
- Designing capability boundaries and auth for an agent.
- Reading the AI engineer roadmap.