What is Model Context Protocol (MCP)?

TL;DR

Anthropic's open standard (released late 2024) for AI-tool connectivity. Called the 'USB-C of AI' - Claude, Cursor, Cline, Windsurf, and ChatGPT now support it.

Model Context Protocol (MCP): Definition & Explanation

Model Context Protocol (MCP) is the open protocol Anthropic released in late 2024 for connecting AI agents to external systems. Often called the "USB-C of AI", it standardizes how AI assistants integrate with tools like Slack, GitHub, Linear, Notion, Postgres, and Sentry. Where each framework (LangChain, LlamaIndex) once required its own custom integration code, MCP lets you write one MCP server that works across all compatible clients: Claude Desktop, Cursor, Cline, Windsurf, Continue, and Zed.

The architecture has three parts:

(1) MCP Host - the AI assistant (e.g., Claude);
(2) MCP Server - a thin bridge to the external tool;
(3) Transport - stdio for local servers, or HTTP with Server-Sent Events (SSE) for remote ones.

Core primitives:

(a) Tools - functions the model can call;
(b) Resources - readable data such as files;
(c) Prompts - reusable prompt templates;
(d) Sampling - LLM calls requested from inside a server.

Adoption exploded in 2025-2026: there are now 1,000+ official and community servers (GitHub, Linear, Slack, Stripe, AWS, Gmail, and more). In 2025 Anthropic published a layered architecture combining Claude Skills, Subagents, and Hooks, which has become a de facto standard for enterprise AI operations. OpenAI shipped effectively MCP-compatible ChatGPT Connectors in late 2025. Reading and writing MCP servers is now table stakes for AI-native engineers in 2026.
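To make the architecture concrete: whichever transport is used (stdio or HTTP), MCP messages are framed as JSON-RPC 2.0. The sketch below, using only the Python standard library, builds the three requests a client typically sends - the initialize handshake, tool discovery, and a tool invocation. The tool name create_issue and the client name are hypothetical placeholders, not part of any real server.

```python
import json

def jsonrpc(method, params, msg_id):
    """Frame an MCP message as a JSON-RPC 2.0 request string
    (the wire format MCP uses over both stdio and HTTP transports)."""
    return json.dumps({"jsonrpc": "2.0", "id": msg_id,
                       "method": method, "params": params})

# 1. Handshake: the client announces its protocol version and capabilities.
init = jsonrpc("initialize", {
    "protocolVersion": "2024-11-05",  # version string from the initial spec release
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1"},  # hypothetical
}, msg_id=1)

# 2. Discovery: ask the server which tools it exposes.
list_tools = jsonrpc("tools/list", {}, msg_id=2)

# 3. Invocation: call one tool by name with JSON arguments.
call = jsonrpc("tools/call", {
    "name": "create_issue",  # hypothetical tool name
    "arguments": {"title": "Bug: login fails"},
}, msg_id=3)

for msg in (init, list_tools, call):
    print(msg)
```

In a real deployment the official SDKs generate and parse these frames for you; writing an MCP server is mostly a matter of declaring tools, resources, and prompts, with the JSON-RPC layer handled underneath.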
