
MCP — Model Context Protocol

What Is MCP?

Model Context Protocol (MCP) is an open standard by Anthropic that defines how applications expose tools, resources, and prompts to LLMs in a unified way.

Think of MCP as USB-C for AI — one standard connector so any LLM can plug into any tool/data source.

Docs: https://modelcontextprotocol.io/introduction


Why It Matters

Without MCP: every AI app builds custom integrations per tool (Slack, GitHub, DB, etc.).
With MCP: tools expose a standard interface → any MCP-compatible client can use them.


Key Concepts

MCP Server: A process that exposes tools/resources/prompts over the protocol
MCP Client: The LLM application that connects to servers (Claude Desktop, Claude Code, custom apps)
Tools: Functions the LLM can call (e.g., search_github, query_db)
Resources: Data the LLM can read (e.g., file contents, API responses)
Prompts: Reusable prompt templates exposed by the server
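Concretely, a tool is advertised to the client as a name, a description, and a JSON Schema for its arguments. A sketch of what a server might advertise for a hypothetical query_db tool (the tool name and schema here are illustrative, not from any real server):

```python
import json

# Hypothetical tool definition, shaped like the entries an MCP server
# returns when a client lists its tools: a name, a human-readable
# description, and a JSON Schema describing the expected arguments.
query_db_tool = {
    "name": "query_db",
    "description": "Run a read-only SQL query against the app database",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "SQL SELECT statement"},
        },
        "required": ["sql"],
    },
}

# The schema is what lets any MCP client construct a valid call
# without custom per-tool integration code.
print(json.dumps(query_db_tool, indent=2))
```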

Architecture

LLM App (MCP Client)
    ↕ MCP Protocol (JSON-RPC over stdio / HTTP)
MCP Server
    ↕ Native API
External Tool / Data Source (GitHub, Slack, DB, filesystem...)
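The protocol layer in the middle is plain JSON-RPC 2.0. A sketch of a tools/call exchange, assuming a hypothetical search_github tool (message shapes follow the MCP spec; over stdio each message is one line of JSON):

```python
import json

# Client -> server: invoke a tool by name with JSON arguments.
# "tools/call" is the MCP method; "search_github" is an illustrative
# tool name, not a built-in.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_github",
        "arguments": {"query": "mcp language:python"},
    },
}

# Server -> client: the result carries a list of content blocks
# (text here; the spec also allows other content types).
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "3 repositories found"}],
    },
}

print(json.dumps(request))
print(json.dumps(response))
```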

Using MCP with LangChain

import asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

async def main():
    # Connect to one or more MCP servers (stdio transport shown)
    client = MultiServerMCPClient({
        "my_server": {
            "command": "python",
            "args": ["path/to/mcp/server"],
            "transport": "stdio",
        }
    })

    # Get tools from the server(s) — use them directly in LangChain
    tools = await client.get_tools()

    # Use in an agent
    agent = create_agent(model="claude-sonnet-4-5", tools=tools)

asyncio.run(main())

Lib: https://github.com/langchain-ai/langchain-mcp-adapters


Common MCP Servers

@modelcontextprotocol/server-filesystem: Read/write local files
@modelcontextprotocol/server-github: GitHub repos, PRs, issues
@modelcontextprotocol/server-brave-search: Web search
@modelcontextprotocol/server-postgres: PostgreSQL queries
Custom servers: Any API you wrap
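To wire one of these into a client like Claude Desktop, the usual route is an entry in its config file telling it how to launch the server. A sketch (the server name key and the directory path are placeholders):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```

The client spawns the command as a subprocess and speaks MCP to it over stdio.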

Notes

<!-- Add as you build MCP integrations -->