Mastering MCP (Model Context Protocol): From Concept to Real-World Usage

By Seokchol Hong

Introduction

In the past, if you asked Claude, "Can you check my GitHub issues?" the answer was usually, "Sorry, I can't access that directly. Please copy and paste it." Working with AI meant constantly moving data by hand.

MCP (Model Context Protocol) is the open protocol Anthropic introduced in 2024 to solve that problem. It lets AI models connect to external tools, data, and services in a standardized way. Just as USB-C lets many devices use one port, MCP lets AI access many external systems through one protocol.

Within a year of launch, OpenAI adopted it and the protocol moved to the Linux Foundation. MCP is no longer just one company's experiment. It is becoming an industry standard. This article covers the concept, practical usage, the MCP vs. CLI debate, recommended servers, and token optimization.


1. What MCP Is

MCP is a standard communication protocol between AI models and external systems. It follows a client-server structure:

  • MCP client: runs on the AI tool side, such as Claude Desktop, Claude Code, or Cursor
  • MCP server: acts as the intermediary layer that talks to external systems such as GitHub, Figma, databases, or browsers

If the AI decides to "fetch the issue list from GitHub," the MCP client sends a request to the GitHub MCP server, which then calls the GitHub API and returns the result. Developers no longer need to build custom integrations service by service. Installing one MCP server is often enough to give the AI direct access.
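Under the hood, each of these exchanges travels as a JSON-RPC 2.0 message over the MCP transport. A minimal sketch of a `tools/call` round trip (the tool name `list_issues` and its arguments are illustrative, not taken from a specific server):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_issues",
    "arguments": { "owner": "octocat", "repo": "hello-world", "state": "open" }
  }
}
```

The server replies with a result envelope carrying the tool output as content blocks:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [ { "type": "text", "text": "#42 Fix login bug (open)" } ]
  }
}
```

Because every server speaks this same request/response shape, the client never needs service-specific glue code.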

Before MCP vs. After MCP

| Category | Before MCP | After MCP |
| --- | --- | --- |
| External data access | User copies and pastes | AI queries directly |
| Tool integration | Custom implementation per service | Unified through a standard protocol |
| Ecosystem | Fragmented | Shared server ecosystem |
| Authentication | Handled separately everywhere | Managed by the MCP server |

MCP Timeline

  • Late 2024: Anthropic releases MCP and first supports it in Claude Desktop
  • Mid-2025: major AI tools such as Claude Code, Cursor, and Windsurf adopt it
  • December 2025: OpenAI announces MCP adoption for ChatGPT integration
  • Late 2025: the protocol moves to the Linux Foundation, beginning cross-industry standardization
  • 2026: rapid growth of the third-party MCP server ecosystem

2. "MCP Is Dead": The MCP vs. CLI Debate

On February 28, 2026, Eric Holmes published a provocative post: "MCP is dead. Long live the CLI." At a time when MCP was gaining momentum as the standard interface for AI agents, the argument drew real attention.

The CLI Argument

The core idea was simple: LLMs are already good at using CLIs, so why add another abstraction layer such as MCP?

In environments like Claude Code, many tasks really can be solved with a CLI:

  • Easier debugging through standard error output
  • No extra process management
  • Reuse of existing authentication systems
  • Many tools are already well documented

The MCP Response

MCP still delivers unique value:

  • Accessibility for non-developers: a CLI is developer-oriented, while MCP also works in GUI environments such as Claude Desktop
  • Enterprise governance: organizations can centrally manage which tools AI can access
  • Standardized interfaces: it provides the basis for a third-party tool ecosystem
  • Security isolation: MCP servers can mediate authentication and permissions

Practical Conclusion: CLI for Individuals, MCP for Organizations

This is not really an either-or choice. For individual developers moving quickly, the CLI is often more efficient. For organizations that need structured AI tool governance, MCP is the better fit.

Real comparisons from tooling show the difference clearly:

For Playwright:

  • Playwright MCP: around 22 core tools, AI-friendly interface, specialized for screenshots and accessibility analysis
  • Playwright CLI: 50+ commands, fine-grained control, easier integration with existing test infrastructure

For Supabase:

  • Supabase MCP: 32 tools, natural language control over remote databases, AI-agent integration
  • Supabase CLI: 20+ commands, local development setup, and migration management

3. Ten Recommended MCP Servers

As of March 2026, these are the servers that are actually worth using.

Context7 MCP: Keeps AI Code Aligned with Current Docs

A common failure mode in AI coding is getting code generated against outdated docs. Ask for a modern Next.js App Router example and the model may still return deprecated Pages Router code.

Context7 solves this by injecting the latest official documentation for a library into the AI context in real time. It is an open source MCP server from Upstash that reduces both deprecated API suggestions and hallucinated code.

How it works: detect keywords -> identify the library -> resolve the library ID -> query the latest docs -> inject context -> produce more accurate code
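In MCP terms, that pipeline maps onto two tool calls. A hedged sketch of what those calls look like as `tools/call` requests (the tool and parameter names follow Context7's published server at the time of writing, but verify them against the current release):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "resolve-library-id",
    "arguments": { "libraryName": "next.js" }
  }
}
```

The resolved library ID from that response is then passed to the docs-fetching tool, whose output gets injected into the model's context before it writes any code.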

Sequential Thinking MCP: Improves Reasoning Quality

This MCP helps AI break complex problems into steps. It is especially useful in architecture design and multi-step reasoning tasks.

Playwright MCP: Control the Browser with Natural Language

This enables browser automation without writing code. Commands such as "Click the login button on this page and take a screenshot" become possible. It is useful for E2E test automation, web scraping, and UI validation.

GitHub MCP: Operate GitHub with Natural Language

Create issues, review PRs, search code, and manage branches with natural language requests such as "Show me the issues opened last week" or "Review the changes in this PR."

Figma MCP: Turn Design into Code

This server focuses on manipulating Figma designs through natural language and converting designs into code. Tasks such as "Convert the components in this Figma frame into React code" become possible.

Supabase MCP: Manage PostgreSQL in Natural Language

With 32 tools, Supabase MCP supports schema design, table creation, query execution, and Row Level Security setup through plain language commands.

TaskMaster AI MCP and Shrimp Task Manager MCP: AI-Powered Task Management

These MCP servers let AI break down and manage complex project work. Combined with Claude Code, they can support agent-led planning and execution.

Magic MCP: Visual UI Generation

Magic MCP can generate UI components instantly from natural language, which makes it useful for mockups and prototyping.

CODEX MCP: Code Indexing and Search

This server helps AI find relevant code quickly inside large codebases and inject the right context into the prompt. It is useful for legacy code analysis and large refactors.


4. MCP Token Optimization

One topic that cannot be ignored in real MCP usage is token efficiency, which directly affects both cost and accuracy.

Why Token Optimization Matters

The list of tools and their descriptions returned by MCP servers consume context-window space. If you install 10 MCP servers, the tool definitions alone can cost thousands of tokens. That creates three problems:

  • Higher cost: every API call carries tool definitions
  • Lower performance: too much context gets consumed by tool metadata instead of the task
  • Tool-choice confusion: too many available tools can make it harder for the model to choose correctly
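To see the overhead concretely, here is a rough back-of-the-envelope estimator. The ~4 characters per token ratio is a common heuristic for English text, not an exact tokenizer, and the sample tool definition is invented for illustration:

```python
# Rough estimate of context-window overhead from MCP tool definitions.
# Assumes ~4 characters per token, a common heuristic for English text.

def estimate_tool_overhead(tools: list[dict], chars_per_token: float = 4.0) -> int:
    """Approximate tokens consumed by tool names, descriptions, and schemas."""
    total_chars = 0
    for tool in tools:
        total_chars += len(tool.get("name", ""))
        total_chars += len(tool.get("description", ""))
        total_chars += len(str(tool.get("inputSchema", {})))
    return round(total_chars / chars_per_token)

# A hypothetical tool with a verbose description, repeated to simulate
# 10 installed servers exposing ~20 tools each.
sample_tool = {
    "name": "list_issues",
    "description": "List issues in a repository. " * 10,
    "inputSchema": {"type": "object", "properties": {"owner": {"type": "string"}}},
}
overhead = estimate_tool_overhead([sample_tool] * 200)
print(f"~{overhead} tokens of tool metadata per request")
```

Even with this crude heuristic, 200 installed tools lands well into the thousands of tokens, all paid on every single API call before the task itself gets any context.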

Optimization Strategies

  1. Enable only the MCP servers you need for the task at hand
  2. Simplify tool descriptions if the server ships with excessively verbose text
  3. Batch operations when several tool calls can be merged into one
  4. Cache results to avoid repeating the same queries
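Strategy 1 in practice: instead of one large config registering every server you have ever tried, keep the configuration trimmed to what the current kind of work needs. A sketch of a docs-focused `claude_desktop_config.json` (same format as the full example later in this article):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp@latest"]
    }
  }
}
```

Swapping in a different minimal set per project keeps tool definitions from crowding out task context.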

5. MCP, A2A, and AG-UI: Comparing Agent Protocols

MCP is not the only protocol in the agent ecosystem. Three major protocols now coexist.

MCP (Model Context Protocol)

  • Role: connects AI models to external tools and data
  • Analogy: the agent's hands and feet
  • Led by: Anthropic, then Linux Foundation
  • Characteristics: tool invocation, data access, and resource handling

A2A (Agent-to-Agent Protocol)

  • Role: connects one AI agent to another
  • Analogy: conversation between agents
  • Led by: Google
  • Characteristics: based on JSON-RPC 2.0, focused on cross-organization agent collaboration

AG-UI (Agent-User Interface Protocol)

  • Role: connects AI agents to user interfaces
  • Analogy: the face of the agent
  • Characteristics: streaming responses, progress display, and human approval requests

These three are complementary:

  • MCP lets agents use tools
  • A2A lets agents collaborate with other agents
  • AG-UI lets agents present work to users

Together they form a complete agent ecosystem.


6. Installing and Starting with MCP

MCP Setup in Claude Desktop

The standard approach is to add MCP servers to the Claude Desktop configuration file, claude_desktop_config.json:

{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp@latest"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token-here"
      }
    }
  }
}

MCP Setup in Claude Code

In Claude Code, setup is simpler with claude mcp add:

claude mcp add context7 -- npx -y @upstash/context7-mcp@latest
claude mcp add github -- npx -y @modelcontextprotocol/server-github

Installation Notes

  • Most servers require Node.js v18 or newer
  • Many servers need environment variables such as API keys or tokens before they work
  • MCP servers run as separate processes, so system resource usage matters

These days you can often just tell the AI, "Install Context7 MCP for me," and it will handle the setup. But understanding the underlying mechanism is still necessary when something breaks.


7. Agent Skills: Another Standard After MCP

Alongside MCP, another trend is becoming important in the agent ecosystem: Agent Skills. The Skills system Anthropic introduced for Claude Code has also been adopted by OpenAI Codex and Google Gemini CLI, making it a second major pillar next to MCP.

If MCP defines "what tools an agent can use," Skills define "what expertise the agent has." They complement each other: MCP gives access to tools, while Skills guide the agent in using those tools effectively.


Closing

MCP is one of the fastest-growing pieces of infrastructure in the AI agent ecosystem. In about a year, it moved from one company's experiment to an industry standard, while the third-party server ecosystem kept expanding.

Claims that "MCP is dead" miss the real pattern. The likely future is coexistence between CLI and MCP. Individual developers can absolutely start with the CLI, but organizations that want governed, reusable, standardized AI infrastructure will need MCP.

The best starting point is simple: install Context7 MCP first. One small change, adding "use context7" to a prompt, is often enough to make the value of MCP immediately obvious.
