# Building Your First MCP Server
The Model Context Protocol (MCP) is the closest thing the AI ecosystem has to a universal adapter. It defines how AI models discover and call tools, and how those tools return structured results. Once you have an MCP server running, any compatible client — Claude, Claude Code, or any agent framework that speaks MCP — can use it without additional glue code.
This guide walks through building a real MCP server from scratch. Not a toy example, but a pattern you can extend into production use.
## What MCP Actually Is
MCP is a JSON-RPC 2.0 protocol over stdio (or HTTP+SSE for remote servers). The client and server exchange a small set of message types:
- `tools/list` — the client asks what tools are available
- `tools/call` — the client invokes a tool with arguments
- `resources/list` / `resources/read` — the client reads files, databases, or any context the server exposes
The server declares its tools upfront with a JSON Schema for each tool’s input. The client uses this schema to construct valid calls. The entire interaction is stateless from the client’s perspective — the server manages any necessary state internally.
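Concretely, a single tool invocation looks something like this on the wire. This is a sketch: the `get_weather` tool and its arguments are placeholders, not part of the protocol itself.

```typescript
// Sketch of one JSON-RPC 2.0 tools/call exchange (stdio transport sends
// newline-delimited JSON). The tool name and arguments are hypothetical.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_weather",
    arguments: { city: "Berlin", units: "celsius" },
  },
};

const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "Weather in Berlin: 18°C, cloudy" }],
  },
};

console.log(JSON.stringify(request));
```

The client never sees your implementation; it only sees this schema-validated request/response pair.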
This simplicity is the point. MCP is not a framework. It’s a contract that lets you wrap anything — a Postgres database, a Kubernetes cluster, a GitHub repo, a proprietary internal API — and expose it as a set of typed, callable functions that any AI model can use.
## Setup
You’ll need Node.js 18+ and a TypeScript project. The official MCP SDK makes the boilerplate minimal:
```shell
mkdir mcp-server && cd mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node tsx
```

Add a `tsconfig.json`:
{ "compilerOptions": { "target": "ES2022", "module": "Node16", "moduleResolution": "Node16", "strict": true, "outDir": "./dist" }, "include": ["src"]}And a start script in package.json:
{ "scripts": { "dev": "tsx src/index.ts", "build": "tsc", "start": "node dist/index.js" }}Building the Server
Create `src/index.ts`. The structure is always the same: initialize the server, register tools, connect to the stdio transport.
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
```
```typescript
const server = new McpServer({
  name: "my-mcp-server",
  version: "1.0.0",
});
```

Now register tools. Each tool has a name, a description the model uses to decide when to call it, and an input schema:
```typescript
server.tool(
  "get_weather",
  "Get current weather conditions for a city",
  {
    city: z.string().describe("The city name to get weather for"),
    units: z.enum(["celsius", "fahrenheit"]).default("celsius"),
  },
  async ({ city, units }) => {
    // Your actual implementation here
    const data = await fetchWeather(city, units);
    return {
      content: [
        {
          type: "text",
          text: `Weather in ${city}: ${data.temp}°${units === "celsius" ? "C" : "F"}, ${data.condition}`,
        },
      ],
    };
  }
);
```

Connect the server to the stdio transport:
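The `fetchWeather` call is left unimplemented above. A stub like the following keeps the example compilable; the `Weather` type, the placeholder temperature, and the condition string are all assumptions for illustration, and a real implementation would call an actual weather service.

```typescript
// Hypothetical stand-in for a real weather API client.
type Weather = { temp: number; condition: string };

async function fetchWeather(
  city: string,
  units: "celsius" | "fahrenheit"
): Promise<Weather> {
  // Placeholder data; replace with a real API call keyed on `city`.
  const tempC = 21;
  return {
    temp: units === "celsius" ? tempC : Math.round((tempC * 9) / 5 + 32),
    condition: "partly cloudy",
  };
}

fetchWeather("Berlin", "celsius").then(w =>
  console.log(`${w.temp}°C, ${w.condition}`)
);
```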
```typescript
const transport = new StdioServerTransport();
await server.connect(transport);
```

That's the complete server skeleton. Everything else is adding tools.
## Tool Design Patterns
The quality of your MCP server comes down to how well you design the tools. A few patterns that matter:
### Narrow, composable tools over wide monoliths
A tool that does one thing is more useful than one that does many things. The model can combine narrow tools in unexpected ways; a wide tool boxes it in.
Avoid this:
server.tool("manage_database", "Do anything with the database", { operation: z.enum(["read", "write", "delete", "schema"]), query: z.string(), // ... many optional params})Prefer this:
server.tool("query_records", "Run a SELECT query and return rows as JSON", { table: z.string(), where: z.string().optional().describe("SQL WHERE clause, e.g. 'status = active'"), limit: z.number().default(50),})
server.tool("get_schema", "Return the column definitions for a table", { table: z.string(),})Describe inputs precisely
The model reads your `describe()` strings to know what to pass. Treat them like documentation for a developer who has never seen your codebase:
```typescript
{
  date_range: z.string().describe(
    "ISO 8601 date range in the format YYYY-MM-DD/YYYY-MM-DD. " +
      "Example: 2025-01-01/2025-03-31"
  ),
}
```

Vague descriptions produce wrong inputs. Precise descriptions produce correct ones.
### Return structured text, not raw JSON
Models parse text more reliably than raw JSON blobs in tool results. Format your output:
```typescript
return {
  content: [
    {
      type: "text",
      text: [
        `Found ${rows.length} records:`,
        ...rows.map(r => `- ${r.id}: ${r.name} (${r.status})`),
      ].join("\n"),
    },
  ],
};
```

If the caller genuinely needs structured data for downstream processing, embed JSON inside a clearly labeled text block.
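When downstream code does need the raw data, one workable pattern is a human-readable summary followed by a labeled JSON block. The `rows` data here is illustrative; in a real tool it would come from your data source.

```typescript
// Illustrative rows; a real tool would fetch these from its backend.
const rows = [{ id: 1, name: "alpha", status: "active" }];

const result = {
  content: [
    {
      type: "text",
      text:
        `Found ${rows.length} records. Machine-readable JSON follows:\n` +
        "```json\n" +
        JSON.stringify(rows, null, 2) +
        "\n```",
    },
  ],
};

console.log(result.content[0].text);
```

The leading sentence tells the model what the block is, so it can quote the summary or parse the JSON as the task requires.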
### Error handling
Return errors as tool results, not thrown exceptions. The MCP SDK translates thrown errors into protocol-level errors, which clients handle differently than tool-level errors:
```typescript
async ({ table, where }) => {
  try {
    const rows = await db.query(table, where);
    return { content: [{ type: "text", text: formatRows(rows) }] };
  } catch (err) {
    return {
      content: [
        {
          type: "text",
          text:
            `Query failed: ${err instanceof Error ? err.message : "unknown error"}. ` +
            `Check that the table name is correct and the WHERE clause uses valid column names.`,
        },
      ],
      isError: true,
    };
  }
}
```

The `isError: true` flag signals to the client that the result represents a failure, letting the agent decide how to recover.
## A Real Example: GitHub Tool Server
Here’s a concrete tool that wraps the GitHub API to let an agent read repository contents:
```typescript
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
```
```typescript
server.tool(
  "get_file_contents",
  "Read the contents of a file from a GitHub repository",
  {
    owner: z.string().describe("Repository owner (username or org)"),
    repo: z.string().describe("Repository name"),
    path: z.string().describe("File path within the repository"),
    ref: z.string().optional().describe("Branch, tag, or commit SHA. Defaults to the default branch."),
  },
  async ({ owner, repo, path, ref }) => {
    try {
      const { data } = await octokit.rest.repos.getContent({ owner, repo, path, ref });

      if (Array.isArray(data)) {
        // It's a directory — return a listing
        const listing = data
          .map(f => `${f.type === "dir" ? "📁" : "📄"} ${f.name}`)
          .join("\n");
        return {
          content: [{ type: "text", text: `Directory listing for ${path}:\n${listing}` }],
        };
      }

      if (data.type !== "file" || !data.content) {
        return {
          content: [{ type: "text", text: `${path} is not a readable file` }],
          isError: true,
        };
      }

      const content = Buffer.from(data.content, "base64").toString("utf-8");
      return {
        content: [
          {
            type: "text",
            text: `Contents of ${owner}/${repo}/${path}:\n\`\`\`\n${content}\n\`\`\``,
          },
        ],
      };
    } catch (err: unknown) {
      const message = err instanceof Error ? err.message : "Unknown error";
      return {
        content: [{ type: "text", text: `Failed to read ${path}: ${message}` }],
        isError: true,
      };
    }
  }
);
```

This is a complete, realistic tool. It handles directories, errors, and encoding — and wraps the result in text that's easy for a model to read and cite.
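One gap to close before calling this production-ready: a large file can flood the model's context window. A small cap on output size is cheap insurance; the 50,000-character limit below is an arbitrary assumption to tune for your client.

```typescript
// Hypothetical output cap; tune the limit for your client's context budget.
const MAX_CHARS = 50_000;

function truncate(text: string, max: number = MAX_CHARS): string {
  if (text.length <= max) return text;
  return text.slice(0, max) + `\n… [truncated ${text.length - max} characters]`;
}

console.log(truncate("short strings pass through unchanged"));
```

Wrap the decoded file body and the directory listing in `truncate(...)` before building the result so tool output stays bounded.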
## Connecting to Claude Desktop
Once your server is built, connect it to Claude Desktop by editing its config at `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
{ "mcpServers": { "my-server": { "command": "node", "args": ["/absolute/path/to/dist/index.js"], "env": { "GITHUB_TOKEN": "your_token_here" } } }}Restart Claude Desktop. Your tools appear in the tool selector automatically.
For development, use `tsx` directly:
{ "mcpServers": { "my-server-dev": { "command": "npx", "args": ["tsx", "/absolute/path/to/src/index.ts"] } }}Connecting to Claude Code
MCP servers connect to Claude Code via the CLI:
```shell
claude mcp add my-server node /absolute/path/to/dist/index.js
```

Or with environment variables:
```shell
claude mcp add my-server --env GITHUB_TOKEN=your_token -- node /path/to/dist/index.js
```

The `--` separates CLI flags from the server command, so the variadic `--env` flag doesn't swallow the command itself. Claude Code will automatically use your tools when working on tasks that benefit from them. No additional prompting needed.
## What to Build Next
An MCP server is essentially a capability boundary: what can the agent see and do? The design question is always which capabilities to expose and how to scope them safely.
A few directions worth exploring:
- **Database servers** — Expose read-only query access to Postgres, SQLite, or DynamoDB, with schema introspection tools so the agent can discover the structure before querying
- **Code analysis servers** — Wrap tree-sitter or a Language Server Protocol implementation to give agents semantic understanding of codebases beyond simple file reads
- **Monitoring servers** — Bridge metrics systems (Prometheus, Datadog) so agents can investigate incidents with real observability data
- **Documentation servers** — Index and serve internal docs or runbooks, letting agents answer operational questions without needing to know where everything lives
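Whatever you build, scope it deliberately. For the database direction, for instance, a guard like this (a hypothetical helper, not part of the MCP SDK) can reject anything but a single SELECT statement before it reaches the database:

```typescript
// Hypothetical read-only guard for a query tool.
// Conservative by design: one statement, SELECT only.
function assertReadOnly(sql: string): void {
  const normalized = sql.trim().toLowerCase();
  if (!normalized.startsWith("select") || normalized.includes(";")) {
    throw new Error("Only single SELECT statements are allowed");
  }
}

assertReadOnly("SELECT id, name FROM users WHERE status = 'active'");
console.log("query allowed");
```

This is a blunt string filter, not a SQL parser; in production, pair it with a read-only database role so the guarantee doesn't rest on string matching alone.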
The MCP ecosystem is still early. Most useful servers haven’t been built yet. The pattern is simple enough that anything you currently access via a dashboard or API is a reasonable candidate for an MCP server.
Build the one that solves a real problem in your environment, and the protocol will take care of the rest.
## Related Articles
- Introducing Agentic Development
- Tool Use Patterns: Building Reliable Agent-Tool Interfaces
- Multi-Agent Patterns: Orchestrators, Workers, and Pipelines