
Build MCP Servers on Your Favorite Stack

MCP servers don't play nicely with Bun and Hono out of the box. Vercel's mcp-handler fixes that. Here's exactly how I set it up.

John Ryan Cottam

4 min read


I’ve been building MCP servers for a while now — wiring Claude Code, Cursor, and other AI clients into real APIs and databases. The protocol itself is straightforward. The runtime quirks are not.

Bun and Hono are my stack of choice when it comes to building APIs. Fast, minimal, deploys to serverless platforms. But MCP’s streaming transport — SSE handshakes, chunked responses, keep-alive handling — breaks in subtle ways depending on your runtime. Bun handles fetch differently than Node. Hono’s response model doesn’t always play nicely with raw streams. You get cryptic disconnections and tools that appear to register but never actually execute.

Vercel’s mcp-handler cuts through all of that. It abstracts the transport layer entirely — you define what your tools do, it handles how they talk to clients.

The full setup

Install the package:

bun add mcp-handler zod

Then wire it up:

import { Hono } from "hono";
import { createMcpHandler } from "mcp-handler";
import { z } from "zod";
import figlet from "figlet";

const app = new Hono();

const handler = createMcpHandler(
  (server) => {
    // Register tools here. The description string is what the model reads
    // when deciding whether to call the tool.
    server.tool(
      "createAsciiArt",
      "Create ASCII art from text using figlet",
      { text: z.string() }, // Zod schema validates input before the handler runs
      async ({ text }) => {
        const art = figlet.textSync(text);
        // Tool results are an array of typed content parts
        return {
          content: [{ type: "text", text: art }],
        };
      },
    );
  },
  {}, // server options: auth callbacks, metadata, session config
  { basePath: "/", maxDuration: 60 }, // handler options
);

// Route MCP requests to the handler, passing the raw Request through
app.all("/mcp/*", async (c) => {
  return await handler(c.req.raw);
});

export default app;

That’s the pattern. Define your tool, mount the handler, ship it.
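The return value in the handler above follows MCP's content envelope: an array of typed content parts. A tiny helper makes the shape explicit. The wrapper function and its name are mine for illustration, not part of mcp-handler's API:

```typescript
// Sketch of the tool-result shape returned by the handler above.
// textResult is an illustrative helper, not an mcp-handler export.
type TextContent = { type: "text"; text: string };
type ToolResult = { content: TextContent[] };

function textResult(text: string): ToolResult {
  return { content: [{ type: "text", text }] };
}

// A tool body like createAsciiArt's can then end with:
//   return textResult(art);
```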

Full implementation on GitHub →

What the handler actually does

The three arguments tell the full story:

  1. The server callback — where you register tools. Each tool gets a name, a description (used by the model to decide when to call it — write this carefully), a Zod schema for input validation, and an async handler.

  2. Server options — auth callbacks, metadata, session config. Most of the time you leave this empty and add auth later.

  3. Handler options — basePath tells the handler where it’s mounted. maxDuration sets the max execution time in seconds. Raise this for tools that do heavy work.
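Spelled out as a plain object, the third argument looks like this. basePath and maxDuration come from the setup above; verboseLogs is an assumption on my part — check mcp-handler's README for the version you install:

```typescript
// Possible shape of the handler-options argument. basePath and maxDuration
// match the setup above; verboseLogs is an assumption worth verifying
// against your installed mcp-handler version.
const handlerOptions = {
  basePath: "/",     // mount point; endpoint paths resolve relative to this
  maxDuration: 300,  // max execution time in seconds; raise for heavy tools
  verboseLogs: true, // extra protocol logging while developing
};
```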

Under the hood it manages:

  • The MCP protocol handshake
  • SSE transport vs. HTTP streaming negotiation
  • Message serialization and framing
  • Runtime differences between Bun and Node
  • Concurrent request handling

You get none of that complexity. You just write the tool logic.

Write tools that are actually useful

The description field matters more than the implementation. It’s what the model reads to decide whether to call your tool. Vague descriptions mean missed calls or wrong calls.

Weak:

server.tool("search", "Search for things", { query: z.string() }, ...)

Better:

server.tool(
  "searchProducts",
  "Search the product catalog by name, SKU, or category. Returns matching products with price, inventory, and description.",
  { query: z.string(), category: z.string().optional() },
  ...
)

The model makes better decisions when it knows exactly what the tool returns and when to use it.

Test before you wire anything up

Don’t go straight to Claude Code or Cursor. Use the MCP Inspector first:

npx @modelcontextprotocol/inspector

Connect to your local server, find your tool, call it with test inputs. Verify the response shape before you involve an AI client. Debugging a broken tool through a chat interface is painful — the inspector makes it obvious where things are failing.
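The inspector also runs headless, which is handy for scripting a quick smoke test. A sketch, assuming the server from above is running locally; CLI flags vary across inspector versions, so check `npx @modelcontextprotocol/inspector --help` first:

```shell
# Hypothetical headless invocation against the local server above.
# Verify flag names against your inspector version before relying on this.
npx @modelcontextprotocol/inspector --cli http://localhost:3000/mcp \
  --method tools/call --tool-name createAsciiArt --tool-arg text=hello
```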

Connect your client

Once the server is running, add it to your MCP client config:

{
  "mcpServers": {
    "my-server": {
      "url": "http://localhost:3000/mcp"
    }
  }
}

Config location varies by client:

  • Cursor: Settings → MCP → Add server
  • Claude Code: ~/.claude.json under mcpServers

Switch the URL to your deployed server URL when you go to production. The format is the same everywhere.

Why MCP is worth the investment

MCP is becoming the standard protocol for AI-tool integration. It’s how agents talk to your APIs, your databases, your internal services — anything that isn’t baked into the model itself.

The value compounds fast. Build a tool once, and it’s available across every MCP-compatible client. Claude Code, Cursor, your custom assistant — they all get access without you writing separate integrations. It’s the same logic that made REST APIs worth standardizing: one interface, many consumers.

The teams that win the next few years aren’t just using AI — they’re giving it reach into their actual systems. MCP is how you do that cleanly.

mcp-handler is the fastest path from idea to working integration. No transport debugging, no runtime gotchas. Just tools.

