From Zero to MCP: Simplifying AI Integrations with xmcp

The AI ecosystem is evolving rapidly, and Anthropic's release of the Model Context Protocol (MCP) on November 25th, 2024 has certainly shaped how LLMs connect with data. No more building custom integrations for every data source: MCP provides one protocol to connect them all. But here's the challenge: building MCP servers from scratch can be complex.

TL;DR: What is MCP?

Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect devices to various peripherals, MCP provides a standardized way to connect AI models to different data sources, tools, and services. It’s an open protocol that enables AI applications to safely and efficiently access external context – whether that’s your company’s database, file systems, APIs, or custom business logic.

Source: https://modelcontextprotocol.io/docs/getting-started/intro

In practice, this means you can hook LLMs into the things you already work with every day. To name a few examples, you could query databases to visualize trends, pull and resolve issues from GitHub, fetch or update content to a CMS, and so on. Beyond development, the same applies to broader workflows: customer support agents can look up and resolve tickets, enterprise search can fetch and read content scattered across wikis and docs, operations can monitor infrastructure or control devices.

But there's more to it, and that's where you really unlock the power of MCP. It's not just about single tasks, but about rethinking entire workflows. Suddenly, we're reshaping how we interact with products and even our own computers: instead of adapting ourselves to the limitations of software, we can shape the experience around our own needs.

That’s where xmcp comes in: a TypeScript framework designed with DX in mind, for developers who want to build and ship MCP servers without the usual friction. It removes the complexity and gets you up and running in a matter of minutes.

A little backstory

xmcp was born out of necessity at Basement Studio, where we needed to build internal tools for our development processes. As we dove deeper into the protocol, we quickly discovered how fragmented the tooling landscape was and how much time we were spending on setup, configuration, and deployment rather than actually building the tools our team needed.

That’s when we decided to consolidate everything we’d learned into a framework. The philosophy was simple: developers shouldn’t have to become experts just to build AI tools. The focus should be on creating valuable functionality, not wrestling with boilerplate code and all sorts of complexities.

Key features & capabilities

xmcp shines in its simplicity. With just one command, you can scaffold a complete MCP server:

npx create-xmcp-app@latest

The framework automatically discovers and registers tools. No extra setup needed.

All you need is tools/

xmcp abstracts away the tool syntax of the official TypeScript SDK and applies a separation-of-concerns principle through a simple three-export structure:

  • Implementation: The actual tool logic
  • Schema: The input parameters, defined with Zod schemas and validated automatically
  • Metadata: The tool's identity and behavior hints for AI models

// src/tools/greet.ts
import { z } from "zod";
import { type InferSchema } from "xmcp";

// Define the schema for tool parameters
export const schema = {
  name: z.string().describe("The name of the user to greet"),
};

// Define tool metadata
export const metadata = {
  name: "greet",
  description: "Greet the user",
  annotations: {
    title: "Greet the user",
    readOnlyHint: true,
    destructiveHint: false,
    idempotentHint: true,
  },
};

// Tool implementation
export default async function greet({ name }: InferSchema<typeof schema>) {
  return `Hello, ${name}!`;
}

Transport Options

  • HTTP: Perfect for server deployments, enabling tools that fetch data from databases or external APIs
  • STDIO: Ideal for local operations, allowing LLMs to perform tasks directly on your machine

You can tweak the configuration to your needs by modifying the xmcp.config.ts file in the root directory. Options include the transport type, CORS setup, experimental features, the tools directory, and even the webpack config. Learn more about this file here.

import { type XmcpConfig } from "xmcp";

const config: XmcpConfig = {
  http: {
    port: 3000,
    // The endpoint where the MCP server will be available
    endpoint: "/my-custom-endpoint",
    bodySizeLimit: 10 * 1024 * 1024,
    cors: {
      origin: "*",
      methods: ["GET", "POST"],
      allowedHeaders: ["Content-Type"],
      credentials: true,
      exposedHeaders: ["Content-Type"],
      maxAge: 600,
    },
  },

  webpack: (config) => {
    // Add raw loader for images to get them as base64
    config.module?.rules?.push({
      test: /\.(png|jpe?g|gif|svg|webp)$/i,
      type: "asset/inline",
    });

    return config;
  },
};

export default config;

Built-in Middleware & Authentication

For HTTP servers, xmcp provides native solutions for adding authentication (JWT, API key, OAuth). You can also extend your application with custom middleware, and even provide an array of middlewares that run in order.

import { type Middleware } from 'xmcp';

const middleware: Middleware = async (req, res, next) => {
  // Custom processing
  next();
};

export default middleware;
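
Because xmcp accepts an array of middlewares, you can layer concerns such as logging and API-key checks. The sketch below mirrors the Express-style `(req, res, next)` shape of xmcp's `Middleware` type with local stand-in types so it stays self-contained; the `x-api-key` header name and the key set are assumptions for illustration, not part of xmcp's API:

```typescript
// Local stand-ins mirroring the (req, res, next) signature that xmcp's
// Middleware type wraps; in a real project, import `type Middleware` from "xmcp".
type Req = { headers: Record<string, string | undefined> };
type Res = { statusCode: number; body?: string };
type Middleware = (req: Req, res: Res, next: () => void) => void;

// Hypothetical API-key check: reject requests missing a valid "x-api-key" header
const VALID_KEYS = new Set(["secret-key"]);

const apiKeyAuth: Middleware = (req, res, next) => {
  const key = req.headers["x-api-key"];
  if (!key || !VALID_KEYS.has(key)) {
    res.statusCode = 401;
    res.body = "Unauthorized";
    return; // stop the chain: next() is never called
  }
  next();
};

const requestLogger: Middleware = (_req, _res, next) => {
  console.log("[mcp] incoming request");
  next();
};

// Export an array: the middlewares run in order for every HTTP request
export default [requestLogger, apiKeyAuth];
```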

Integrations

While you can bootstrap an application from scratch, xmcp can also work on top of your existing Next.js or Express project. To get started, run the following command:

npx init-xmcp@latest

on your initialized application, and you're good to go! You'll find a tools directory with the same auto-discovery capabilities. If you're using Next.js, the handler is set up automatically; if you're using Express, you'll have to configure it manually.

From zero to prod

Let’s see this in action by building and deploying an MCP server. We’ll create a Linear integration that fetches issues from your backlog and calculates completion rates, perfect for generating project analytics and visualizations.

For this walkthrough, we’ll use Cursor as our MCP client to interact with the server.

Setting up the project

The fastest way to get started is by deploying the xmcp template directly from Vercel. This automatically initializes the project and creates an HTTP server deployment in one click.

Alternative setup: If you prefer a different platform or transport method, scaffold locally with npx create-xmcp-app@latest

Once deployed, you’ll see this project structure:
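
Roughly, the scaffolded project looks like this (a sketch reconstructed from the template's conventions; exact files may vary between versions):

```text
my-xmcp-app/
├── src/
│   └── tools/
│       └── greet.ts      # example tool, auto-discovered
├── xmcp.config.ts        # transport, CORS, webpack, etc.
├── package.json
└── tsconfig.json
```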

Building our main tool

Our tool will accept three parameters: team name, start date, and end date. It’ll then calculate the completion rate for issues within that timeframe.

Head to the tools directory, create a file called get-completion-rate.ts and export the three main elements that construct the syntax:

import { z } from "zod";
import { type InferSchema, type ToolMetadata } from "xmcp";

export const schema = {
  team: z
    .string()
    .min(1, "Team name is required")
    .describe("The team to get completion rate for"),
  startDate: z
    .string()
    .min(1, "Start date is required")
    .describe("Start date for the analysis period (YYYY-MM-DD)"),
  endDate: z
    .string()
    .min(1, "End date is required")
    .describe("End date for the analysis period (YYYY-MM-DD)"),
};

export const metadata: ToolMetadata = {
  name: "get-completion-rate",
  description: "Get completion rate analytics for a specific team over a date range",
};

export default async function getCompletionRate({
  team,
  startDate,
  endDate,
}: InferSchema<typeof schema>) {
  // tool implementation (covered in the next step)
}

Our basic structure is set. We now have to add the client functionality to actually communicate with Linear and get the data we need.

We'll be using a Linear personal API key, so we'll need to instantiate the client from @linear/sdk. Let's focus on the tool implementation now:

export default async function getCompletionRate({
  team,
  startDate,
  endDate,
}: InferSchema<typeof schema>) {

    const linear = new LinearClient({
        apiKey: "...", // hardcoded placeholder; we'll replace this next
    });
}

Instead of hardcoding API keys, we'll use xmcp's native headers() utility to accept the Linear API key securely from each request:

export default async function getCompletionRate({
  team,
  startDate,
  endDate,
}: InferSchema<typeof schema>) {

    // API Key from headers
    const apiKey = headers()["linear-api-key"] as string;

    if (!apiKey) {
        return "No linear-api-key header provided";
    }

    const linear = new LinearClient({
        apiKey: apiKey,
    });
    
    // rest of the implementation
}

This approach allows multiple users to connect with their own credentials. Your MCP configuration will look like:

"xmcp-local": {
  "url": "http://127.0.0.1:3001/mcp",
  "headers": {
    "linear-api-key": "your api key"
  }
}

Moving forward with the implementation, this is what our complete tool file will look like:

import { z } from "zod";
import { type InferSchema, type ToolMetadata } from "xmcp";
import { headers } from "xmcp/dist/runtime/headers";
import { LinearClient } from "@linear/sdk";

export const schema = {
  team: z
    .string()
    .min(1, "Team name is required")
    .describe("The team to get completion rate for"),
  startDate: z
    .string()
    .min(1, "Start date is required")
    .describe("Start date for the analysis period (YYYY-MM-DD)"),
  endDate: z
    .string()
    .min(1, "End date is required")
    .describe("End date for the analysis period (YYYY-MM-DD)"),
};

export const metadata: ToolMetadata = {
  name: "get-completion-rate",
  description: "Get completion rate analytics for a specific team over a date range",
};

export default async function getCompletionRate({
  team,
  startDate,
  endDate,
}: InferSchema<typeof schema>) {

    // API Key from headers
    const apiKey = headers()["linear-api-key"] as string;

    if (!apiKey) {
        return "No linear-api-key header provided";
    }

    const linear = new LinearClient({
        apiKey: apiKey,
    });

    // Get the team by name
    const teams = await linear.teams();
    const targetTeam = teams.nodes.find(t => t.name.toLowerCase().includes(team.toLowerCase()));

    if (!targetTeam) {
        return `Team "${team}" not found`;
    }

    // Get issues created in the date range for the team
    const createdIssues = await linear.issues({
        filter: {
            team: { id: { eq: targetTeam.id } },
            createdAt: {
                gte: startDate,
                lte: endDate,
            },
        },
    });

    // Get issues completed in the date range for the team (for reporting purposes)
    const completedIssues = await linear.issues({
        filter: {
            team: { id: { eq: targetTeam.id } },
            completedAt: {
                gte: startDate,
                lte: endDate,
            },
        },
    });

    // Calculate completion rate: percentage of created issues that were completed
    const totalCreated = createdIssues.nodes.length;
    const createdAndCompleted = createdIssues.nodes.filter(issue => 
        issue.completedAt !== undefined && 
        issue.completedAt >= new Date(startDate) && 
        issue.completedAt <= new Date(endDate)
    ).length;
    const completionRate = totalCreated > 0 ? (createdAndCompleted / totalCreated * 100).toFixed(1) : "0.0";

    // Structure data for the response
    const analytics = {
        team: targetTeam.name,
        period: `${startDate} to ${endDate}`,
        totalCreated,
        totalCompletedFromCreated: createdAndCompleted,
        completionRate: `${completionRate}%`,
        createdIssues: createdIssues.nodes.map(issue => ({
            title: issue.title,
            createdAt: issue.createdAt,
            priority: issue.priority,
            completed: issue.completedAt !== undefined,
            completedAt: issue.completedAt,
        })),
        allCompletedInPeriod: completedIssues.nodes.map(issue => ({
            title: issue.title,
            completedAt: issue.completedAt,
            priority: issue.priority,
        })),
    };

    return JSON.stringify(analytics, null, 2);
}
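
The completion-rate math itself is easy to sanity-check in isolation. Here's a standalone version of the same formula used in the tool above:

```typescript
// Same formula as in the tool: percentage of created issues that were completed,
// guarded against division by zero when nothing was created in the period
function completionRate(totalCreated: number, createdAndCompleted: number): string {
  return totalCreated > 0
    ? ((createdAndCompleted / totalCreated) * 100).toFixed(1)
    : "0.0";
}

console.log(completionRate(8, 5)); // "62.5" (5 of 8 created issues completed)
console.log(completionRate(0, 0)); // "0.0" (empty period, no division by zero)
```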

Let’s test it out!

Start your development server by running pnpm dev (or the equivalent for the package manager you've set up).

The server restarts automatically whenever you make changes to your tools, giving you instant feedback during development. Then, head to Cursor Settings → Tools & Integrations and toggle the server on. You should see it discover one tool, the only file in our tools directory so far.

Let’s now use the tool by querying to “Get the completion rate of the xmcp project between August 1st 2025 and August 20th 2025”.

Let's try using this tool in a more comprehensive way: we want to understand the project's completion rate across three separate months (June, July, and August) and visualize the trend. So we'll ask Cursor to retrieve the information for these months and generate a trend chart and a monthly issue overview:

Once we’re happy with the implementation, we’ll push our changes and deploy a new version of our server.

Pro tip: use Vercel’s branch deployments to test new tools safely before merging to production.

Next steps

Nice! We’ve built the foundation, but there’s so much more you can do with it.

  • Expand your MCP toolkit with a complete workflow automation. Take this MCP server as a starting point and add tools that generate weekly sprint reports and automatically save them to Notion, or build integrations that connect multiple project management platforms.
  • Strengthen the application by adding authentication. You can use the native OAuth provider to add Linear's authentication instead of API keys, or use the Better Auth integration to handle custom authentication flows that fit your organization's security requirements.
  • For production workloads, you may need to add custom middlewares, like rate limiting, request logging, and error tracking. This can be easily set up by creating a middleware.ts file in the source directory. You can learn more about middlewares here.

Final thoughts

The best part of what you’ve built here is that xmcp handled all the protocol complexity for you. You didn’t have to learn the intricacies of the Model Context Protocol specification or figure out transport layers: you just focused on solving your actual business problem. That’s exactly how it should be.

Looking ahead, xmcp’s roadmap includes full MCP specification compliance, bringing support for resources, prompts and elicitation. More importantly, the framework is evolving to bridge the gap between prototype and production, with enterprise-grade features for authentication, monitoring, and scalability.

If you wish to learn more about the framework, visit xmcp.dev, read the documentation and check out the examples!
