MCP, Model Context Protocol: A Simple Guide

If you've been following the MCP hype and thinking all this time, "this isn't anything new", you were right.

MCPs provide a standardized way to expose resources or capabilities to LLMs. It’s more of a standard than a technology breakthrough. And now you’re thinking: haven’t we been exposing app capabilities through REST endpoints for decades?

Again, you’re not wrong.

So why do we need a standard?

Imagine we’re creating an app that can suggest events in a city based on the weather. If it’s a rainy day, you might go to a tech conference and listen to your favorite tech personality. If it’s sunny, you might listen to some music in the open.

Before MCPs, the architecture for such an app had the application calling each external API directly.

You might also have used multiple weather APIs, in case one wasn’t available or you wanted to compare different results:

# Weather API 1
GET /api/v1/london/forecast-10-days
# Weather API 2
GET /api/v1/forecast?d=10&city=london

The LLM doesn't know how to interact with these APIs, so you'd have to hardcode the host, the endpoints, the HTTP methods, the query keys or path params, and the request and response formats.

This works fine while you only have two weather APIs, but what if there are more? Similarly, there can be tons of websites suggesting events, each doing it slightly differently.

If you want to create an LLM that can suggest events based on the weather for the next few days, this is a lot of endpoints to hardcode.
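To make the pain concrete, here’s a hypothetical sketch (the provider names and hosts are made up for illustration) of the glue code each extra API forces you to write:

```typescript
// Hypothetical glue code: every provider's URL shape is hardcoded knowledge.
function buildForecastUrl(provider: "api1" | "api2", city: string, days: number): string {
  if (provider === "api1") {
    // Weather API 1 encodes everything in the path
    return `https://weather-one.example.com/api/v1/${city}/forecast-${days}-days`;
  }
  // Weather API 2 uses query parameters instead
  return `https://weather-two.example.com/api/v1/forecast?d=${days}&city=${city}`;
}

console.log(buildForecastUrl("api1", "london", 10));
// Every new provider means another branch here — and another response format to parse.
```

Multiply this by every weather provider and events site you want to support, and the maintenance cost becomes obvious.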

But this is where MCP comes into play!

The MCP Architecture

MCP is a client-server architecture at its core.

The Host

The host is your application, which gathers input from the user, such as where they are located right now and when they’re looking for some activity.

The host initiates the connection to the MCP Server using an MCP client.

MCP Client

The client is part of the Host application, and its role is to:

  - maintain the connection to an MCP Server
  - discover what the server exposes (prompts, resources, and tools)
  - pass the host's requests to the server and hand back the results

Here’s an example of calling the Weather MCP server for a specific city:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function useWeatherMCP() {
  const client = new Client({ name: "weather-client", version: "1.0.0" });

  // stdio is the simplest transport: it spawns the server as a child process
  // and talks to it over stdin/stdout ("weather-server.js" is our server file)
  await client.connect(
    new StdioClientTransport({ command: "node", args: ["weather-server.js"] })
  );

  // Resources are addressed by URI rather than by name + arguments
  const result = await client.readResource({ uri: "weather://london/forecast" });

  console.log("Weather forecast:", result.contents[0].text);
}

MCP Server

The MCP Server is responsible for the execution of prompts, tool calls, and resource access.

Here’s an example of a Weather MCP that returns the forecast for a city based on some static data:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const weatherData: Record<string, object[]> = {
  london: [{ date: "2025-04-27", temperature: 15, condition: "Sunny" }],
};

const server = new Server(
  { name: "weather-server", version: "1.0.0" },
  { capabilities: { resources: {} } }
);

// This is how you expose the weather forecast resource
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [{
    uri: "weather://london/forecast",
    name: "London weather forecast",
    description: "Get weather forecast",
    mimeType: "application/json",
  }],
}));

// Handle weather forecast requests
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  // This is where the magic happens, but I'm just returning some dummy data
  const city = new URL(request.params.uri).hostname; // e.g. "london"
  return {
    contents: [{
      uri: request.params.uri,
      mimeType: "application/json",
      text: JSON.stringify(weatherData[city] ?? []),
    }],
  };
});

// stdio keeps the example minimal; an HTTP transport needs extra wiring
await server.connect(new StdioServerTransport());

What makes MCPs special?

The protocol.

A standardized way to list the resources, tools, and prompts a server exposes, and to invoke them in a shape every client understands.

No new AI black magic here, simply an agreement on how MCP Servers and Clients communicate.
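For instance, the messages themselves are plain JSON-RPC 2.0. A client asking a server for its resources, and the server's answer, look roughly like this (the weather resource is a made-up example):

```typescript
// A client's "what resources do you have?" request is a JSON-RPC 2.0 message:
const listRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "resources/list",
};

// ...and the server replies in a shape every MCP client understands:
const listResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    resources: [
      {
        uri: "weather://london/forecast", // example URI, not a real server
        name: "London weather forecast",
        mimeType: "application/json",
      },
    ],
  },
};

console.log(listResponse.result.resources.map((r) => r.name));
```

Because the request and the response shape are fixed by the protocol, the client needs zero provider-specific knowledge to have this conversation.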

If I had to summarize the main takeaway of all this, to me it would be that an MCP Client will always know what it can do with an MCP Server using the following API:

client.listPrompts()
client.listResources()
client.listTools()

We went from storing how we can interact with different servers and APIs to asking those APIs what they can do for us.

This is so much better than having those endpoints hardcoded because the LLMs can just ask the servers what they can do for them.

This eliminates most of the work that came from staying up to date with API formats, for example, adapting your app to handle a new URL or the removal of a field from the response.


What’s Next?

Try to run an MCP Server.

Here’s how to get started:

  1. Create a Node TypeScript project and set up the MCP Server from the official Quickstart.

  2. Run the server

  3. Use the Inspector tool to list the available resources
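Assuming your compiled server entry point is `build/index.js` (adjust the path to your project), the Inspector can be launched with:

```shell
# Launches the MCP Inspector UI and connects it to your server over stdio
npx @modelcontextprotocol/inspector node build/index.js
```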

I’ll be writing a short and useful book on building MCP Servers, Clients, and using them in your apps. If you’d like to get an email when the book is out, you can subscribe here.
