3 minute read

Rethinking Model Connectivity

The Model Context Protocol (MCP) is not just another framework; it’s an attempt to define a universal interface between AI models and the outside world. The idea is simple but powerful: just as USB-C standardized how devices connect, MCP aims to standardize how models access data, tools, and workflows.

When a model connects to a remote system, it needs to ask questions, call functions, and get structured answers. Doing this consistently across arbitrary APIs is chaotic. MCP provides a vocabulary (tools, resources, prompts, and notifications) and a grammar to govern how those interactions happen. That grammar is JSON-RPC 2.0.

Why JSON-RPC 2.0 and Not Just “Plain JSON”?

At first glance, you might think: why not just exchange regular JSON payloads over HTTP? The reason is discipline and symmetry. JSON-RPC gives structure to what would otherwise be arbitrary JSON blobs. Every message has a method, parameters, and an id that links requests and responses together. That means bidirectional communication—models and servers can both issue calls and respond to each other asynchronously.

Here’s a minimal example:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": { "name": "get_weather", "arguments": { "city": "São Paulo" } }
}

And the response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": { "temp": 26.1, "unit": "C" }
}

By encoding the intent (method) and the context (params) explicitly, MCP ensures that even if transports differ, semantics don’t drift. JSON-RPC 2.0 also defines notifications (messages without id) for streaming progress or async updates, and batch requests for efficiency. Those patterns map cleanly to modern AI workloads where LLMs issue multiple concurrent calls.
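
MCP’s progress updates ride on exactly this mechanism. A notification carries no id, so no response is expected; the shape below follows MCP’s progress notification, with the token and numbers being illustrative:

{
  "jsonrpc": "2.0",
  "method": "notifications/progress",
  "params": { "progressToken": "abc123", "progress": 50, "total": 100 }
}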

It’s not just JSON; it’s JSON with a contract.

The Transport Layer: stdio vs HTTP/SSE

Underneath that data layer, MCP can ride on different transports. Two official ones are:

  • stdio (standard input/output): lightweight, direct communication between processes on the same machine. No HTTP stack, no network overhead; messages are framed as newline-delimited JSON. Perfect for local integrations where the model runs as a subprocess or container neighbor.
  • HTTP/SSE (Server-Sent Events): for remote connections, where servers stream updates to the client. SSE keeps a long-lived HTTP connection open, allowing the server to push incremental responses—essential for long-running tools or progressive results.

Because JSON-RPC specifies message structure independently of transport (framing is left to each transport: newline-delimited JSON for stdio, event frames for SSE), the same logical message works both locally (stdio) and remotely (HTTP/SSE). That’s part of what makes MCP so composable.
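
Here’s a rough sketch of the stdio side, hand-rolling newline-delimited JSON over a subprocess rather than using an SDK client; the server command is a placeholder:

import json
import subprocess

# Launch a hypothetical MCP server as a subprocess; its stdio is the transport.
proc = subprocess.Popen(
    ["python", "my_mcp_server.py"],  # placeholder command
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# The same logical JSON-RPC message from earlier, framed as a single line.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "São Paulo"}},
}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()

# Read one newline-delimited response back and inspect the result.
response = json.loads(proc.stdout.readline())
print(response.get("result"))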

The Data Layer: Semantics and Structure

The data layer is where MCP defines meaning: what’s a tool, what’s a resource, how discovery and capability negotiation happen. When a client connects, client and server exchange metadata about supported protocol versions and features; then the server exposes structured objects representing callable operations or readable data surfaces.
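
Concretely, the connection opens with an initialize exchange. An abridged request looks like this (the version string and client details are illustrative):

{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": { "sampling": {} },
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}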

What’s elegant here is how context becomes first-class: prompts, resource metadata, and even error messages are all standardized. This consistency opens the door for AI models to orchestrate complex multi-tool workflows without hardcoding per-API glue.

Early Experiments and Observations

I’ve started building a few small MCP servers that wrap local databases and APIs—essentially turning each service into a self-describing endpoint. The process of exposing SQL queries, REST endpoints, and prompt templates as native tools is refreshingly straightforward once you adopt JSON-RPC 2.0’s structure.

Error handling also feels clean: every failed call carries a typed error object with a code, message, and optional diagnostic data. When you’re chaining multiple tool calls, this granularity matters.
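
In JSON-RPC 2.0 terms, a failed call looks like this (the code -32602 is the spec’s “Invalid params”; the data field is illustrative):

{
  "jsonrpc": "2.0",
  "id": 7,
  "error": {
    "code": -32602,
    "message": "Invalid params",
    "data": { "detail": "unknown city: 'Atlantis'" }
  }
}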


Next Steps: Implementing with FastMCP

The next phase is hands-on: experimenting with FastMCP, an open implementation that wraps MCP in Python’s async stack (FastAPI + Uvicorn). FastMCP abstracts much of the boilerplate, letting you expose tools and resources declaratively and serve them via both stdio and HTTP/SSE out of the box.
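
Here’s a minimal sketch of that declarative style, using the FastMCP class from the official MCP Python SDK; the tool name, arguments, and return value are placeholders rather than my actual metrics service:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("metrics")  # server name is arbitrary

@mcp.tool()
def query_metric(name: str, days: int = 7) -> dict:
    """Return recent values for a named metric (placeholder implementation)."""
    return {"metric": name, "days": days, "values": [1.2, 1.4, 1.1]}

if __name__ == "__main__":
    # Defaults to stdio; FastMCP can also serve the same tools over SSE/HTTP.
    mcp.run()

Run it and any MCP-capable client can discover query_metric via tools/list—no per-API glue required.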

My plan:

  1. Stand up a FastMCP server that exposes my internal metrics database and CPI forecasting API as MCP tools.
  2. Integrate FastMCP into an agentic workflow where ChatGPT or Claude connects directly to these resources via MCP without manual API wiring.
  3. Experiment with structured logging and telemetry around JSON-RPC calls: latency, payload sizes, and error rates (a rough sketch follows this list).
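
For step 3, the simplest thing that could work is a wrapper around each tool function that emits structured log lines; this is a hypothetical sketch of my own, not a FastMCP or SDK API:

import functools
import json
import logging
import time

log = logging.getLogger("mcp.telemetry")

def instrumented(fn):
    """Log latency, result size, and errors for a tool call as structured JSON."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            log.info(json.dumps({
                "tool": fn.__name__,
                "latency_ms": round((time.perf_counter() - start) * 1000, 2),
                "result_bytes": len(json.dumps(result, default=str)),
                "ok": True,
            }))
            return result
        except Exception as exc:
            log.error(json.dumps({
                "tool": fn.__name__,
                "latency_ms": round((time.perf_counter() - start) * 1000, 2),
                "error": str(exc),
                "ok": False,
            }))
            raise
    return wrapper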

If it works as cleanly as it looks on paper, FastMCP might become the default scaffolding for connecting private data stacks to LLMs safely and efficiently.


Closing Thoughts

MCP’s power isn’t in its complexity—it’s in its restraint. JSON-RPC 2.0 keeps the wire clean, stdio and SSE cover both local and remote use cases, and the data layer keeps semantics consistent. For anyone building agentic systems that need to reach beyond a sandbox, this protocol is worth watching closely.