Six months ago, we started looking at the Model Context Protocol.

At the time, MCP was gaining momentum in the developer community but was mostly being used for local file access, code tools, and simple APIs. Nobody was really using it for production marketing data — real OAuth connections, live ad platform data, multi-account workspaces. We thought that was an interesting gap.

This is the story of what we built, what surprised us, and what we’d do differently.

What we were trying to solve

Detrics already connected to 25+ marketing platforms — Facebook Ads, Google Ads, TikTok, Shopify, and more. Our users could query that data through Google Sheets and Looker Studio. But the feedback we kept hearing was: “Can I just ask it a question?”

Querying marketing data shouldn't require knowing GAQL, building a Looker dashboard, or exporting CSVs. You should be able to say "Compare my Google and Meta ROAS last month" and get an answer.

MCP offered exactly that interface: an AI assistant that could call our APIs and present the results in natural language.

Choosing the transport

MCP supports multiple transports. The obvious choice for a remote server is SSE (Server-Sent Events), which is what most hosted MCP implementations used at launch. But by the time we started building, StreamableHTTP (stateless HTTP) had emerged as the preferred transport for remote servers.

StreamableHTTP is simpler: each request is a stateless POST, the server responds with JSON (or streaming chunks for longer operations), and there’s no persistent connection to manage. For a production server running behind a load balancer, this is much easier to operate than long-lived SSE connections.

We went with StreamableHTTP from the start and haven’t regretted it. Deployments are simple, horizontal scaling works, and debugging is straightforward.
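To make the stateless model concrete, here is a minimal sketch of a JSON-RPC-over-POST endpoint using Node's built-in http module. This is not the official MCP SDK, just an illustration of the request/response shape; the ping method and the -32601 error code follow JSON-RPC conventions.

```typescript
import http from "node:http";

type JsonRpcRequest = { jsonrpc: "2.0"; id: number | string; method: string; params?: unknown };
type JsonRpcResponse = {
  jsonrpc: "2.0";
  id: number | string;
  result?: unknown;
  error?: { code: number; message: string };
};

// Each call is independent: no session, no persistent connection.
function handleJsonRpc(req: JsonRpcRequest): JsonRpcResponse {
  if (req.method === "ping") {
    return { jsonrpc: "2.0", id: req.id, result: {} };
  }
  return { jsonrpc: "2.0", id: req.id, error: { code: -32601, message: "Method not found" } };
}

// A stateless POST endpoint: read the body, respond with JSON, done.
const server = http.createServer((req, res) => {
  let body = "";
  req.on("data", (chunk: Buffer) => {
    body += chunk.toString();
  });
  req.on("end", () => {
    const response = handleJsonRpc(JSON.parse(body));
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(response));
  });
});
```

Because every request carries everything the server needs, any instance behind the load balancer can answer it.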

OAuth 2.1 — the hard part

The real complexity wasn’t the MCP protocol itself. It was the OAuth 2.1 flow that makes remote MCP servers work with Claude and ChatGPT.

When Claude connects to a remote MCP server, it doesn’t just send a request — it first discovers the server’s authorization endpoint, redirects the user to log in, exchanges a code for a token, and then uses that token for subsequent MCP requests. This is standard OAuth 2.1 with PKCE, but there are several MCP-specific requirements that aren’t always obvious:
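The PKCE half of that flow is small enough to show. A sketch of the client-side derivation (the server simply recomputes the hash at token exchange and compares):

```typescript
import { createHash, randomBytes } from "node:crypto";

// PKCE in a nutshell: the client invents a random verifier, sends its
// SHA-256 hash (base64url-encoded) as code_challenge during authorization,
// then proves possession by sending the raw verifier at token exchange.
function pkcePair(): { verifier: string; challenge: string } {
  const verifier = randomBytes(32).toString("base64url");
  const challenge = createHash("sha256").update(verifier).digest("base64url");
  return { verifier, challenge };
}
```
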

RFC 9728: OAuth Protected Resource Metadata

Before doing anything, MCP clients fetch /.well-known/oauth-protected-resource to discover the authorization server URL. They also try a path-specific variant: /.well-known/oauth-protected-resource/mcp.

We initially only had the base path registered. Claude was trying the path-specific variant first, getting a 404, and failing. Adding that second route handler fixed it.

// Both routes need the same handler
router.get("/.well-known/oauth-protected-resource", oauthProtectedResourceHandler);
router.get("/.well-known/oauth-protected-resource/mcp", oauthProtectedResourceHandler);
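A handler for those routes can return something like the document below. Field names follow RFC 9728; the values here are illustrative, not the exact Detrics response.

```typescript
// Build the RFC 9728 protected-resource metadata document.
function protectedResourceMetadata(baseUrl: string) {
  return {
    resource: baseUrl,                    // the protected resource's identifier
    authorization_servers: [baseUrl],     // where clients go to authorize
    bearer_methods_supported: ["header"], // tokens arrive via the Authorization header
  };
}

// Both well-known routes serve the same payload, e.g.:
// router.get("/.well-known/oauth-protected-resource", (req, res) =>
//   res.json(protectedResourceMetadata("https://mcp.example.com")));
```
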

WWW-Authenticate header on 401

When MCP clients receive a 401 Unauthorized response, they look at the WWW-Authenticate header to find where to authenticate. Without it, they have no idea where to redirect the user.

WWW-Authenticate: Bearer realm="Detrics", resource_metadata="https://mcp.detrics.io/.well-known/oauth-protected-resource"

Missing this header caused silent failures — clients would receive 401s and not know what to do next.
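One way to avoid forgetting it is to build the header value in a single helper and attach it to every 401. A sketch (resource_metadata is the RFC 9728 discovery pointer):

```typescript
// Compose the WWW-Authenticate value once so every 401 carries it.
function wwwAuthenticateValue(baseUrl: string, realm = "Detrics"): string {
  const metadataUrl = `${baseUrl}/.well-known/oauth-protected-resource`;
  return `Bearer realm="${realm}", resource_metadata="${metadataUrl}"`;
}

// In Express-style middleware, something like:
// res.status(401)
//    .set("WWW-Authenticate", wwwAuthenticateValue(baseUrl))
//    .json({ error: "unauthorized" });
```
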

Root path vs. /mcp path

This one surprised us. After the OAuth flow completes, Claude sends the first MCP request to the connector's base URL, not to a path like /mcp. So if your connector URL is https://mcp.detrics.io, Claude sends POST /, not POST /mcp.

We had our MCP handler mounted at /mcp. The fix was an Nginx rewrite rule that forwards root-path requests to the MCP handler internally. The implication: the connector URL users enter should be the clean base URL (https://mcp.detrics.io), not https://mcp.detrics.io/mcp.

This is actually more intuitive for users — it’s consistent with how Slack, Linear, and other MCP servers expose their endpoints. But it took some debugging to figure out.
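The rewrite itself is a one-liner. A sketch, assuming the Node process listens on localhost:3000 and the handler is mounted at /mcp (fold into your existing server block):

```nginx
# Internally rewrite the bare root path to the /mcp handler,
# then let the normal proxy location pick it up.
location = / {
    rewrite ^/$ /mcp last;
}
```
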

The getBaseUrl() function

Our server runs behind an Nginx reverse proxy. MCP metadata endpoints (like the OAuth protected resource document) need to return the public URL of the server, not localhost:3000.

We built a small helper that reads the X-Forwarded-Host and X-Forwarded-Proto headers set by Nginx:

import type { Request } from "express";

// Prefer the proxy-provided host/proto; fall back to the direct request.
function getBaseUrl(req: Request): string {
  // Node may expose forwarded headers as string[]; take the first value.
  const first = (v: string | string[] | undefined) =>
    Array.isArray(v) ? v[0] : v;
  const host = first(req.headers["x-forwarded-host"]) ?? req.headers["host"];
  const proto = first(req.headers["x-forwarded-proto"]) ?? "https";
  return `${proto}://${host}`;
}

The critical thing: Nginx must set these headers correctly. Without proxy_set_header X-Forwarded-Host $host, the server returns localhost:3000 in its metadata, which causes OAuth clients to try to reach the wrong URL.
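A minimal location block with those headers might look like this (a sketch, assuming the app listens on localhost:3000):

```nginx
location / {
    proxy_pass http://localhost:3000;
    # Preserve the public host and scheme so getBaseUrl() sees them.
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```
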

AI Contexts: the product insight

From a pure MCP perspective, the server is an API wrapper with authentication. What made it actually useful was the concept of AI Contexts.

The problem with raw marketing data queries is specificity. To query Facebook Ads data, you need to specify a platform identifier (facebook_ads), an account ID (a 16-digit number), metrics, dimensions, and a date range. That’s a lot of information for a conversational prompt.

AI Contexts are saved query configurations — you pick your platforms and accounts once, give the Context a name, and then your AI assistant can reference it by name. The MCP server stores these in your Detrics workspace and resolves the full configuration when a query runs.

From the user’s perspective:

“Show me the weekly performance report”

replaces:

“Query facebook_ads for account 1234567890, show me spend and ROAS for the last 7 days. Also query google_ads for account 9876543210 with the same metrics.”

This is the key insight: the hard problem with conversational data access isn’t the query execution, it’s reducing the friction of specifying context. AI Contexts solve that.
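To illustrate the idea, a Context is roughly a named bundle of query configuration that the server resolves at run time. The shape below is hypothetical, not the actual Detrics schema:

```typescript
// Hypothetical shape of a saved Context (illustrative field names).
interface AiContext {
  name: string;        // what the user says: "weekly performance report"
  sources: Array<{
    platform: string;  // e.g. "facebook_ads"
    accountId: string;
    metrics: string[];
  }>;
  dateRange: string;   // e.g. "last_7_days"
}

// Resolve a name mentioned in conversation to its saved configuration.
function resolveContext(contexts: AiContext[], name: string): AiContext | undefined {
  return contexts.find((c) => c.name.toLowerCase() === name.trim().toLowerCase());
}
```

The assistant only ever supplies the name; everything else comes from the workspace.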

Prompt Templates: reducing blank page syndrome

The other product insight was that users often didn’t know how to phrase their questions. “Ask anything!” is not a useful onboarding experience.

We built a library of 40+ prompt templates — pre-written prompts for common analysis tasks. The marketing value is obvious (SEO, content), but the product value is more important: templates teach users the language of MCP queries. Once someone has used a few templates, they start writing their own variations naturally.

Templates are served from the Detrics web app at /mcp/templates. Users copy a prompt, paste it into Claude or ChatGPT, and customize from there.

Dedicated subdomain: mcp.detrics.io

We initially hosted MCP at oauthdev.detrics.io/mcp — just tacked onto an existing server. When we decided to ship this properly, we moved it to a dedicated subdomain: mcp.detrics.io.

The reasons:

  1. Clear identity. mcp.detrics.io is unambiguous — it’s the Detrics MCP server.
  2. Consistent with the ecosystem. Most production MCP servers use a clean subdomain (mcp.slack.com, mcp.linear.app, etc.).
  3. Operational independence. The MCP server can scale independently of the main API.

Setting it up required DNS records, Nginx virtual host configuration, and SSL certificates via Certbot. The whole thing took about an hour.

What we’d do differently

Start with a dedicated subdomain. We wasted time debugging URL issues that arose from having MCP mounted at a path on a shared server. Starting with mcp.yourproduct.com from day one avoids the confusion.

Add the path-specific well-known route early. The spec allows clients to try both /.well-known/oauth-protected-resource and /.well-known/oauth-protected-resource/mcp. Implement both from the start.

Test with multiple clients. Claude and ChatGPT implement the MCP spec slightly differently. Something that works in Claude might fail in ChatGPT (and vice versa). Test both early.

The WWW-Authenticate header is not optional. Even if you implement everything else correctly, missing this header causes silent 401 failures that are hard to debug. Add it on day one.

Where we are now

The Detrics MCP server is running in production at mcp.detrics.io. It supports Claude, ChatGPT, Claude Code, Gemini CLI, and Codex. It connects to 25+ marketing platforms, supports AI Contexts, and includes 40+ prompt templates.

If you want to try it, you can connect from app.detrics.io/mcp — the whole setup takes under 5 minutes.

If you’re building your own MCP server and have questions about any of what we covered here, reach out at support.detrics.io. We’re happy to share more detail.