security · MCP · architecture · 2026-02-08 · 8 min read

How TameFlare Secures MCP Traffic Without MCP-Specific Code

MCP (Model Context Protocol) uses standard HTTP for its Streamable HTTP transport. TameFlare's transparent proxy already intercepts, logs, and enforces permissions on every MCP tool call - no special configuration needed.

What is MCP?

MCP (Model Context Protocol) is a JSON-RPC 2.0 protocol that connects AI hosts (Claude, ChatGPT, custom agents) to tool servers (GitHub MCP server, database MCP server, etc.). It's becoming the standard way agents interact with external tools.

MCP defines two transport mechanisms:

  • Streamable HTTP - The MCP server exposes an HTTP endpoint. The client sends JSON-RPC requests via standard HTTP POST. The server responds with JSON or Server-Sent Events (SSE).
  • stdio - The client launches the MCP server as a local subprocess and communicates via stdin/stdout.

The critical insight: Streamable HTTP is standard HTTP. And TameFlare is an HTTP proxy.

    What MCP traffic looks like on the wire

    When an agent calls an MCP tool, the HTTP request looks like this:

    POST /mcp HTTP/1.1
    Content-Type: application/json
    Accept: application/json, text/event-stream
    
    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "create_pull_request",
        "arguments": {
          "repo": "acme/backend",
          "title": "Fix auth bug",
          "head": "fix-auth",
          "base": "main"
        }
      }
    }
    

    This is a completely standard HTTP POST with a JSON body. There's nothing MCP-specific about the transport layer. It's just HTTP.
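To underline the point, any generic HTTP client can produce that exact request. A minimal sketch with Python's standard library (the endpoint URL is hypothetical):

```python
import json
import urllib.request

# The same tools/call request from above, built with nothing MCP-specific.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_pull_request",
        "arguments": {
            "repo": "acme/backend",
            "title": "Fix auth bug",
            "head": "fix-auth",
            "base": "main",
        },
    },
}

req = urllib.request.Request(
    "https://mcp-server.example.com/mcp",  # hypothetical MCP endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; any HTTP proxy in the path
# sees an ordinary POST with a JSON body.
```

Nothing in the request distinguishes it from any other JSON-over-HTTP API call, which is exactly why a generic proxy can govern it.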

    Why TameFlare already intercepts MCP

    TameFlare's gateway is a transparent HTTP/HTTPS proxy. When you run any process through tf run, all outbound HTTP traffic is routed through the proxy. This includes:

  • REST API calls to GitHub, Stripe, OpenAI
  • GraphQL queries
  • Webhook requests
  • MCP Streamable HTTP requests

The proxy doesn't need to know it's MCP traffic. It intercepts the HTTP request, checks the target domain against your connector configuration, enforces permissions, injects credentials from the vault, and logs the result.
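One way to convince yourself that generic proxying captures a child process's traffic (this illustrates a common mechanism, not necessarily TameFlare's internals): stock HTTP clients honor the standard HTTP_PROXY/HTTPS_PROXY environment variables. A quick stdlib check, with a made-up proxy address:

```python
import os
import urllib.request

# If a gateway sets these variables for the child process, every outbound
# request - REST, GraphQL, or MCP's Streamable HTTP - flows through the
# same proxy. The proxy address below is hypothetical.
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:9000"

proxies = urllib.request.getproxies()
print(proxies["https"])  # http://127.0.0.1:9000
```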

Capability                       Status
Intercept MCP HTTP traffic       Works today
Log MCP requests in traffic log  Works today
Domain-level permissions         Works today
Credential injection             Works today
Kill switch                      Works today
Per-tool MCP permissions         Coming (MCP connector)

    How to govern MCP traffic with TameFlare

    Step 1: Set up TameFlare as usual

    npm install -g @tameflare/cli
    tf init
    

    Step 2: Add a connector for your MCP server's domain

tf connector add generic \
      --domains mcp-server.example.com \
      --token-env MCP_AUTH_TOKEN
    

    Step 3: Set permissions

    tf permissions set --gateway "my-agent" \
      --connector generic \
      --action "http.*" \
      --decision allow
    

    Step 4: Run your agent through the proxy

    tf run -- "my-agent" python mcp_agent.py
    

    Every MCP tool call now passes through TameFlare. Every request is logged. Credentials are injected from the vault. The kill switch works instantly.
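For illustration, the core of an mcp_agent.py could be as small as the sketch below; the helper names are ours and the endpoint is a placeholder. Run under tf run, every call it makes is an ordinary HTTP POST the proxy can see.

```python
import itertools
import json
import urllib.request

_ids = itertools.count(1)  # JSON-RPC request ids

def build_tool_call(name: str, arguments: dict) -> bytes:
    """Encode an MCP tools/call request body (JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }).encode("utf-8")

def call_tool(endpoint: str, name: str, arguments: dict) -> dict:
    """POST a tool call; urllib honors any configured HTTP proxy."""
    req = urllib.request.Request(
        endpoint,
        data=build_tool_call(name, arguments),
        headers={"Content-Type": "application/json",
                 "Accept": "application/json, text/event-stream"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (hypothetical endpoint and tool):
# result = call_tool("https://mcp-server.example.com/mcp",
#                    "read_file", {"path": "README.md"})
```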

    What about competitors?

    Several competitors have built MCP-native proxies:

Tool                    Approach                    Trade-off
Agentgateway (Solo.io)  MCP-native Rust proxy       Only handles MCP/A2A traffic
Archestra               MCP platform with security  Requires MCP-specific infrastructure
GuardionAI              MCP-focused security        Early-stage, narrow scope
TameFlare               General HTTP proxy          Intercepts ALL HTTP traffic, including MCP

TameFlare's approach is different: instead of building a protocol-specific proxy, we built a general-purpose HTTP proxy that happens to intercept MCP traffic along with everything else. This means:
  • One proxy for all traffic - MCP, REST, GraphQL, webhooks, everything
  • No migration needed - if you're already using TameFlare, MCP traffic is already governed
  • No vendor lock-in - if MCP evolves or a new protocol emerges, the HTTP proxy still works

The MCP connector (coming soon)

    Today, TameFlare treats MCP traffic as generic HTTP. That means permissions are domain-level (allow/deny mcp-server.example.com), not tool-level.
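Concretely, domain-level enforcement reduces to a host check; the proxy never looks inside the JSON-RPC body. A simplified sketch (the config and function names are ours, not TameFlare's):

```python
from urllib.parse import urlsplit

# Hypothetical connector config: domains the gateway may reach.
ALLOWED_DOMAINS = {"mcp-server.example.com", "api.github.com"}

def is_allowed(url: str) -> bool:
    """Domain-level check: only the host matters, not which MCP tool
    is being invoked inside the request body."""
    return urlsplit(url).hostname in ALLOWED_DOMAINS

print(is_allowed("https://mcp-server.example.com/mcp"))  # True
print(is_allowed("https://evil.example.net/mcp"))        # False
```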

    We're building an MCP connector that will parse JSON-RPC tools/call messages into structured TameFlare actions:

  • Extract params.name as the action type (e.g., mcp.tools.create_pull_request)
  • Extract params.arguments as action parameters
  • Enable per-tool permissions: "allow read_file but deny delete_repository"
  • Log structured actions in the audit trail (not just raw HTTP)

This will bring MCP traffic to the same level of granularity as our GitHub, OpenAI, and Stripe connectors.
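The parsing steps above can be sketched in a few lines. The action names, rule format, and default-deny behavior here are illustrative, not TameFlare's actual schema:

```python
import json

# Hypothetical per-tool rules for one gateway.
RULES = {"mcp.tools.read_file": "allow",
         "mcp.tools.delete_repository": "deny"}

def to_action(body: bytes):
    """Lift a JSON-RPC tools/call body into a structured action."""
    msg = json.loads(body)
    if msg.get("method") != "tools/call":
        return None  # not a tool invocation; handle as generic HTTP
    params = msg["params"]
    return {"type": f"mcp.tools.{params['name']}",
            "params": params.get("arguments", {})}

def decide(action) -> str:
    return RULES.get(action["type"], "deny")  # default-deny unknown tools

body = json.dumps({"jsonrpc": "2.0", "id": 7, "method": "tools/call",
                   "params": {"name": "read_file",
                              "arguments": {"path": "README.md"}}}).encode()
action = to_action(body)
print(action["type"], decide(action))  # mcp.tools.read_file allow
```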

    What about stdio MCP?

    stdio transport is process-level IPC (stdin/stdout), not network traffic. TameFlare's proxy cannot intercept it. However:

  • Production MCP servers use Streamable HTTP. stdio is primarily for local development (Claude Desktop, Cursor).
  • The MCP ecosystem is moving toward HTTP. Remote MCP servers, cloud-hosted tools, and multi-agent systems all use HTTP transport.
  • If you need governance for stdio MCP, the practical solution is to switch to Streamable HTTP transport for production.
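To see why an HTTP proxy can't help with stdio, here is the transport in miniature: the "server" is a child process that answers one JSON-RPC message over pipes. No socket is ever opened, so there is nothing for a network proxy to intercept.

```python
import json
import subprocess
import sys

# A toy stdio "MCP server": reads one JSON-RPC message from stdin,
# answers on stdout. (Illustrative, not a real MCP implementation.)
server_code = (
    "import sys, json\n"
    "msg = json.loads(sys.stdin.readline())\n"
    "print(json.dumps({'jsonrpc': '2.0', 'id': msg['id'],"
    " 'result': {'ok': True}}))\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", server_code],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
out, _ = proc.communicate(
    json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}) + "\n")
response = json.loads(out)
print(response["result"])  # {'ok': True}
```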

Getting started

    1. Create a free account - 3 gateways, 1,000 actions/month
    2. Install the CLI: npm install -g @tameflare/cli
    3. Add connectors for your MCP server domains
    4. Run your agent through the proxy: tf run -- "my-agent" python agent.py
    5. Check the traffic log - you'll see every MCP tool call

No MCP-specific configuration. No special setup. If it's HTTP, TameFlare governs it.