How TameFlare Secures MCP Traffic Without MCP-Specific Code
MCP (Model Context Protocol) uses standard HTTP for its Streamable HTTP transport. TameFlare's transparent proxy already intercepts, logs, and enforces permissions on every MCP tool call - no special configuration needed.
What is MCP?
MCP (Model Context Protocol) is a JSON-RPC 2.0 protocol that connects AI hosts (Claude, ChatGPT, custom agents) to tool servers (GitHub MCP server, database MCP server, etc.). It's becoming the standard way agents interact with external tools.
MCP defines two transport mechanisms:
- stdio - the host launches the server as a subprocess and exchanges JSON-RPC messages over stdin/stdout
- Streamable HTTP - the client sends JSON-RPC messages as HTTP POST requests, with optional Server-Sent Events for streaming responses
The critical insight: Streamable HTTP is standard HTTP. And TameFlare is an HTTP proxy.
What MCP traffic looks like on the wire
When an agent calls an MCP tool, the HTTP request looks like this:
POST /mcp HTTP/1.1
Content-Type: application/json
Accept: application/json, text/event-stream
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "create_pull_request",
    "arguments": {
      "repo": "acme/backend",
      "title": "Fix auth bug",
      "head": "fix-auth",
      "base": "main"
    }
  }
}
This is a completely standard HTTP POST with a JSON body. There's nothing MCP-specific about the transport layer. It's just HTTP.
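To underline the point, here is a short sketch that builds exactly this request body in Python. The helper name is our own; any HTTP client can send the result as a plain POST:

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Build the JSON-RPC 2.0 body for an MCP tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Exactly the request shown above -- nothing transport-specific:
body = build_tool_call("create_pull_request", {
    "repo": "acme/backend",
    "title": "Fix auth bug",
    "head": "fix-auth",
    "base": "main",
})

# POST this payload with Content-Type: application/json and
# Accept: application/json, text/event-stream -- an ordinary HTTP request.
payload = json.dumps(body)
```

Nothing here depends on an MCP SDK; the entire protocol rides on a JSON body and two standard headers.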
Why TameFlare already intercepts MCP
TameFlare's gateway is a transparent HTTP/HTTPS proxy. When you run any process through tf run, all outbound HTTP traffic - including every MCP tool call made over Streamable HTTP - is routed through the proxy.
The proxy doesn't need to know it's MCP traffic. It intercepts the HTTP request, checks the target domain against your connector configuration, enforces permissions, injects credentials from the vault, and logs the result.
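As an illustration only (this is not TameFlare's actual code), the decision flow for a single request can be sketched like this, with a hypothetical connector table and the MCP_AUTH_TOKEN environment variable standing in for the vault:

```python
# Toy version of a transparent proxy's per-request decision flow.
# Illustrative only -- not TameFlare's implementation.
import os

# Hypothetical connector configuration keyed by target domain.
CONNECTORS = {
    "mcp-server.example.com": {"token_env": "MCP_AUTH_TOKEN", "allow": True},
}

def route(host, headers):
    """Decide whether to forward a request, injecting credentials if allowed."""
    conn = CONNECTORS.get(host)
    if conn is None or not conn["allow"]:
        return ("deny", headers)          # unknown or blocked domain
    token = os.environ.get(conn["token_env"], "")
    headers = dict(headers, Authorization=f"Bearer {token}")
    return ("allow", headers)             # forward with the injected credential
```

The point of the sketch: the decision is made on the HTTP envelope (domain, headers), so whether the JSON body happens to be an MCP tools/call is irrelevant.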
| Capability | Status |
|---|---|
| Intercept MCP HTTP traffic | Works today |
| Log MCP requests in traffic log | Works today |
| Domain-level permissions | Works today |
| Credential injection | Works today |
| Kill switch | Works today |
| Per-tool MCP permissions | Coming (MCP connector) |
How to govern MCP traffic with TameFlare
Step 1: Set up TameFlare as usual
npm install -g @tameflare/cli
tf init
Step 2: Add a connector for your MCP server's domain
tf connector add generic \
  --domains mcp-server.example.com \
  --token-env MCP_AUTH_TOKEN
Step 3: Set permissions
tf permissions set --gateway "my-agent" \
  --connector generic \
  --action "http.*" \
  --decision allow
Step 4: Run your agent through the proxy
tf run -- "my-agent" python mcp_agent.py
Every MCP tool call now passes through TameFlare. Every request is logged. Credentials are injected from the vault. The kill switch works instantly.
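The agent itself needs no TameFlare-specific code. A minimal mcp_agent.py might look like the sketch below; the endpoint is a placeholder, and because the proxy is transparent, the client needs no proxy configuration:

```python
import json
import urllib.request

# Placeholder endpoint -- substitute your MCP server's URL.
MCP_URL = "https://mcp-server.example.com/mcp"

def build_request(name, arguments):
    """Build a standard HTTP POST carrying a JSON-RPC tools/call message."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }).encode()
    return urllib.request.Request(
        MCP_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
        },
    )

def call_tool(name, arguments):
    """Send the request. Run under tf run, the outbound call is intercepted,
    logged, permission-checked, and credentialed before it leaves the machine."""
    with urllib.request.urlopen(build_request(name, arguments)) as resp:
        return json.loads(resp.read())
```

Note there is no token handling in the agent: the credential lives in the vault and is attached by the proxy, so a compromised agent process never sees it.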
What about competitors?
Several competitors have built MCP-native proxies:
| Tool | Approach | Trade-off |
|---|---|---|
| Agentgateway (Solo.io) | MCP-native Rust proxy | Only handles MCP/A2A traffic |
| Archestra | MCP platform with security | Requires MCP-specific infrastructure |
| GuardionAI | MCP-focused security | Early-stage, narrow scope |
| TameFlare | General HTTP proxy | Intercepts ALL HTTP traffic including MCP |
The MCP connector (coming soon)
Today, TameFlare treats MCP traffic as generic HTTP. That means permissions are domain-level (allow/deny mcp-server.example.com), not tool-level.
We're building an MCP connector that will parse JSON-RPC tools/call messages into structured TameFlare actions:
- params.name becomes the action type (e.g., mcp.tools.create_pull_request)
- params.arguments become the action parameters
- Permissions apply per tool: allow read_file but deny delete_repository

This will bring MCP traffic to the same level of granularity as our GitHub, OpenAI, and Stripe connectors.
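A rough sketch of what that parsing could look like (our own illustration, not the connector's real implementation; the generic fallback action name is assumed):

```python
import json

def parse_mcp_action(raw_body):
    """Map a JSON-RPC tools/call body to a structured (action, params) pair.

    Non-tool-call messages fall back to a generic HTTP action, which is
    how the proxy treats all MCP traffic today.
    """
    msg = json.loads(raw_body)
    if msg.get("method") == "tools/call":
        params = msg.get("params", {})
        action = f"mcp.tools.{params.get('name', 'unknown')}"
        return action, params.get("arguments", {})
    return "http.post", {}
```

With the action name extracted, the existing permission engine can match rules like "allow mcp.tools.read_file, deny mcp.tools.delete_repository" exactly as it matches rules for other connectors.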
What about stdio MCP?
stdio transport is process-level IPC (stdin/stdout), not network traffic, so TameFlare's proxy cannot intercept the JSON-RPC messages themselves. However, a stdio MCP server launched under tf run still makes its outbound HTTP calls to upstream APIs through the proxy - so those calls are intercepted, logged, and governed like any other traffic.
Getting started
- Create a free account - 3 gateways, 1,000 actions/month
- Install the CLI: npm install -g @tameflare/cli
- Add connectors for your MCP server domains
- Run your agent through the proxy: tf run -- "my-agent" python agent.py
- Check the traffic log - you'll see every MCP tool call
Related articles
How to Secure AI Agent API Calls with a Policy Gateway
AI agents make HTTP calls on your behalf. Without a policy layer, a single misconfigured agent can delete production data, leak secrets, or rack up API bills. Here's how to add a security boundary.
OpenClaw Proves Agentic AI Works. Here's How to Secure It.
OpenClaw has 100k+ stars and zero built-in security. Every outbound HTTP call runs with full user permissions. Here's how to add a policy enforcement layer without changing your agent code.
AI Agent IAM: Identity and Access Management for Autonomous Systems
Traditional IAM was built for humans and service accounts. Autonomous AI agents need a new model - one that combines identity, permissions, credential isolation, and real-time policy enforcement.