Proxy Behavior
Technical reference for how the TameFlare gateway proxy handles HTTP traffic, including headers, streaming, protocol support, domain exclusions, and upstream proxy chaining.
Request lifecycle
When the proxy receives an HTTP request from an agent:
- Domain lookup — match the request domain to a configured connector
- Action parsing — the connector parses the HTTP method + URL path into a structured action (e.g., github.pr.merge)
- Permission check — look up the agent's permissions for this connector + action
- Decision — allow, deny, or hold for approval
- Credential injection — if allowed, add the connector's API key to the request headers
- Forward — send the request to the real upstream API
- Response — forward the upstream response back to the agent
- Log — record the request in the traffic log (URL, method, action, decision, latency, status)
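The lifecycle can be pictured as a small pipeline. The sketch below is illustrative, not TameFlare source code — names like Connector, checkPermission, waitForApproval, and logTraffic are assumptions introduced for clarity:
```ts
// Illustrative sketch of the request lifecycle (not TameFlare source code).
type Decision = "allow" | "deny" | "hold";

interface Connector {
  apiKey: string;
  matches(domain: string): boolean;
  parseAction(method: string, path: string): string; // e.g. "github.pr.merge"
}

declare function checkPermission(agent: string, connector: Connector, action: string): Decision;
declare function waitForApproval(agent: string, action: string): Promise<void>;
declare function logTraffic(entry: Record<string, unknown>): void;

async function handleRequest(req: Request, connectors: Connector[], agent: string): Promise<Response> {
  const url = new URL(req.url);

  // 1. Domain lookup
  const connector = connectors.find((c) => c.matches(url.hostname));
  if (!connector) return new Response("no connector for domain", { status: 502 });

  // 2-4. Action parsing, permission check, decision
  const action = connector.parseAction(req.method, url.pathname);
  const decision = checkPermission(agent, connector, action);
  if (decision === "deny") return new Response("denied by policy", { status: 403 });
  if (decision === "hold") await waitForApproval(agent, action); // blocks until approved or timed out

  // 5. Credential injection — replaces any existing Authorization header
  const headers = new Headers(req.headers);
  headers.set("Authorization", `Bearer ${connector.apiKey}`);

  // 6-7. Forward upstream and relay the response back to the agent
  const started = Date.now();
  const upstream = await fetch(req.url, { method: req.method, headers, body: req.body } as RequestInit);

  // 8. Record the request in the traffic log
  logTraffic({
    url: req.url,
    method: req.method,
    action,
    decision,
    latencyMs: Date.now() - started,
    status: upstream.status,
  });
  return upstream;
}
```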
Headers
Headers added by the proxy
| Header | Value | When |
|---|---|---|
| Authorization | Connector's API key (e.g., Bearer ghp_xxx) | On allowed requests — replaces any existing auth header |
| X-TameFlare-Agent | Agent name | Always (for upstream logging) |
| X-TameFlare-Request-Id | Unique request ID | Always |
Headers stripped by the proxy
| Header | Reason |
|---|---|
| Proxy-Authorization | Proxy auth header — not forwarded to upstream |
| Proxy-Connection | Proxy-specific — not forwarded |
Headers preserved
All other request and response headers are forwarded unchanged, including:
- Content-Type, Accept, User-Agent
- Cookie (if the upstream API uses cookies)
- Custom headers set by the agent
- All response headers from the upstream API
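Taken together, the three tables above reduce to a small transformation: set the TameFlare headers, replace Authorization, drop the Proxy-* headers, and pass everything else through. A minimal sketch (the function and parameter names are illustrative, not TameFlare source):
```ts
// Illustrative header rewrite — names are assumptions, not TameFlare source.
function rewriteHeaders(incoming: Headers, apiKey: string, agent: string, requestId: string): Headers {
  const out = new Headers(incoming);            // preserve everything by default
  out.delete("Proxy-Authorization");            // proxy auth — never forwarded upstream
  out.delete("Proxy-Connection");               // proxy-specific — not forwarded
  out.set("Authorization", `Bearer ${apiKey}`); // replaces any existing auth header
  out.set("X-TameFlare-Agent", agent);          // always added, for upstream logging
  out.set("X-TameFlare-Request-Id", requestId); // always added
  return out;
}
```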
Headers NOT logged
The proxy logs selected headers (Host, Content-Type, User-Agent) but does not log:
- Authorization (contains API keys)
- Cookie (session data)
- X-API-Key or similar auth headers
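Redaction at log time is easiest to think of as an allow list rather than a deny list. The sketch below mirrors the headers named above, though the function itself is an illustration rather than TameFlare's logger:
```ts
// Illustrative log redaction: keep an allow list, drop everything else.
const LOGGED_HEADERS = ["host", "content-type", "user-agent"];

function headersForLog(headers: Headers): Record<string, string> {
  const entry: Record<string, string> = {};
  for (const name of LOGGED_HEADERS) {
    const value = headers.get(name);
    if (value !== null) entry[name] = value;
  }
  return entry; // Authorization, Cookie, X-API-Key, etc. never reach the log
}
```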
Streaming support
Server-Sent Events (SSE)
SSE streaming (used by the OpenAI and Anthropic chat completion APIs) is fully supported. The proxy forwards the Transfer-Encoding: chunked response as-is, and the agent receives chunks in real time.
```bash
# This works through the proxy
curl -N https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}], "stream": true}'
```
Chunked transfer encoding
Chunked responses are forwarded without buffering. The proxy streams data as it arrives from the upstream API.
Large responses
Responses are not buffered in memory. The proxy streams directly from upstream to agent. There is no response size limit imposed by TameFlare.
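From the agent's side, consuming a stream through the proxy looks identical to consuming it directly. A minimal TypeScript sketch — note that whether fetch honors HTTPS_PROXY depends on the runtime (Deno and Bun read the proxy environment variables; Node's built-in fetch needs an explicit undici ProxyAgent):
```ts
// Reading a streamed response through the proxy. No Authorization header is
// set here: the proxy injects the connector's API key on allowed requests.
async function streamCompletion(url: string, payload: unknown): Promise<void> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok || !res.body) throw new Error(`upstream returned ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Chunks arrive as the upstream emits them — the proxy does not buffer
    console.log(decoder.decode(value, { stream: true }));
  }
}
```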
Protocol support
Supported
| Protocol | Status | Notes |
|---|---|---|
| HTTP/1.1 | Full support | Standard proxy behavior |
| HTTPS | Full support | TLS termination with local CA |
| HTTP/2 | Partial | Downgraded to HTTP/1.1 between proxy and upstream. Agent-to-proxy can use HTTP/2 if the client supports it. |
Not supported
| Protocol | Status | Workaround |
|---|---|---|
| WebSocket | Not supported | WebSocket Upgrade requests are rejected. Use HTTP polling or a separate non-proxied connection for WebSocket. |
| gRPC | Not supported | gRPC uses HTTP/2 with binary framing. Use the gRPC service directly (outside the proxy) or use a REST gateway. |
| UDP | Not supported | Proxy is TCP-only. |
| Raw TCP | Not supported | Proxy only handles HTTP/HTTPS. |
Rationale
TameFlare focuses on HTTP/HTTPS because that covers the vast majority of API calls agents make (REST APIs, OpenAI, GitHub, Stripe, Slack, etc.). Supporting additional protocols would add complexity without covering significant use cases.
If you need these protocols, add the relevant domains to NO_PROXY (see below) and handle them directly.
Domain exclusions (NO_PROXY)
Use NO_PROXY to exclude specific domains or IP ranges from proxy interception. Requests to excluded domains go directly from the agent to the destination, bypassing TameFlare entirely.
Configuration
In .TameFlare/config.yaml:
```yaml
no_proxy:
  - localhost
  - 127.0.0.1
  - "*.internal.company.com"
  - "10.0.0.0/8"
```
Or via environment variable:
```bash
NO_PROXY=localhost,127.0.0.1,*.internal.company.com,10.0.0.0/8
```
tf run sets NO_PROXY automatically based on your config. The default includes localhost and 127.0.0.1.
When to exclude domains
| Scenario | Action |
|---|---|
| Internal microservices | Add *.internal.company.com to NO_PROXY |
| Database connections over HTTP | Add the database host to NO_PROXY |
| WebSocket endpoints | Add the WebSocket domain to NO_PROXY |
| gRPC services | Add the gRPC host to NO_PROXY |
| Health check endpoints | Add health check URLs to NO_PROXY |
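The exact matching rules are implementation-defined; the sketch below assumes the common convention — exact hostnames, *. wildcards matched as domain suffixes, and IPv4 CIDR ranges — and is an illustration rather than TameFlare's actual matcher:
```ts
// Sketch of NO_PROXY matching under assumed conventions:
// exact hostnames, "*.suffix" wildcards, and IPv4 CIDR ranges.
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0;
}

function matchesNoProxy(host: string, patterns: string[]): boolean {
  return patterns.some((pattern) => {
    if (pattern.includes("/")) {
      // CIDR range, e.g. "10.0.0.0/8" — only meaningful for IP hosts
      if (!/^\d+\.\d+\.\d+\.\d+$/.test(host)) return false;
      const [base, bits] = pattern.split("/");
      const mask = bits === "0" ? 0 : (~0 << (32 - Number(bits))) >>> 0;
      return (ipToInt(host) & mask) === (ipToInt(base) & mask);
    }
    if (pattern.startsWith("*.")) {
      return host.endsWith(pattern.slice(1)); // "*.internal.company.com"
    }
    return host === pattern; // exact match, e.g. "localhost"
  });
}

// matchesNoProxy("10.1.2.3", ["10.0.0.0/8"])                        // true
// matchesNoProxy("api.internal.company.com", ["*.internal.company.com"]) // true
```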
Upstream proxy chaining
If your organization uses a corporate proxy (e.g., Zscaler, Squid), you may want TameFlare to chain through it.
Current status
Upstream proxy chaining is not yet supported. The gateway connects directly to upstream APIs.
Workaround
If you must use a corporate proxy:
- Set the corporate proxy as the agent's HTTP_PROXY (instead of TameFlare)
- Use TameFlare in SDK mode (Mode B) for policy enforcement
- The agent checks policies via the TameFlare API, then makes requests through the corporate proxy (sketched below)
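A rough shape of that workaround, assuming a hypothetical SDK call (tameflare.check is an invented name — check the SDK docs for the real surface) and undici's ProxyAgent for the corporate proxy:
```ts
// Hypothetical Mode B flow: policy check via TameFlare, request via corporate proxy.
// `tameflare.check` is an assumed method name, not a documented SDK call.
import { ProxyAgent } from "undici";

declare const tameflare: { check(action: string): Promise<"allow" | "deny" | "hold"> };

const corporateProxy = new ProxyAgent("http://corporate-proxy.company.com:8080");

async function guardedRequest(action: string, url: string, init: RequestInit = {}): Promise<Response> {
  const decision = await tameflare.check(action); // policy enforcement happens here
  if (decision !== "allow") throw new Error(`blocked by policy: ${action} → ${decision}`);
  // The HTTP request itself traverses the corporate proxy, not TameFlare
  return fetch(url, { ...init, dispatcher: corporateProxy } as RequestInit);
}
```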
Planned
We plan to add an upstream_proxy configuration option:
```yaml
# .TameFlare/config.yaml (planned, not yet implemented)
upstream_proxy: http://corporate-proxy.company.com:8080
upstream_proxy_no_proxy: ["*.internal.company.com"]
```
This would make TameFlare chain: Agent → TameFlare proxy → Corporate proxy → Upstream API.
Connection handling
Timeouts
| Timeout | Default | Configurable? |
|---|---|---|
| Agent → proxy connection | 30s | No |
| Proxy → upstream connection | 30s | No |
| Upstream response (first byte) | 30s | No |
| Upstream response (full body) | No limit | — |
| Approval hold | 5 minutes | No |
Connection pooling
The proxy maintains a connection pool to frequently accessed upstream APIs. Connections are reused for subsequent requests to the same domain, reducing TLS handshake overhead.
Keep-alive
HTTP keep-alive is supported between the agent and proxy, and between the proxy and upstream. Long-lived connections are maintained until either side closes them.
Rate limiting
The gateway enforces rate limits per agent:
| Mode | Limit | Scope |
|---|---|---|
| Proxy (Mode A) | 120 requests/minute | Per agent |
| SDK (Mode B) | 60 requests/minute | Per agent API key |
When the limit is exceeded:
- Proxy returns 429 Too Many Requests with a Retry-After header
- The Node.js SDK auto-retries with exponential backoff
- Rate limit state is in-memory — resets on gateway restart
There is no request queuing. Excess requests are immediately rejected.
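Agents calling through the proxy without the SDK can reproduce the SDK's behavior with a small retry loop that honors Retry-After. The sketch below is an illustration; tune the retry count and delay caps to your workload:
```ts
// Illustrative retry loop honoring 429 + Retry-After through the proxy.
async function fetchWithRetry(url: string, init: RequestInit, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429 || attempt >= maxRetries) return res;
    // Prefer the server's Retry-After (seconds); fall back to exponential backoff
    const retryAfter = Number(res.headers.get("Retry-After"));
    const delayMs = Number.isFinite(retryAfter) && retryAfter > 0
      ? retryAfter * 1000
      : Math.min(2 ** attempt * 500, 30_000);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```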
Next steps
- Connectors — per-connector reference and custom connector development
- Security — HTTPS interception and proxy bypass analysis
- Architecture — deployment topologies and network requirements
- Performance — proxy latency benchmarks