A transparent proxy that sits between your AI agents and the APIs they call. Every request is intercepted, parsed, permission-checked, and logged — the agent never sees real API keys and can't bypass the gateway. Works with any process that makes HTTP calls — Python scripts, LangChain agents, n8n workflows, shell scripts.
All agent HTTP traffic is routed through the proxy. Connectors parse requests into actions. The agent never sees real API keys.
AI agents are shipping to production. They merge code, send emails, transfer money, and provision infrastructure — often without a human in the loop. That creates real risk.
Most agents have few or no runtime guardrails beyond prompt instructions. A bad prompt or hallucination can cause irreversible damage.
No record of what the agent did, why, or who approved it. Debugging and compliance become impossible.
If an agent goes rogue, there's no centralized way to halt all activity instantly. You're revoking keys one service at a time.
TameFlare doesn't add rules to your agent's prompt and hope it follows them. It moves your agent's API keys into the Gateway so the agent literally cannot call external tools without TameFlare's approval. The enforcement is architectural, not behavioral.
Without TameFlare
Agent has direct API access. No enforcement possible.
With TameFlare
Proxy intercepts all traffic. Credentials injected only on allow. Agent never sees keys.
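The credential-injection idea can be shown in a few lines. This is an illustrative Python sketch, not TameFlare's actual internals; the vault entry, function names, and request shape are all made up:

```python
# Illustrative sketch of proxy-side credential injection -- not
# TameFlare's real code. The agent's request carries no secrets;
# the proxy adds them only after the policy engine allows the call.

VAULT = {"github": "ghp_real-token"}  # hypothetical decrypted vault entry

def inject_credentials(request: dict, connector: str, allowed: bool) -> dict:
    """Return the request to forward upstream, or raise if denied."""
    if not allowed:
        raise PermissionError("403: policy denied, request never sent")
    forwarded = dict(request)
    headers = dict(forwarded.get("headers", {}))
    # The real key is fetched from the vault at forward time; the agent
    # never had it, so it has nothing to leak and nothing to bypass with.
    headers["Authorization"] = f"Bearer {VAULT[connector]}"
    forwarded["headers"] = headers
    return forwarded

# The agent sends a bare request -- no Authorization header at all.
agent_request = {"method": "GET", "url": "https://api.github.com/user", "headers": {}}
upstream = inject_credentials(agent_request, "github", allowed=True)
print(upstream["headers"]["Authorization"])  # Bearer ghp_real-token
```

The point of the pattern: even a compromised agent can only ask; it cannot forward its own traffic with real credentials.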
Every action your agent requests goes through the policy engine. The result is always one of three outcomes.
Allow → Inject Credentials (from vault) → Forward Request (to upstream API)
Requires approval → Hold Connection (up to 5 min) → Human Approves (CLI or dashboard)
Deny → Return 403 (request never sent)
Allow
Safe actions get a tamper-proof, single-use ES256 decision token and execute immediately via the Gateway.
Requires approval
Risky actions notify a human via Slack or the dashboard. A human approves or denies before the agent can proceed.
Deny
Blocked actions return a clear reason. No token issued. The agent cannot proceed. TameFlare uses a fail-closed model — if in doubt, deny.
Install the CLI, set up a gateway in the dashboard, then wrap your agent. Zero code changes. Under five minutes.
$ npm install -g @tameflare/cli

# Or use npx — no install needed:
$ npx tf init

Sets up your local config and connects to the TameFlare control plane.
The 4-step wizard at /dashboard/gateways handles everything.
$ tf run --gateway "my-gateway" \
    python agent.py

# All HTTP traffic is now proxied.
# Agent never sees real API keys.
# Every call is logged and governed.

Works with Python, Node.js, Go, Rust, shell scripts — anything that makes HTTP calls.
Works with LangChain, CrewAI, n8n, Claude Code, and any agent that makes HTTP calls.
Wrap any process with tf run. All outbound HTTP traffic is routed through the gateway. Connectors parse requests into structured actions. The process never sees real API keys.
The gateway intercepts every HTTP/HTTPS call, matches it against connector rules, evaluates policies, and injects credentials from the encrypted vault into approved requests. Works with Python, Node.js, Go, Rust, shell scripts — anything that makes HTTP calls.
No SDK required. No code changes. The process doesn't know it's being governed.
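One common way this kind of transparent wrapping works is via the standard proxy environment variables that most HTTP stacks honor. Whether tf run uses exactly this mechanism is an assumption, and the proxy address below is a placeholder:

```python
# Most HTTP clients (Python's urllib and requests, curl, and others)
# respect the HTTP_PROXY / HTTPS_PROXY environment variables. A wrapper
# that sets them before launching a child process reroutes that
# process's traffic with zero code changes. Whether `tf run` uses
# exactly this mechanism is an assumption; 127.0.0.1:8888 is made up.
import os
import urllib.request

os.environ["HTTP_PROXY"] = "http://127.0.0.1:8888"
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:8888"

# Standard clients now discover the proxy automatically:
print(urllib.request.getproxies().get("https"))  # http://127.0.0.1:8888
```

This is why no SDK is needed: the redirection happens beneath the HTTP library, not inside the agent's code.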
From connector setup to emergency shutdown.
GitHub connector parses 20+ action types from raw HTTP. Generic connector works with any API. Add connectors via CLI or dashboard.
Deny-all by default. Explicitly allow each process to use specific connectors and actions. Wildcard patterns supported.
Set actions to require_approval. The proxy holds the connection until a human approves via CLI or dashboard.
API keys stored in an AES-256-GCM encrypted vault. Injected by the proxy at request time. Agents never see real credentials.
Block all traffic, a specific connector, or a single process. Instant effect, no requests forwarded.
Every proxied request logged with agent, action, decision, latency. Filterable dashboard with auto-refresh.
7 built-in connectors: GitHub (20+ actions), OpenAI (24+ actions), Anthropic, Stripe (40+ actions), Slack (35+ actions), generic HTTP, and webhook. Permissions match on gateway, connector, and action pattern.
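Deny-all matching on (gateway, connector, action) with wildcards can be sketched in a few lines. The grant tuples here are an assumed format for illustration, not TameFlare's actual permission schema:

```python
# Sketch of deny-all-by-default permission matching with wildcard
# patterns. The (gateway, connector, action) grant format is an
# assumption made for this example.
from fnmatch import fnmatch

GRANTS = [
    ("my-gateway", "github", "read_*"),     # read-only GitHub access
    ("my-gateway", "slack", "post_message"),
]

def is_allowed(gateway: str, connector: str, action: str) -> bool:
    """Deny-all by default; a request passes only if some grant matches."""
    return any(
        fnmatch(gateway, g) and fnmatch(connector, c) and fnmatch(action, a)
        for g, c, a in GRANTS
    )

print(is_allowed("my-gateway", "github", "read_repo"))    # True
print(is_allowed("my-gateway", "github", "delete_repo"))  # False -- no grant
```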
Source control
Block branch deletion, require merge approval
Payments
Limit transfer amounts, block sanctioned currencies
Infrastructure
Block prod DB drops, limit cluster scaling
Communications
Review external emails, block mass sends
Data access
Approve PII queries, block bulk deletes
Agent orchestration
Control sub-agent spawning and delegation
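A payments policy like the one above might combine a hard limit, an approval threshold, and a currency blocklist. All numbers and names below are hypothetical, chosen only to illustrate the shape of such a rule:

```python
# Hypothetical payments policy: thresholds and the blocked-currency
# list are made up to illustrate "limit transfer amounts" and
# "block sanctioned currencies", not taken from TameFlare.

APPROVAL_THRESHOLD = 100_00   # cents: above this, a human must approve
HARD_LIMIT = 1_000_00         # cents: above this, always deny
BLOCKED_CURRENCIES = {"XYZ"}  # placeholder blocklist

def decide_transfer(amount_cents: int, currency: str) -> str:
    if currency in BLOCKED_CURRENCIES:
        return "deny"
    if amount_cents > HARD_LIMIT:
        return "deny"
    if amount_cents > APPROVAL_THRESHOLD:
        return "require_approval"
    return "allow"

print(decide_transfer(50_00, "USD"))     # allow
print(decide_transfer(500_00, "USD"))    # require_approval
print(decide_transfer(5_000_00, "USD"))  # deny
```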
Different roles, same need: control without friction.
Control what your agents can do without slowing down development. Configure policies in the dashboard, scope them per gateway, and deploy with confidence.
Cryptographic enforcement, not behavioral suggestions. ES256 tokens, nonce replay protection, and a fail-closed deny-wins model.
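Nonce replay protection is what makes a decision token single-use. This sketch shows only the replay check; real TameFlare tokens are ES256-signed, and signature verification is omitted here to keep the example self-contained:

```python
# Sketch of single-use decision tokens via nonce replay protection.
# Signature verification (ES256) is deliberately left out; only the
# spend-once bookkeeping is shown.
import secrets

issued: set[str] = set()    # nonces the policy engine has issued
redeemed: set[str] = set()  # nonces already spent

def issue_token() -> str:
    nonce = secrets.token_hex(16)
    issued.add(nonce)
    return nonce

def redeem(nonce: str) -> bool:
    """Accept a token exactly once; replays and forgeries are rejected."""
    if nonce not in issued or nonce in redeemed:
        return False
    redeemed.add(nonce)
    return True

token = issue_token()
print(redeem(token))  # True  -- first use executes the action
print(redeem(token))  # False -- replay is rejected
```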
Immutable audit trail of every agent action. Who requested it, what policy matched, who approved it, and what happened. Exportable to CSV.
HTTPS interception
TLS termination with a per-installation ECDSA CA. Same model as corporate proxies (Zscaler, mitmproxy). Your CA key never leaves your machine.
Credential isolation
Agents never see real API keys. The proxy injects credentials into allowed requests at request time from an AES-256-GCM encrypted vault.
Fail-closed
No connector = no access. Error in evaluation = deny. No fail-open mode. All data stays on your infrastructure with zero telemetry.
Not ready to install? Get notified about releases and guides.
Self-host with Docker or Node.js. Four commands to your first governed gateway. Or sign in to the dashboard if you already have an account.