Source-available policy enforcement gateway

v0.8 · Active development

Your agent can only access what you explicitly allow.

A transparent proxy that sits between your AI agents and the APIs they call. Every request is intercepted, parsed, permission-checked, and logged — the agent never sees real API keys and can't bypass the gateway. Works with any process that makes HTTP calls — Python scripts, LangChain agents, n8n workflows, shell scripts.

Requires local install · zero code changes · zero telemetry
[Diagram: AI Agent (no credentials) → HTTP_PROXY → TF Proxy (connector parse · permission check · credential vault · traffic log) → + credentials → unmodified APIs (GitHub, OpenAI, Stripe, Slack)]

All agent HTTP traffic is routed through the proxy. Connectors parse requests into actions. The agent never sees real API keys.
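The connector step can be pictured as a small request-to-action mapper. Here is a minimal Python sketch under assumed rules; the rule table and action names below are hypothetical, not TameFlare's actual connector definitions:

```python
import re

# Hypothetical connector rules: (HTTP method, path regex) -> structured action name.
GITHUB_RULES = [
    ("DELETE", r"^/repos/[^/]+/[^/]+/git/refs/heads/.+$", "github.branch.delete"),
    ("PUT",    r"^/repos/[^/]+/[^/]+/pulls/\d+/merge$",   "github.pull_request.merge"),
    ("POST",   r"^/repos/[^/]+/[^/]+/issues$",            "github.issue.create"),
]

def parse_action(method: str, path: str) -> str:
    """Map a raw HTTP request to a structured action name, or 'unknown'."""
    for rule_method, pattern, action in GITHUB_RULES:
        if method == rule_method and re.match(pattern, path):
            return action
    return "unknown"

print(parse_action("DELETE", "/repos/acme/api/git/refs/heads/main"))
# github.branch.delete
```

Once a raw request is reduced to a structured action like this, permission checks and audit logs can operate on meaningful names instead of opaque URLs.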

The problem

AI agents are shipping to production. They merge code, send emails, transfer money, and provision infrastructure — often without a human in the loop. That creates real risk.

No guardrails

Most agents have few or no runtime guardrails beyond prompt instructions. A bad prompt or hallucination can cause irreversible damage.

No visibility

No record of what the agent did, why, or who approved it. Debugging and compliance become impossible.

No kill switch

If an agent goes rogue, there's no centralized way to halt all activity instantly. You're revoking keys one service at a time.

A locked gate, not a suggestion

TameFlare doesn't add rules to your agent's prompt and hope it follows them. It moves your agent's API keys into the Gateway so the agent literally cannot call external tools without TameFlare's approval. The enforcement is architectural, not behavioral.

Without TameFlare

Agent (has all API keys) → GitHub · Stripe · OpenAI · Slack

Agent has direct API access. No enforcement possible.

With TameFlare

Agent (no credentials) → TF Proxy (parse + check + inject) → GitHub · Stripe · OpenAI · Slack

Proxy intercepts all traffic. Credentials injected only on allow. Agent never sees keys.

Three decisions, one flow

Every action your agent requests goes through the policy engine. The result is always one of three outcomes.

HTTP Request → Connector → Permission Check

allow → Inject Credentials (from vault) → Forward Request (to upstream API)
approval → Hold Connection (up to 5 min) → Human Approves (CLI or dashboard)
deny → Return 403 (request never sent)

Allow

Safe actions get a tamper-proof, single-use ES256 decision token and execute immediately via the Gateway.

Requires approval

Risky actions trigger a notification via Slack or the dashboard, and the request waits until a human approves or denies it.

Deny

Blocked actions return a clear reason. No token issued. The agent cannot proceed. TameFlare uses a fail-closed model — if in doubt, deny.
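The three outcomes reduce to a single fail-closed evaluation step. This is an illustrative sketch, not TameFlare's actual engine; the policy table shape and action names are assumptions:

```python
ALLOW, APPROVAL, DENY = "allow", "requires_approval", "deny"

# Hypothetical per-action policy table; anything not listed is denied.
POLICY = {
    "github.issue.create": ALLOW,
    "github.pull_request.merge": APPROVAL,
}

def decide(action: str) -> str:
    """Fail-closed: unknown actions and evaluation errors both resolve to deny."""
    try:
        return POLICY.get(action, DENY)
    except Exception:
        return DENY
```

The key property is that every path that is not an explicit allow, including lookup failures, ends in a deny.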

Three steps to governed traffic

Install the CLI, set up a gateway in the dashboard, then wrap your agent. Zero code changes. Under five minutes.

1

Install the CLI

Terminal
$ npm install -g @tameflare/cli
# Or use npx — no install needed:
$ npx tf init

Sets up your local config and connects to the TameFlare control plane.

2

Set up a gateway

Dashboard
1. Name your gateway
2. Add connectors (GitHub, Slack, Stripe...)
3. Store API keys in the encrypted vault
4. Set permissions per process

The 4-step wizard at /dashboard/gateways handles everything.

3

Run your process

Terminal
$ tf run --gateway "my-gateway" \
    python agent.py
# All HTTP traffic is now proxied.
# Agent never sees real API keys.
# Every call is logged and governed.

Works with Python, Node.js, Go, Rust, shell scripts — anything that makes HTTP calls.
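Conceptually, wrapping a process means launching it with proxy environment variables pointed at the local gateway; most HTTP clients honor these without code changes. A rough stdlib sketch of that mechanism (the real tf run also installs the gateway's CA for HTTPS interception, and the port below is made up):

```python
import os
import subprocess
import sys

def run_governed(cmd: list[str], proxy: str = "http://127.0.0.1:8466") -> int:
    """Spawn a child process whose HTTP(S) traffic is routed via the proxy.

    requests, curl, and most other HTTP clients read HTTP_PROXY/HTTPS_PROXY
    from the environment, so the child needs no modification.
    """
    env = dict(os.environ, HTTP_PROXY=proxy, HTTPS_PROXY=proxy)
    return subprocess.run(cmd, env=env).returncode

# The child sees the proxy settings, but no real API keys:
run_governed([sys.executable, "-c", "import os; print(os.environ['HTTPS_PROXY'])"])
```

Because enforcement lives in the environment and the network path rather than in the agent's code, the same wrapper works for any language or framework.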

Built-in connectors: GitHub · Stripe · Slack · OpenAI / Anthropic · Generic HTTP (any API)

Works with LangChain, CrewAI, n8n, Claude Code, and any agent that makes HTTP calls.

Proxy enforcement — zero code changes

Wrap any process with tf run. All outbound HTTP traffic is routed through the gateway. Connectors parse requests into structured actions. The process never sees real API keys.

Transparent proxy enforcement

The gateway intercepts every HTTP/HTTPS call, matches it against connector rules, evaluates policies, and injects credentials from the encrypted vault into approved requests. Works with Python, Node.js, Go, Rust, shell scripts — anything that makes HTTP calls.

No SDK required. No code changes. The process doesn't know it's being governed.

Everything you need to govern agent behavior

From connector setup to emergency shutdown.

Connector system

GitHub connector parses 20+ action types from raw HTTP. Generic connector works with any API. Add connectors via CLI or dashboard.

Per-gateway permissions

Deny-all by default. Explicitly allow each process to use specific connectors and actions. Wildcard patterns supported.

Human-in-the-loop

Set actions to require_approval. The proxy holds the connection until a human approves via CLI or dashboard.

Credential vault

API keys stored in an AES-256-GCM encrypted vault. Injected by the proxy at request time. Agents never see real credentials.

Scoped kill switch

Block all traffic, a specific connector, or a single process. Instant effect, no requests forwarded.

Live traffic log

Every proxied request logged with agent, action, decision, latency. Filterable dashboard with auto-refresh.

Connectors parse every API call into a structured action

7 built-in connectors: GitHub (20+ actions), OpenAI (24+ actions), Anthropic, Stripe (40+ actions), Slack (35+ actions), generic HTTP, and webhook. Permissions match on gateway, connector, and action pattern.
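Matching on gateway, connector, and action pattern can be sketched with shell-style wildcards. The rule format below is hypothetical, for illustration only:

```python
from fnmatch import fnmatch

# Hypothetical allow rules: (gateway, connector, action pattern).
ALLOW_RULES = [
    ("my-gateway", "github", "github.issue.*"),
    ("my-gateway", "slack",  "slack.message.post"),
]

def is_allowed(gateway: str, connector: str, action: str) -> bool:
    """Deny-all by default; a request passes only if some rule matches all three fields."""
    return any(
        fnmatch(gateway, g) and fnmatch(connector, c) and fnmatch(action, a)
        for g, c, a in ALLOW_RULES
    )
```

With this shape, a wildcard like github.issue.* grants a whole family of actions while anything unmatched, such as a branch deletion, stays denied.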

Source control

Block branch deletion, require merge approval

Payments

Limit transfer amounts, block sanctioned currencies

Infrastructure

Block prod DB drops, limit cluster scaling

Communications

Review external emails, block mass sends

Data access

Approve PII queries, block bulk deletes

Agent orchestration

Control sub-agent spawning and delegation

Built for teams running AI agents

Different roles, same need: control without friction.

Engineering leads

Control what your agents can do without slowing down development. Configure policies in the dashboard, scope them per gateway, and deploy with confidence.

Security teams

Cryptographic enforcement, not behavioral suggestions. ES256 tokens, nonce replay protection, and a fail-closed deny-wins model.
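Single-use token semantics come down to remembering every nonce that has already been redeemed. A minimal in-memory sketch of that idea (a real deployment would also bound the set by token expiry):

```python
class NonceRegistry:
    """Reject any decision token whose nonce has been seen before."""

    def __init__(self) -> None:
        self._seen: set[str] = set()

    def accept(self, nonce: str) -> bool:
        if nonce in self._seen:
            return False  # replay: this token was already redeemed
        self._seen.add(nonce)
        return True

reg = NonceRegistry()
assert reg.accept("abc123") is True   # first use succeeds
assert reg.accept("abc123") is False  # replay is rejected
```

Combined with signature verification of the ES256 token itself, this ensures a captured decision token cannot be replayed to authorize a second action.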

Compliance

Immutable audit trail of every agent action. Who requested it, what policy matched, who approved it, and what happened. Exportable to CSV.

What TameFlare is not

  • Not a prompt engineering tool — TameFlare enforces at the API layer, not the LLM layer.
  • Not a monitoring/observability platform — TameFlare monitors its own activity (actions, decisions, approvals), not your app metrics, infrastructure, or APM. Not a replacement for Datadog or Grafana.
  • Not an agent framework — it works with any framework (LangChain, CrewAI, custom, etc.).
  • Not a cloud service — you self-host everything on your own infrastructure. No managed/hosted version is planned.
  • Not sending telemetry — self-hosted instances make zero outbound calls. No analytics, no phone-home, no license checks. Optional Sentry/PostHog can be enabled by you if desired.

Security model

HTTPS interception

TLS termination with a per-installation ECDSA CA. Same model as corporate proxies (Zscaler, mitmproxy). Your CA key never leaves your machine.

Credential isolation

Agents never see real API keys. The proxy injects credentials into allowed requests at request time from an AES-256-GCM encrypted vault.

Fail-closed

No connector = no access. Error in evaluation = deny. No fail-open mode. All data stays on your infrastructure with zero telemetry.

Read the full security model
Launched 2025 · Elastic License v2 · Active development

Not ready to install? Get notified about releases and guides.

Start governing agent traffic today

Self-host with Docker or Node.js. Four commands to your first governed gateway. Or sign in to the dashboard if you already have an account.