Using TameFlare with n8n: Secure AI Workflow Automation
n8n workflows call dozens of APIs with full credentials. Route all n8n HTTP traffic through TameFlare to enforce policies, isolate credentials, and create an audit trail - without modifying any workflow.
Why n8n workflows need governance
n8n is a powerful workflow automation platform with 400+ integrations. Teams use it to build AI agent workflows that connect LLMs to external services - GitHub, Slack, databases, payment APIs, and more.
But n8n workflows run with the full permissions of the credentials you provide. There is no built-in policy layer that says "this workflow can read Stripe data but cannot create charges" or "this workflow can post to Slack but needs approval for channel-wide messages."
The risks: a misbehaving or compromised workflow can use any credential in the store, destructive actions (deleting branches, creating charges) run without any approval gate, and there is no audit trail of which workflow called which API.
How TameFlare works with n8n
TameFlare sits between n8n and the APIs it calls. Every outbound HTTP request from n8n passes through the TameFlare proxy, which enforces your policies.
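Conceptually, the proxy maps each outbound request to a connector action and looks up a decision for the calling gateway. A minimal sketch of that lookup in Python (the rule table, function name, and default-deny behavior are illustrative, not TameFlare internals):

```python
# Illustrative policy table: (gateway, action pattern) -> decision.
# "<connector>.*" matches any action on that connector.
RULES = {
    ("n8n-prod", "openai.*"): "allow",
    ("n8n-prod", "github.issue.get"): "allow",
    ("n8n-prod", "github.pr.create"): "require_approval",
    ("n8n-prod", "github.branch.delete"): "deny",
}

def evaluate(gateway: str, action: str) -> str:
    """Return the decision for an action; assume default-deny when no rule matches."""
    # An exact rule wins over a connector-wide wildcard.
    exact = RULES.get((gateway, action))
    if exact:
        return exact
    connector = action.split(".", 1)[0]
    wildcard = RULES.get((gateway, connector + ".*"))
    return wildcard or "deny"

print(evaluate("n8n-prod", "openai.chat.create"))   # allow (via openai.*)
print(evaluate("n8n-prod", "github.repo.delete"))   # deny (no rule: default-deny)
```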
Running n8n through TameFlare
Start n8n through the TameFlare proxy:
```shell
# Before: n8n runs with full API access
n8n start

# After: n8n runs through the TameFlare proxy
tf run -- "n8n-prod" n8n start
```
All HTTP traffic from the n8n process (including all node executions) is now routed through the proxy.
n8n via Docker
For Docker deployments, set the proxy environment variables:
```yaml
# docker-compose.yml
services:
  n8n:
    image: n8nio/n8n
    environment:
      - HTTP_PROXY=http://host.docker.internal:9443
      - HTTPS_PROXY=http://host.docker.internal:9443
      # Accept TameFlare's CA (disables TLS verification; for production,
      # mount the CA certificate instead)
      - NODE_TLS_REJECT_UNAUTHORIZED=0
    volumes:
      - n8n_data:/home/node/.n8n
  tameflare-gateway:
    image: ghcr.io/tameflare/gateway:latest
    ports:
      - "9443:9443"
    volumes:
      - ./tameflare:/data

volumes:
  n8n_data:
```
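Most HTTP clients, including Node's (which n8n uses), pick their proxy from the `HTTP_PROXY`/`HTTPS_PROXY` environment variables shown above. A quick way to sanity-check what a client would resolve from a given environment (illustrative Python, mimicking the common lookup order):

```python
import os

def proxy_for(scheme: str, environ=os.environ):
    """Return the proxy URL a typical client would use for a scheme, or None."""
    var = "HTTPS_PROXY" if scheme == "https" else "HTTP_PROXY"
    # Many clients also honor the lowercase variants.
    return environ.get(var) or environ.get(var.lower())

env = {
    "HTTP_PROXY": "http://host.docker.internal:9443",
    "HTTPS_PROXY": "http://host.docker.internal:9443",
}
print(proxy_for("https", env))  # http://host.docker.internal:9443
```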
Step-by-step setup
1. Install TameFlare and add connectors
```shell
tf init
tf connector add github --token-env GITHUB_TOKEN
tf connector add slack --token-env SLACK_BOT_TOKEN
tf connector add openai --token-env OPENAI_API_KEY
tf connector add generic --domains api.stripe.com --token-env STRIPE_KEY
```
2. Set permissions for n8n
```shell
# Allow n8n to call OpenAI (for AI Agent / LLM Chain nodes)
tf permissions set --gateway "n8n-prod" --connector openai \
  --action "*" --decision allow

# Allow n8n to read GitHub (issues, PRs, repos)
tf permissions set --gateway "n8n-prod" --connector github \
  --action "github.issue.get" --decision allow
tf permissions set --gateway "n8n-prod" --connector github \
  --action "github.pr.get" --decision allow

# Require approval for GitHub writes
tf permissions set --gateway "n8n-prod" --connector github \
  --action "github.pr.create" --decision require_approval
tf permissions set --gateway "n8n-prod" --connector github \
  --action "github.pr.merge" --decision require_approval

# Block destructive GitHub actions
tf permissions set --gateway "n8n-prod" --connector github \
  --action "github.branch.delete" --decision deny
tf permissions set --gateway "n8n-prod" --connector github \
  --action "github.repo.delete" --decision deny

# Allow Slack messages
tf permissions set --gateway "n8n-prod" --connector slack \
  --action "slack.chat.postMessage" --decision allow
```
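A `require_approval` decision means the proxy holds the request open until someone approves it in the dashboard; to the workflow it just looks like a slow HTTP call. A toy model of that hold-and-release behavior (illustrative only, not TameFlare internals):

```python
import queue
import threading

# Approval decisions arrive on a queue (stand-in for the dashboard).
approvals = queue.Queue()

def held_request(action: str, timeout: float = 5.0) -> str:
    """Block until an approval decision arrives, then allow or deny."""
    try:
        approved = approvals.get(timeout=timeout)
    except queue.Empty:
        return "DENY (timed out)"
    return "ALLOW" if approved else "DENY"

# Simulate an operator clicking "approve" half a second later.
threading.Timer(0.5, lambda: approvals.put(True)).start()
print(held_request("github.pr.create"))  # ALLOW
```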
3. Start n8n through the proxy
```shell
tf run -- "n8n-prod" n8n start
```
4. Test a workflow
Open n8n at http://localhost:5678 and run any workflow. Check the TameFlare dashboard or CLI:
```shell
tf logs --gateway "n8n-prod"
# 14:32:01 | n8n-prod | openai.chat.create | ALLOW | 342ms
# 14:32:03 | n8n-prod | github.issue.create | HOLD | waiting...
# 14:32:15 | n8n-prod | github.issue.create | ALLOW | 89ms (approved)
# 14:32:18 | n8n-prod | github.branch.delete | DENY | 1ms
```
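Each line follows a fixed pipe-delimited shape (time, gateway, action, decision, detail), which makes the audit trail easy to post-process. A small parser for that layout (field order inferred from the sample output above):

```python
def parse_log_line(line: str) -> dict:
    """Split a 'time | gateway | action | decision | detail' audit line."""
    time_, gateway, action, decision, detail = [
        part.strip() for part in line.split("|", 4)
    ]
    return {"time": time_, "gateway": gateway, "action": action,
            "decision": decision, "detail": detail}

entry = parse_log_line("14:32:18 | n8n-prod | github.branch.delete | DENY | 1ms")
print(entry["decision"])  # DENY
```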
What gets governed
Every n8n node that makes HTTP calls is governed by TameFlare:
| n8n Node | TameFlare Connector | Example Actions |
|---|---|---|
| GitHub node | github | github.issue.create, github.pr.merge |
| Slack node | slack | slack.chat.postMessage, slack.files.upload |
| OpenAI node | openai | openai.chat.create, openai.embedding.create |
| Stripe node | generic (api.stripe.com) | generic.post, generic.get |
| HTTP Request node | generic | Matched by domain |
| AI Agent node | openai + tools | LLM calls + tool execution |
| Webhook node | Not proxied | Inbound webhooks bypass the proxy |
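For the generic connector, attribution is by domain: a request from the HTTP Request node is matched to whichever connector claims its target host. A rough sketch of that lookup (the domain map and hostnames here are illustrative; TameFlare's actual routing may differ):

```python
from urllib.parse import urlparse

# Illustrative domain -> connector map.
DOMAIN_CONNECTORS = {
    "api.github.com": "github",
    "api.openai.com": "openai",
    "api.stripe.com": "generic",
}

def connector_for(url: str):
    """Return the connector responsible for a URL's host, if any claims it."""
    host = urlparse(url).hostname
    return DOMAIN_CONNECTORS.get(host)

print(connector_for("https://api.stripe.com/v1/charges"))  # generic
```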
Credential isolation
A key benefit: you can remove API credentials from n8n's credential store and let TameFlare inject them instead.
Before: n8n stores your GitHub token, Stripe key, and Slack token in its encrypted credential store, and any workflow can use any credential. After: move the credentials to TameFlare's vault. n8n workflows make HTTP calls without auth headers, and the TameFlare proxy injects credentials from the vault into requests that pass policy evaluation. This means credentials can only be exercised for actions your policies allow, and a compromised n8n instance has no secrets to leak.
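The injection step itself is simple in principle: once a request clears policy evaluation, the proxy adds the auth header from the vault before forwarding it upstream. A toy version (the vault contents and header format here are illustrative; real credentials live in TameFlare's encrypted store):

```python
# Illustrative vault; tokens are placeholders.
VAULT = {"github": "ghp_example", "stripe": "sk_test_example"}

def inject_credentials(connector: str, headers: dict) -> dict:
    """Return a copy of the request headers with the vault credential added."""
    out = dict(headers)
    out["Authorization"] = f"Bearer {VAULT[connector]}"
    return out

# The workflow sends no auth header; the proxy supplies it.
outbound = inject_credentials("github", {"Accept": "application/vnd.github+json"})
print("Authorization" in outbound)  # True
```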
Emergency controls
If an n8n workflow enters a loop or starts making unexpected API calls:
```shell
# Kill switch: block ALL traffic from n8n immediately
tf kill-switch --enable --scope "n8n-prod"

# Or block a specific connector
tf kill-switch --enable --scope github

# Deactivate when resolved
tf kill-switch --disable --scope "n8n-prod"
```
The kill switch takes effect immediately. No n8n restart required.
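A kill switch is effectively a check that runs before normal policy evaluation: if the gateway or connector is in the blocked set, the request is denied no matter what its per-action decision would have been. A sketch of that override (illustrative names and logic):

```python
# Scopes currently blocked by the kill switch.
blocked_scopes = set()

def decide(gateway: str, connector: str, normal_decision: str) -> str:
    """A blocked scope overrides any per-action decision."""
    if gateway in blocked_scopes or connector in blocked_scopes:
        return "deny"
    return normal_decision

blocked_scopes.add("n8n-prod")
print(decide("n8n-prod", "slack", "allow"))  # deny
blocked_scopes.discard("n8n-prod")
print(decide("n8n-prod", "slack", "allow"))  # allow
```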
Per-workflow governance (advanced)
For stricter isolation, run different n8n instances through separate gateways:
```shell
# Production workflows: strict policies
tf run -- "n8n-production" n8n start --port 5678

# Development workflows: permissive policies
tf run -- "n8n-development" n8n start --port 5679
```
Each gateway has its own permissions, audit trail, and kill switch.
Getting started
- Create a free account - 3 gateways, 1,000 actions/month
- Install the CLI: `npm install -g @tameflare/cli`
- Add connectors for the APIs your n8n workflows use
- Start n8n through the proxy: `tf run -- "n8n" n8n start`
- Monitor traffic in the dashboard
Related articles
How to Secure AI Agent API Calls with a Policy Gateway
AI agents make HTTP calls on your behalf. Without a policy layer, a single misconfigured agent can delete production data, leak secrets, or rack up API bills. Here's how to add a security boundary.
Using TameFlare with LangChain: Zero-Code Agent Governance
LangChain agents call external APIs with zero built-in security. Add policy enforcement, credential isolation, and audit logging without changing a single line of agent code.
Building a Custom TameFlare Connector in Go
TameFlare ships with 8 built-in connectors, but your agents probably call APIs we haven't covered yet. This guide walks through building a custom connector from scratch - domain matching, request parsing, credential injection, and registration.