integration · n8n · tutorial · 2026-02-07 · 9 min read

Using TameFlare with n8n: Secure AI Workflow Automation

n8n workflows call dozens of APIs with full credentials. Route all n8n HTTP traffic through TameFlare to enforce policies, isolate credentials, and create an audit trail - without modifying any workflow.

Why n8n workflows need governance

n8n is a powerful workflow automation platform with 400+ integrations. Teams use it to build AI agent workflows that connect LLMs to external services - GitHub, Slack, databases, payment APIs, and more.

But n8n workflows run with the full permissions of the credentials you provide. There is no built-in policy layer that says "this workflow can read Stripe data but cannot create charges" or "this workflow can post to Slack but needs approval for channel-wide messages."

The risks:

  • Over-permissioned credentials. Most n8n nodes require API keys with broad scopes. A workflow that only needs to read GitHub issues has a token that can also delete repos.
  • No action-level audit trail. n8n logs workflow executions, but doesn't record individual API calls with policy decisions.
  • No emergency stop. If a workflow enters a loop making hundreds of API calls, you have to find and disable the workflow manually.
  • AI nodes are non-deterministic. n8n's AI Agent node, LLM Chain node, and Tool Agent can make different API calls on each run.

    How TameFlare works with n8n

    TameFlare sits between n8n and the APIs it calls. Every outbound HTTP request from n8n passes through the TameFlare proxy, which enforces your policies.

    Running n8n through TameFlare

    Start n8n through the TameFlare proxy:

    # Before: n8n runs with full API access
    n8n start
    
    # After: n8n runs through TameFlare proxy
    tf run -- "n8n-prod" n8n start
    

    All HTTP traffic from the n8n process (including all node executions) is now routed through the proxy.
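Under the hood, this presumably works the way any proxy wrapper does: the standard proxy environment variables point at the local gateway, and HTTP clients that honor the convention (including Node's, which n8n uses) route through it. A minimal Python sketch of that discovery mechanism, assuming the gateway listens on port 9443 as in the Docker setup below; nothing here is TameFlare-specific:

```python
import os
import urllib.request

# Assumption: a proxy wrapper such as `tf run` points the standard proxy
# variables at the local gateway (the Docker setup does this explicitly).
os.environ["HTTP_PROXY"] = "http://127.0.0.1:9443"
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:9443"

# Clients that honor the convention discover the proxy like this:
proxies = urllib.request.getproxies()
print(proxies["http"], proxies["https"])
```

Any library that reads these variables (Node's `https` agent, Python's `urllib`, `curl`) will send its traffic through the gateway without code changes, which is why no workflow needs modification.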

    n8n via Docker

    For Docker deployments, set the proxy environment variables:

    # docker-compose.yml
    services:
      n8n:
        image: n8nio/n8n
        environment:
          - HTTP_PROXY=http://host.docker.internal:9443
          - HTTPS_PROXY=http://host.docker.internal:9443
          # NOTE: this disables ALL TLS certificate verification for n8n. For
          # production, mount TameFlare's CA certificate into the container and
          # set NODE_EXTRA_CA_CERTS to its path instead.
          - NODE_TLS_REJECT_UNAUTHORIZED=0  # Trust TameFlare's CA
        volumes:
          - n8n_data:/home/node/.n8n
    
      tameflare-gateway:
        image: ghcr.io/tameflare/gateway:latest
        ports:
          - "9443:9443"
        volumes:
          - ./tameflare:/data
    

    Step-by-step setup

    1. Install TameFlare and add connectors

    tf init
    # Connectors can also be configured in the TameFlare dashboard
    tf connector add github --token-env GITHUB_TOKEN
    tf connector add slack --token-env SLACK_BOT_TOKEN
    tf connector add openai --token-env OPENAI_API_KEY
    tf connector add generic --domains api.stripe.com --token-env STRIPE_KEY
    

    2. Set permissions for n8n

    # Allow n8n to call OpenAI (for AI Agent / LLM Chain nodes)
    tf permissions set --gateway "n8n-prod" --connector openai \
        --action "*" --decision allow
    
    # Allow n8n to read GitHub (issues, PRs, repos)
    tf permissions set --gateway "n8n-prod" --connector github \
        --action "github.issue.get" --decision allow
    tf permissions set --gateway "n8n-prod" --connector github \
        --action "github.pr.get" --decision allow
    
    # Require approval for GitHub writes
    tf permissions set --gateway "n8n-prod" --connector github \
        --action "github.pr.create" --decision require_approval
    tf permissions set --gateway "n8n-prod" --connector github \
        --action "github.pr.merge" --decision require_approval
    
    # Block destructive GitHub actions
    tf permissions set --gateway "n8n-prod" --connector github \
        --action "github.branch.delete" --decision deny
    tf permissions set --gateway "n8n-prod" --connector github \
        --action "github.repo.delete" --decision deny
    
    # Allow Slack messages
    tf permissions set --gateway "n8n-prod" --connector slack \
        --action "slack.chat.postMessage" --decision allow
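
The permission entries above amount to a lookup table from (connector, action) to a decision, with an exact action match taking precedence over a connector wildcard. A rough sketch of that evaluation logic; the table shape and `decide` function are illustrative assumptions, not TameFlare's actual internals:

```python
# Illustrative policy table mirroring the `tf permissions set` calls above.
# The structure and lookup function are assumptions, not TameFlare internals.
POLICY = {
    ("openai", "*"): "allow",
    ("github", "github.issue.get"): "allow",
    ("github", "github.pr.get"): "allow",
    ("github", "github.pr.create"): "require_approval",
    ("github", "github.pr.merge"): "require_approval",
    ("github", "github.branch.delete"): "deny",
    ("github", "github.repo.delete"): "deny",
    ("slack", "slack.chat.postMessage"): "allow",
}

def decide(connector: str, action: str, default: str = "deny") -> str:
    """Exact action match wins; otherwise fall back to the connector
    wildcard, then to deny-by-default."""
    if (connector, action) in POLICY:
        return POLICY[(connector, action)]
    if (connector, "*") in POLICY:
        return POLICY[(connector, "*")]
    return default

print(decide("openai", "openai.chat.create"))   # allow (wildcard)
print(decide("github", "github.pr.merge"))      # require_approval
print(decide("github", "github.workflow.run"))  # deny (no rule -> default)
```

The deny-by-default fallback matters: an action you never mentioned in a rule should fail closed, not open.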
    

    3. Start n8n through the proxy

    tf run -- "n8n-prod" n8n start
    

    4. Test a workflow

    Open n8n at http://localhost:5678 and run any workflow. Check the TameFlare dashboard or CLI:

    tf logs --gateway "n8n-prod"
    # 14:32:01 | n8n-prod | openai.chat.create  | ALLOW | 342ms
    # 14:32:03 | n8n-prod | github.issue.create | HOLD  | waiting...
    # 14:32:15 | n8n-prod | github.issue.create | ALLOW | 89ms (approved)
    # 14:32:18 | n8n-prod | github.branch.delete | DENY | 1ms
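
If you want to post-process that log output in a script, each line is pipe-delimited. A small parser; the line format is taken from the sample above, and the field names are my own:

```python
def parse_log_line(line: str) -> dict:
    """Split a `tf logs` line like
    '14:32:18 | n8n-prod | github.branch.delete | DENY | 1ms'
    into named fields. Field names here are illustrative."""
    time, gateway, action, decision, latency = (p.strip() for p in line.split("|"))
    return {"time": time, "gateway": gateway, "action": action,
            "decision": decision, "latency": latency}

entry = parse_log_line("14:32:18 | n8n-prod | github.branch.delete | DENY | 1ms")
print(entry["action"], entry["decision"])  # github.branch.delete DENY
```

From here it is a one-liner to, say, count DENY decisions per connector or alert on HOLDs that never resolve.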
    

    What gets governed

    Every n8n node that makes HTTP calls is governed by TameFlare:

    n8n Node          | TameFlare Connector      | Example Actions
    GitHub node       | github                   | github.issue.create, github.pr.merge
    Slack node        | slack                    | slack.chat.postMessage, slack.files.upload
    OpenAI node       | openai                   | openai.chat.create, openai.embedding.create
    Stripe node       | generic (api.stripe.com) | generic.post, generic.get
    HTTP Request node | generic                  | Matched by domain
    AI Agent node     | openai + tools           | LLM calls + tool execution
    Webhook node      | Not proxied              | Inbound webhooks bypass the proxy

    Credential isolation

    A key benefit: you can remove API credentials from n8n's credential store and let TameFlare inject them instead.

    Before: n8n stores your GitHub token, Stripe key, and Slack token in its encrypted credential store. Any workflow can use any credential.

    After: Move credentials to TameFlare's vault. n8n workflows make HTTP calls without auth headers - the TameFlare proxy injects credentials from the vault into requests that pass policy evaluation.

    This means:

  • A compromised workflow cannot extract API keys
  • You control which gateways (and therefore which workflows) can use which credentials
  • Credential rotation happens in one place (TameFlare vault), not across every n8n credential
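
Conceptually, the injection step is simple: a request leaves n8n with no Authorization header, and the proxy attaches one from the vault only after the policy decision comes back as allow. A toy sketch; the vault dict, token value, and `forward` function are illustrative assumptions, not TameFlare's implementation:

```python
# Toy model of proxy-side credential injection. The vault contents and
# function are illustrative assumptions, not TameFlare's implementation.
VAULT = {"api.github.com": "ghp_example_token"}  # hypothetical token

def forward(request: dict, decision: str) -> dict:
    """Inject credentials only into requests that passed policy
    evaluation. The n8n process itself never sees the token."""
    if decision != "allow":
        raise PermissionError(f"request blocked: {decision}")
    token = VAULT[request["host"]]
    request["headers"]["Authorization"] = f"Bearer {token}"
    return request

req = {"host": "api.github.com", "path": "/repos/acme/app/issues", "headers": {}}
out = forward(req, "allow")
print(out["headers"]["Authorization"])  # Bearer ghp_example_token
```

Because the token is only ever attached on the proxy side, a workflow that dumps its own environment or headers has nothing to leak.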

    Emergency controls

    If an n8n workflow enters a loop or starts making unexpected API calls:

    # Kill switch: block ALL traffic from n8n immediately
    # (also available in the dashboard)
    tf kill-switch --enable --scope "n8n-prod"
    
    # Or block a specific connector
    tf kill-switch --enable --scope github
    
    # Deactivate when resolved
    tf kill-switch --disable --scope "n8n-prod"
    

    The kill switch takes effect immediately. No n8n restart required.
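
The reason no restart is needed: a kill switch of this kind is a gate checked inside the proxy on every request, before policy evaluation, rather than configuration baked into the n8n process. Roughly, as an illustrative sketch (not TameFlare internals):

```python
# Illustrative: the kill switch as a per-request gate inside the proxy.
KILL_SWITCHES = set()  # scopes currently blocked (gateways or connectors)

def gate(gateway: str, connector: str) -> bool:
    """Return True if the request may proceed to policy evaluation.
    Checked on every request, so flipping a switch takes effect instantly."""
    return gateway not in KILL_SWITCHES and connector not in KILL_SWITCHES

KILL_SWITCHES.add("n8n-prod")      # ~ tf kill-switch --enable --scope "n8n-prod"
print(gate("n8n-prod", "github"))  # False: all n8n-prod traffic blocked
KILL_SWITCHES.discard("n8n-prod")  # ~ tf kill-switch --disable
print(gate("n8n-prod", "github"))  # True
```

Since the running n8n process holds no state about the switch, there is nothing in it to restart or reload.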

    Per-workflow governance (advanced)

    For stricter isolation, run different n8n instances through separate gateways:

    # Production workflows: strict policies
    tf run -- "n8n-production" n8n start --port 5678
    
    # Development workflows: permissive policies
    tf run -- "n8n-development" n8n start --port 5679
    

    Each gateway has its own permissions, audit trail, and kill switch.

    Getting started

    1. Create a free account - 3 gateways, 1,000 actions/month
    2. Install the CLI: npm install -g @tameflare/cli
    3. Add connectors for the APIs your n8n workflows use
    4. Start n8n through the proxy: tf run -- "n8n" n8n start
    5. Monitor traffic in the dashboard

    Zero changes to your n8n workflows. Under 5 minutes to set up.