Database & Storage

TameFlare uses SQLite for all persistent storage. This page covers what is stored, where, and why.


Why SQLite

  • Zero dependencies — no database server to install, configure, or maintain
  • Single-file — easy to back up (cp local.db backup.db)
  • Crash-safe — WAL (Write-Ahead Logging) mode prevents corruption on power loss (see the sketch after this list)
  • Fast enough — handles ~5,000-10,000 writes/s, sufficient for most TameFlare deployments
  • Portable — same file works on Linux, macOS, Windows
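
A quick way to confirm WAL mode and take a consistent snapshot of a live database, assuming the default apps/web/local.db path and the standard sqlite3 CLI:

# Confirm the journal mode (prints "wal" when WAL is enabled)
sqlite3 apps/web/local.db "PRAGMA journal_mode;"

# Copy a consistent snapshot while the service is running (safer than cp mid-write)
sqlite3 apps/web/local.db ".backup backup.db"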

Why not Postgres / MySQL?

TameFlare is designed for self-hosted simplicity. Adding a database server increases operational complexity (installation, configuration, backups, upgrades, monitoring) without meaningful benefit for the typical deployment (< 1,000 req/s, single node).

For teams that need cloud-hosted or multi-instance access, TameFlare supports Turso (hosted libSQL, wire-compatible with SQLite).


Database files

Control plane (apps/web/local.db)

| Table | Contents | Rows (typical) |
|---|---|---|
| organizations | Org name, plan, settings (JSON) | 1 |
| users | Email, bcrypt password hash, role, org ID | 1-50 |
| sessions | Session tokens, user ID, expiry | 1-100 |
| agents | Name, status, API key hash, prefix, org ID | 1-100 |
| policies | Name, YAML content, priority, enabled flag | 1-50 |
| policy_versions | Version history for each policy | 1-500 |
| action_requests | Action specs, decisions, risk scores | Grows with usage |
| audit_events | Immutable event log (append-only) | Grows with usage |
| used_nonces | Replay protection for decision tokens | Grows, pruned by cleanup job |
| approval_requests | Pending and resolved approvals | Grows with usage |
| webhook_deliveries | Webhook attempt logs | Grows with usage |
| gateways | Gateway configurations | 1-10 |
| traffic_logs | Proxy traffic records (v2) | Grows with usage |
| proxy_agents | Agent-to-port mappings | 1-100 |
| proxy_permissions | Per-agent, per-connector access rules | 1-500 |
| connector_configs | Connector settings per gateway | 1-50 |
| waitlist | Email waitlist signups | 0-1000 |
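
To inspect these tables in a running deployment, the sqlite3 CLI can list them and count rows directly; a read-only sketch, again assuming the default path:

# List every table in the control plane database
sqlite3 apps/web/local.db "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;"

# Count rows in the append-only audit log
sqlite3 apps/web/local.db "SELECT COUNT(*) FROM audit_events;"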

Gateway (.TameFlare/gateway.db)

| Table | Contents | Rows (typical) |
|---|---|---|
| traffic_log | Every proxied request: URL, method, agent, connector, action, decision, latency, status | Grows with usage |
| permissions | Cached permission rules (synced from control plane) | 1-500 |
| approvals | Pending proxy-mode approvals | 0-50 |
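
For a quick health check, decisions can be summarised straight from the gateway database; a sketch that assumes the column names match the contents listed above (the exact schema may differ):

# Summarise proxied requests by decision
sqlite3 .TameFlare/gateway.db "SELECT decision, COUNT(*) FROM traffic_log GROUP BY decision;"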


Data growth

| Usage level | Control plane DB size | Gateway DB size | Notes |
|---|---|---|---|
| Light (< 100 actions/day) | < 10 MB | < 50 MB | Years of data fits in a small file |
| Medium (1,000 actions/day) | ~50-100 MB/year | ~200-500 MB/year | Consider audit retention |
| Heavy (10,000+ actions/day) | ~500 MB/year | ~2-5 GB/year | Set AUDIT_RETENTION_DAYS, prune traffic logs |
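
To see where a deployment sits on this table, check the on-disk sizes, including the -wal and -shm sidecar files that SQLite keeps next to each database:

# Database files plus their WAL sidecars
du -h apps/web/local.db* .TameFlare/gateway.db*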

Pruning

Use the maintenance endpoint to clean up old data:

# Prune expired nonces, old sessions, and audit events beyond retention
curl -X POST \
  -H "Authorization: Bearer $MAINTENANCE_SECRET" \
  http://localhost:3000/api/maintenance/cleanup

Set AUDIT_RETENTION_DAYS to automatically purge audit events older than N days; a value of 0 keeps events forever.
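
To run the cleanup on a schedule, a crontab entry is enough; a sketch that assumes the control plane listens on localhost:3000 and defines the secret inside the crontab, since cron does not inherit your shell environment:

# crontab -e
MAINTENANCE_SECRET=replace-with-your-secret
# Call the maintenance cleanup every night at 03:00
0 3 * * * curl -s -X POST -H "Authorization: Bearer $MAINTENANCE_SECRET" http://localhost:3000/api/maintenance/cleanup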


Turso (cloud SQLite)

For cloud deployments or multi-instance access, use Turso (hosted libSQL):

# .env.local
TURSO_DATABASE_URL=libsql://your-db-name.turso.io
TURSO_AUTH_TOKEN=your-auth-token

When both variables are set, TameFlare connects to Turso instead of local SQLite. The schema is identical — pnpm db:push works with both.
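
Provisioning the database and obtaining those two values is typically done with the Turso CLI; a sketch assuming current turso commands and a hypothetical database name of tameflare:

# Create a database and print the values for .env.local
turso db create tameflare
turso db show tameflare --url        # value for TURSO_DATABASE_URL
turso db tokens create tameflare     # value for TURSO_AUTH_TOKEN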

Turso benefits

  • Multi-instance — multiple control plane instances can share one database
  • Edge replicas — low-latency reads from global edge locations
  • Managed backups — automatic point-in-time recovery
  • No file management — no .db files to back up or protect

Turso limitations

  • Gateway still uses local SQLite — the Go gateway always uses .TameFlare/gateway.db for traffic logs and permissions. Turso is only for the control plane.
  • Network dependency — control plane requires network access to Turso. Local SQLite works offline.

Schema management

TameFlare uses Drizzle ORM for schema definition and migrations.

# Apply schema changes (safe — additive only)
pnpm db:push
 
# View current schema
cat apps/web/src/lib/db/schema.ts

pnpm db:push compares the Drizzle schema definition against the database and applies only additive changes. It never drops tables or columns.
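
To confirm what actually landed in the database, sqlite3 can print the applied DDL for any table; a read-only sketch using the policies table as an example:

# Show the DDL that pnpm db:push applied for one table
sqlite3 apps/web/local.db ".schema policies"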


Data residency

All data is stored locally unless you explicitly configure Turso or external integrations:

| Data | Location | Leaves your server? |
|---|---|---|
| Audit events | local.db | No (unless you export CSV) |
| Traffic logs | gateway.db | No |
| Agent API keys (hashed) | local.db | No |
| Connector credentials (encrypted) | .TameFlare/credentials.enc | No |
| Policies | local.db | No |
| Slack notifications | Sent to Slack API | Yes (if Slack integration enabled) |
| Webhook callbacks | Sent to configured URL | Yes (if webhook_url set) |
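
When audit data does need to leave the machine, a direct CSV export from the local file is one option; a sketch using the audit_events table listed earlier:

# Export the audit log to CSV with headers
sqlite3 -header -csv apps/web/local.db "SELECT * FROM audit_events;" > audit_events.csv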


Next steps