# n8n Database: SQLite vs PostgreSQL for Workflow Storage
n8n ships with SQLite by default, a file‑based database requiring zero setup. However, SQLite locks the entire file on every write, making it unsuitable for concurrent access: it’s designed for testing and single‑user instances, where it handles <50 executions per day before performance degrades. PostgreSQL is the recommended production database, supporting row‑level locking, concurrent writes, hot backups, and queue‑mode horizontal scaling. This guide covers the architectural differences, concrete benchmarking data, configuration syntax, migration procedures, and data storage impacts for each database choice.
| Dimension | SQLite | PostgreSQL |
|---|---|---|
| Default | ✅ Yes — zero config, file‑based | ❌ No — requires separate server |
| Queue Mode | ❌ Not supported | ✅ Required |
| Concurrency | Single writer — file‑level lock | High — MVCC + row‑level locks |
| Ideal Scale | <50 execs/day, single user | 200+ execs/day, multi‑user, multi‑worker |
| Hot Backups | ❌ Inconsistent copy while running | ✅ pg_dump without downtime |
| Disk Reclamation | VACUUM required; space reused, not freed | Automatic via autovacuum |
## Why does SQLite become a bottleneck under concurrent workflow execution?
SQLite serializes all writes through a single file‑level lock. Under 5 concurrent webhook triggers, writes queue sequentially; at 20 concurrent requests the average response time spikes to 1,200 ms with ~3% of requests timing out entirely. In contrast, PostgreSQL uses MVCC (Multi‑Version Concurrency Control) with row‑level locking, maintaining ~280 ms response time under identical load with zero failures.
The root cause is architectural: SQLite’s WAL mode allows concurrent reads but still serializes writes. PostgreSQL’s row‑level locking lets workers write to different tables or rows simultaneously without blocking each other. This matters in n8n because every workflow execution writes execution metadata, node progress updates, and result data — if two workflows trigger simultaneously, SQLite forces the second to wait until the first write completes. For detailed benchmarking methodology, see the PostgreSQL vs SQLite real‑world trade‑offs analysis.
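The write serialization described above is easy to reproduce outside n8n. The following is a minimal Python sketch (illustrative only, not n8n code): two connections open the same SQLite file, and the second writer fails with "database is locked" while the first holds the write lock.

```python
import os
import sqlite3
import tempfile

# Two connections to one SQLite file; short timeouts so the lock
# conflict surfaces immediately instead of waiting.
path = os.path.join(tempfile.mkdtemp(), "demo.sqlite")
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
b = sqlite3.connect(path, timeout=0.1, isolation_level=None)

a.execute("CREATE TABLE execution (id INTEGER PRIMARY KEY, data TEXT)")

a.execute("BEGIN IMMEDIATE")  # connection A takes the write lock
a.execute("INSERT INTO execution (data) VALUES ('run-1')")

try:
    # Connection B's write must wait; with a 0.1 s timeout it errors out,
    # just as a second simultaneous n8n workflow execution would queue.
    b.execute("INSERT INTO execution (data) VALUES ('run-2')")
    blocked = False
except sqlite3.OperationalError as exc:
    blocked = "locked" in str(exc)

a.execute("COMMIT")
print(blocked)  # True: the second writer was serialized behind the first
```

Under PostgreSQL the equivalent two inserts would proceed concurrently, since row‑level locks on different rows never conflict.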
## Why is PostgreSQL mandatory for n8n queue mode and horizontal scaling with Redis?
Queue mode splits n8n into a main process, Redis message broker, and multiple worker processes. Workers read workflows from the database, execute them independently, and write results back — all concurrently. SQLite cannot support this because multiple processes writing to the same file creates corruption, even with WAL mode enabled.
The database serves as the shared source of truth across all workers in queue mode: worker processes pull workflow definitions, credentials, and execution metadata from it simultaneously. One worker writes execution history while another reads the next pending workflow. PostgreSQL handles this natively through MVCC; SQLite simply cannot. A dedicated server with 4+ CPU cores and 8GB+ RAM running PostgreSQL comfortably handles production volumes beyond ~200 executions per day, and many self‑hosting guides explicitly state that SQLite stops being viable at this threshold.
| Scale | Executions/Day | Recommended Database | Notes |
|---|---|---|---|
| Personal / Testing | <50 | SQLite | Zero setup; file‑based; no external service needed |
| Single‑Server Prod | 50–200 | PostgreSQL | Concurrent webhooks; hot backups; no UI lag |
| Queue Mode / Multi‑Worker | 200+ | PostgreSQL (mandatory) | Redis + workers; SQLite cannot handle multi‑process writes |
| Multi‑Main HA | 1,000+ | PostgreSQL HA cluster | Primary‑replica; PgBouncer; automated failover |
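A queue‑mode stack like the one described above is typically wired together with Docker Compose. The sketch below is illustrative, not an official n8n reference file; the service names, image tags, and password are placeholders to adapt.

```yaml
# Illustrative docker-compose sketch of n8n queue mode.
# All credentials and versions are placeholders.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n_user
      POSTGRES_PASSWORD: change_me

  redis:
    image: redis:7

  n8n-main:
    image: n8nio/n8n
    ports: ["5678:5678"]
    environment: &n8n_env
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres      # Docker service name, not localhost
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n_user
      DB_POSTGRESDB_PASSWORD: change_me
      EXECUTIONS_MODE: queue            # enables Redis-backed queue mode
      QUEUE_BULL_REDIS_HOST: redis

  n8n-worker:
    image: n8nio/n8n
    command: worker                     # worker process, shares the same DB
    environment: *n8n_env
```

Both the main process and the worker point at the same PostgreSQL service; this is exactly the multi‑process write pattern a single SQLite file cannot support.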
## How do you configure n8n to use PostgreSQL via environment variables?
Set `DB_TYPE=postgresdb`. The correct value is `postgresdb`, not `postgres` or `postgresql`; any other string causes n8n to silently fall back to SQLite without raising an error. Then configure `DB_POSTGRESDB_HOST`, `DB_POSTGRESDB_PORT` (default 5432), `DB_POSTGRESDB_DATABASE`, `DB_POSTGRESDB_USER`, and `DB_POSTGRESDB_PASSWORD`. For SSL‑encrypted connections, set `DB_POSTGRESDB_SSL_ENABLED=true` and provide `DB_POSTGRESDB_SSL_CA`, `DB_POSTGRESDB_SSL_CERT`, and `DB_POSTGRESDB_SSL_KEY`. For large instances, tune `DB_POSTGRESDB_POOL_SIZE`: start at the default of 2, increase to 10–20 for single‑instance production, and 20–50 for queue mode with multiple workers. The connection timeout defaults to 20,000 ms via `DB_POSTGRESDB_CONNECTION_TIMEOUT`.
| Variable | Required | Default | Example Production Value |
|---|---|---|---|
| `DB_TYPE` | ✅ | sqlite | postgresdb |
| `DB_POSTGRESDB_HOST` | ✅ | localhost | postgres (Docker service name) |
| `DB_POSTGRESDB_PORT` | ❌ | 5432 | 5432 |
| `DB_POSTGRESDB_DATABASE` | ❌ | n8n | n8n_production |
| `DB_POSTGRESDB_USER` | ✅ | postgres | n8n_user |
| `DB_POSTGRESDB_PASSWORD` | ✅ | — | strong_random_password |
| `DB_POSTGRESDB_POOL_SIZE` | ❌ | 2 | 20 (production single‑instance) |
| `DB_POSTGRESDB_SSL_ENABLED` | ❌ | false | true (managed cloud databases) |
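Collected into a `.env` file, a single‑instance production configuration might look like the sketch below; the host name and credentials are placeholders.

```shell
# Sketch of a single-instance production .env (placeholder values).
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n_production
DB_POSTGRESDB_USER=n8n_user
DB_POSTGRESDB_PASSWORD=strong_random_password
DB_POSTGRESDB_POOL_SIZE=20
DB_POSTGRESDB_SSL_ENABLED=false
```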
## How do you migrate from SQLite to PostgreSQL without data loss?
Export all workflows as JSON files from the n8n editor first; this serves as a clean, human‑readable backup. Then export the SQLite database with `sqlite3 database.sqlite .dump > dump.sql`. Because the schemas are not identical, manual fixes to the SQL dump are often required. Create a fresh PostgreSQL database and start n8n with `DB_TYPE=postgresdb` pointing to it.
The cleaner path: JSON‑export all workflows from the old instance, provision a fresh n8n instance connected to PostgreSQL, and import the workflows. Credentials must be re‑entered because encryption keys are instance‑specific. For execution history, back up the SQLite file for archival purposes rather than migrating it; execution logs for inactive automations are rarely worth the migration effort. A professional migration service can complete the entire process in 4–6 hours of focused work, including a full snapshot, stack deployment, data porting, and webhook verification.
In summary:

1. Export all workflows as JSON from the n8n editor.
2. Dump the SQLite database: `sqlite3 database.sqlite .dump > dump.sql`.
3. Start a new n8n instance with `DB_TYPE=postgresdb`.
4. Import the workflows.
5. Re‑enter credentials (encryption keys differ between instances).
6. Verify webhook URLs resolve correctly.
Never delete the old SQLite file until the new setup has
been running successfully for at least 48 hours.
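As a command‑line sketch, the migration might look like the following; the paths, database name, and credentials are placeholders, and the `.env` values mirror the configuration section above.

```shell
# Illustrative migration sketch -- adapt paths and credentials.

# 1-2. Back up workflows via the n8n UI, then dump the SQLite file.
sqlite3 ~/.n8n/database.sqlite .dump > dump.sql

# 3. Create the target database and point n8n at it.
createdb -U postgres n8n_production
export DB_TYPE=postgresdb
export DB_POSTGRESDB_HOST=localhost
export DB_POSTGRESDB_DATABASE=n8n_production
export DB_POSTGRESDB_USER=n8n_user
export DB_POSTGRESDB_PASSWORD=change_me
n8n start

# 4-6. Import the workflow JSON exports in the UI, re-enter credentials,
#      and verify webhook URLs before retiring the old SQLite file.
```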
## How do execution data pruning and disk reclamation differ between SQLite and PostgreSQL?
n8n enables execution pruning by default to prevent the database from growing indefinitely. Pruning deletes finished executions older than 336 hours (14 days) or when total executions exceed 10,000. The critical difference: SQLite marks space for reuse but does not release it to the OS — the database file grows but never shrinks. PostgreSQL’s autovacuum reclaims and releases disk space automatically.
To reclaim SQLite disk space, you must explicitly run `VACUUM`, which rebuilds the entire database file: a blocking operation that requires downtime on large files. Set `DB_SQLITE_VACUUM_ON_STARTUP=true` to VACUUM automatically on restart. For PostgreSQL, reduce storage with `EXECUTIONS_DATA_SAVE_ON_SUCCESS=none` (don’t save successful execution data) and `EXECUTIONS_DATA_SAVE_ON_PROGRESS=false` (skip node‑level progress snapshots). For further scaling and queue‑mode database strategies, see the n8n Scaling: Concurrency & Queue Configuration guide.
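In `.env` form, a storage‑conscious configuration combining the pruning defaults and the variables above might look like this (values are illustrative):

```shell
# Pruning and storage settings (sketch; values mirror the defaults
# and recommendations discussed above).
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=336             # hours (14 days)
EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000   # hard cap on stored executions
EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
EXECUTIONS_DATA_SAVE_ON_PROGRESS=false
DB_SQLITE_VACUUM_ON_STARTUP=true        # SQLite only: VACUUM on restart
```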
## How do you enable WAL mode to improve SQLite concurrent read performance in n8n?
Set `DB_SQLITE_POOL_SIZE` to a value greater than zero (e.g., 4). n8n then opens the database in WAL (Write‑Ahead Logging) mode instead of the default rollback journal mode. WAL allows readers and one writer to coexist: multiple read operations (UI browsing, execution history lookups) proceed while a single write (execution progress update) is in progress.
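The reader/writer coexistence that WAL provides can be shown with a short Python sketch using the standard `sqlite3` module (not n8n internals): a reader completes its query while a writer holds the write lock, and sees only committed data.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal-demo.sqlite")
writer = sqlite3.connect(path, timeout=0.1, isolation_level=None)

# Switch the file from rollback-journal mode to WAL; the PRAGMA
# returns the resulting journal mode ("wal" on success).
mode = writer.execute("PRAGMA journal_mode=WAL").fetchone()[0]

writer.execute("CREATE TABLE execution (id INTEGER PRIMARY KEY, data TEXT)")
writer.execute("INSERT INTO execution (data) VALUES ('run-1')")

writer.execute("BEGIN IMMEDIATE")  # writer takes the single write lock
writer.execute("INSERT INTO execution (data) VALUES ('run-2')")

# A reader proceeds concurrently; in rollback-journal mode this
# query could block or fail while the write is in progress.
reader = sqlite3.connect(path, timeout=0.1)
rows = reader.execute("SELECT COUNT(*) FROM execution").fetchone()[0]
writer.execute("COMMIT")

print(mode, rows)  # wal 1 -- the reader saw only the committed row
```

Note that even in WAL mode there is still exactly one writer at a time, which is why this improves read‑heavy workloads but not concurrent write throughput.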
The n8n v2.0 release improved SQLite performance significantly through a new SQLite pooling driver that can be up to 10x faster in benchmarks for read‑heavy workloads. However, this does not change the fundamental write‑serialization limitation — SQLite’s WAL mode still permits only one writer at a time. For deployments that need concurrent writes across multiple processes, PostgreSQL remains the only viable choice. n8n v2.0 also removed MySQL and MariaDB support entirely, consolidating around SQLite for small‑scale and PostgreSQL for production.
## References
- LumaDock — PostgreSQL vs SQLite for n8n: Real World Trade‑offs (Dec 2025)
- DeepWiki — n8n Database Configuration: SQLite & PostgreSQL Setup, Pooling & SSL
- CSDN — Switching n8n from SQLite to PostgreSQL: Docker Benchmark & Migration (Feb 2026)
- Contabo — n8n Queue Mode Setup Guide for VPS Scalability (Feb 2026)
- Serverspace — How to Run n8n with PostgreSQL and Migrate from SQLite Safely (Mar 2026)
- n8n Official Docs — Database Environment Variables: DB_TYPE, PostgreSQL, SQLite
- n8n Official Docs — Execution Data: Pruning, Retention & Storage Management
- Contabo — n8n 2.0 is Now Available: PostgreSQL Recommendation & SQLite Pooling Driver (Dec 2025)

