Affiliate/Ads disclaimer: Some links on this blog are affiliate/ad links that help keep this project going, meaning I may earn a commission at no extra cost to you.
n8n Architecture: Execution Engine, Queue Mode & Scaling Science
n8n’s execution engine processes workflows sequentially through a main process or distributed via queue mode with Redis as the central broker. This architecture supports horizontal scaling from a single SQLite instance to hundreds of workers backed by PostgreSQL. Below you’ll find the internals, scaling controls, and platform comparisons that help you choose the right production setup.
How does n8n’s execution engine process workflows?
The n8n execution engine processes nodes sequentially from the trigger onward, passing a JSON context between steps. It allocates memory per execution, records input/output snapshots, and enforces a configurable timeout (default 300 seconds). On failure it stores a full stack trace and triggers the linked error workflow. [1]
A deeper dive into how the engine spawns processes and manages memory is available in our engine, queue mode & memory guide. The engine’s design is also the foundation for every workflow type.
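As a sketch, the timeout and error-snapshot behavior described above maps to a few environment variables on a self-hosted instance. The values shown are illustrative, not defaults you must use:

```shell
# Illustrative n8n environment settings (adjust for your deployment)
export EXECUTIONS_TIMEOUT=300             # hard timeout per execution, in seconds
export EXECUTIONS_TIMEOUT_MAX=3600        # ceiling that per-workflow timeouts cannot exceed
export EXECUTIONS_DATA_SAVE_ON_ERROR=all  # keep full input/output snapshots on failure
```

Setting these before starting n8n applies them to every execution; individual workflows can still lower (but not exceed) the timeout ceiling.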
How does n8n queue mode distribute workflow executions across workers?
Queue mode replaces in‑process execution with a Redis‑backed queue. The main n8n instance enqueues jobs; dedicated worker processes dequeue and run them independently. Each worker runs the full execution engine in its own Node.js process, isolated from the main instance, which enables parallel execution. [1]
This separation prevents a single stuck workflow from blocking others. You can add workers on separate machines, all connected to the same Redis and database. For configuration details see scaling, concurrency & queue configuration.
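As a minimal sketch, enabling queue mode amounts to pointing both the main instance and every worker at the same Redis broker. The hostname below is a placeholder:

```shell
# Main instance: serves the editor/API and enqueues executions
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis.internal   # placeholder hostname
export QUEUE_BULL_REDIS_PORT=6379
n8n start

# On each worker host (same Redis and database settings exported):
n8n worker --concurrency=10
```

Workers and the main instance must also share the same database and encryption key so workers can read credentials and write execution results.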
What is the minimum Redis version for n8n queue mode?
n8n queue mode requires Redis 6.0 or higher because the underlying BullMQ job queue library relies on atomic Lua scripting features introduced in Redis 6.0. You must also point the instance at the broker via the QUEUE_BULL_REDIS_HOST and QUEUE_BULL_REDIS_PORT environment variables. The database remains PostgreSQL for production setups. [1]
Using an older Redis version will cause connection failures. Self‑hosted admins should pin the Redis version in their Docker Compose file to avoid accidental downgrades.
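One way to pin the version, assuming a Docker-based setup, is to reference an explicit Redis image tag rather than `latest` and verify it at startup:

```shell
# Pin an explicit Redis tag (6.x or newer) instead of redis:latest
docker run -d --name n8n-redis -p 6379:6379 redis:7.2-alpine

# Confirm the running server meets the 6.0 minimum
docker exec n8n-redis redis-cli INFO server | grep redis_version
```

The same principle applies in a Compose file: write `image: redis:7.2-alpine` (or another explicit tag) so a rebuild never silently pulls a different version.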
How do you configure n8n worker concurrency and memory limits?
Worker concurrency is controlled by the environment variable N8N_CONCURRENCY_PRODUCTION_LIMIT (default 10). Each concurrent execution consumes additional memory; n8n recommends at least 256 MB of RAM per execution slot. You can also cap memory per Node.js process with the --max-old-space-size flag. [2]
For large‑scale deployments, pair this configuration with the scaling & concurrent worker guide and benchmark your workload to tune the limit.
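As a starting point for that benchmarking, here is a back-of-envelope sizing based on the 256 MB-per-slot guidance above. The worker count and concurrency are illustrative values, not n8n defaults:

```shell
# Rough capacity estimate: RAM needed = workers x concurrency x 256 MB/slot
WORKERS=4        # illustrative number of worker processes
CONCURRENCY=10   # value you would set via N8N_CONCURRENCY_PRODUCTION_LIMIT
MB_PER_SLOT=256  # n8n's suggested minimum per execution slot
TOTAL_MB=$((WORKERS * CONCURRENCY * MB_PER_SLOT))
echo "Plan for at least ${TOTAL_MB} MB of RAM across workers"
```

With these numbers the estimate is roughly 10 GB; real workloads vary widely, so measure actual per-execution memory before committing to a limit.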
What database options does n8n support for production and development?
n8n supports SQLite (default for local/desktop) and PostgreSQL for production workloads. SQLite works for single‑user testing but cannot handle concurrent writes under queue mode. PostgreSQL is required for multi‑worker environments and is recommended for any self‑hosted or cloud deployment above roughly 5,000 executions per month. [3]
The selection directly impacts scaling capabilities; explore n8n vs Zapier vs Make to see how database sovereignty compares to SaaS competitors.
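As an illustrative fragment, switching a self-hosted instance from the SQLite default to PostgreSQL comes down to a handful of database environment variables. The host and credentials below are placeholders:

```shell
export DB_TYPE=postgresdb
export DB_POSTGRESDB_HOST=postgres.internal   # placeholder host
export DB_POSTGRESDB_PORT=5432
export DB_POSTGRESDB_DATABASE=n8n
export DB_POSTGRESDB_USER=n8n
export DB_POSTGRESDB_PASSWORD=change-me       # placeholder credential
```

Set these identically on the main instance and every worker so all processes share one source of truth for executions and credentials.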
How can you scale n8n horizontally with queue mode and PostgreSQL?
Horizontal scaling is achieved by deploying multiple worker instances behind a shared Redis broker and a single PostgreSQL database. The main n8n process serves the editor and API; it enqueues all executions to Redis. Workers on separate VMs pull jobs and execute them, scaling linearly with CPU cores. [1]
To prevent overloading, set N8N_CONCURRENCY_PRODUCTION_LIMIT per worker and monitor Redis memory. For a full production blueprint, read our detailed scaling configuration guide.
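Assuming a Docker Compose file that defines a `worker` service running `n8n worker` (the service name here is an assumption), adding capacity can be as simple as scaling that service:

```shell
# Scale to four worker replicas against the shared Redis broker and PostgreSQL DB
docker compose up -d --scale worker=4

# Tail worker logs to confirm jobs are being picked up
docker compose logs -f worker
```

Because state lives in Redis and PostgreSQL rather than in the workers themselves, replicas can be added or removed without draining in-flight data, as long as running executions are allowed to finish.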

