n8n SplitInBatches Node: Loop & Iterate Over Items Guide



Published: April 13, 2026
Updated: May 7, 2026

The SplitInBatches node (renamed to Loop Over Items in recent n8n versions) splits a large list of input items into smaller, sequentially processed chunks of a fixed size. It outputs one batch at a time through its loop port. After processing, you connect the downstream node back into SplitInBatches to fetch the next batch. When all items are exhausted, the node emits the combined results through its done port. This single‑execution looping mechanism prevents memory exhaustion, respects API rate limits, and turns massive datasets into manageable, recoverable units of work. [1]

How does the SplitInBatches node loop over items in a single execution?

The SplitInBatches node receives a full list, then on its first pass outputs only the first N items (the batch size) through the loop port. When subsequent nodes finish and the workflow loops back to the node, it outputs items N+1 to 2N, continuing until all items are consumed and the done port fires. [1]

The execution stays alive for the entire loop; n8n does not spawn new executions for each batch. This design keeps memory consumption predictable and enables you to process thousands of records in one continuous run without hitting execution‑timeout limits. For a deep dive into how the engine handles this, see our n8n architecture & scaling guide.
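As an illustrative sketch (plain JavaScript, not n8n source code), the single-execution batching behavior described above can be modeled as a generator that yields one fixed-size chunk per pass until the list is exhausted:

```javascript
// Illustrative model of a SplitInBatches-style loop: each call to the
// generator corresponds to one pass through the "loop" port; when it is
// exhausted, the "done" port would fire.
function* loopOverItems(items, batchSize) {
  for (let i = 0; i < items.length; i += batchSize) {
    // Emit the next chunk of at most batchSize items
    yield items.slice(i, i + batchSize);
  }
}

const batches = [...loopOverItems([1, 2, 3, 4, 5, 6, 7], 3)];
// batches → [[1, 2, 3], [4, 5, 6], [7]]
```

The key point the sketch captures is that the full list lives in one place for the whole run; only the cursor advances between passes.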

What batch size should you configure for the SplitInBatches node?

Set batch size based on the slowest downstream constraint. For API‑heavy work (one HTTP request per item), start with 10–50 items per batch. For lightweight data transforms without external calls, use 100–500. A batch size of 1 gives the finest granularity but produces the most loop iterations. [2]

| Workload Type | Recommended Batch Size | Reason |
| --- | --- | --- |
| API calls (rate‑limited) | 1–10 | Avoids 429 errors; pair with Wait node |
| API calls (generous limits) | 50–100 | Balances speed with reliability |
| Data transforms only | 100–500 | Reduces loop overhead |
| AI / LLM calls | 1–5 | Prevents prompt‑size and timeout issues |

The official n8n node panel simply exposes a numeric “Batch Size” field. SplitInBatches also exposes a context object—for example, {{$node["SplitInBatches"].context["currentRunIndex"]}} returns the zero‑based iteration count, and {{$node["SplitInBatches"].context["noItemsLeft"]}} returns a boolean that becomes true once every item has been emitted. These context properties are available inside any node downstream of the loop port and are essential for building conditional exit logic when you cannot rely solely on the done port. [3]
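A minimal JavaScript mock of that context object (the function and names below are illustrative, not n8n internals) shows what the two properties would hold on each pass:

```javascript
// Hypothetical mirror of the node's context object, to illustrate what
// currentRunIndex and noItemsLeft contain on each loop pass.
function makeBatcher(items, batchSize) {
  let cursor = 0;
  let runIndex = -1;
  return {
    next() {
      const batch = items.slice(cursor, cursor + batchSize);
      cursor += batch.length;
      runIndex += 1;
      return {
        batch,
        context: {
          currentRunIndex: runIndex,           // zero-based pass counter
          noItemsLeft: cursor >= items.length, // true once all items emitted
        },
      };
    },
  };
}
```

With three items and a batch size of two, the first pass reports `currentRunIndex: 0, noItemsLeft: false` and the second reports `currentRunIndex: 1, noItemsLeft: true`—exactly the values an IF node would read to exit the loop.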

How do you use SplitInBatches to avoid API rate limits and handle pagination?

Place a SplitInBatches node before the HTTP Request node and set the batch size equal to the API’s per‑second limit. Insert a Wait node between the HTTP Request and the loop‑back connection, pausing for 1–5 seconds after each batch so the upstream service never sees a burst. [4]
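The batch-then-wait pattern can be sketched in plain JavaScript; BATCH_SIZE and WAIT_MS are assumed values you would tune to the API's documented limit, and the handler stands in for the HTTP Request node:

```javascript
// Sketch of the rate-limit pattern: process one batch, pause, repeat.
// BATCH_SIZE and WAIT_MS are assumptions; tune them to the API's limit.
const BATCH_SIZE = 10;
const WAIT_MS = 1000;

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function processWithThrottle(items, handler) {
  for (let i = 0; i < items.length; i += BATCH_SIZE) {
    const batch = items.slice(i, i + BATCH_SIZE);
    await Promise.all(batch.map(handler));                  // e.g. one request per item
    if (i + BATCH_SIZE < items.length) await sleep(WAIT_MS); // the Wait node's role
  }
}
```

Note that the pause is skipped after the final batch, mirroring a workflow where the Wait node sits on the loop-back connection rather than before the done path.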

For pagination, enable the Reset option on the SplitInBatches node. This causes the node to treat each incoming data payload as a new independent set rather than a continuation of the previous items. Pair it with an IF node that checks whether a next‑page token exists; if the token is null, route to the stop path. Use {{$node["SplitInBatches"].context["noItemsLeft"]}} to detect when the batch is exhausted. This pattern works for cursor‑based and page‑number‑based pagination alike [5]. For web scraping workflows, the SplitInBatches node is critical to stay under rate limits, as described in web‑scraping architectural patterns [6]. For a broader view of production retry and alerting patterns, refer to n8n error workflow & retry guide.
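The cursor-based variant of this pagination pattern can be sketched as follows; `fetchPage` and the `{ items, nextCursor }` response shape are assumptions standing in for the HTTP Request node and the real API:

```javascript
// Sketch of cursor-based pagination feeding a batch loop. Each page is
// treated as a fresh set (the Reset option's role); the null-token check
// plays the part of the IF node's stop path.
async function fetchAllPages(fetchPage) {
  const all = [];
  let cursor = null;
  do {
    const page = await fetchPage(cursor); // { items: [...], nextCursor: string|null }
    all.push(...page.items);
    cursor = page.nextCursor;             // IF node check: stop when token is null
  } while (cursor !== null);
  return all;
}
```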

How do you use the SplitInBatches context properties to control loop termination?

SplitInBatches exposes two context values: currentRunIndex (zero‑based iteration number) and noItemsLeft (boolean true when all items are exhausted). Access them with {{$node["SplitInBatches"].context["currentRunIndex"]}} inside any downstream node. An IF node can check noItemsLeft and break the loop early. [3]

A practical pattern: set the IF condition to {{$node["SplitInBatches"].context["currentRunIndex"] >= 5}} to process only the first five batches. The false branch routes back to SplitInBatches to continue looping; the true branch routes to a Set node that outputs “Loop Ended.” This gives you fine‑grained control beyond the default “process all items” behavior. For advanced branching logic inside loops, see n8n IF & Switch node branching guide.
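This early-exit logic can be sketched in plain JavaScript, where `maxBatches` plays the role of the `currentRunIndex >= 5` check in the IF node:

```javascript
// Sketch of the "first N batches only" pattern. The break corresponds to
// the IF node's true branch routing to the "Loop Ended" Set node.
function processFirstBatches(items, batchSize, maxBatches) {
  const processed = [];
  for (let run = 0; run * batchSize < items.length; run += 1) {
    if (run >= maxBatches) break; // currentRunIndex >= maxBatches: stop looping
    processed.push(items.slice(run * batchSize, (run + 1) * batchSize));
  }
  return processed;
}
```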

How do you nest SplitInBatches nodes or use the Reset option for multi‑level iteration?

Nested SplitInBatches nodes (one inside another’s loop) are not supported by the current n8n node architecture. The recommended alternative is to move the inner loop into a separate sub‑workflow and invoke it from the outer loop via the Execute Workflow node, which achieves the same effect cleanly. [7]
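The sub-workflow alternative can be sketched as two plain functions, where `innerWorkflow` stands in for the separate workflow invoked via Execute Workflow on each outer batch:

```javascript
// Sketch of the sub-workflow alternative to nesting: the outer loop calls
// a separate function per batch instead of embedding a second loop.
function innerWorkflow(batch) {
  // the "inner loop": iterate over the items of one batch
  return batch.map((item) => item * 2);
}

function outerWorkflow(items, batchSize) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    // one Execute Workflow call per outer batch
    results.push(...innerWorkflow(items.slice(i, i + batchSize)));
  }
  return results;
}
```

Because each inner invocation receives only its own batch, the inner logic never needs to know about the outer loop's state—the same isolation the Execute Workflow node provides.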

The Reset parameter lets SplitInBatches treat each incoming payload as a fresh dataset. When enabled, the node restarts its internal batch counter and re‑indexes from zero. This is essential for paginated APIs where each page is a new list—without Reset, n8n would append pages together and never reach “no items left.” Set the Reset condition to {{$node["SplitInBatches"].context["noItemsLeft"]}} for automatic restart on each new data load [5]. For reusing modular logic across loops, explore n8n sub‑workflow modular reuse.

⚠️ Key Limitation: SplitInBatches keeps the workflow execution alive until every batch finishes. For tens of thousands of items, design your loop to be idempotent and add a Wait node between batches so one failure doesn’t lose all progress. If the execution stops mid‑loop, n8n does not checkpoint—you must restart from the beginning. [2]

How do you scale SplitInBatches loops for tens of thousands of items in production?

For production‑scale loops, combine SplitInBatches with a Wait node (1–5 seconds between batches), an error workflow that retries on failure without reprocessing already‑completed batches, and idempotent processing logic—each item should be safe to re‑run. Monitor execution memory with N8N_LOG_LEVEL=debug. [8]
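Idempotent processing can be sketched with a completed-set; the in-memory `Set` here is an assumption—in production it would be persisted in a database or key-value store so a retried execution can consult it:

```javascript
// Sketch of idempotent item processing: a completed-set lets a retried run
// skip work that already succeeded instead of duplicating it.
const completed = new Set(); // assumption: persisted between retries in real use

function processItemOnce(item, handler) {
  if (completed.has(item.id)) return false; // already done: safe no-op on retry
  handler(item);
  completed.add(item.id); // mark done only after the handler succeeds
  return true;
}
```

Marking the item complete only after the handler returns means a crash mid-item causes at most one re-run of that item, never a silent skip.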

When a single execution is too risky for very long loops, off‑load the actual processing into a sub‑workflow via the Execute Workflow node so each batch runs as an independent execution. This isolates failures and makes progress visible in the n8n execution log. Pair this pattern with the queue‑mode architecture described in the n8n architecture & scaling guide to distribute batches across workers.

References

This guide is for informational purposes only. For the most current and authoritative information, always refer to the official n8n website (n8n.io) and the n8n documentation. Product details and features may change over time.
