Affiliate/Ads disclaimer: Some links on this blog are affiliate/ad links that help keep this project going, meaning I may earn a commission at no extra cost to you.
n8n Slack + PagerDuty Incident Workflow: End-to-End Guide
This guide builds a complete Slack + PagerDuty incident response workflow in n8n. When a monitoring tool sends an alert via webhook, the workflow normalises the payload, creates an incident in PagerDuty through the Events API v2, posts a formatted message to a Slack channel, waits for an engineer to acknowledge, and escalates to the next responder if no one responds within a configurable timeout. Apart from a short JavaScript snippet in the Code node, everything is built from the nodes described below [1] [2].
How do you receive alerts from monitoring tools with a Webhook trigger in n8n?
Add a Webhook node as the first node. Configure it to listen for POST requests and choose JSON as the response format. Activate the workflow, then copy the Production URL. Point your monitoring tool (Prometheus AlertManager, Datadog, Grafana) to this URL. Every incoming alert payload becomes available under $json.body [5].
For Prometheus AlertManager, set the webhook receiver URL in alertmanager.yml and enable send_resolved: true so n8n is also notified when an alert clears. Other tools work similarly: Datadog lets you configure a webhook integration under Integrations → Webhooks, and Grafana can send alerts from its Alerting section. Whatever the source, the payload lands under $json.body for downstream processing [5].
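For reference, a trimmed AlertManager-style payload, as it arrives under $json.body, looks roughly like the sketch below; the values are illustrative and real payloads carry more labels and annotations.

```javascript
// Illustrative, trimmed AlertManager webhook body as seen under $json.body.
// Field values are made up; real payloads include more labels and annotations.
const exampleBody = {
  status: "firing",
  alerts: [
    {
      status: "firing",
      labels: { alertname: "HighCPUUsage", severity: "critical", instance: "prod-web-01" },
      annotations: { description: "CPU usage above 90% for 5 minutes" },
      startsAt: "2025-06-01T10:15:00Z",
    },
  ],
};
```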
Once the webhook receives data, the next step is to classify its severity before routing. For more on trigger patterns, see n8n Trigger Types explained.
How do you parse and normalise alert payloads with a Code node in n8n?
Place a Code node right after the Webhook node, set to Run Once for All Items. Write a short JavaScript function that extracts the alert name, severity, description, source instance, and timestamp from the incoming payload. Map every field into a flat object with consistent keys regardless of the monitoring tool. [5]
The Code node extracts fields such as alertname, severity, and instance from the incoming payload and maps them into a flat object. It also determines whether the alert fired during business hours (e.g., 9 am–5 pm UTC) and calculates how long the alert has been active in minutes. This classification is what the downstream Switch node reads to decide whether to escalate to PagerDuty. For more Code node examples, see the n8n Code Node transformation guide.
| Source Tool | Payload Format | Key Fields to Extract | Code Node Pattern |
|---|---|---|---|
| Prometheus AlertManager | Array of alerts in body.alerts | labels.alertname, labels.severity, annotations.description | alerts.map(a => ({ json: { name: a.labels.alertname, severity: a.labels.severity } })) |
| Datadog | Single JSON object | title, alert_type, body | Extract single object → wrap in array |
| Grafana | Single JSON with state and message | title, state, message | Direct field mapping into json keys |
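Putting it together, here is a minimal sketch of the Code node for AlertManager-shaped payloads. It assumes the webhook body contains an alerts array and uses the 9 am–5 pm UTC business-hours window from the text; adapt the field mapping for Datadog or Grafana.

```javascript
// Code node ("Run Once for All Items"): minimal normalisation sketch.
// Assumes an AlertManager-style payload with an alerts array under body;
// adjust the field mapping for Datadog or Grafana payloads.
const body = $input.first().json.body;
const now = new Date();
const hour = now.getUTCHours();

return (body.alerts || []).map((a) => {
  const startedAt = new Date(a.startsAt);
  return {
    json: {
      name: a.labels.alertname,
      severity: a.labels.severity || 'warning',
      description: (a.annotations && a.annotations.description) || '',
      instance: a.labels.instance || 'unknown',
      startedAt: a.startsAt,
      ageMinutes: Math.round((now - startedAt) / 60000),
      businessHours: hour >= 9 && hour < 17, // 9 am–5 pm UTC, as described above
    },
  };
});
```

The severity and businessHours keys are what the downstream Switch node reads when deciding whether to escalate.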
How do you create an incident in PagerDuty via the HTTP Request node and Events API v2?
Add an HTTP Request node configured to POST to https://events.pagerduty.com/v2/enqueue. In the Headers panel, set Content-Type to application/json. In the Body, select JSON and construct a payload with routing_key set to your Events API v2 integration key, event_action: "trigger", and a payload object containing summary, source, and severity [4].
The Events API v2 supports three actions: trigger (creates a new incident), acknowledge (acknowledges an existing incident), and resolve (resolves an incident). For incident creation, the minimal required JSON body is:

```json
{
  "routing_key": "YOUR_INTEGRATION_KEY",
  "event_action": "trigger",
  "payload": {
    "summary": "CPU usage critical",
    "source": "prometheus",
    "severity": "critical"
  }
}
```
PagerDuty returns a dedup_key, which is essential for subsequent acknowledge and resolve calls. To later acknowledge or resolve the same alert, reuse this dedup_key and change the event_action to "acknowledge" or "resolve" [6].
For a broader look at incident lifecycle automation, see n8n AI Agents & LLM Orchestration.
How do you post a formatted incident notification to a Slack channel?
Place a Slack node after the PagerDuty HTTP Request node. Choose the Send a Message operation, select the target channel from the dropdown (or input its ID), and construct a rich-text message including the incident summary, severity, source, and a direct link to the PagerDuty incident using the dedup_key from the previous response. [1]
The Slack node uses OAuth 2.0 credentials. In the credentials configuration, make sure at least the chat:write scope is granted, and invite the bot to the target channel. A Set node upstream can build the message body by concatenating fields from the Code node. A typical format reads: “🚨 Incident #426 triggered — CPU usage critical on prod-web-01 (Severity: critical) — Acknowledge: [link]”. For a complete walkthrough of Slack credential setup and message formatting best practices, refer to the n8n for DevOps guide.
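A rough Code node sketch for assembling that text is shown below; the field names (name, instance, severity, dedup_key) assume the normalisation step and PagerDuty response from earlier, so rename them to match your own keys.

```javascript
// Sketch: build the Slack message text before the Slack node.
// Assumes the normalised alert fields and the PagerDuty dedup_key were merged
// onto the incoming item earlier in the workflow; adjust names to your keys.
const a = $input.first().json;

const text =
  `🚨 Incident triggered: ${a.name} on ${a.instance} ` +
  `(Severity: ${a.severity})\n` +
  `PagerDuty dedup_key: ${a.dedup_key || 'n/a'}`;

return [{ json: { text } }];
```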
How do you wait for an engineer to acknowledge the Slack alert and escalate on timeout?
Add a Wait node to pause the workflow for a configurable duration (e.g., 5 minutes). After the wait, use an HTTP Request node to query the PagerDuty REST API and check the incident’s status. Use an IF node: if the status is still triggered (not acknowledged), escalate by posting a second Slack message to a manager channel and optionally creating an additional alert. [7]
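A minimal sketch of that check might look like the following, assuming the preceding HTTP Request queried the REST API with something like GET /incidents filtered by the incident key and returned an incidents array; the IF node then branches on needsEscalation.

```javascript
// Sketch: Code node after the status-check HTTP Request.
// Assumes the previous node returned a PagerDuty REST API response with an
// incidents array (e.g. GET /incidents filtered by the incident/dedup key).
const res = $input.first().json;
const incident = (res.incidents || [])[0];
const status = incident ? incident.status : 'unknown'; // triggered | acknowledged | resolved

return [{ json: { status, needsEscalation: status === 'triggered' } }];
```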
Alternatively, use the Slack Send and Wait for Response operation: it pauses the workflow until a specific user replies or reacts. If the user does not respond within a configured timeout, the workflow continues along a “timed out” branch where the escalation logic runs. The PagerDuty Trigger community node (n8n-nodes-pagerduty-trigger) offers tighter integration by natively listening for incident status changes, eliminating the need to poll the REST API [8].
For detailed error handling and retry patterns in escalation workflows, see the n8n Error Workflow: Catch, Retry & Alert guide.
What does the full end-to-end incident workflow look like from alert to resolution?
The full workflow connects six core nodes in sequence: Webhook (receives the alert) → Code (normalises the payload) → Switch (routes by severity and business hours) → HTTP Request (creates the PagerDuty incident) → Slack (notifies the on-call engineer) → Wait + IF (acknowledgement check and escalation). On resolution the flow runs in reverse: a PagerDuty resolve event triggers a Slack notification. [2] [5]
For resolution handling, use a separate workflow triggered by a Webhook node that receives PagerDuty’s “incident.resolved” event. The resolution workflow can calculate the Mean Time to Resolution (MTTR) from the incident’s timestamps, post a resolution summary to the same Slack channel, and update the original message with a ✅ reaction or resolved status [2]. For a complete observability loop that logs every incident to Google Sheets for post-mortem analysis, see the n8n for Data Pipelines guide.
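A hedged sketch of the MTTR calculation in that resolution workflow’s Code node follows; the field paths depend on your PagerDuty webhook subscription version, so treat them as assumptions to verify against a real payload.

```javascript
// Sketch: MTTR calculation in the resolution workflow's Code node.
// Field paths (body.event.data.created_at, body.event.occurred_at) assume a
// v3-style webhook payload; verify them against an actual incident.resolved event.
const evt = ($input.first().json.body || {}).event || {};
const createdAt = new Date(evt.data?.created_at);
const resolvedAt = new Date(evt.occurred_at || Date.now());
const mttrMinutes = Math.round((resolvedAt - createdAt) / 60000);

return [{ json: { mttrMinutes } }];
```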
References
1. n8n Documentation — Slack node: Operations, credentials, and message formatting
2. n8n Workflow Template — Automate Incident Management with PagerDuty, Port AI, Jira & Slack
3. n8n Documentation — Slack credentials: OAuth 2.0 setup and scopes
4. PagerDuty Developer Documentation — Create a Webhook Subscription
5. dev.to — Automated Incident Response Workflows with n8n and Monitoring Tools (Jun 2025)
6. Shoutrrr pagerduty package — Events API v2: trigger, acknowledge, resolve actions
7. n8n.blog — Automated Incident Routing & Escalation for On-Call Engineers (Oct 2025)
8. jsDelivr — n8n-nodes-pagerduty-trigger: Community trigger node for PagerDuty events

