Alert Slack before Zendesk SLA breaches using an agent skill
Prerequisites
- Claude Code, Cursor, or another AI coding agent that supports skills
- Zendesk account with API access enabled
- Slack bot with the `chat:write` permission, added to the target channel
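The check script (Step 3) reads all of its configuration from environment variables. A typical local setup, with placeholder values you'd replace with your own:

```shell
# Zendesk credentials -- the subdomain is the piece before .zendesk.com
export ZENDESK_SUBDOMAIN="yourcompany"      # placeholder
export ZENDESK_EMAIL="agent@example.com"    # placeholder
export ZENDESK_API_TOKEN="your-api-token"   # placeholder
# Slack bot token and the channel ID (not the channel name) to post into
export SLACK_BOT_TOKEN="xoxb-your-token"    # placeholder
export SLACK_CHANNEL_ID="C0123456789"       # placeholder
# Optional: minutes of SLA time remaining that triggers a warning (default 60)
export SLA_WARN_MINUTES="60"
```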
Overview
This agent skill queries the Zendesk ticket metrics API for all open tickets, checks how much SLA time remains on each, and posts a Slack Block Kit alert for any ticket approaching breach. No Claude AI calls are needed — it's a pure API integration script that runs as a skill for easy invocation. You can trigger it manually, via skill invocation, or schedule it with cron or GitHub Actions.
Step 1: Create the skill directory
```bash
mkdir -p .claude/skills/zendesk-sla-watch/scripts
```

Step 2: Write the SKILL.md
Create `.claude/skills/zendesk-sla-watch/SKILL.md`:

```markdown
---
name: zendesk-sla-watch
description: Checks open Zendesk tickets for approaching SLA breaches and sends Slack warnings.
disable-model-invocation: true
allowed-tools: Bash(python *)
---
Check for approaching SLA breaches:

1. Run: `python $SKILL_DIR/scripts/sla_check.py`
2. Review the output — it shows tickets approaching SLA breach with time remaining
```

The key settings:
- `disable-model-invocation: true` — the skill has external side effects (posting to Slack), so it only runs when you explicitly invoke it with `/zendesk-sla-watch`
- `allowed-tools: Bash(python *)` — restricts execution to Python scripts only, preventing unintended shell commands
Step 3: Write the SLA check script
Create `.claude/skills/zendesk-sla-watch/scripts/sla_check.py`:

```python
#!/usr/bin/env python3
"""
Zendesk SLA Breach Watch

Checks open tickets for approaching SLA breaches and alerts Slack.
"""
import os
import json
import base64
import urllib.request
import urllib.parse

SUBDOMAIN = os.environ["ZENDESK_SUBDOMAIN"]
EMAIL = os.environ["ZENDESK_EMAIL"]
TOKEN = os.environ["ZENDESK_API_TOKEN"]
SLACK_TOKEN = os.environ["SLACK_BOT_TOKEN"]
SLACK_CHANNEL = os.environ["SLACK_CHANNEL_ID"]
WARN_MINUTES = int(os.environ.get("SLA_WARN_MINUTES", "60"))

BASE_URL = f"https://{SUBDOMAIN}.zendesk.com/api/v2"
credentials = base64.b64encode(f"{EMAIL}/token:{TOKEN}".encode()).decode()
HEADERS = {
    "Authorization": f"Basic {credentials}",
    "Content-Type": "application/json",
}


def zendesk_get(path: str) -> dict:
    req = urllib.request.Request(f"{BASE_URL}{path}", headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def slack_post(blocks: list, text: str) -> None:
    data = json.dumps({"channel": SLACK_CHANNEL, "text": text, "blocks": blocks}).encode()
    req = urllib.request.Request(
        "https://slack.com/api/chat.postMessage",
        data=data,
        headers={"Authorization": f"Bearer {SLACK_TOKEN}", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())
        if not result.get("ok"):
            print(f"  Slack error: {result.get('error')}")


def check_sla(metric: dict | None, metric_name: str) -> dict | None:
    """Check if a time-based SLA metric is approaching breach."""
    if not metric or metric.get("is_completed"):
        return None
    target = metric.get("target")
    elapsed = metric.get("elapsed")
    if target is None or elapsed is None:
        return None
    remaining = target - elapsed
    if 0 < remaining <= WARN_MINUTES:
        return {"metric": metric_name, "remaining": remaining, "target": target}
    return None


def main() -> None:
    print(f"Checking SLA status (warning threshold: {WARN_MINUTES} min)...\n")
    query = urllib.parse.quote("type:ticket status:open status:pending status:new")
    tickets = zendesk_get(f"/search.json?query={query}&per_page=100").get("results", [])
    print(f"Found {len(tickets)} open tickets to check\n")

    warnings = []
    for ticket in tickets:
        tid = ticket["id"]
        try:
            metrics_data = zendesk_get(f"/tickets/{tid}/metrics.json")
        except Exception as e:
            print(f"  #{tid} Could not fetch metrics: {e}")
            continue
        tm = metrics_data.get("ticket_metric", {})
        for field, label in [
            ("reply_time_in_minutes", "First Reply Time"),
            ("full_resolution_time_in_minutes", "Resolution Time"),
        ]:
            sla_data = tm.get(field, {}).get("business")
            warning = check_sla(sla_data, label)
            if warning:
                warning["ticketId"] = tid
                warning["subject"] = ticket.get("subject", "Unknown")
                warning["priority"] = ticket.get("priority", "normal")
                warning["assignee_id"] = ticket.get("assignee_id")
                warnings.append(warning)

    if not warnings:
        print("No tickets approaching SLA breach.")
        return

    print(f"Found {len(warnings)} SLA warning(s):\n")
    for w in warnings:
        ticket_url = f"https://{SUBDOMAIN}.zendesk.com/agent/tickets/{w['ticketId']}"
        print(f"  #{w['ticketId']} {w['subject'][:50]!r} {w['metric']}: {w['remaining']} min remaining")
        blocks = [
            {"type": "header", "text": {"type": "plain_text", "text": "SLA Breach Warning", "emoji": True}},
            {"type": "section", "fields": [
                {"type": "mrkdwn", "text": f"*Ticket:* <{ticket_url}|#{w['ticketId']}>"},
                {"type": "mrkdwn", "text": f"*Subject:* {w['subject']}"},
                {"type": "mrkdwn", "text": f"*SLA Metric:* {w['metric']}"},
                {"type": "mrkdwn", "text": f"*Time Remaining:* {w['remaining']} min"},
            ]},
            {"type": "section", "fields": [
                {"type": "mrkdwn", "text": f"*Priority:* {w['priority']}"},
                {"type": "mrkdwn", "text": f"*Target:* {w['target']} min"},
            ]},
            {"type": "actions", "elements": [
                {"type": "button", "text": {"type": "plain_text", "text": "View Ticket"}, "url": ticket_url}
            ]},
        ]
        slack_post(blocks, f"SLA warning: #{w['ticketId']} — {w['metric']} breach in {w['remaining']} min")

    print(f"\nSent {len(warnings)} Slack alert(s).")


if __name__ == "__main__":
    main()
```
What the script does
- Searches for open tickets — uses the Zendesk Search API to find all tickets with status `open`, `pending`, or `new`
- Fetches per-ticket SLA metrics — for each ticket, calls `GET /tickets/{id}/metrics.json` to retrieve business-hours elapsed and target times for first reply and full resolution
- Checks for approaching breaches — compares elapsed time against the SLA target for both "First Reply Time" and "Resolution Time," flagging any metric where the remaining time is within the warning threshold (default 60 minutes) but not yet completed
- Posts Slack alerts — sends a Block Kit message for each at-risk SLA metric with the ticket number, subject, specific SLA metric name, time remaining, priority, and a "View Ticket" button linking to the Zendesk agent view
- Logs all warnings — prints a summary of each ticket and metric that triggered an alert, along with the total count sent

The script uses only `urllib.request` from the standard library — no pip dependencies required.
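One caveat: the search call above fetches a single page of up to 100 results, so queues larger than that would be partially checked. A small pagination loop over Zendesk's `next_page` URL closes the gap. The sketch below is an assumption-labeled addition (the `iter_search_results` name is mine); it takes any `zendesk_get`-style fetch callable so it can be tested without network access:

```python
import urllib.parse


def iter_search_results(fetch, query: str):
    """Yield every search result across pages.

    `fetch` is any callable that takes an /api/v2-relative path and returns
    the parsed JSON dict (e.g. the script's zendesk_get). Zendesk caps
    per_page at 100 and supplies an absolute `next_page` URL until the
    result set is exhausted.
    """
    path = f"/search.json?query={urllib.parse.quote(query)}&per_page=100"
    while path:
        page = fetch(path)
        yield from page.get("results", [])
        next_url = page.get("next_page")
        # next_page is an absolute URL; keep only the /api/v2-relative part
        path = next_url.split("/api/v2", 1)[1] if next_url else None
```

In `main`, the single search call would then become `tickets = list(iter_search_results(zendesk_get, "type:ticket status:open status:pending status:new"))`.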
Step 4: Run the skill
```bash
# Via Claude Code
/zendesk-sla-watch

# Or run the script directly
python .claude/skills/zendesk-sla-watch/scripts/sla_check.py
```

A typical run:

```
Checking SLA status (warning threshold: 60 min)...

Found 42 open tickets to check

Found 3 SLA warning(s):

  #28401 'Order stuck in processing for 3 days' Resolution Time: 45 min remaining
  #28415 'Cannot access my account after migration' First Reply Time: 22 min remaining
  #28419 'API returning 500 errors since update' First Reply Time: 51 min remaining

Sent 3 Slack alert(s).
```

Step 5: Schedule it
Option A: Cron

```bash
# crontab -e — run every 15 minutes during business hours on weekdays
*/15 8-18 * * 1-5 cd /path/to/project && python .claude/skills/zendesk-sla-watch/scripts/sla_check.py
```

For 24/7 SLA monitoring (e.g., chat support), remove the hour restriction:

```bash
*/15 * * * * cd /path/to/project && python .claude/skills/zendesk-sla-watch/scripts/sla_check.py
```

Option B: GitHub Actions
```yaml
name: Zendesk SLA Watch

on:
  schedule:
    - cron: '*/15 13-22 * * 1-5'  # 8 AM–5 PM ET, weekdays
  workflow_dispatch: {}

jobs:
  sla-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - run: python .claude/skills/zendesk-sla-watch/scripts/sla_check.py
        env:
          ZENDESK_SUBDOMAIN: ${{ secrets.ZENDESK_SUBDOMAIN }}
          ZENDESK_EMAIL: ${{ secrets.ZENDESK_EMAIL }}
          ZENDESK_API_TOKEN: ${{ secrets.ZENDESK_API_TOKEN }}
          SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}
          SLACK_CHANNEL_ID: ${{ secrets.SLACK_CHANNEL_ID }}
```

Set the `SLA_WARN_MINUTES` environment variable to control how far ahead to warn. The default is 60 minutes. For high-priority queues with tight SLAs, consider 120 minutes so agents have more lead time.
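To sanity-check a threshold value before deploying it, the `check_sla` logic can be exercised standalone. This repeats the function exactly as written in the script, with `WARN_MINUTES` fixed at 60 for the demonstration:

```python
WARN_MINUTES = 60  # demonstration value; the script reads SLA_WARN_MINUTES


def check_sla(metric, metric_name):
    """Same logic as the script: warn when 0 < target - elapsed <= WARN_MINUTES."""
    if not metric or metric.get("is_completed"):
        return None
    target = metric.get("target")
    elapsed = metric.get("elapsed")
    if target is None or elapsed is None:
        return None
    remaining = target - elapsed
    if 0 < remaining <= WARN_MINUTES:
        return {"metric": metric_name, "remaining": remaining, "target": target}
    return None


# 480-minute SLA with 435 minutes elapsed: 45 min left, inside the window, warns
print(check_sla({"target": 480, "elapsed": 435}, "Resolution Time"))
# Already breached (remaining <= 0): silently skipped — breaches need separate handling
print(check_sla({"target": 60, "elapsed": 75}, "First Reply Time"))
```

Note that a metric that has already breached returns `None`: this skill warns about *approaching* breaches only, so past-due tickets need a separate report.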
This script does not track which tickets have already been alerted. If a ticket remains within the warning window across two consecutive 15-minute runs, it will receive a second Slack alert. To prevent duplicates, add a deduplication layer — either tag the ticket in Zendesk (e.g., `sla-warning-sent`) after alerting, or maintain a local JSON file of recently alerted ticket IDs.
Cost
- No Claude API calls — this is a pure API integration
- Zendesk API calls scale with ticket volume (~1 call per open ticket per run)
- Slack API: free for posting messages
- Server/CI cost: negligible (runs for a few seconds every 15 minutes)
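On the API-volume point: a run against a large queue makes one metrics call per ticket, and Zendesk answers HTTP 429 with a `Retry-After` header once you exceed your plan's rate limit. A small wrapper — a sketch, with the `with_retries` name and defaults being my own — keeps a scheduled run from dying mid-pass:

```python
import time
import urllib.error


def with_retries(fn, attempts: int = 3, default_wait: float = 5.0):
    """Call fn(), sleeping and retrying when the server answers 429.

    Honors the Retry-After header when present; re-raises any other HTTP
    error immediately, and the 429 itself once attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except urllib.error.HTTPError as e:
            if e.code != 429 or attempt == attempts - 1:
                raise
            wait = float(e.headers.get("Retry-After", default_wait)) if e.headers else default_wait
            time.sleep(wait)
```

In the script's main loop this would wrap the per-ticket call, e.g. `metrics_data = with_retries(lambda: zendesk_get(f"/tickets/{tid}/metrics.json"))`.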
Need help implementing this?
We build and optimize automation systems for mid-market businesses. Let's discuss the right approach for your team.