Auto-archive stale HubSpot deals using an agent skill
Prerequisites
- Claude Code or compatible agent
- Environment variables: HUBSPOT_TOKEN, SLACK_BOT_TOKEN, SLACK_CHANNEL_ID
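All three variables must be exported in the shell where the agent runs. The values below are placeholders; substitute your own credentials:

```shell
export HUBSPOT_TOKEN="your-hubspot-private-app-token"   # placeholder
export SLACK_BOT_TOKEN="xoxb-your-bot-token"            # placeholder
export SLACK_CHANNEL_ID="C0123456789"                   # placeholder
```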
Overview
This skill gives you an on-demand "pipeline cleanup" command. When you invoke it, the agent finds stale deals, shows you a summary, and asks for confirmation before closing anything. Unlike the automated approaches, you stay in the loop on every close decision.
This is ideal for teams that want human judgment on which deals to archive rather than a fully automated system.
Step 1: Create the skill directory
mkdir -p .claude/skills/archive-stale-deals/scripts
Step 2: Write the SKILL.md file
Create .claude/skills/archive-stale-deals/SKILL.md:
---
name: archive-stale-deals
description: Find HubSpot deals with no activity for 60+ days. Shows a summary of stale deals, warns owners in Slack, and closes deals after confirmation.
disable-model-invocation: true
allowed-tools: Bash(python *)
---
Archive stale deals from HubSpot with owner notification.
## Steps
1. Run: `python $SKILL_DIR/scripts/find_stale.py`
2. Review the list of stale deals printed to the console
3. Ask the user which deals to archive (all, specific ones, or none)
4. For confirmed deals, run: `python $SKILL_DIR/scripts/archive_deals.py <deal_id1> <deal_id2> ...`
5. Report the results
Key configuration:
- disable-model-invocation: true ensures only you can trigger the skill (with /archive-stale-deals), since it modifies deal data in HubSpot
- The two-script design means the agent always pauses for your confirmation between finding and closing
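Under the hood, the 60-day threshold is compared against hs_lastmodifieddate, which HubSpot stores as a Unix epoch timestamp in milliseconds. A standalone sketch of that cutoff computation, mirroring what the discovery script in Step 3 does:

```python
from datetime import datetime, timedelta, timezone

STALE_DAYS = 60
cutoff = datetime.now(timezone.utc) - timedelta(days=STALE_DAYS)
# HubSpot datetime properties are epoch milliseconds, so scale by 1000
cutoff_ms = int(cutoff.timestamp() * 1000)
```

Any deal whose hs_lastmodifieddate is less than cutoff_ms has had no activity inside the window.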
Step 3: Write the discovery script
Create .claude/skills/archive-stale-deals/scripts/find_stale.py:
#!/usr/bin/env python3
"""Find HubSpot deals with no activity for 60+ days."""
import os, sys, requests
from datetime import datetime, timedelta, timezone

HUBSPOT_TOKEN = os.environ.get("HUBSPOT_TOKEN")
if not HUBSPOT_TOKEN:
    print("ERROR: HUBSPOT_TOKEN not set")
    sys.exit(1)

HEADERS = {"Authorization": f"Bearer {HUBSPOT_TOKEN}",
           "Content-Type": "application/json"}
STALE_DAYS = 60

# Fetch stage labels
stages_resp = requests.get(
    "https://api.hubapi.com/crm/v3/pipelines/deals", headers=HEADERS)
stages_resp.raise_for_status()
stage_map = {}
for p in stages_resp.json()["results"]:
    for s in p["stages"]:
        stage_map[s["id"]] = s["label"]

# Fetch owner names (fall back to email if names are missing)
owners_resp = requests.get(
    "https://api.hubapi.com/crm/v3/owners", headers=HEADERS)
owners_resp.raise_for_status()
owner_map = {}
for owner in owners_resp.json()["results"]:
    owner_map[owner["id"]] = (
        f"{owner.get('firstName', '')} {owner.get('lastName', '')}".strip()
        or owner.get("email", "Unknown"))

# Search for stale deals (hs_lastmodifieddate is epoch milliseconds)
cutoff_ms = int(
    (datetime.now(timezone.utc) - timedelta(days=STALE_DAYS)).timestamp()
    * 1000
)
all_deals = []
after = 0
while True:
    resp = requests.post(
        "https://api.hubapi.com/crm/v3/objects/deals/search",
        headers=HEADERS,
        json={
            "filterGroups": [{"filters": [
                {"propertyName": "hs_lastmodifieddate",
                 "operator": "LT", "value": str(cutoff_ms)},
                # Default pipeline stage IDs; adjust for custom pipelines
                {"propertyName": "dealstage",
                 "operator": "NOT_IN",
                 "values": ["closedwon", "closedlost"]}
            ]}],
            "properties": ["dealname", "amount", "dealstage",
                           "hubspot_owner_id", "hs_lastmodifieddate"],
            "sorts": [{"propertyName": "hs_lastmodifieddate",
                       "direction": "ASCENDING"}],
            "limit": 100,
            "after": after,
        },
    )
    resp.raise_for_status()
    data = resp.json()
    all_deals.extend(data["results"])
    if data.get("paging", {}).get("next"):
        after = data["paging"]["next"]["after"]
    else:
        break

if not all_deals:
    print("No stale deals found. Pipeline is clean!")
    sys.exit(0)

# Print summary
print(f"\nFound {len(all_deals)} stale deals "
      f"(no activity for {STALE_DAYS}+ days):\n")
print(f"{'ID':<15} {'Deal Name':<35} {'Owner':<20} "
      f"{'Stage':<20} {'Amount':>12} {'Days Stale':>10}")
print("-" * 115)
for deal in all_deals:
    p = deal["properties"]
    days = (datetime.now(timezone.utc) - datetime.fromisoformat(
        p["hs_lastmodifieddate"].replace("Z", "+00:00"))).days
    owner = owner_map.get(p.get("hubspot_owner_id", ""), "Unassigned")
    stage = stage_map.get(p.get("dealstage", ""), p.get("dealstage", ""))
    amount = float(p.get("amount") or 0)
    print(f"{deal['id']:<15} {p['dealname']:<35} {owner:<20} "
          f"{stage:<20} ${amount:>11,.0f} {days:>10}")
total = sum(float(d["properties"].get("amount") or 0) for d in all_deals)
print(f"\nTotal pipeline value at risk: ${total:,.0f}")
Step 4: Write the archive script
Create .claude/skills/archive-stale-deals/scripts/archive_deals.py:
#!/usr/bin/env python3
"""Archive specified HubSpot deals to Closed Lost and notify owners."""
import os, sys, subprocess, requests
try:
    from slack_sdk import WebClient
except ImportError:
    # Install slack_sdk on first run, using the current interpreter's pip
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "-q", "slack_sdk"])
    from slack_sdk import WebClient

HUBSPOT_TOKEN = os.environ.get("HUBSPOT_TOKEN")
SLACK_TOKEN = os.environ.get("SLACK_BOT_TOKEN")
SLACK_CHANNEL = os.environ.get("SLACK_CHANNEL_ID")
if not all([HUBSPOT_TOKEN, SLACK_TOKEN, SLACK_CHANNEL]):
    print("ERROR: Missing env vars: HUBSPOT_TOKEN, "
          "SLACK_BOT_TOKEN, SLACK_CHANNEL_ID")
    sys.exit(1)

HEADERS = {"Authorization": f"Bearer {HUBSPOT_TOKEN}",
           "Content-Type": "application/json"}
slack = WebClient(token=SLACK_TOKEN)

deal_ids = sys.argv[1:]
if not deal_ids:
    print("Usage: python archive_deals.py <deal_id1> <deal_id2> ...")
    sys.exit(1)

print(f"Archiving {len(deal_ids)} deals...\n")
closed = []
failed = []
for deal_id in deal_ids:
    # Fetch deal details for the Slack message
    resp = requests.get(
        f"https://api.hubapi.com/crm/v3/objects/deals/{deal_id}",
        headers=HEADERS,
        params={"properties": "dealname,amount,hubspot_owner_id"}
    )
    if resp.status_code == 404:
        print(f"  {deal_id}: not found, skipping")
        continue
    resp.raise_for_status()
    deal = resp.json()
    name = deal["properties"]["dealname"]

    # Close the deal ("closedlost" is the default pipeline's stage ID;
    # adjust for a custom pipeline)
    close_resp = requests.patch(
        f"https://api.hubapi.com/crm/v3/objects/deals/{deal_id}",
        headers=HEADERS,
        json={"properties": {
            "dealstage": "closedlost",
            "closed_lost_reason": "Stale — auto-archived after 60 days"
        }}
    )
    if close_resp.ok:
        closed.append(name)
        print(f"  Closed: {name}")
    else:
        failed.append(name)
        print(f"  FAILED: {name} — {close_resp.status_code}")

# Notify Slack
if closed:
    slack.chat_postMessage(
        channel=SLACK_CHANNEL,
        text=f"Auto-archived {len(closed)} stale deals",
        blocks=[
            {"type": "section", "text": {"type": "mrkdwn", "text":
                f"🗄️ *{len(closed)} Deals Auto-Archived*\n"
                + "\n".join(f"• {n}" for n in closed)
                + "\n\nThese deals had no activity for 60+ days."
            }},
        ],
        unfurl_links=False,
    )
print(f"\nDone — closed: {len(closed)}, failed: {len(failed)}")
Step 5: Run the skill
Invoke it from Claude Code:
/archive-stale-deals
The agent runs the discovery script first and shows you a table of stale deals. It then asks which deals you want to archive. You can say:
- "Archive all of them"
- "Archive everything except deal 12345"
- "Just archive the ones owned by Jane"
- "Skip it for now"
The agent passes your confirmed deal IDs to the archive script.
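For instance, if the discovery table listed deals 12345, 67890, and 24680 (hypothetical IDs) and you said "archive everything except deal 12345", the agent would assemble the command line roughly like this:

```python
stale_ids = ["12345", "67890", "24680"]  # hypothetical IDs from the discovery table
excluded = {"12345"}                     # deals you told the agent to keep

# Only confirmed IDs are passed as positional arguments to the archive script
confirmed = [d for d in stale_ids if d not in excluded]
command = ["python", "scripts/archive_deals.py", *confirmed]
print(" ".join(command))  # → python scripts/archive_deals.py 67890 24680
```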
When to use this approach
- You want human review before any deal is closed
- You run pipeline cleanup as part of a weekly review meeting — invoke the skill, discuss the list, decide together
- You want to selectively archive — keep some deals, close others based on context the script can't know
- You're already using Claude Code and want pipeline cleanup as a quick command
When to graduate to an automated approach
- You trust the 60-day threshold enough to run it without manual review
- You have dozens of stale deals per week and reviewing each one isn't practical
- You want the grace period / warn-then-close pattern to run unattended
Start with this skill to build confidence in the thresholds. Once you trust that 60 days is the right cutoff and the Slack warnings give reps enough notice, switch to the n8n or code approach for full automation.
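When you do graduate to automation, the warn-then-close pattern boils down to a small threshold function. A minimal sketch with illustrative thresholds (warn a week before the close cutoff; both numbers are assumptions you should tune):

```python
def cleanup_action(days_stale: int, warn_at: int = 53, close_at: int = 60) -> str:
    """Classify a deal by staleness: warn the owner a week before auto-closing."""
    if days_stale >= close_at:
        return "close"
    if days_stale >= warn_at:
        return "warn"
    return "keep"

# A scheduled job would apply this to every open deal
for days in (30, 55, 61):
    print(days, cleanup_action(days))  # → 30 keep / 55 warn / 61 close
```

A nightly cron or n8n workflow can call this on each deal's days-stale count, send Slack warnings for "warn", and run the archive step for "close".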
Need help implementing this?
We build and optimize automation systems for mid-market businesses. Let's discuss the right approach for your team.