Score HubSpot leads based on firmographic and technographic fit using a Claude Code skill

Complexity: Low | Cost: Usage-based

Compatible agents

This skill works with any agent that supports the Claude Code skills standard, including Claude Code, Claude Cowork, OpenAI Codex, and Google Antigravity.

Prerequisites
  • One of the agents listed above
  • HubSpot private app with crm.objects.contacts.read and crm.objects.contacts.write scopes
  • A custom HubSpot contact property for the fit score (e.g., icp_fit_score, number type, 0-100)

Environment Variables
# HubSpot private app token (Settings > Integrations > Private Apps)
HUBSPOT_ACCESS_TOKEN=your_value_here

Why a Claude Code skill?

The other approaches in this guide are automated: they score every contact the same way, every time. A Claude Code skill is different. You tell Claude what you want in plain language, and the skill gives it enough context to do the job reliably.

That means you can say:

  • "Score all unscored contacts based on ICP fit"
  • "Re-score everyone — I changed the weights for industry"
  • "What's the score distribution? How many contacts are in each tier?"

The skill contains workflow guidelines, API reference materials, and a scoring model template that the agent reads on demand. When you invoke the skill, Claude reads these files, writes a script on the fly, runs it, and reports results. If you ask for something different next time — different weights, a filtered segment, a distribution analysis — the agent adapts without you touching any code.

How it works

The skill directory has three parts:

  1. SKILL.md — workflow guidelines telling the agent what steps to follow, which env vars to use, and what pitfalls to avoid
  2. references/ — HubSpot API patterns (contact search, batch update, property creation) so the agent calls the right APIs with the right parameters
  3. templates/ — a scoring model template defining the default weights for each criterion

When invoked, the agent reads SKILL.md, consults the reference and template files as needed, writes a Python script, executes it, and reports what it scored. The reference files act as guardrails — the agent knows exactly which endpoints to hit and what the responses look like, so it doesn't have to guess.

What is a Claude Code skill?

A Claude Code skill is a reusable command you add to your project that Claude Code can run on demand. Skills live in a .claude/skills/ directory and are defined by a SKILL.md file that tells the agent what the skill does, when to run it, and what tools it's allowed to use.

In this skill, the agent doesn't run a pre-written script. Instead, SKILL.md provides workflow guidelines and points to reference files — API documentation, scoring templates — that the agent reads to generate and execute code itself. This is the key difference from a traditional script: the agent can adapt its approach based on what you ask for while still using the right APIs and scoring models.

Once installed, you can invoke a skill as a slash command (e.g., /lead-scoring), or the agent will use it automatically when you give it a task where the skill is relevant. Skills are portable — anyone who clones your repo gets the same commands.

Step 1: Create the skill directory

mkdir -p .claude/skills/lead-scoring/{templates,references}

This creates the layout:

.claude/skills/lead-scoring/
├── SKILL.md                          # workflow guidelines + config
├── templates/
│   └── scoring-model.md              # Default scoring weights and criteria
└── references/
    └── hubspot-contacts-api.md       # HubSpot CRM search and update API patterns

Step 2: Write the SKILL.md

Create .claude/skills/lead-scoring/SKILL.md:

---
name: lead-scoring
description: Score or re-score HubSpot contacts based on firmographic and technographic fit
disable-model-invocation: true
allowed-tools: Bash, Read
---
 
## Goal
 
Score HubSpot contacts based on ICP fit using firmographic criteria (company size, industry, seniority) and write the score to the `icp_fit_score` custom property. Default: score only unscored contacts. The user may request a full re-score.
 
## Configuration
 
Read these environment variables:
 
- `HUBSPOT_ACCESS_TOKEN` — HubSpot private app token (required)
 
Default mode: score only contacts missing the `icp_fit_score` property.
The user may say "re-score all" to recalculate every contact.
 
## Workflow
 
1. Validate that `HUBSPOT_ACCESS_TOKEN` is set. If missing, print the error and exit.
2. Search for contacts using the HubSpot CRM Search API. Default: filter to contacts without `icp_fit_score`. If re-scoring all, omit the filter. See `references/hubspot-contacts-api.md`.
3. For each contact, apply the scoring model from `templates/scoring-model.md` to calculate a 0-100 score.
4. Update each contact's `icp_fit_score` property via the HubSpot API. See `references/hubspot-contacts-api.md`.
5. Print each contact's name and score, then a summary with the total count and distribution by tier (Hot 80+, Warm 50-79, Cold <50).
 
## Important notes
 
- The `icp_fit_score` custom property must exist in HubSpot before running. If it doesn't, instruct the user to create it (Settings > Properties > Contact Properties, Number type, 0-100).
- HubSpot stores `numberofemployees` as a string. Parse it as an integer, handling empty strings and non-numeric values gracefully.
- Industry values in HubSpot are lowercase strings like "computer software" or "financial services".
- HubSpot allows 150 API requests per 10 seconds. For large contact lists, add a small delay between updates or use the batch update endpoint.
- Use the `requests` library for HTTP calls. Install with pip if needed.
- Paginate search results using the `after` cursor from `paging.next.after`.

Understanding the SKILL.md

| Section | Purpose |
|---------|---------|
| Goal | Tells the agent what outcome to produce |
| Configuration | Which env vars to read and what defaults to use |
| Workflow | Numbered steps with pointers to reference files |
| Important notes | Non-obvious context that prevents common mistakes |

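Step 5 of the workflow buckets scores into tiers for the summary. A minimal sketch of how a generated script might do that (the function names `tier` and `distribution` are illustrative, not part of the skill):

```python
from collections import Counter

def tier(score: int) -> str:
    """Map a 0-100 fit score to the tiers named in the workflow."""
    if score >= 80:
        return "Hot"
    if score >= 50:
        return "Warm"
    return "Cold"

def distribution(scores):
    """Count contacts per tier for the end-of-run summary."""
    return Counter(tier(s) for s in scores)
```

A run over `[82, 55, 95, 12, 50]` would report two Hot, two Warm, and one Cold contact.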
Step 3: Add reference files

references/hubspot-contacts-api.md

Create .claude/skills/lead-scoring/references/hubspot-contacts-api.md:

# HubSpot Contacts API Reference
 
## Authentication
 
All HubSpot API requests use Bearer token auth.
 
```
Authorization: Bearer <HUBSPOT_ACCESS_TOKEN>
Content-Type: application/json
```
 
## Search for contacts
 
Find contacts to score using the CRM Search API.
 
**Request:**
 
```
POST https://api.hubapi.com/crm/v3/objects/contacts/search
Authorization: Bearer <token>
Content-Type: application/json
```
 
**Body (unscored contacts only):**
 
```json
{
  "filterGroups": [
    {
      "filters": [
        {
          "propertyName": "icp_fit_score",
          "operator": "NOT_HAS_PROPERTY"
        }
      ]
    }
  ],
  "properties": ["firstname", "lastname", "jobtitle", "company", "numberofemployees", "industry", "hs_analytics_source"],
  "limit": 100
}
```
 
**Body (all contacts — omit filterGroups):**
 
```json
{
  "properties": ["firstname", "lastname", "jobtitle", "company", "numberofemployees", "industry", "hs_analytics_source"],
  "limit": 100
}
```
 
**Response shape:**
 
```json
{
  "total": 250,
  "results": [
    {
      "id": "12345",
      "properties": {
        "firstname": "Sarah",
        "lastname": "Chen",
        "jobtitle": "VP of Sales",
        "company": "Acme Corp",
        "numberofemployees": "250",
        "industry": "computer software",
        "hs_analytics_source": "ORGANIC_SEARCH"
      }
    }
  ],
  "paging": {
    "next": {
      "after": "100"
    }
  }
}
```
 
Paginate using `after` cursor until `paging.next` is absent.
 
## Update a contact's score
 
**Request:**
 
```
PATCH https://api.hubapi.com/crm/v3/objects/contacts/{contact_id}
Authorization: Bearer <token>
Content-Type: application/json
```
 
**Body:**
 
```json
{
  "properties": {
    "icp_fit_score": "82"
  }
}
```
 
Note: Property values are strings, even for number properties.
 
## Batch update (for large volumes)
 
```
POST https://api.hubapi.com/crm/v3/objects/contacts/batch/update
```
 
**Body:**
 
```json
{
  "inputs": [
    {"id": "12345", "properties": {"icp_fit_score": "82"}},
    {"id": "67890", "properties": {"icp_fit_score": "45"}}
  ]
}
```
 
Up to 100 contacts per batch request. Use this for large volumes to stay within rate limits.
 
## Rate limits
 
150 requests per 10 seconds for private apps. For 150+ contacts, use the batch endpoint or add delays.

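The search-and-paginate and batch-update patterns above reduce to a few lines of Python. The sketch below is illustrative: `post` stands in for a real HTTP call (in a real script, a thin wrapper around `requests.post` with the Bearer token headers), and only the cursor-following and chunking logic is shown:

```python
def search_all(post, body):
    """Yield every contact from the Search API, following paging.next.after.

    `post` is any callable(body) -> parsed JSON dict; it hides the actual
    HTTP request so the pagination logic stays testable.
    """
    body = dict(body)  # don't mutate the caller's request body
    while True:
        page = post(body)
        yield from page.get("results", [])
        nxt = page.get("paging", {}).get("next")
        if not nxt:
            break
        body["after"] = nxt["after"]

def batches(inputs, size=100):
    """Split batch-update inputs into chunks of <=100, HubSpot's batch limit."""
    for i in range(0, len(inputs), size):
        yield inputs[i : i + size]
```

With the 150-requests-per-10-seconds limit, a short `time.sleep` between batch calls keeps large runs comfortably under the ceiling.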
templates/scoring-model.md

Create .claude/skills/lead-scoring/templates/scoring-model.md:

# Lead Scoring Model
 
Default scoring weights. Total max: 100 points.
 
## Company Size (0-30 points)
 
| Employee Count | Points |
|---------------|--------|
| 200-2,000 | 30 (ideal mid-market) |
| 50-199 | 20 (growing company) |
| 2,001-10,000 | 15 (enterprise) |
| 1-49 or 10,001+ | 5 (outside sweet spot) |
| Unknown/0 | 0 |
 
Parse `numberofemployees` as integer. Handle empty strings and non-numeric values as 0.
 
## Industry Match (0-25 points)
 
| Industry | Points |
|----------|--------|
| saas, technology, software, computer software | 25 (ideal) |
| financial services, consulting, marketing | 15 (good fit) |
| Any other known industry | 5 |
| Unknown/empty | 0 |
 
Match using lowercase comparison against HubSpot's `industry` property.
 
## Job Title Seniority (0-30 points)
 
| Title Keywords | Points |
|---------------|--------|
| ceo, cto, cfo, coo, cmo, cro, chief | 30 (C-suite) |
| vp, vice president, head of | 25 (VP-level) |
| director | 20 |
| manager, lead | 10 |
| Other/unknown | 0 |
 
Match using case-insensitive substring search on `jobtitle`.
 
## Traffic Source (0-15 points)
 
| Source | Points |
|--------|--------|
| ORGANIC_SEARCH | 15 |
| DIRECT_TRAFFIC | 12 |
| REFERRALS | 10 |
| PAID_SEARCH | 8 |
| SOCIAL_MEDIA | 5 |
| Other/unknown | 0 |
 
Match against `hs_analytics_source` (uppercase in HubSpot).
 
## Score Tiers
 
| Tier | Range |
|------|-------|
| Hot | 80-100 |
| Warm | 50-79 |
| Cold | 0-49 |
 
## Customization
 
The user may request different weights, additional criteria, or modified tier boundaries. Adjust the scoring model accordingly while keeping the total max at 100.

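As a sanity check on the model, here is one way the default weights above might be implemented (a sketch, assuming these exact defaults; the agent writes its own version at run time). It matches title keywords on word boundaries rather than plain substrings, since the "cto" inside "director" would otherwise false-match the C-suite bucket:

```python
import re

def parse_employees(value):
    """HubSpot stores numberofemployees as a string; treat bad values as 0."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return 0

def title_points(title):
    """Seniority points, matched case-insensitively on word boundaries."""
    t = (title or "").lower()
    def has(words):
        return any(re.search(rf"\b{re.escape(w)}\b", t) for w in words)
    if has(("ceo", "cto", "cfo", "coo", "cmo", "cro", "chief")):
        return 30
    if has(("vp", "vice president", "head of")):
        return 25
    if has(("director",)):
        return 20
    if has(("manager", "lead")):
        return 10
    return 0

def score_contact(props):
    """Apply the default weights to one contact's properties dict."""
    score = 0
    # Company size (0-30)
    n = parse_employees(props.get("numberofemployees"))
    if 200 <= n <= 2000:
        score += 30  # ideal mid-market
    elif 50 <= n <= 199:
        score += 20
    elif 2001 <= n <= 10000:
        score += 15
    elif n >= 1:
        score += 5
    # Industry match (0-25), lowercase comparison
    industry = (props.get("industry") or "").lower()
    if industry in {"saas", "technology", "software", "computer software"}:
        score += 25
    elif industry in {"financial services", "consulting", "marketing"}:
        score += 15
    elif industry:
        score += 5
    # Job title seniority (0-30)
    score += title_points(props.get("jobtitle"))
    # Traffic source (0-15), uppercase values in HubSpot
    source_points = {"ORGANIC_SEARCH": 15, "DIRECT_TRAFFIC": 12,
                     "REFERRALS": 10, "PAID_SEARCH": 8, "SOCIAL_MEDIA": 5}
    score += source_points.get(props.get("hs_analytics_source"), 0)
    return score
```

The sample contact from the API reference (250 employees, computer software, VP of Sales, organic search) would score 30 + 25 + 25 + 15 = 95 under these defaults.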
Step 4: Test the skill

Invoke the skill conversationally:

/lead-scoring

Claude will read the SKILL.md, check the reference files, write a script, run it, and report the results. A typical run looks like:

Scoring unscored contacts...
  Found 45 contacts to score
 
  Sarah Chen (VP of Sales, Acme Corp): 82/100
  Mike Johnson (Director, Widget Inc): 55/100
  ...
 
Done. Scored 45 contacts.
  Hot (80+): 8
  Warm (50-79): 19
  Cold (below 50): 18

Because the agent generates code on the fly, you can also make ad hoc requests:

  • "Re-score all contacts — I changed the industry weights" — the agent adjusts and re-runs
  • "What's the current score distribution?" — the agent queries and summarizes without re-scoring
  • "Score only contacts from the last 30 days" — the agent adds a date filter

Test with a small batch first

Run the skill on a few contacts initially to verify the scoring model produces the distribution you expect. If most contacts cluster at one end, adjust the weights in templates/scoring-model.md.

Step 5: Schedule it (optional)

Option A: Cron + Claude CLI

# Score unscored contacts every hour
0 * * * * cd /path/to/your/project && claude -p "Run /lead-scoring" --allowedTools 'Bash(*)' 'Read(*)'

Option B: GitHub Actions + Claude

name: Lead Scoring
on:
  schedule:
    - cron: '0 * * * *'  # Every hour
  workflow_dispatch: {}   # Manual trigger for testing
jobs:
  score:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          prompt: "Run /lead-scoring"
          allowed_tools: "Bash(*),Read(*)"
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          HUBSPOT_ACCESS_TOKEN: ${{ secrets.HUBSPOT_ACCESS_TOKEN }}

Option C: Cowork Scheduled Tasks

Claude Desktop's Cowork supports built-in scheduled tasks. Open a Cowork session, type /schedule, and configure the cadence — hourly, daily, weekly, or weekdays only. Each scheduled run has full access to your connected tools, plugins, and MCP servers.

Scheduled tasks only run while your computer is awake and Claude Desktop is open. If a run is missed, Cowork executes it automatically when the app reopens. For always-on scheduling, use GitHub Actions (Option B) instead. Available on all paid plans (Pro, Max, Team, Enterprise).

When to use this approach

  • You just updated your ICP or scoring weights and need a bulk re-score
  • You want to score a specific segment on demand during pipeline review
  • You want to analyze scoring distribution without deploying an always-on automation
  • You're testing different scoring models before committing to one

When to switch approaches

  • You need automatic scoring on every new contact → use n8n or Zapier
  • You want a visual workflow builder → use n8n or Make
  • You need zero per-execution cost → use the Code + Cron approach

Common questions

Why not just use a script?

A script runs the same way every time. The Claude Code skill adapts to what you ask — different weights, filtered segments, distribution analysis, re-scoring after model changes. The reference files ensure it calls the right APIs even when improvising, so you get flexibility without sacrificing reliability.

Does this use Claude API credits?

Yes. Unlike a script-based approach, the agent reads skill files and generates code each time. Typical cost is $0.01-0.05 per invocation. The HubSpot API itself is free.

Can I use this alongside automated scoring?

Yes. Use n8n or Zapier for real-time scoring of new contacts, and the Claude Code skill for bulk re-scores after model changes or for ad hoc analysis.

Cost

  • Claude API — $0.01-0.05 per invocation (the agent reads files and generates code)
  • HubSpot API — included in all plans, no per-call cost
  • GitHub Actions (if scheduled) — free tier includes 2,000 minutes/month
