Steal These 10 AI Agents and 10x Your Marketing Output
Here’s how to build each one, including full prompts
It’s no surprise that two of the questions I get asked most often are, “What are you building?” and “What are you seeing others build?”
I’ve been collecting the most useful workflows and agents I’ve come across, and this week I want to share 10 of them and show you exactly how to build each one.
Two types of agents show up in this list. The first is on-demand agents, which take a batch of inputs you hand them, like a set of call recordings or a pipeline report, and return a structured output. You run them when you need them, and they take 10 minutes instead of three hours. The second is scheduled agents, which run automatically at a set interval: on Monday morning, your weekly digest is already there; on Sunday night, your competitive monitor has already reviewed what changed.
So without further ado, let’s dive in.
1. The Stalled Deal Spotter
Outcome: You walk into every pipeline review with a deal-by-deal risk assessment and a specific marketing action for each stalled opportunity, instead of spending the first 20 minutes trying to reconstruct what happened from memory.
Why this matters: Pipeline reviews are mostly retrospective. Someone shares a slide showing 15 deals that haven’t moved in 30 days, and the room tries to diagnose each one on the fly. Every rep has a different story, nothing gets resolved, and you leave with action items that never get followed up on.
How to run it: On-demand, before every pipeline review. Pull the CRM notes, last email thread summary, and most recent call transcript for every deal that hasn’t progressed in 21 or more days. Paste them into Claude with the prompt below. What you get back is a structured breakdown by deal: most likely stall reason, the specific stakeholder or objection blocking progress, and a concrete marketing action for that deal specifically. Not “send a case study.” Something like “the last call mentioned budget pressure for this quarter — send the ROI calculator and ask the rep to request a CFO conversation.”
The prompt:
You are a B2B revenue intelligence analyst reviewing stalled pipeline on behalf of a marketing leader preparing for a pipeline review.
I will provide CRM notes, email thread summaries, and call transcript excerpts for deals that have not progressed in 21 or more days.
For each deal, analyze the available context and return your assessment in this format:
Deal: [name] Days stalled: [number] Most likely stall reason: [one sentence grounded in the evidence, not speculation] Blocking stakeholder or concern: [specific person or objection if the data shows it] Recommended marketing action: [specific — name the content angle, asset, or play relevant to this deal's exact situation, not a generic suggestion] Confidence: [High / Medium / Low] — [one sentence explaining why based on evidence quality]
If a deal appears to have gone dark with no recoverable signals, say so directly. Do not frame a dead deal as a reengagement opportunity. My goal is an accurate picture going into the meeting, not an optimistic one.
Deals to review: [paste CRM notes, email summaries, and call transcripts here]
Stack requirements: CRM with deal notes (Salesforce, HubSpot, or Pipedrive), call recording tool for transcripts (Gong, Chorus, Fathom, or Fireflies), email client for thread summaries (Gmail or Outlook).
To automate: HubSpot or Salesforce API connected to Claude, triggered on a weekly schedule or when deal age crosses a threshold you define.
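The automation described above boils down to two steps you can script: filter for deals past the stall threshold, then assemble the prompt. A minimal sketch in Python, with the CRM pull and Claude call stubbed out (the field names, model name, and deal structure are assumptions, not a real CRM schema):

```python
"""Sketch: flag stalled deals and build the analysis prompt.
Deal field names ("last_activity", "notes") are illustrative --
map them to whatever your CRM export actually contains."""
from datetime import datetime

STALL_DAYS = 21  # deals untouched this long get flagged

def find_stalled_deals(deals, today=None):
    """Return deals whose last activity is STALL_DAYS or more in the past."""
    today = today or datetime.now()
    stalled = []
    for deal in deals:
        last = datetime.fromisoformat(deal["last_activity"])
        days = (today - last).days
        if days >= STALL_DAYS:
            stalled.append({**deal, "days_stalled": days})
    return stalled

def build_prompt(stalled):
    """Format the stalled deals into the review prompt above."""
    blocks = [
        f"Deal: {d['name']} (stalled {d['days_stalled']} days)\n{d['notes']}"
        for d in stalled
    ]
    return "Deals to review:\n\n" + "\n\n---\n\n".join(blocks)

# To run for real (requires the anthropic package and an API key):
# import anthropic
# client = anthropic.Anthropic()
# msg = client.messages.create(
#     model="claude-sonnet-4-20250514",  # assumption: swap in your model
#     max_tokens=2000,
#     messages=[{"role": "user", "content": build_prompt(stalled)}],
# )
```

A weekly cron job wrapping this script, with the deals pulled from the HubSpot or Salesforce API, gives you the trigger pattern described above.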
Pro tips:
Feed it about 5 deals at a time for the best output quality. More than that and the per-deal analysis gets noticeably thinner.
Add the deal size and target close date to each input. The agent can factor urgency into its recommendations when it knows what’s at stake.
Share the output with your sales partner before the pipeline call, not in it. The best use is pre-call preparation, not real-time diagnosis in front of your CRO.
2. The Account Signal Monitor
Outcome: Every week, your BDR and AE teams receive a ranked short-list of the target accounts showing the strongest buying signals right now, with a one-line explanation of what changed and what to do about it.
Why this matters: ABM programs spend a lot of time on list-building and almost none on timing. A well-defined target account list sitting in a spreadsheet is doing nothing. The accounts on that list change every week: new executives joining, funding announced, job postings signaling a relevant initiative, G2 review activity picking up. Most marketing teams find out about these signals when a rep mentions one on a call, weeks after it happened, after a competitor has already moved.
How to run it: Scheduled, runs every Sunday night. You (or your ops person) pull recent signals for your top 50 to 100 accounts: job postings in relevant roles, executive changes, funding news, press mentions, and review site activity. The agent reviews all of it and returns the 10 accounts with the strongest signals this week, ranked by urgency, with a recommended action for each. Your demand gen team uses it to update paid audiences. Your BDR team uses it as their Monday morning call list.
The prompt:
You are a B2B account intelligence analyst helping a marketing team prioritize their target account list for the week ahead.
I will provide a list of target accounts with recent signals for each: job postings, executive changes, funding news, press mentions, and review site activity from the past 7 days.
Your job: Identify the 10 accounts showing the strongest buying intent or timing signals this week. Consider: Are they hiring in roles that suggest a relevant initiative? Has leadership changed? Did they raise funding or announce a new strategic direction? Are their G2 reviews signaling pain in a category we address?
For each of the 10 accounts, return:
Account: [name] Signal summary: [one sentence describing what changed and why it matters] Signal strength: [Strong / Moderate / Weak] with one sentence of reasoning Recommended action: [specific — who should reach out, what angle to lead with, or what content to send this week]
Rank the list from highest to lowest urgency. If fewer than 10 accounts have meaningful signals this week, say so rather than padding the list.
Account data to review: [paste account signal data here]
Stack requirements: To automate, Clay is the most efficient tool here because it can pull job postings, news, and LinkedIn signals in a single workflow and feed them directly to Claude. Another strong option is Claude Code + Apify MCP: Apify has a native MCP server and pre-built actors for scraping LinkedIn job postings, Google News, and G2 reviews. Claude Code calls the Apify MCP to fetch signals across all your target accounts, runs the synthesis and ranking, and delivers the output on a weekly schedule via cron.
Pro tips:
A great signal is job postings in adjacent roles. A new “Head of Revenue Operations” post at a target account almost always precedes a tech stack review. Build this signal into your scoring.
Run it against your active pipeline accounts too, not just your cold target list. A buying signal inside an active deal can be the difference between a deal accelerating and going dark.
Add a simple scoring system to your prompt: 3 points for an executive hire, 2 for a funding round, 1 for a relevant job posting. Ask the agent to apply the score and sort by total. It makes prioritization less subjective.
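The 3/2/1 scoring scheme in that last tip is simple enough to implement outside the prompt too, which makes the ranking reproducible. A minimal sketch, assuming each account’s weekly signals arrive as a list of typed events (the signal type names are illustrative, not a standard):

```python
# Sketch of the 3/2/1 signal scoring from the tip above.
# Signal type names are illustrative -- match them to whatever
# your enrichment workflow actually emits.
SIGNAL_WEIGHTS = {
    "executive_hire": 3,
    "funding_round": 2,
    "job_posting": 1,
}

def score_account(signals):
    """Total weighted score for one account's signals this week.
    Unknown signal types score zero rather than raising."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def rank_accounts(accounts):
    """Sort accounts (name -> list of signals) by score, highest first."""
    scored = [(name, score_account(sigs)) for name, sigs in accounts.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Computing the scores before the prompt, then asking the agent to explain the top 10, keeps the prioritization deterministic while still getting the interpretation layer from the model.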
3. The Win/Loss Synthesizer
Outcome: At the end of every quarter, you have a pattern analysis of your won and lost deals: the most common buying triggers, the objections that actually killed deals, and the language your buyers use to describe their pain before they ever heard of you.
Why this matters: Most marketing teams know they should run win/loss analysis consistently, but few actually do it because it’s time-consuming to pull together, awkward to coordinate with sales, and the outputs usually live in a Google Doc that nobody opens by the time the next planning cycle starts. So messaging stays based on assumptions that were formed 18 months ago, and the gap between how your buyers describe their problem and how your homepage describes it just keeps growing quietly.
How to run it: On-demand, once per quarter. Export your closed-won and closed-lost call transcripts from Gong, Chorus, Fathom, or even written summaries. Paste them in with the prompt below. What comes back is a structured analysis: the five most common buying triggers in won deals with direct quotes, the five most common objections in lost deals with direct quotes, and the customer-native language for the problem you solve before buyers knew your product existed. That last piece is where most positioning improvements come from. I’ve seen this output rewrite homepage copy, restructure email sequences, and settle six-month internal debates about what to lead with in the sales deck.
The prompt:
You are a B2B positioning analyst reviewing a quarter's worth of sales conversations to surface patterns that should improve marketing and messaging.
I will provide call transcripts and summaries from closed-won deals and closed-lost deals this quarter.
Analyze all transcripts and return the following, with direct quotes from the source material for each finding:
The five most common buying triggers in won deals: What was happening in the buyer's world that made them start looking? Quote their exact language.
The five most common objections or hesitations in lost deals: What specifically blocked the deal? Quote the language buyers used, not the rep's interpretation of it.
Customer-native language for the problem: How did buyers describe their pain before they knew about our product? List the specific words and phrases they used. Do not use any of our product language or category terms in this section.
What distinguishes won deals from lost deals: Identify 2 to 3 patterns that clearly separate the two groups. These might be firmographic, behavioral, or related to the buyer's stated situation.
Be specific and quote directly. If the transcripts don't support a finding, say so rather than inferring.
Transcripts to analyze: [paste call transcripts and summaries here]
Stack requirements: Call recording tool (Gong, Chorus, Fathom, or Fireflies) for transcript exports, CRM for tagging deals as won or lost. No integration needed for manual use. To automate: Gong API connected to Claude, triggered at end of quarter when deal status changes to closed-won or closed-lost.
Pro tips:
Include churned customers in the analysis, not just lost deals. The language in churn calls often reveals product-market fit gaps that messaging alone can’t address, and your marketing team needs to know about them.
Run it separately on enterprise and mid-market deals if you sell to both. The buying triggers and objections are frequently different enough that blending them produces less useful output.
Share the customer-native language section with your demand gen team immediately after you run it. Those phrases belong in your email subject lines, your ad copy, and your homepage, not in a research doc.
4. The Campaign Brief Writer
Outcome: A complete, specific campaign brief in about 10 minutes, with an objective, audience definition, core message, channel mix with rationale, content angle per channel, and a primary success metric. I’m still surprised at how many people skip campaign briefs, especially with AI at their fingertips.
Why this matters: A good brief from a senior person used to take a few hours. From a junior person, it took longer, and then you spent another hour in revisions explaining what you actually meant. Vague briefs produce generic creative, generic creative produces mediocre results, and mediocre results produce the “marketing isn’t driving pipeline” conversation you’ve had too many times.
How to run it: On-demand, whenever you’re starting a new campaign. Give it the audience segment, the product or feature you’re promoting, the business goal, and any relevant customer language or competitive context. What comes back is a structured brief you edit and refine, but the thinking scaffolding is done. Run it before your next campaign planning meeting and notice how the conversation changes when everyone comes in with a brief instead of a one-liner.
The prompt:
You are a senior B2B demand generation strategist writing a campaign brief for a marketing team.
I will provide: (1) the audience segment we're targeting with specific attributes, (2) the product or feature we're promoting, (3) the business goal for this campaign, and (4) any relevant customer language, competitive context, or recent market signals.
Write a complete campaign brief structured as follows:
Objective: [one sentence, specific and measurable] Target audience: [title, company type, buying stage, and the specific situation or trigger that makes them ready for this message] Core message: [the single most important thing this audience needs to hear, in their language] Supporting messages: [three messages that reinforce the core, each addressing a different concern or angle] Channel mix: [recommended channels with a one-sentence rationale for each, based on where this audience actually is] Content angle per channel: [the specific approach for each channel — not just "blog post" but the angle, format, and what the reader should walk away thinking] Primary success metric: [one metric, tied directly to the business goal] What this campaign is not: [one sentence on the wrong interpretation or misuse of this brief]
Keep the language direct and specific. Replace any marketing jargon with plain language a CFO could read and understand.
Campaign inputs: [paste your audience, product, goal, and context here]
Stack requirements: Claude.ai (nothing else required). For best results, feed in customer language from your Win/Loss Synthesizer before running this prompt. Optional: store your brand positioning doc and ICP definition in a Claude Project so the agent references them automatically on every run.
Pro tips:
Feed it 2 to 3 verbatim customer quotes from your Win/Loss Synthesizer before running the prompt. The brief will use buyer language instead of internal product marketing language, and the difference in the downstream creative is significant.
Use the output to align stakeholders before creative production starts. A brief that takes 10 minutes to generate and that everyone reviews is worth more than one that takes three days and gets rewritten in the kickoff meeting.
Add this line to the end of the prompt: “List the three assumptions in this brief that are most likely to be wrong.” It forces the agent to surface the weakest parts of the strategy before you commit to them.
5. The Content Repurposer
Outcome: Every long-form asset your team produces (webinar, article, case study, podcast) becomes six to eight pieces of channel-ready content without adding hours of work or headcount.
Why this matters: Every webinar your team runs should generate at least eight pieces of content. Most teams get one blog post if they’re organized about it, and the recording sits there generating nothing for the next year. The problem is that repurposing feels like low-value work even though it’s where a lot of the distribution leverage actually is. The result is that your best thinking reaches a fraction of the audience it could.
How to run it: On-demand, right after publishing any long-form asset. Feed it the transcript or full text of your webinar, article, or case study, plus three examples of your actual writing. What comes back is a full set: three LinkedIn posts in your voice, two email subject line and body copy pairs, five ad headline variants, and one SEO-optimized content section ready to drop into an existing page. The three writing examples are not optional. Without them, the output defaults to generic marketing copy. With them, you’ll want to edit rather than rewrite.
The prompt:
You are a B2B content strategist who writes in a direct, conversational, first-person voice and specializes in turning long-form content into channel-specific assets without losing the original insight or voice.
I will provide: (1) a long-form piece of content to repurpose, and (2) three examples of my writing that establish my voice and style.
Using my writing examples as the style model, produce the following assets:
LinkedIn posts (3): Each 100 to 150 words. Pull the most insight-rich idea from the source material for each post. Write from a practitioner sharing what they've learned, not a brand promoting something. Each post should stand alone without requiring the reader to have seen the source.
Email subject line and body copy (2 sets): Subject line under 50 characters. Body copy 100 to 150 words. Each email should lead with a different angle from the source material and end with a clear, low-friction call to action.
Ad headline variants (5): Each under 10 words. Vary the angle across the five: one pain-focused, one outcome-focused, one curiosity-gap, one contrast, one direct.
SEO content section (1): 300 to 400 words on the primary topic of the source material. Write for a reader who found this page via search and knows nothing about us yet. Optimize for the question the content answers, not for the brand.
Match my voice exactly across all assets. No generic openers. No marketing jargon.
Source content and voice examples: [paste source material and three writing examples here]
Stack requirements: Claude.ai (nothing else required for basic use). To scale: connect Claude to your CMS or content library via API so assets are drafted and queued automatically on publish. Output can route to Notion, Google Docs, or directly into a content scheduling tool like Buffer or Sprout Social via Make or Zapier.
Pro tips:
Use your three best-performing pieces of content as your voice examples, not just any writing. Performance signals which of your approaches actually resonate with your audience.
Add this line to the end of the prompt: “Flag any claims in the source material that would need a supporting link or data source before publishing.” This catches unsupported assertions before they go out the door.
Run it separately for each channel rather than asking for everything at once. The LinkedIn post quality goes up noticeably when the prompt is focused on LinkedIn only, because the model isn’t splitting its attention across six formats.
6. The Competitive Monitor
Outcome: Every Monday morning, you have a brief on what changed in your competitive landscape last week: notable moves, what they signal, and whether anything requires a response before your team goes into deals.
Why this matters: Your positioning doesn’t exist in a vacuum. When a competitor raises a round, changes its pricing language, starts running ads against your best keyword, or accumulates a wave of negative G2 reviews in a category you own, your messaging goes out of date the moment it happens. Most marketing teams find out about these moves when a prospect mentions one on a discovery call. By then, a rep is already handling it without any support from marketing.
How to run it: Scheduled, runs every Sunday night. The agent reviews a defined set of competitors across four signals: G2 reviews from the past seven days (especially the negative ones, which tell you where their customers are hurting), press and news mentions, job postings as a proxy for where they’re investing, and any changes to their pricing or key positioning pages. What it returns is a one-page brief: what changed, what it might mean, and what (if anything) your team needs to do about it before the week starts.
The prompt:
You are a B2B competitive intelligence analyst producing a weekly brief for a marketing team.
I will provide recent data for a set of competitors: G2 and review site activity from the past 7 days, press mentions, job postings, and any observed changes to their pricing pages or key positioning language.
For each competitor, return:
Recent moves: [what changed or appeared in the past 7 days — be specific, cite the source] What it might signal: [your interpretation of why they made this move or what it suggests about their strategy — be direct about uncertainty when you have it] Response required: [Yes / No] — if yes, describe what our team should do and by when Sales rep alert: [one sentence a rep should know before walking into a competitive deal this week — or "Nothing new this week" if there's nothing material]
End the brief with a single "Watch closely" flag: the one thing across all competitors that you'd want the marketing leader to keep an eye on in the coming weeks.
Do not pad the brief. If a competitor had no notable activity this week, say so in one sentence and move on.
Competitor data to review: [paste G2 reviews, news mentions, job postings, and page change notes here]
Stack requirements: G2 review monitoring (G2’s notification tools or a scraping setup), news monitoring (Google Alerts for each competitor, or a tool like Mention or Crayon for automated feeds), LinkedIn for job postings (LinkedIn Sales Navigator or manual searches), page change monitoring (Visualping or Hexowatch for pricing and key pages). To fully automate: Clay is one strong option. Claude Code + Apify MCP is another that gets you everything you need: Apify actors handle news scraping, G2 review monitoring, and LinkedIn job postings, while Claude Code runs weekly via cron, synthesizes across all competitors, and delivers the brief to Slack or email.
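If you’d rather not add another tool just for page-change monitoring, a hash comparison covers the basic case. A rough sketch (the fetch step is stubbed out, and a real version would persist the fingerprints between weekly runs and handle fetch failures):

```python
import hashlib

def page_fingerprint(html):
    """Stable fingerprint of a page body, so week-over-week runs can
    detect changes without storing the full HTML."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def changed_pages(stored, current):
    """Compare last week's fingerprints (url -> hash) against freshly
    fetched page bodies (url -> html). Returns URLs whose content
    changed; pages never seen before also count as changed."""
    return [
        url for url, html in current.items()
        if stored.get(url) != page_fingerprint(html)
    ]

# Fetching is left out of the sketch; in practice something like:
# import urllib.request
# html = urllib.request.urlopen(url).read().decode("utf-8")
```

The flagged URLs become one more input to paste (or pipe) into the competitive monitor prompt each week.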
Pro tips:
Add your own product roadmap as context before running the prompt. The agent can then flag when a competitor’s new feature or messaging move overlaps with something you’re planning to ship and surface it as a timing consideration.
Don’t only monitor direct competitors. Add two or three analysts or journalists who write about your category. Their framing often foreshadows how buyers will start describing the problem in 90 days, and you want to be ahead of that shift.
Keep a running document that the agent appends to each week. After 8 to 10 weeks you have a genuine competitive timeline you can reference in QBRs and leadership reviews without having to reconstruct it from memory.
7. The Weekly Marketing Digest
Outcome: A 200-word executive summary of last week’s marketing performance, with interpretation, lands in your leadership team’s inbox every Monday at 8am, whether or not they open a single dashboard.
Why this matters: Your leadership team isn’t opening six different dashboards every Monday. Most of them aren’t checking them at all unless the numbers are surfaced for them. The result is that marketing’s story gets told in the weekly leadership meeting from memory or from a slide someone built at 9pm Sunday, and the nuance gets lost. A clean, interpreted summary that arrives before the meeting changes what gets discussed and how marketing gets positioned in the room.
How to run it: Scheduled, runs every Monday morning. The agent pulls your core metrics from the prior week (pipeline sourced by channel, email performance, top content, campaign spend vs. pacing, notable anomalies), interprets them, and writes a five-point summary. The interpretation is the part that matters. Not “CTR was 2.1%.” Something like “The enterprise email sequence underperformed by 40% against last week. Open rates held, so the issue is likely in the offer or the CTA, not the subject line.” That framing is the difference between a report and a briefing.
The prompt:
You are a B2B marketing analyst writing a weekly performance briefing for a senior executive audience that does not want raw data. They want to know what happened, what it means, and what (if anything) needs their attention.
I will provide last week's marketing metrics: pipeline sourced by channel, email campaign performance, top and bottom content by engagement, and paid campaign spend vs. pacing.
Write a five-point weekly marketing summary formatted exactly as follows:
Point [number]: [Metric or area] — [trending up / down / flat vs. prior week] — [one sentence of interpretation: what this likely means, not just what it is]
After the five points, add a single "This week's decision" line: the one thing that needs a leadership decision or action before Friday, if there is one. If there is nothing that needs leadership attention, write "No action required from leadership this week."
Keep the total under 200 words. Write in plain language. Replace any marketing acronyms with plain terms on first use.
Metrics to analyze: [paste last week's data here]
Stack requirements: Marketing automation platform (HubSpot or Marketo for email metrics), CRM (Salesforce or HubSpot for pipeline attribution), web analytics (Google Analytics or Amplitude for content performance), ad platforms (Google Ads and LinkedIn Ads APIs for spend and pacing). To automate: Claude Code + HubSpot MCP (for pipeline and email metrics) + Google Analytics Data API (called via a script). Scheduled as a Monday 7am cron job: Claude Code pulls last week's data via the MCPs, runs the analysis, and delivers the output to your Slack leadership channel or email inbox. The headless CLI mode (claude -p "prompt" < data.txt) is purpose-built for exactly this pattern.
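The Monday-morning cron pattern described above is short enough to sketch end to end. Assumptions here: the metrics have already been pulled into a dict, the Claude CLI is installed and authenticated, and the Slack webhook URL is a placeholder:

```python
import json
import subprocess

def build_digest_prompt(metrics, prior_digest=None):
    """Assemble the weekly-digest prompt from last week's metrics.
    Including the prior digest lets the model flag week-over-week
    trends instead of point-in-time numbers."""
    prompt = "Metrics to analyze:\n" + json.dumps(metrics, indent=2)
    if prior_digest:
        prompt = (
            "Prior week's digest for trend context:\n"
            + prior_digest + "\n\n" + prompt
        )
    return prompt

def run_claude_headless(prompt):
    """Call the Claude CLI in headless mode (claude -p), the pattern
    the article points to. Requires the CLI on your PATH."""
    result = subprocess.run(
        ["claude", "-p", prompt], capture_output=True, text=True, check=True
    )
    return result.stdout

# Delivery: post the digest to a Slack incoming webhook.
# The URL below is a placeholder, not a real endpoint.
# import urllib.request
# req = urllib.request.Request(
#     "https://hooks.slack.com/services/XXX",
#     data=json.dumps({"text": digest}).encode(),
#     headers={"Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
```

Scheduled with a `0 7 * * 1` cron entry, this delivers the briefing before anyone opens a dashboard.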
Pro tips:
Include the prior week’s digest in the context window when you run the prompt. The agent can then flag week-over-week trends instead of just reporting point-in-time numbers, and leadership starts to see patterns rather than snapshots.
Ask the agent to add one “question leadership should be asking” based on the data. This is the most useful addition I’ve seen people make to this prompt. It shifts the digest from reporting to strategic framing.
Send the digest as the primary communication with a link to the dashboard for anyone who wants to dig in. The summary should do the job on its own. The dashboard is for the 10% who want more, not the other way around.
8. The QBR Narrative Builder
Outcome: You walk into QBR prep with a complete draft narrative (what happened, what it means, what we’re changing) two days earlier than you used to, which means two days to pressure-test it before you’re in the room.
Why this matters: QBR prep is one of the most time-consuming things a senior marketing leader (VP and above) does. You spend two days pulling data from five sources, building slides, and writing the narrative that connects the numbers to the strategy. The thinking takes about two hours. The assembly takes two days. And the part that matters most to leadership (what it means and what you’re changing) is often the part that gets the least time because the assembly ran long.
How to run it: On-demand, in the last week of each quarter. Give it your pipeline report, channel attribution, campaign performance summary, and any board-level context about the company’s goals for the quarter. What comes back is a first-draft narrative in three sections: what happened (honest about what missed), what it means for the business (including pipeline coverage risk going into next quarter), and what you’re changing (three specific adjustments with reasoning). You still do the thinking. You spend your time refining instead of starting from blank.
The prompt:
You are a B2B marketing strategist writing a QBR narrative for a senior leadership audience that includes the CRO, CEO, and potentially the board. This audience is comfortable with numbers but impatient with jargon, hedging, or vague action plans.
I will provide: last quarter's pipeline sourced by channel, campaign performance vs. stated goals, attribution data, and the company's GTM priorities for the quarter.
Write a marketing QBR narrative in three sections:
Section 1 — What Happened: A plain-language summary of marketing performance against goals. Be honest about what missed and why, based on the data I've provided. Avoid euphemisms. If a channel underperformed, name it and say by how much.
Section 2 — What It Means: The business implications. State the pipeline coverage position going into next quarter. Name any risk worth surfacing to leadership before it becomes a surprise. If something overperformed, explain why and whether it's repeatable.
Section 3 — What We're Changing: Three specific adjustments we're making next quarter. For each, name what we're changing, why (grounded in the data from this quarter), and what we expect the impact to be.
Keep the full narrative under 400 words. Write like a CFO will read it. After completing the narrative, add one line: "The question leadership is most likely to ask that this narrative doesn't yet answer is: [question]."
Quarter data to analyze: [paste pipeline, campaign, attribution, and goal data here]
Stack requirements: CRM for pipeline and attribution exports (Salesforce or HubSpot), marketing automation platform for campaign performance data, attribution tool if you have one. No integration needed for manual use. This one runs fine quarterly with a copy-paste workflow.
Pro tips:
Run it once before you’ve finalized your own thinking on the quarter, then run it again after. The first pass surfaces angles you might have missed. The second sharpens the narrative you’ve already formed.
The “question leadership is most likely to ask” addition to the prompt is the most valuable line in it. It pre-empts the hard questions before you’re in the room, and it forces the agent to find the gap in your narrative rather than leaving it to your CRO to find it for you.
Include the prior quarter’s narrative as context before you run the prompt. The agent can then write a narrative that shows directional progress over time instead of treating this quarter in isolation.
9. The ICP Drift Detector
Outcome: A quarterly analysis that shows exactly where your actual closed-won buyer profile has drifted from your stated ICP, which firmographic and behavioral signals are most predictive of close right now, and where your paid, ABM, and outbound targeting is probably misaligned.
Why this matters: Your ICP definition gets written once, lives in a Google Doc, and rarely gets updated. Meanwhile, your closed-won data is telling you something different every quarter: which job titles are actually buying, which company sizes are actually renewing, which use cases convert fastest. The targeting in your paid campaigns and outbound sequences is often based on a profile that’s 6 to 12 months stale. We’ve all been in the planning meeting where demand gen is defending a paid channel and RevOps is questioning whether the leads are actually converting. That conversation usually ends without resolution because nobody has pulled the comparison.
How to run it: On-demand, once per quarter. Export your last 30 to 50 closed-won deals from your CRM with firmographic data (title, company size, industry, source, deal size) and any behavioral signals you have (content consumed, sequences that converted). Feed that against your stated ICP definition. What comes back is a direct comparison: where the actual profile diverges from what you’ve said it is, the signals most correlated with close, and specific adjustments to your targeting worth considering.
The prompt:
You are a B2B go-to-market analyst reviewing closed-won data to identify whether our actual buyer profile matches our stated ICP and where our targeting should be adjusted.
I will provide: (1) our current ICP definition, and (2) a dataset of our last 30 to 50 closed-won deals including firmographics (title, company size, industry, geography), deal source, deal size, time to close, and any behavioral signals available (content engaged, sequences that converted, events attended).
Analyze the closed-won data against the ICP definition and return:
ICP accuracy assessment: Where does the actual closed-won profile match the stated ICP, and where does it diverge? Be specific about the attributes that differ.
Top predictive signals: The three firmographic or behavioral attributes most correlated with closed-won deals in this dataset. Explain the pattern you see.
Targeting misalignment: Where is our current paid, ABM, or outbound targeting likely chasing the wrong profile based on what this data shows? Name the specific programs or segments at risk.
Recommended ICP updates: The specific changes to the ICP definition that this data supports. Frame each as a testable hypothesis, not a definitive conclusion.
Be direct about what the data shows, even where it contradicts our current assumptions. Note anywhere the dataset is too thin to draw a reliable conclusion.
Closed-won data and ICP definition: [paste your data and ICP here]
Stack requirements: CRM (Salesforce or HubSpot) for closed-won deal exports, enrichment tool to fill in firmographic gaps if your CRM data is incomplete (Clearbit, Apollo, or ZoomInfo). No integration needed for manual use.
To automate: Claude Code + HubSpot MCP or Salesforce MCP + Clearbit/Apollo API for firmographic enrichment. Pull closed-won deals directly via the CRM MCP, enrich missing firmographic data via an API call, run the comparison against your ICP definition, and output the findings. Can be scheduled quarterly via cron or triggered manually from the CLI when you’re ready to run it. The Claude Agent SDK is a good fit here if you want to chain the CRM pull, the enrichment step, and the analysis into a single automated workflow.
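If you go the automated route, the orchestration is mostly glue code. Here is a minimal sketch of the merge-and-assemble steps, with the CRM export and enrichment lookups stubbed as plain dicts; in practice those would come from your CRM MCP and your Clearbit/Apollo API calls, and the assembled prompt would be sent to Claude. All field names here are illustrative assumptions, not your actual CRM schema.

```python
# Hypothetical sketch: enrich closed-won deals, then assemble the analysis prompt.
# Replace the stubbed inputs with your real CRM export and enrichment responses.

def enrich_deals(deals, enrichment):
    """Fill missing firmographic fields from an enrichment lookup keyed by domain."""
    enriched = []
    for deal in deals:
        merged = dict(deal)
        extra = enrichment.get(deal.get("domain"), {})
        for field in ("title", "company_size", "industry"):
            if not merged.get(field):
                # Fall back to enrichment data only where the CRM field is empty
                merged[field] = extra.get(field, "unknown")
        enriched.append(merged)
    return enriched

def build_icp_prompt(icp_definition, deals):
    """Combine the ICP definition and enriched deal rows into one prompt string."""
    rows = [
        f"- {d['title']} | {d['company_size']} | {d['industry']} | ${d['deal_size']:,}"
        for d in deals
    ]
    return (
        "You are a B2B go-to-market analyst reviewing closed-won data...\n\n"
        f"Current ICP definition:\n{icp_definition}\n\n"
        "Closed-won deals:\n" + "\n".join(rows)
    )
```

From there, a cron entry (or a Claude Agent SDK workflow) calls these steps quarterly and pipes the assembled prompt to the model.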
Pro tips:
Run it separately for each major product line or use case if you sell across multiple segments. A blended analysis across very different buyer profiles produces output that’s true on average and useful for nobody specifically.
Ask the agent to flag where the ICP drift has implications for your demand gen budget allocation. That’s the finding that usually gets the most attention and creates buy-in for the targeting changes you want to make.
Share the output with sales leadership before you act on it. ICP drift findings land significantly better when sales leadership confirms they’re seeing the same pattern in their deals. It also creates shared ownership of the targeting changes instead of marketing making a unilateral call.
10. The Persona-Matched Email Sequencer
Outcome: A five-email nurture or outbound sequence tailored to a specific persona’s actual job pressures, objections, and language, instead of one sequence applied to everyone and optimized for average performance.
Why this matters: Most email sequences get written once, applied to every contact in the segment, and revisited only when the numbers are bad enough to force the conversation. A five-email sequence genuinely cannot speak to a VP of Sales worried about rep ramp time and a CMO worried about pipeline visibility at the same time. They have different jobs, different pressures, and completely different reasons to care about what you sell. Sending the same sequence to both means it underperforms for both, and neither persona gets a message that actually fits their world.
How to run it: On-demand, for each key persona in your target mix. Give it the persona details (title, company type, pain profile, where they are in the buying journey), the product angle you’re positioning to them specifically, and any account signals or context you have. What comes back is a five-email sequence with subject lines and body copy for each email, mapped to a clear progression: open the pain, reframe it, introduce proof, handle the objection, close with a low-friction next step. Run it once per persona instead of once per campaign.
The prompt:
You are a B2B email copywriter who specializes in persona-specific outbound and nurture sequences. You write like a peer who understands the reader's world, not like a vendor who wants their time.
I will provide: (1) the target persona including title, company type, typical day-to-day pain, and where they are in the buying journey, (2) the specific product angle or value proposition I'm positioning to this persona, and (3) any account signals, recent news, or context that's relevant.
Write a five-email sequence. For each email, provide a subject line and a 150 to 200 word body.
Email 1 — Open the pain: Start with a situation this persona recognizes from their day-to-day work. Do not introduce the product. End with a question or an observation that makes Email 2 irresistible to open.
Email 2 — Reframe: Offer an insight or a different way of seeing the problem that this persona probably hasn't considered. This is where you earn the right to talk about a solution.
Email 3 — Proof: Introduce a specific outcome or result, not a feature. Ground it in a customer story or data point. Make it relevant to this persona's specific situation.
Email 4 — Handle the objection: Address the most common pushback this persona raises. Acknowledge it directly before responding to it.
Email 5 — Low-friction next step: Propose the easiest possible next step that matches where this persona is in their buying journey. Do not ask for a demo if they're not ready for one.
Constraints: No "I noticed you" openers. No "hope this finds you well." No generic value propositions. Every line should feel like it could only have been written for this specific type of person.
Persona details and context: [paste persona profile, product angle, and account signals here]
Stack requirements: Claude Code + HubSpot MCP, triggered by a contact entering a specific list or lifecycle stage. When a new contact matches your persona criteria in HubSpot, a webhook fires a Claude Code script that pulls their firmographic data, runs the sequencer prompt, and writes the five emails back to HubSpot as enrolled sequence drafts. This requires a bit more setup than the others (webhook listener + HubSpot write permissions via the MCP) but it’s one of the highest-leverage automations on the list once it’s running.
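The webhook handler itself is a small piece of logic: filter the incoming contact against your persona criteria, then fill the sequencer prompt with that contact's context. Here is a minimal sketch of that filtering step; the payload shape, property names, and persona criteria are all assumptions, and the HubSpot write-back via the MCP is left out. Swap in the properties your portal actually uses.

```python
# Hypothetical sketch of the webhook handler's core logic. Property names
# (jobtitle, numberofemployees, industry) mirror common HubSpot fields but
# should be replaced with whatever your portal sends.

PERSONA_CRITERIA = {  # assumed persona definition for illustration
    "jobtitle_keywords": ["vp sales", "head of sales"],
    "min_company_size": 100,
}

def matches_persona(contact, criteria=PERSONA_CRITERIA):
    """Return True if the contact looks like the target persona."""
    title = (contact.get("jobtitle") or "").lower()
    size = int(contact.get("numberofemployees") or 0)
    return (any(k in title for k in criteria["jobtitle_keywords"])
            and size >= criteria["min_company_size"])

def build_sequencer_prompt(contact, product_angle):
    """Fill the persona-matched sequencer prompt with this contact's context."""
    return (
        "You are a B2B email copywriter who specializes in persona-specific "
        "outbound and nurture sequences...\n\n"
        f"Persona: {contact.get('jobtitle')} at a "
        f"{contact.get('numberofemployees')}-person "
        f"{contact.get('industry')} company.\n"
        f"Product angle: {product_angle}\n"
    )

def handle_webhook(payload, product_angle):
    """Process a contact webhook: filter by persona, then build the prompt."""
    contact = payload.get("properties", {})
    if not matches_persona(contact):
        return None  # wrong-fit contact; skip the sequencer entirely
    return build_sequencer_prompt(contact, product_angle)
```

The returned prompt would then go to Claude, and the five drafted emails written back to HubSpot as sequence drafts for human review before enrollment.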
Pro tips:
Include 2 to 3 verbatim quotes from that persona’s call transcripts before running the prompt (from your Win/Loss Synthesizer output). The sequence will use their actual language instead of your internal language for their problem, and the open rate difference is real.
Build a sequence for your anti-persona too: the wrong-fit buyer you keep attracting. A sequence that quickly disqualifies poor-fit accounts is just as valuable as one that converts the right ones. It saves your team time and your database health.
After 30 sends, paste the reply rates and any replies you’ve received back into Claude and ask it to diagnose which emails are generating responses and which are being ignored. Use that to iterate the sequence before the next 30.
Wow, this turned out to be a long one. Honestly, I’ve been sitting on it for a few weeks because it was one I knew I wanted to do, and I wanted to take my time to get it right.
So even though you had to wait, I hope it was worth it.
I’d love to hear some of the things you’re building. If you have anything you’d like to share, feel free to drop a comment or DM me (LinkedIn is the best place to reach me). I’d love to feature something you’re doing in the future.