The Ultimate AEO Guide: 12 Steps to Show Up In LLMs
How to win brand mentions in the AI search era
I’ve spent the last few months thinking about how LLMs decide which brands to mention when someone asks “best tools for X” or “[competitor] alternatives”. And I know many of you have been doing the same because SEO is different in the world of AI.
These models pull from whatever sources get cited and referenced most often across the web. If those sources don’t mention you, the model can’t talk about you. Your website could be flawless and it won’t matter if nobody else on the internet is bringing you up.
Here are some stats from the evolving SEO and AEO world.
Ahrefs ran a study across 75,000 brands, 76.7 million AI Overviews, and nearly a million prompts each on ChatGPT and Perplexity. Brand web mentions correlate with AI visibility at 0.664. Backlinks were at 0.218. Brands in the top 25% for web mentions get 10x more AI visibility than those in lower quartiles. That’s not a marginal difference. That’s a different sport.
Stacker Studio confirmed the other half of it. They studied 87 earned media stories across 30 clients and found a median 239% lift in AI search citations from earned media distribution compared to brand-owned content alone. 64% of AI citations came from third-party publishers, not the brand’s own site.
One stat that stuck with me was that 80% of URLs cited by ChatGPT, Perplexity, Copilot, and Google AI Mode don’t rank in Google’s top 100 for the original query.
So the job of SEO and demand gen teams has shifted from ranking higher to getting your brand mentioned in the articles, reviews, and Reddit threads that LLMs pull from when someone asks a buying question.
Here’s the tactical playbook for doing that.
How big is this change from SEO to AEO?
I want to put some numbers on this, because we all hear that it’s a change, but rarely do we see the data.
Google AI Overviews now appear in roughly 48% of all queries. Semrush tracked 155% growth from Q1 to Q4 2025. They’re expanding into commercial and transactional queries too (up from 8% to nearly 19% of triggers), which means this isn’t limited to informational searches anymore. Google AI Mode has a roughly 93% zero-click rate. Gartner predicted traditional search volume would drop 25% by 2026, and current data suggests that’s tracking close.
ChatGPT has 700 million weekly active users. Perplexity grew 370% year over year. SparkToro’s data shows AI tools have nearly tripled their usage share in the past year. Monthly AI sessions are now 56% the size of traditional search worldwide.
The volume is still relatively small (AI referrals are roughly 1% of total website traffic), but the traffic is good. HubSpot’s 2026 State of Marketing report found 58% of marketers say visitors from AI tools convert at higher rates than traditional organic. ConvertMate pegged AI referral traffic at converting 3x better, with those visitors being 4.4x more valuable. So the channel is small but the signal is strong, and it’s growing fast.
Ok, that’s enough preamble. Let’s get to the meat of it. Here are the steps that you need to take to win this new game.
Step 1: Decide which questions you want to win
It’s no longer about keywords; it’s about the questions people ask.
The average ChatGPT prompt is 23 words. The average Google search is 3.4 words. People are typing full questions with context. “Best project management tool for a remote team of 15 with Jira migration support” is a different animal than “project management software.” Your question list needs to reflect that.
Open a spreadsheet and write down the buying intent questions people ask in your category:
“Best [category] tools for [audience]”
“How do I [thing your product solves]?”
“[Competitor] alternatives”
“Which [product type] is best for [specific use case]?”
For each question, write 3-5 variants with different phrasing and different lengths. Add columns for Intent (comparison, how-to, problem-solution), Priority (1-3, where 1 = critical for revenue), and Current LLM Presence (do you show up today or not).
You’ll keep adding to this list as you pick up new prompts from sales calls, customer tickets, and social posts. Tools like ContentSage can speed this up by generating likely prompts from your website URL, but the manual work of mapping your category’s questions is worth doing yourself at least once.
One thing most people miss is that AI queries skew long tail. Semrush found roughly 60% of keywords triggering AI Overviews have 100 or fewer monthly searches. The questions that matter most for your brand are specific, high-intent queries that traditional keyword tools barely register.
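If you want to expand the question list programmatically before the manual pass, a minimal sketch is to cross templates with slot values. Everything below (the templates, categories, audiences, and competitors) is hypothetical example data; swap in your own:

```python
from itertools import product

# Hypothetical seed data -- replace with your own category, audiences, competitors.
templates = [
    "Best {category} tools for {audience}",
    "{competitor} alternatives",
    "Which {category} is best for {audience}?",
]
slots = {
    "category": ["project management software"],
    "audience": ["a remote team of 15", "freelancers"],
    "competitor": ["Jira"],
}

def expand(template: str) -> list[str]:
    """Fill a template with every combination of the slot values it uses."""
    names = [n for n in slots if "{" + n + "}" in template]
    combos = product(*(slots[n] for n in names))
    return [template.format(**dict(zip(names, c))) for c in combos]

questions = [q for t in templates for q in expand(t)]
```

This gives you a starting grid to paste into the spreadsheet; the Intent, Priority, and Current LLM Presence columns still need human judgment.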
Step 2: See what LLMs already say about you
Run those questions through the actual models, and record what comes back.
For each priority question, go to ChatGPT, Perplexity, and Google (with AI Overviews or AI Mode) and paste the question. You’re looking for two things: the answer content (does your brand appear? how are you described? who else shows up?) and the citations (every URL the model references).
Copy every cited URL into your spreadsheet: Question, Model, Cited URL. Do this across at least two models because the overlap between what different AI engines cite is shockingly low. Ahrefs found less than a 1 in 100 chance that ChatGPT recommends the same thing as Google (if that doesn’t tell you how much the game is changing, I don’t know what does). Google AI Overviews skew toward big brands. ChatGPT rewards broad distribution and consistency. Perplexity leans into niche directories.
You’ll end up with a messy list. Good.
Now, run the same prompt twice, a week apart. AI search visibility is way less stable than organic rankings. 40-60% of cited sources change month to month. Rand Fishkin ran a study with 600 volunteers running 12 prompts through ChatGPT, Claude, and Google AI Overviews a combined 2,961 times. His conclusion: rank tracking in AI search isn’t useful. There’s no stable “position.” What matters is how frequently your brand appears across many prompts, not where you land on any single one.
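Since there’s no stable position to track, the useful metric is appearance frequency: across all your logged runs, how often does the brand show up per question? A minimal sketch over a log of recorded answers (the run data below is made up for illustration):

```python
from collections import defaultdict

def mention_rate(runs: list[dict], brand: str) -> dict[str, float]:
    """Fraction of runs, per question, in which the brand appears in the answer text."""
    hits, totals = defaultdict(int), defaultdict(int)
    for run in runs:
        totals[run["question"]] += 1
        if brand.lower() in run["answer"].lower():
            hits[run["question"]] += 1
    return {q: hits[q] / totals[q] for q in totals}

# Hypothetical logged runs -- in practice, paste in what the models actually returned.
runs = [
    {"question": "best crm for startups", "model": "chatgpt",
     "answer": "Consider Acme CRM and HubSpot."},
    {"question": "best crm for startups", "model": "chatgpt",
     "answer": "HubSpot and Salesforce lead here."},
    {"question": "hubspot alternatives", "model": "perplexity",
     "answer": "Acme CRM is a lighter option."},
]
rates = mention_rate(runs, "Acme CRM")  # {'best crm for startups': 0.5, 'hubspot alternatives': 1.0}
```

Tracking this number week over week tells you whether the outreach work is moving the needle, even as individual answers churn.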
A few tools can help automate the monitoring, like Otterly.ai (free tier, covers ChatGPT, Perplexity, Gemini, Copilot, AI Overviews), Peec AI (includes optimization recs), Profound (enterprise, 10+ AI engines, 400M+ real prompts dataset), Semrush (AI Overviews tracking built into existing platform), HubSpot AEO ($50/month standalone), and Ahrefs Brand Radar.
Step 3: Expand to LLM-adjacent pages via Google
LLMs don’t only cite the pages they show you directly. They draw on the same ecosystem Google surfaces. For AI engines that use retrieval-augmented generation (which is most of them right now), your Google ranking still feeds into what the model sees.
Lily Ray has been hammering this point, and she’s right: because ChatGPT relies on Google’s search index via RAG, undermining your Google SEO to chase AI-specific shortcuts means losing AI visibility too. AEO requires good SEO as its foundation. You can’t skip ahead.
For each priority question, Google it in incognito. Collect URLs from the top 10-20 organic results. Skip homepages, product pages, and spam. Keep guides, explainers, comparison pages, and “best X tools” listicles. Add them to your spreadsheet with Discovery Source = Google and the SERP position.
These are the pages LLMs will pull from in future training data or retrieval passes. They expand your target list beyond what the AI models directly cited.
Step 4: Clean and score your source list
You now have a big ugly spreadsheet. Time to make it useful.
De-duplicate by URL. Add columns: Already Cited by LLM? (Yes/No), Top SERP? (Yes/No or rank), Page Type (guide, how-to, listicle, comparison, docs, review), Brand Already Mentioned? (Yes/No/Weakly).
Open each URL, skim it, fill in the columns. You’re separating the LLM relevant pages from noise.
Score them:
3 = LLM-cited AND top SERP (top targets)
2 = LLM-cited OR top SERP
1 = Relevant but neither
Filter to 2 or higher. That’s your hunting ground.
Two things to keep in mind as you score. ConvertMate found 83% of AI Overview citations come from pages outside the organic top 10 (so don’t dismiss lower-ranking pages). And pages with 20,000+ characters average 4.3x more citations than thin content. Long form guides are disproportionately valuable targets because LLMs have more material to cite from.
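The scoring above is simple enough to automate once your spreadsheet is exported. A sketch, assuming each row carries the two Yes/No columns as booleans (the URLs here are placeholders):

```python
def score(row: dict) -> int:
    """3 = LLM-cited AND top SERP, 2 = either, 1 = relevant but neither."""
    cited, serp = row["llm_cited"], row["top_serp"]
    if cited and serp:
        return 3
    if cited or serp:
        return 2
    return 1

# Hypothetical rows exported from the spreadsheet.
pages = [
    {"url": "https://example.com/best-x-tools", "llm_cited": True,  "top_serp": True},
    {"url": "https://example.com/how-to-x",     "llm_cited": False, "top_serp": True},
    {"url": "https://example.com/x-basics",     "llm_cited": False, "top_serp": False},
]
targets = [p for p in pages if score(p) >= 2]  # the hunting ground
```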
Step 5: Build your publisher and contact list
You know which pages matter. Now you need the humans behind them.
For each high priority URL, open the article and look for the byline, author bio, or About page. Search “[Author Name] + [Site Name] + LinkedIn” to find them. If you can’t find the author, look for the Head of SEO, Content Lead, or Marketing Manager at that company.
Use Hunter, Clay, or RocketReach for email addresses. Add Contact Name, Contact Role, Contact Email, and Confidence (High/Med/Low) to your spreadsheet. This is tedious and you’ll end up with half-complete entries. That’s normal. People change jobs. Bylines disappear. A “probable content manager, not sure” entry still gives you someone to email.
Step 6: Decide what you’re asking for
Not every page needs the same ask.
For listicles (“10 best tools for X”): are you missing? Ask to be added. Are you there but poorly described or buried at the bottom? Ask for an upgrade.
For how-to or explainer content: is there a natural spot for your product as a recommended solution or example? Could they add a “tools” section where you’d fit?
For comparison pages: are competitors listed and you’re not? Is the info about you outdated?
Add a column for Ask Type: listicle_add, listicle_upgrade, correction, add_recommendation, other. For each page, you’re figuring out the least intrusive, most helpful way to get included. Think of it as editorial planning for someone else’s content.
Step 7: Write and send outreach
The unglamorous part.
Create 2-3 core templates: one for listicle add/upgrade, one for corrections/updates, one for general “your article is great, we’d add value here.” Each template should reference their specific article by name, explain why your brand belongs with one or two concrete reasons, and make it easy by suggesting 1-2 sentences they could paste in with your URL and anchor text.
Load your CSV into a cold outreach tool (Smartlead, Apollo, Mailshake) with personalization tokens for first name, site name, article title, and suggested snippet. Initial email plus 1-2 follow-ups over 7-14 days.
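If you’re assembling the emails yourself before loading them into a tool, token substitution is a few lines of stdlib Python. The template wording and all the values below are hypothetical; the tokens mirror the columns in your spreadsheet:

```python
from string import Template

# Hypothetical listicle-add template -- $tokens map to spreadsheet columns.
listicle_add = Template(
    "Hi $first_name,\n\n"
    'I enjoyed "$article_title" on $site_name. One suggestion: $brand would fit '
    "the list. Feel free to paste this in:\n\n$snippet\n"
)

email = listicle_add.substitute(
    first_name="Dana",
    site_name="example.com",
    article_title="10 Best Widget Tools",
    brand="AcmeWidget",
    snippet="AcmeWidget -- a lightweight widget manager for small teams.",
)
```

`Template.substitute` raises a `KeyError` if a token is missing, which is exactly what you want: better a crash than a “Hi $first_name” landing in someone’s inbox.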
Reply rates are low. Maybe 1-2% if your targeting is sharp. The personalization quality is everything. Over-automate and you sound spammy. Hand-edit every email and you run out of hours. I don’t have a clean answer for this tradeoff. You’ll find your own line.
Step 8: Handle replies and close
When replies come in, classify them fast: hard no, soft no, info request, positive interest, counter (cash or reverse mention).
Know your boundaries before you start. Budget for paid placements? Willing to swap mentions (you add them to your resource page, they add you to their guide)? Any compliance constraints?
For reverse mentions, find pages on your own site where you could reasonably mention them. Partner pages, resource round-ups, tool recommendation posts. Propose a clear trade: “You add us here with this blurb, we add you here with that blurb.”
Log every outcome: Reply Type, Deal Type (earned, paid, reverse, hybrid), Next Step with a deadline. Every publisher is different. You’re juggling threads and managing two workstreams if reverse deals are in play. Keep your spreadsheet current or it spirals.
Step 9: Deliver content that’s easy to paste in
When someone says “sure, send me something,” be the easiest person they work with that day.
Open their article again. Match their tone and structure. Write 1-3 sentences that describe your product and fit their format. Include the anchor text and URL you want.
For a listicle, it looks like: “[YourBrand] — A [short description] that helps [who] do [result]. Especially useful if you [specific use case].”
Suggest exactly where it fits: “We’d slot after [Competitor X] in the list” or “This could be a bullet under ‘Tools for [use case].’” Send a clean packet: suggested snippet, URL, anchor text, an optional alt version. Don’t paste the same generic blurb across 30 sites. It won’t fit half the time and publishers notice.
Step 10: Deliver on your commitments
If you agreed to add mentions on your own site, follow through. List all obligations with the page, the mention, and the deadline. Draft each snippet, implement it, verify the live page, and send a quick confirmation. Don’t let open commitments pile up. You need these publishers to keep working with you, and they will remember if you flaked.
Step 11: Verify mentions and track LLM impact
A placement doesn’t count until it’s live and indexable. Keep a verification tab: Publisher Page URL, Expected Anchor Text, Expected Target URL, Status (pending, live, broken).
For each “live” report, open the URL and confirm the link is there with the right destination and reasonable context. Every few weeks, re-run your core questions in ChatGPT and Perplexity. You’re looking for your brand appearing more often and new citations from pages where you earned mentions. There’s a lag between a mention going live and LLMs picking it up (training schedules and retrieval refresh cycles vary), so don’t expect it to show up overnight.
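The live-check itself can be scripted with the stdlib HTML parser. This sketch verifies that a page contains a link with the expected destination and anchor text; in practice you’d fetch the publisher page first, and the HTML and URLs below are canned examples:

```python
from html.parser import HTMLParser

class LinkChecker(HTMLParser):
    """Collects (href, anchor text) pairs so a placement can be verified."""
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def mention_is_live(page_html: str, target_url: str, anchor_text: str) -> bool:
    parser = LinkChecker()
    parser.feed(page_html)
    return any(href == target_url and anchor_text.lower() in text.lower()
               for href, text in parser.links)

# Canned example page -- fetch the real publisher URL in practice.
page = '<p>Try <a href="https://acme.example">Acme CRM</a> for small teams.</p>'
live = mention_is_live(page, "https://acme.example", "Acme CRM")  # True
```

Run it over the verification tab on a schedule and flip Status to “broken” whenever a previously live link disappears.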
ConvertMate found content updated within 30 days gets a 3.2x citation multiplier. Recently refreshed pages where your mention appears get pulled into AI responses faster.
Step 12: Build your learning loop
This can’t be a one-off project.
Monthly, review your data. Which opportunities became live mentions? Which were LLM-cited? Which were high authority SERP pages? Which outreach templates got replies? Which angles closed easiest? Use that to refine your source targeting, your pitch, and your offer structure. The brands treating this as a continuous program are the ones seeing results compound.
What else you should be doing
The outreach playbook gets your brand into the sources LLMs cite, and that’s the core play. But I’ve been going through the latest research, and there are several things you should be doing on your own properties right now that multiply the outreach work.
I’ll start with the one that surprised me. ConvertMate found 44.2% of LLM citations come from the first 30% of a page’s content. That means you need to front-load your answers. Put the direct answer in the first 40-60 words of each section, then support it. I checked a few of my own client sites after reading this and we were burying the answer under three paragraphs of setup on almost every page. Also, 68.7% of cited pages follow a clean H1, H2, H3 heading hierarchy, and 61% use structured data markup (FAQ, HowTo, Product, Article schema). If your content team hasn’t done this yet, that’s where I’d start.
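On the structured data point: FAQ markup is just JSON-LD dropped into a `<script type="application/ld+json">` tag, and it’s easy to generate from the question list you built in Step 1. A minimal sketch using the schema.org FAQPage shape (the question/answer pair is a made-up example):

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Render question/answer pairs as schema.org FAQPage JSON-LD."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    return json.dumps(data, indent=2)

# Hypothetical FAQ content -- use the real questions from your Step 1 spreadsheet.
markup = faq_jsonld([
    ("What is AEO?",
     "Answer engine optimization: earning brand mentions in the sources LLMs cite."),
])
```

Validate the output with Google’s Rich Results Test before shipping; malformed schema is worse than none.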
Original research is the single biggest differentiator for getting cited. The Optimist, a B2B content agency, documented a 4,900% revenue increase from LLM-referred sources over 14 months for one client. Their play was first-party research and proprietary datasets. Surveys, benchmarks, data nobody else had. Makes sense when you think about it. LLMs prefer citing primary sources over summaries of primary sources. If you can produce original data in your category, you skip the line.
This next one I didn’t see coming. YouTube mentions are the strongest correlator with AI visibility in the entire Ahrefs study at 0.737. Stronger than web mentions (0.664). Way above backlinks (0.218). Video content gets extracted through transcripts, descriptions, and metadata. I’ll be honest, I’ve been underweighting video for my clients and I’m rethinking that now. If you’re not producing video about your product category, you’re missing what the data says is the strongest signal.
Reddit too… I know, I know. But Tinuiti’s AI Citations Trends Report found Reddit accounts for 5%+ of all ChatGPT citations. On Perplexity, roughly 24% of total citations. Reddit content appears in 68% of AI answers across ChatGPT, Claude, Perplexity, and Gemini. Authentic participation in relevant subreddits (not buying accounts to seed content, which gets caught) is a real channel now. A negative Reddit thread about your brand can shape a buyer’s first impression through AI before you ever interact with them directly.
Freshness matters, but don’t game it. Pages updated within 30 days get a 3.2x citation multiplier, but Lily Ray caught multiple cases where artificial refreshing (changing timestamps without adding anything new) got detected and penalized. Update because you have something worth adding.
Kevin Indig’s data changed how I think about content architecture. Focused pages covering 26-50% of a query’s subtopics outperform exhaustive guides that try to cover 100%. I would have guessed the opposite. A Backlinko case study on Descript confirmed it: audience-specific pages get 2.3x more AI citations than generic product pages. One audience, one use case, one problem per page. That’s the right unit.
Third-party coverage compounds everything. Conductor’s 2026 report (17 million AI responses, 100+ million citations) found brands with topical authority clusters see 2.5x higher citation rates. Third-party media coverage makes brands 5x more likely to be cited. And brands are 6.5x more likely to be cited through someone else’s domain than their own. Earned media, digital PR, podcast appearances, conference write-ups. The work that’s hard to scale is the work that compounds.
What not to do
I want to flag the growing list of AEO tactics that backfire. Lily Ray has been doing the best work tracking these and I’d pay attention.
Mass-producing AI-generated content targeting AI search queries leads to what she calls the “Mount AI” pattern: rapid traffic growth followed by algorithmic collapse, and there are multiple case studies on this now. Creating excessive “alternatives” or “vs” pages backfires too. Companies with 51+ of these pages saw both blog traffic and ChatGPT citations decline. Buying Reddit accounts to seed branded content gets caught. Adding hidden LLM instructions to your pages is a bad idea (Microsoft flagged this as “AI Recommendation Poisoning” in their February 2026 security report). And llms.txt files? No major LLM has confirmed using them.
Google’s John Mueller put it well at Google Search Live: “Good AEO still relies on good SEO. Tricks will come out and they will work for a short time.”
Eli Schwartz made a point that when everyone optimizes from the same AEO checklist, you’ve created a red ocean. Hundreds of companies end up following identical steps, producing identical content. The playbook in this issue gets you the mechanics. But original thinking about your category, your positioning, and your point of view is still what separates you from everyone else running the same checklist.
Where this leaves you
Traditional SEO = make your website rank higher.
AEO = make your brand appear across the sources LLMs trust.
Your website matters (structure it for extraction, create original research, keep it current). But it’s not enough by itself. You need to show up in the places AI models pull from when someone types a buying question into ChatGPT.
The outreach playbook is the spine of this. The on-site and off-site work multiplies the impact. Avoiding shortcuts that undermine your Google rankings (which AI models still rely on via RAG) keeps the whole thing working.
It’s a lot of work. It’s slow. And the teams that start building now will have a moat the rest of the market can’t copy by reading a blog post.
If you want help with all of this and want a way to streamline the process, check out Noble (no, this is not a paid ad – they’re a former client, and I love what they’re doing).