
What Sources Do AI Engines Actually Cite? We Analyzed 18,000 Responses to Find Out



SEO tool blogs, Google properties, and educational sites dominate AI citations — while social media is virtually absent. A data study of 1,548 brands across ChatGPT, Claude, Gemini & Perplexity.

GeoBuddy Research
March 21, 2026 · 12 min read
18K+ AI Responses · 1,548 Brands Tested · 4 AI Engines · 800+ Industries
AI engines connected to citation sources — visualization of how ChatGPT, Claude, Gemini, and Perplexity choose their sources

When ChatGPT recommends a CRM tool or Gemini suggests a skincare brand, where does that answer actually come from? What web pages, what domains, what type of content does AI consider trustworthy enough to cite?

We analyzed 18,000+ AI-generated responses across ChatGPT, Claude, Gemini, and Perplexity covering 1,548 brands in 800+ industries. We extracted every citation, tracked every mention, and reverse-engineered the content patterns that make AI engines trust certain sources over others.

Finding #1: Half of All Brands Are Invisible to AI

49.5% of the brands we tested received zero visibility across all four AI engines. When we asked AI about products and services in their category, nearly half the time the brand simply wasn't mentioned — by any engine.

Only 5.2% of brands achieved strong visibility (a score of 76 or higher). If your brand isn't actively optimizing for AI discoverability, the odds are roughly a coin flip that AI doesn't know you exist.

Data visualization showing 49.5% of brands are invisible to AI engines

Brand Visibility Distribution

49.5% of 1,548 brands are completely invisible to AI engines

  • Invisible (0): 766
  • Low (1-25): 359
  • Medium (26-50): 196
  • Good (51-75): 147
  • Strong (76-100): 80
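The headline percentages follow directly from the bucket counts above. A minimal sketch (the bucket labels and counts are taken from the study; the code itself is just the arithmetic):

```python
# Visibility-score buckets from the study (counts out of 1,548 brands).
buckets = {
    "Invisible (0)": 766,
    "Low (1-25)": 359,
    "Medium (26-50)": 196,
    "Good (51-75)": 147,
    "Strong (76-100)": 80,
}

total = sum(buckets.values())  # 1,548 brands
for label, count in buckets.items():
    share = 100 * count / total
    print(f"{label}: {share:.1f}%")  # e.g. Invisible (0): 49.5%
```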

Finding #2: Each AI Engine Tells a Different Story

AI engines dramatically disagree about which brands to recommend. Only 6.7% of the time did all four engines agree to mention a brand. 65.7% of the time, no engine mentioned the brand at all.

12.7% of mentions were exclusive to a single engine — one AI recommended the brand, the other three didn't. Perplexity had 270 exclusive mentions, nearly 5× more than Gemini's 56.

Four AI engines diverging in different directions — representing disagreement on brand recommendations

Engine Agreement

How many engines agree to mention the same brand for the same prompt?

  • No engine mentions: 65.7%
  • Only 1 engine: 12.7%
  • 2 engines: 7.2%
  • 3 engines: 7.7%
  • All 4 agree: 6.7%
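The agreement distribution can be derived by counting, for each brand-and-prompt pair, how many engines mentioned the brand. A minimal sketch with hypothetical data (the per-engine flags below are illustrative, not from the dataset):

```python
from collections import Counter

# Hypothetical per-(brand, prompt) mention flags, one bool per engine,
# in the order ChatGPT, Claude, Gemini, Perplexity.
rows = [
    (True, False, False, True),    # 2 engines agree
    (False, False, False, False),  # no engine mentions the brand
    (True, True, True, True),      # all 4 agree
    (False, False, False, True),   # exclusive to one engine
]

# Tally how many engines mentioned the brand in each row.
agreement = Counter(sum(flags) for flags in rows)
for n_engines in range(5):
    share = 100 * agreement[n_engines] / len(rows)
    print(f"{n_engines} engine(s) mention the brand: {share:.1f}%")
```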

Exclusive Mentions by Engine

Brands recommended by only one engine — Perplexity is the most 'opinionated'

Finding #3: AI Engines Have Very Different Personalities

Gemini is the pickiest but most generous — it mentions brands least often (13.2%), but when it does, it ranks them highest (avg position 1.9) and with the most positive sentiment (0.75). Gemini doesn't “list” brands; it recommends them.

Perplexity is the most inclusive — it mentions brands 24% of the time, casting the widest net thanks to its live web search on 100% of queries.

Claude is the most cautious — lowest mention rate (19.3%) and lowest average sentiment (0.63).

Engine Mention Rates

How often each AI engine mentions brands when asked about their category

Engine Personality Profiles

Each AI engine has a distinct pattern of behavior

Finding #4: How AI Frames Your Brand Matters More Than Being Mentioned

When Gemini mentions a brand, 51% of the time it positions it as the primary recommendation. The other engines are more likely to list brands as “alternatives” — ChatGPT and Claude both do this 51-53% of the time.

Being mentioned by Gemini is effectively an endorsement. Being mentioned by ChatGPT often means you're the second or third option listed. The framing matters as much as the mention.

How AI Engines Frame Brands

Role distribution when a brand IS mentioned — Gemini endorses, others list alternatives

Finding #5: The Sources AI Trusts

When AI engines cite sources (321 instances across our dataset), a clear hierarchy emerges:

  1. Google properties dominate — ads, analytics, developers, and search account for 85 citations (26%).
  2. SEO tool blogs are AI's favorite — Semrush, Ahrefs, Moz, and Neil Patel collectively account for 55 citations (17%). Their long-form, data-rich comparison articles are exactly what AI loves to draw from.
  3. Educational institutions punch above their weight — Khan Academy, Coursera, and freeCodeCamp are cited despite being non-commercial.
  4. Almost zero social media — Reddit, Twitter, and Instagram are virtually absent from citations.
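A domain hierarchy like this comes from grouping cited URLs by their root domain. A minimal sketch of that grouping step, using hypothetical URLs and a deliberately naive root-domain heuristic (it keeps the last two host labels, so it mishandles suffixes like .co.uk):

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical citation URLs extracted from AI responses.
citations = [
    "https://www.semrush.com/blog/best-crm-tools/",
    "https://developers.google.com/search/docs",
    "https://ahrefs.com/blog/seo-statistics/",
    "https://www.khanacademy.org/economics",
    "https://www.semrush.com/blog/keyword-research/",
]

def root_domain(url: str) -> str:
    # Naive eTLD+1 heuristic: keep the last two labels of the host.
    host = urlparse(url).netloc.lower()
    return ".".join(host.split(".")[-2:])

domain_counts = Counter(root_domain(u) for u in citations)
print(domain_counts.most_common(3))
```

A production pipeline would use a public-suffix-list library for correct root domains, but the counting logic is the same.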
Citation chain showing documents and websites connected by luminous links — AI source trust hierarchy

Top Cited Domains by AI Engines

321 citations analyzed — SEO tools and Google properties dominate


Finding #6: Response Depth Varies Wildly

Gemini writes 2.6× more than Claude on average (3,162 vs 1,225 chars). This isn't just verbosity — longer responses mean more context, more competitor mentions, and more nuance. A Gemini response might compare 8-10 brands in detail, while Claude's response covers 3-4.
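The 2.6× figure is just the ratio of the two reported averages:

```python
# Average response length in characters, as reported in the study.
avg_chars = {"Gemini": 3162, "Claude": 1225}

ratio = avg_chars["Gemini"] / avg_chars["Claude"]
print(f"Gemini writes {ratio:.1f}x more than Claude")  # 2.6x
```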

Response Depth by Engine

Average response length — Gemini writes 2.6× more than Claude

Finding #7: Industry Is Destiny

B2B software dominates while DTC consumer brands are invisible. Team Communication averages 41.6 visibility; Skincare averages 0.7. That's a 60× gap.

Why? AI engines are trained on comparison articles, review sites, and technical documentation — content that B2B SaaS generates naturally. Fashion and skincare rely on visual content (Instagram, TikTok) and influencer marketing, none of which AI can synthesize.

For DTC brands, this is actually an opportunity. 91% of skincare brands are invisible — meaning there's virtually zero competition in AI search. The first mover in each DTC category gets a massive advantage.

Industry AI Visibility Ranking

B2B SaaS dominates; DTC consumer brands are almost entirely invisible

What This Means for Your Brand

1. Don't optimize for just one engine

93.3% of the time, engines disagree. A brand visible on Perplexity may be completely invisible on Gemini.

2. Aim for primary recommendation, not just mentions

Being “an alternative” is a consolation prize. Study what makes Gemini endorse (51% primary recommendation rate) vs. what makes ChatGPT list you third.

3. Create cite-worthy content

SEO blogs, comparison articles, and educational content get cited most. Write the definitive “Best X in [category]” content. Be the source that AI sources cite.

4. Monitor across all four engines

AI recommendations change as models update. What works today may not work next quarter. Check your brand for free across all four engines.

Methodology

  • Dataset: 1,548 unique brands across 800+ industries
  • Engines tested: ChatGPT, Claude, Gemini, Perplexity
  • Total AI responses: 18,002
  • Time period: February–March 2026
  • Metrics: Visibility score (0-100), mention rate, ranking position, sentiment (-1 to 1), role classification, citation extraction

FAQ

What sources do AI engines like ChatGPT cite most?

The most-cited domains are Google properties (26% of citations), SEO tool blogs such as Semrush, Ahrefs, and Moz (17%), and educational sites (Khan Academy, Coursera). Social media is virtually absent.

Do AI engines agree on which brands to recommend?

Only 6.7% of the time do all four engines agree. 65.7% of the time, no engine mentions the brand at all.

Which AI engine mentions brands most often?

Perplexity leads at 24%, followed by ChatGPT (21.7%), Claude (19.3%), and Gemini (13.2%). Gemini has the most positive sentiment when it does mention a brand.

Tags: AI Citations, Source Analysis, ChatGPT, Claude, Gemini, Perplexity, Research, GEO

What Are AI Engines Saying About Your Brand?

Find out in 60 seconds — free check across ChatGPT, Claude, Gemini & Perplexity.

Check Your AI Visibility Free