GEO Audit on My Own Site: 59/100 — and Perplexity Already Cites Me

I ran a free GEO audit tool on RankingHacks this week. The composite score came back 59/100 — MODERATE. That grade is wrong, and it’s wrong for an interesting reason: Perplexity already cites this site for 3 of 4 niche AI-SEO queries I tested. The tool I used doesn’t check actual citations. It scores you on signals that correlate with citability. Useful, but not ground truth.

The tool is geo-seo-claude by Zubair Trabzada — a free, MIT-licensed Claude Code skill covering citability scoring, AI crawler analysis, schema markup audits, and platform-specific readiness. Over six thousand stars. No SaaS. No API keys to buy. Installs in thirty seconds. If you publish anything solo, you should run it once on your own site just to see what it flags.

What follows is the audit RankingHacks received, what the tool got right, what it got wrong, and what I’m actually fixing. Treat it as a worked example, not a product review.


The Score: 59/100, MODERATE

The tool weights six categories. Here is how RankingHacks scored in each, with the weighted contribution that produces the composite.

Category                     Weight   Score    Status
AI Citability & Visibility   25%      70/100   GOOD
Brand Authority Signals      20%      25/100   CRITICAL
Content Quality & E-E-A-T    20%      55/100   MODERATE
Technical Foundations        15%      75/100   GOOD
Structured Data              10%      80/100   STRONG
Platform Optimization        10%      60/100   MODERATE
TOTAL                        100%     59/100   MODERATE

Two numbers pop out. Structured data at 80 is strong because Rank Math Pro generates comprehensive JSON-LD on every post. Brand authority at 25 is critical because third-party mentions of “RankingHacks” across Reddit, LinkedIn, YouTube, Medium, and Hacker News are effectively zero. That mismatch — strong on-site technicals, weak off-site authority — is the real story of this audit.
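The arithmetic is easy to verify. A few lines reproduce the composite from the category table — a weighted average of the six scores, rounded to the nearest integer (the rounding rule is my assumption; the tool does not document it):

```python
# Category -> (weight, score), as reported in the audit table.
weights_and_scores = {
    "AI Citability & Visibility": (0.25, 70),
    "Brand Authority Signals":    (0.20, 25),
    "Content Quality & E-E-A-T":  (0.20, 55),
    "Technical Foundations":      (0.15, 75),
    "Structured Data":            (0.10, 80),
    "Platform Optimization":      (0.10, 60),
}

# Weighted average: weights sum to 1.0, so no normalization needed.
composite = sum(w * s for w, s in weights_and_scores.values())
print(composite)         # 58.75
print(round(composite))  # 59
```

Which also shows why the composite is so sensitive to the two off-site categories: citability and brand authority together contribute 22.5 of the 58.75 points.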


What the Tool Got Right

AI crawlers have full access

GPTBot, ClaudeBot, and PerplexityBot all return HTTP 200 when tested against rankinghacks.com directly with each user-agent. Robots meta is follow, index. Nothing blocked. Nothing to fix. This is now the default for most WordPress sites, but Originality.ai’s 2025 study found that 35 percent of the top 1,000 publishers actively block at least one major AI crawler. If you are a solo publisher, blocking is strategic self-harm.
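If you want to reproduce the check without hammering your server, the stdlib can answer the policy half offline. The sketch below parses a robots.txt and asks whether each AI crawler may fetch a page; the rules shown are a hypothetical sample, not the live rankinghacks.com file (the audit's actual test was a per-user-agent HTTP fetch, which this complements rather than replaces):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: a generic WordPress-style default.
SAMPLE_ROBOTS = """\
User-agent: *
Disallow: /wp-admin/

User-agent: GPTBot
Allow: /
"""

parser = RobotFileParser()
parser.parse(SAMPLE_ROBOTS.splitlines())

# ClaudeBot and PerplexityBot fall under the * group; GPTBot has its own.
for bot in ("GPTBot", "ClaudeBot", "PerplexityBot"):
    ok = parser.can_fetch(bot, "https://rankinghacks.com/some-post/")
    print(f"{bot}: {'allowed' if ok else 'blocked'}")
```

With this sample all three bots come back allowed for regular posts; swap in your own robots.txt text to see who you are actually turning away.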

llms.txt already ships

Rank Math generates /llms.txt automatically — a 32KB document indexing 89 posts with their excerpts. Most sites have not heard of this file. It has no formal spec and no ranking confirmation from any platform. But it is trivially easy to ship and costs nothing to maintain, and if the format stabilizes into anything meaningful, late adopters will be behind. RankingHacks ships it. If your CMS does not, it should.
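For reference, the proposed format is just markdown: an H1 with the site name, a blockquote summary, then H2 sections listing links with short descriptions. A minimal hand-rolled version looks like this (titles and descriptions below are illustrative, not the actual Rank Math output):

```markdown
# RankingHacks

> Practitioner frameworks and conference-talk breakdowns for SEO in the AI-search era.

## Posts

- [Context Density: An SEO Framework](https://rankinghacks.com/context-density-seo-framework/): What context density is and why chunk-level retrieval rewards it.
```

If your CMS cannot generate one, a static file in this shape at /llms.txt satisfies the proposal as it stands today.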

Schema is comprehensive

Every post carries Article, Person, Organization, WebSite, ImageObject, and WebPage JSON-LD blocks. This is Rank Math Pro doing its job. The audit flagged two real gaps: no FAQPage markup on any post, and Person.sameAs only contains the brand’s Twitter — nothing off-site that proves the author exists as an entity. Both fixable in an afternoon.
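The sameAs fix is a one-block change. Something like the following, where the author URL comes from the audit and every other profile URL is a placeholder to replace with real off-site destinations:

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Andreas",
  "url": "https://rankinghacks.com/author/andreasderosi/",
  "sameAs": [
    "https://twitter.com/rankinghacks",
    "https://www.linkedin.com/in/your-handle",
    "https://www.youtube.com/@your-channel"
  ]
}
```

The point of sameAs is entity reconciliation: each URL is a claim that this Person node and that off-site profile are the same entity, which is exactly the signal the audit says is missing.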

The brand mentions gap is real

I ran the Brave search the tool suggested. Two Reddit results for “rankinghacks” — both generic phrasing, not the brand. Three YouTube hits — unrelated channels. One LinkedIn article — an unrelated Upwork piece. Zero on Wikipedia, Medium, or Hacker News. The only off-domain result that matches is RankingHacks’ own disclaimer page indexed elsewhere. For a site that wants to be cited by LLMs, that is the most important finding in the whole report. Ahrefs’ December 2025 analysis found brand mentions correlate roughly three times more with AI search visibility than backlinks do. Fixing this is slower than any on-site change but it is where the leverage lives.

The /about/ page returns 404

A personal-brand SEO site with no About page is a trust-signal failure. The author page at /author/andreasderosi/ works, but that is a WordPress taxonomy archive, not an editorial bio. Creating a proper About page with credentials, publication history, speaking profiles, and a real headshot is the first fix on the list.


The Perplexity Reality Check

A citability score is a prediction. A citation is a fact. Before taking the tool’s score at face value, I asked Perplexity four questions that should surface RankingHacks content if the content is doing its job.

Query                                              Cited?           Source
What is context density in SEO?                    Yes, #1 source   /context-density-seo-framework/
What did Kyle Roof say about SEO for the AI era?   Yes              /kyle-roof-seo-strategies-for-the-ai-era/
How do you audit E-E-A-T with LLMs?                Yes              /tom-winter-auditing-measuring-eeat-with-llms/
What is parasite SEO in 2026?                      No               —

The pattern is obvious. RankingHacks gets cited when the query is about a specific practitioner framework or conference talk — territory where the synthesizers at Ahrefs, Semrush, and WebFX do not compete. It does not get cited on generic SEO questions where those sites dominate. That is a topical positioning result, not a technical one. The fix is not better schema. The fix is more posts in the niche and fewer posts in the mass market.

The parasite SEO miss is especially revealing. The site has a post on parasite SEO and LLM domination — it exists, it is indexed, it is in the sitemap. But it does not surface in Perplexity for a related query. That is a retrieval problem, not an indexing problem. It suggests the chunk-level semantic density of that specific post is lower than the competing chunks on LinkedIn, Reddit, and industry blogs where “parasite SEO” is discussed in more detail.


What the Tool Got Wrong

The 134-to-167 word paragraph heuristic is outdated

The tool scores “AI-friendly paragraph length” at a 134-to-167 word sweet spot and marks a paragraph thin if it falls below. RankingHacks scored zero out of 57 paragraphs on that metric for the context density post. Zero out of 12 on the Kyle Roof post. Zero out of 18 on the LLM-driven SEO post. The scoring says the prose is too short. Perplexity says the prose is working.

The heuristic treats the paragraph as the retrieval unit. It is not, at least not anymore. Modern retrievers chunk on semantic sections — paragraphs, headings, lists — and often combine adjacent units into a single embedding. A post with a dense H2, three short paragraphs, and a seven-row table can produce a higher-quality chunk than one long 150-word paragraph. The chunk-ranking paradigm post goes into detail on this. Paragraph length is noise in that model.
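To make the difference concrete, here is a toy sketch of section-level chunking: split on H2 headings, then merge adjacent sections that fit a shared size budget. Real retrievers differ in the details — the 400-character budget is an arbitrary stand-in for a token limit, not any platform's documented behavior:

```python
def chunk_markdown(text: str, budget: int = 400) -> list[str]:
    # Split into sections at H2 headings.
    sections, current = [], []
    for line in text.splitlines():
        if line.startswith("## ") and current:
            sections.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        sections.append("\n".join(current).strip())

    # Merge adjacent sections while they fit the budget: a dense heading,
    # short paragraphs, and a table can land in one embedding-sized chunk.
    chunks: list[str] = []
    for sec in sections:
        if chunks and len(chunks[-1]) + len(sec) <= budget:
            chunks[-1] = chunks[-1] + "\n\n" + sec
        else:
            chunks.append(sec)
    return chunks

doc = """## Dense heading about context density
Short paragraph one.

Short paragraph two.

## Another section
| term | score |
|------|-------|
| a    | 1     |
"""
print(len(chunk_markdown(doc)))      # default budget: everything merges
print(len(chunk_markdown(doc, 50)))  # tiny budget: sections stay separate
```

Under this model, three 20-word paragraphs under one heading are not three thin units — they are one chunk, and the chunk's density is what competes.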

The composite score over-weights off-site signals

Twenty-five percent of the score goes to citability and twenty percent to brand authority. That means nearly half the composite depends on things a brand-new site cannot fix in any reasonable timeframe. A well-built, technically sound, content-rich solo site will score MODERATE at best purely because it is new. A decade-old generic site with two thousand brand mentions and mediocre content can score higher. If you are using this tool on a new site, the findings list is more useful than the composite.

There is no actual citation test

The tool estimates readiness. It does not verify outcome. The four-query Perplexity check above took five minutes and told me more than the composite score did. Any citation audit should pair a scored-signal approach with a real-world citation probe. If your automated tool does not do the probe, you need to do it manually — and the manual probe is the one that should decide your content strategy.
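The probe itself needs almost no tooling. Ask the question in the AI platform, copy the cited source URLs, and check for your domain — a helper like this does the bookkeeping (the probe dictionary mirrors two of the four queries above; its citation lists are illustrative, not the actual Perplexity responses):

```python
from urllib.parse import urlparse

def cited(domain: str, citation_urls: list[str]) -> bool:
    # Match the domain itself or any subdomain of it.
    def host(url: str) -> str:
        return urlparse(url).netloc.lower()
    return any(h == domain or h.endswith("." + domain)
               for h in map(host, citation_urls))

probe = {
    "What is context density in SEO?": [
        "https://rankinghacks.com/context-density-seo-framework/",
        "https://ahrefs.com/blog/brand-mentions/",
    ],
    "What is parasite SEO in 2026?": [
        "https://www.linkedin.com/pulse/parasite-seo-post/",
        "https://www.reddit.com/r/SEO/comments/parasite-seo/",
    ],
}

for query, urls in probe.items():
    status = "cited" if cited("rankinghacks.com", urls) else "not cited"
    print(f"{query}: {status}")
```

Run the same probe monthly on the same queries and the trend line tells you whether your editorial changes are landing — something no readiness score can.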


What I Am Actually Fixing

Quick wins (under one hour each)

  • Create a proper /about/ page. Currently 404. Who Andreas is, expertise history, publications, affiliations. Add Person plus AboutPage schema. Header-nav link.
  • Expand Person.sameAs across the site. Add personal LinkedIn, personal Twitter, any SEO conference speaker profiles, contributor pages. Rank Math sets these globally — one change propagates to every post.
  • Replace the Gravatar. A real headshot, used in Person.image across the site. Minor but cumulative.
  • Add FAQPage schema to the five most-cited posts. Three to five Q&A pairs at the end of each post, mirroring likely AI search queries. Rank Math has an FAQ block. This unlocks AI Overview rich-result eligibility.
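For the FAQ item, the markup Rank Math's FAQ block emits is roughly this shape — one Question per Q&A pair, each with a self-contained Answer. The first question is one of the verified Perplexity probes; the answer texts here are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is context density in SEO?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A one-paragraph answer written to stand alone when extracted."
      }
    },
    {
      "@type": "Question",
      "name": "How is context density measured?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A second self-contained answer, phrased the way a searcher would ask."
      }
    }
  ]
}
```

Write the answers so they survive being quoted without the surrounding post — that is the form AI answers lift.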

Medium-term (one to four weeks)

  • Fix mobile LCP. Currently 5.0 seconds against a 2.5-second target. Likely causes: featured image not preloaded, above-the-fold fonts blocking render. A <link rel="preload"> for the hero image and font-display: swap on the stack should cut it in half.
  • Deepen the niche, don’t widen it. RankingHacks wins on specific practitioner frameworks and loses on generic SEO topics. The leverage play is more niche, not more mass. One conference-talk breakdown per week beats three how-to posts competing with Ahrefs for generic queries.
  • Deliberate brand mentions. Cross-post summaries to Medium under Andreas’s byline with backlinks. Answer five to ten seed questions per month on r/SEO and r/bigseo. Submit talks to SEO conferences. File a Wikidata entry. All zero-cost. All compounding.
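The LCP fix above amounts to two small additions in the page head. Filenames and the font name are placeholders; the mechanism is standard:

```html
<!-- Fetch the hero image early, before layout discovers it. -->
<link rel="preload" as="image" href="/wp-content/uploads/hero.webp" fetchpriority="high">

<style>
  @font-face {
    font-family: "BodyFont";
    src: url("/fonts/body.woff2") format("woff2");
    /* Render text immediately with a fallback, swap in the webfont later. */
    font-display: swap;
  }
</style>
```

Whether this halves a 5.0-second LCP depends on what is actually blocking render, so re-measure after each change rather than assuming.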

Strategic (one to six months)

  • Ship original research. RankingHacks synthesizes other people’s frameworks well. One original data study per quarter — “I analyzed 500 AI Overviews; here’s what they cite” — moves the site from good synthesis to primary source. Primary sources get cited at a different rate entirely.
  • Short-form video. Five talking-head videos summarizing the most-cited posts, published on YouTube. This is the single biggest weighted signal for Gemini and Bing Copilot visibility. Even modest production quality buys entity reinforcement and a new sameAs destination.

Verdict on the Tool

Install it. Run it on your site. Read the findings, not the score. The composite is a conversation starter. The specific line items — crawler access check, schema audit, brand-mention grep, /about/-page status, LCP number — are the deliverable.

If you are selling GEO audits to clients, the tool ships with a CRM-pipeline skill and a PDF-report generator that would be awkward to build from scratch. The creator also runs a paid Skool community specifically for people monetizing this tool with agencies. That is how freemium indie SEO projects work in 2026: the code is free because it is a lead source. Nothing wrong with it, but it is worth being explicit.

For a solo publisher or affiliate site, the honest workflow is: run the tool, accept the findings list, cross-check with a manual citation probe on your top three target platforms, ship the quick wins, park the composite score. GEO maturity is a moving target. The sites that win in 2026 and beyond will be the ones that treat every AI platform like its own channel — with its own probes, its own signals, and its own editorial adjustments — rather than chasing a single composite number.


Sources: geo-seo-claude by Zubair Trabzada, MIT-licensed Claude Code skill (github.com/zubair-trabzada/geo-seo-claude, 6.6K stars as of April 2026). Ahrefs December 2025 analysis on brand mentions and AI search visibility. Originality.ai 2025 study on AI crawler blocking among top-1,000 publishers. Live queries verified via Perplexity on 2026-04-22. Related reading on RankingHacks: Context Density, LLM-Driven SEO, LLM Content Optimization, AI Chatbot Optimization, Auditing E-E-A-T with LLMs.
