Understanding Your AI Readiness Scores

Detailed breakdown of how AI readiness scores are calculated and how to improve each category.

Your AI Readiness Score is a weighted composite of four category scores. This page explains how each category is scored and what you can do to improve.

How Scoring Works

Each category produces a score from 0 to 100 based on the checks within it. The composite score is a weighted average:

| Category | Weight |
| --- | --- |
| Technical Accessibility | 30% |
| Content Engineering | 30% |
| Schema & Structured Data | 25% |
| Agent Readiness | 15% |

Each individual check contributes to its category score. High-priority findings reduce the score more than medium or low-priority findings.
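The weighted average described above can be sketched in a few lines (the dictionary keys and function name are illustrative, not part of the product's API):

```python
# Weights from the scoring table; each category score is 0-100.
WEIGHTS = {
    "technical_accessibility": 0.30,
    "content_engineering": 0.30,
    "schema_structured_data": 0.25,
    "agent_readiness": 0.15,
}

def composite_score(category_scores: dict[str, float]) -> float:
    """Weighted average of the four category scores, rounded to one decimal."""
    total = sum(WEIGHTS[name] * category_scores[name] for name in WEIGHTS)
    return round(total, 1)
```

For example, category scores of 80, 60, 50, and 40 (in table order) produce a composite of 60.5, which lands in the Structured band.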

Technical Accessibility (30%)

This category measures whether AI systems can discover and parse your content.

Key Checks

| Check | Priority | What It Measures |
| --- | --- | --- |
| llms.txt present | High | Whether a valid llms.txt file exists at your domain root |
| llms.txt structure | Medium | Whether the file includes required sections (identity, expertise, content) |
| llms-full.txt | Low | Whether an extended version exists with detailed taxonomy |
| GPTBot allowed | High | Whether robots.txt permits OpenAI's crawler |
| ClaudeBot allowed | High | Whether robots.txt permits Anthropic's crawler |
| PerplexityBot allowed | Medium | Whether robots.txt permits Perplexity's crawler |
| H1 present | High | Whether each page has exactly one H1 tag |
| Page weight | Medium | Whether pages are lightweight enough for efficient crawling |
| Noindex check | High | Whether important pages are accidentally noindexed |
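As an illustration of the "H1 present" check, a minimal standard-library sketch (the class and helper names are hypothetical) could count H1 tags like this:

```python
from html.parser import HTMLParser

class H1Counter(HTMLParser):
    """Counts <h1> start tags encountered while parsing."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.count += 1

def has_single_h1(html: str) -> bool:
    """True only when the page contains exactly one H1."""
    parser = H1Counter()
    parser.feed(html)
    return parser.count == 1
```

A real crawler would also handle malformed markup and H1s injected by JavaScript, which this sketch ignores.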

How to Improve

  1. Create llms.txt at your domain root with your site identity, expertise areas, and key content links
  2. Update robots.txt to allow AI crawlers you want to be indexed by
  3. Fix heading structure — ensure every page has exactly one H1 with a clear hierarchy below it
  4. Reduce page weight — minimize JavaScript bundles and ensure content is server-rendered
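A minimal llms.txt following the identity/expertise/content structure mentioned above might look like this (the company name, links, and descriptions are placeholders):

```markdown
# Example Corp

> Example Corp builds inventory software for independent retailers.

## Expertise

- Inventory management
- Retail demand forecasting

## Key Content

- [Product docs](https://example.com/docs): Setup guides and API reference
- [Blog](https://example.com/blog): Retail operations guides
```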

Content Engineering (30%)

This category evaluates whether your content is structured for AI extraction and citation.

Key Checks

| Check | Priority | What It Measures |
| --- | --- | --- |
| Entity density | High | Whether content has ~20% proper nouns and specific terms |
| BLUF pattern | High | Whether pages lead with a direct answer |
| Statistics present | Medium | Whether content includes specific, citable data points |
| Expert quotations | Medium | Whether content includes attributed expert quotes |
| Authorship signals | Medium | Whether author information is present and linked |
| Readability range | Medium | Whether the Flesch Reading Ease score is 60-70 and the Gunning Fog index is 8-10 |
| AI-ism count | Low | Whether content contains generic filler phrases |
| Content depth | Medium | Whether pages have sufficient word count for the topic |

How to Improve

  1. Increase entity density — replace generic statements with specific names, numbers, and technical terms
  2. Apply BLUF pattern — rewrite opening paragraphs to lead with the direct answer
  3. Add statistics — include specific metrics, research findings, and benchmarks
  4. Add author bios — include credentials and expertise areas for content authors
  5. Remove AI-isms — replace "rapidly evolving landscape" and similar phrases with substantive content
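To make the readability target concrete: the Flesch Reading Ease score in the 60-70 band is computed from word, sentence, and syllable counts. A sketch of the standard formula (the function name is illustrative; a real checker also needs a syllable counter):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Standard Flesch Reading Ease formula; 60-70 reads as plain English."""
    return round(
        206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words), 2
    )
```

For instance, 90 words across 6 sentences with 120 syllables scores roughly 78.8, i.e. slightly easier than the 60-70 target band.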

Schema & Structured Data (25%)

This category validates the machine-readable markup that connects your content to the knowledge graph.

Key Checks

| Check | Priority | What It Measures |
| --- | --- | --- |
| JSON-LD present | High | Whether valid JSON-LD markup exists on each page |
| @id attributes | Medium | Whether entities have unique, resolvable identifiers |
| sameAs links | High | Whether your organization links to Wikipedia, Wikidata, etc. |
| knowsAbout | Medium | Whether expertise areas are declared in schema |
| dateModified | High | Whether pages have recent modification dates (within 60 days) |
| Topic schema | Low | Whether about/mentions properties describe content topics |

How to Improve

  1. Add JSON-LD to every page with at minimum: @type, headline, author, datePublished, dateModified
  2. Add sameAs links to your Organization schema pointing to Wikipedia, Wikidata, and LinkedIn
  3. Add knowsAbout to Person and Organization schemas listing your expertise areas
  4. Keep dateModified current — update it whenever you make meaningful content changes
  5. Add @id to your primary entities for cross-page disambiguation
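Putting those steps together, a sketch of the kind of JSON-LD this category looks for (all names, URLs, and IDs are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "@id": "https://example.com/guide#article",
  "headline": "Inventory Forecasting for Independent Retailers",
  "datePublished": "2025-01-10",
  "dateModified": "2025-02-01",
  "author": {
    "@type": "Person",
    "@id": "https://example.com/about#jane-doe",
    "name": "Jane Doe",
    "knowsAbout": ["Inventory management", "Demand forecasting"]
  },
  "publisher": {
    "@type": "Organization",
    "@id": "https://example.com#org",
    "name": "Example Corp",
    "sameAs": [
      "https://en.wikipedia.org/wiki/Example_Corp",
      "https://www.wikidata.org/wiki/Q0"
    ]
  },
  "about": [{ "@type": "Thing", "name": "Retail inventory management" }]
}
```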

Agent Readiness (15%)

This category checks whether AI agents can interact with your site programmatically.

Key Checks

| Check | Priority | What It Measures |
| --- | --- | --- |
| MCP server | Medium | Whether an MCP endpoint is available for AI tool access |
| AI crawler configs | Low | Whether fine-grained crawler-specific rules exist |

How to Improve

  1. Set up an MCP server if you have API-accessible data that AI tools would benefit from
  2. Configure crawler-specific rules in robots.txt for granular AI crawler control
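As an example of crawler-specific rules, a robots.txt might treat crawlers differently per user agent (the paths are placeholders; the user-agent tokens are the ones the checks above look for):

```text
# Retrieval-oriented crawlers: full access
User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# OpenAI's crawler: allowed, but kept out of unpublished drafts
User-agent: GPTBot
Disallow: /drafts/
Allow: /
```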

Maturity Levels

Your composite score maps to one of five maturity levels:

  • Leading (81-100) — Your site is well-optimized across all four pillars. Focus on maintaining scores and monitoring for changes in AI system behavior.
  • Optimized (61-80) — Strong foundation. Review category breakdowns to identify the one or two areas that would push you into Leading.
  • Structured (41-60) — Core elements are present but gaps exist. Focus on high-priority findings in Technical Accessibility and Content Engineering first.
  • Reactive (21-40) — Minimal optimization. Start with the foundation: llms.txt, robots.txt, and basic JSON-LD.
  • Unaware (0-20) — No AI optimization in place. Begin with the AEO/GEO guide for a structured approach.
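The level boundaries above amount to a simple lookup (the function name is illustrative):

```python
def maturity_level(score: int) -> str:
    """Map a 0-100 composite score to its maturity level."""
    if score >= 81:
        return "Leading"
    if score >= 61:
        return "Optimized"
    if score >= 41:
        return "Structured"
    if score >= 21:
        return "Reactive"
    return "Unaware"
```

Note that the bands are inclusive on both ends, so a score of exactly 60 is still Structured and 61 is Optimized.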