Understanding Your AI Readiness Scores
Detailed breakdown of how AI readiness scores are calculated and how to improve each category.
Your AI Readiness Score is a weighted composite of four category scores. This page explains how each category is scored and what you can do to improve.
How Scoring Works
Each category produces a score from 0 to 100 based on the checks within it. The composite score is a weighted average:
| Category | Weight |
|---|---|
| Technical Accessibility | 30% |
| Content Engineering | 30% |
| Schema & Structured Data | 25% |
| Agent Readiness | 15% |
Each check contributes to its category score; high-priority findings reduce the score more than medium- or low-priority ones.
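The weighted average can be sketched directly from the table above. The per-check penalty model is internal to the tool, so this example takes the four category scores as given; the category names and example values are illustrative:

```python
# Category weights from the table above (must sum to 1.0).
WEIGHTS = {
    "technical_accessibility": 0.30,
    "content_engineering": 0.30,
    "schema_structured_data": 0.25,
    "agent_readiness": 0.15,
}

def composite_score(category_scores: dict[str, float]) -> float:
    """Weighted average of the four category scores (each 0-100)."""
    return sum(WEIGHTS[name] * score for name, score in category_scores.items())

# Example: strong technical setup, weaker agent readiness.
example = {
    "technical_accessibility": 80,
    "content_engineering": 60,
    "schema_structured_data": 70,
    "agent_readiness": 50,
}
print(composite_score(example))  # 80*0.30 + 60*0.30 + 70*0.25 + 50*0.15 = 67.0
```

Note that because Agent Readiness carries only 15% weight, a low score there moves the composite far less than an equivalent gap in Technical Accessibility or Content Engineering.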
Technical Accessibility (30%)
This category measures whether AI systems can discover and parse your content.
Key Checks
| Check | Priority | What It Measures |
|---|---|---|
| llms.txt present | High | Whether a valid llms.txt file exists at your domain root |
| llms.txt structure | Medium | Whether the file includes required sections (identity, expertise, content) |
| llms-full.txt | Low | Whether an extended version exists with detailed taxonomy |
| GPTBot allowed | High | Whether robots.txt permits OpenAI's crawler |
| ClaudeBot allowed | High | Whether robots.txt permits Anthropic's crawler |
| PerplexityBot allowed | Medium | Whether robots.txt permits Perplexity's crawler |
| H1 present | High | Whether each page has exactly one H1 tag |
| Page weight | Medium | Whether pages are lightweight enough for efficient crawling |
| Noindex check | High | Whether important pages are accidentally noindexed |
How to Improve
- Create llms.txt at your domain root with your site identity, expertise areas, and key content links
- Update robots.txt to allow the AI crawlers you want indexing your site
- Fix heading structure — ensure every page has exactly one H1 with a clear hierarchy below it
- Reduce page weight — minimize JavaScript bundles and ensure content is server-rendered
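For the first bullet, a minimal llms.txt might look like the sketch below. All names and URLs are placeholders; the identity/expertise/content sections mirror the structure check in the table above, and the overall shape (H1 name, blockquote summary, H2 sections of links) follows the llms.txt proposal — verify section naming against whichever validator you use:

```markdown
# Example Co
> Example Co builds developer documentation tooling. (identity)

## Expertise
- Topic A
- Topic B

## Content
- [Getting started](https://example.com/docs/start): product overview
- [API reference](https://example.com/docs/api): endpoint documentation
```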
Content Engineering (30%)
This category evaluates whether your content is structured for AI extraction and citation.
Key Checks
| Check | Priority | What It Measures |
|---|---|---|
| Entity density | High | Whether content has ~20% proper nouns and specific terms |
| BLUF pattern | High | Whether pages lead with a direct answer |
| Statistics present | Medium | Whether content includes specific, citable data points |
| Expert quotations | Medium | Whether content includes attributed expert quotes |
| Authorship signals | Medium | Whether author information is present and linked |
| Readability range | Medium | Whether Flesch score is 60-70 and Gunning Fog is 8-10 |
| AI-ism count | Low | Whether content contains generic filler phrases |
| Content depth | Medium | Whether pages have sufficient word count for the topic |
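The readability check targets a Flesch Reading Ease of 60-70, computed as 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/word). The sketch below uses a naive vowel-group heuristic for syllables — an assumption for illustration, not the tool's actual syllable model:

```python
import re

def count_syllables(word: str) -> int:
    """Naive heuristic: one syllable per run of consecutive vowels.
    (Assumption — real syllable counting handles silent e, etc.)"""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher means easier to read."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))  # → 108.3
```

Very short, simple sentences score above the 60-70 target band, which is why the check is a range rather than a "higher is better" threshold: content that is too simplistic can underperform just like content that is too dense.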
How to Improve
- Increase entity density — replace generic statements with specific names, numbers, and technical terms
- Apply BLUF pattern — rewrite opening paragraphs to lead with the direct answer
- Add statistics — include specific metrics, research findings, and benchmarks
- Add author bios — include credentials and expertise areas for content authors
- Remove AI-isms — replace "rapidly evolving landscape" and similar phrases with substantive content
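As a back-of-envelope check against the ~20% entity-density target, you can approximate entity density by counting mid-sentence capitalized tokens and tokens containing digits. This is a crude proxy of my own construction — the actual check presumably uses named-entity recognition:

```python
import re

def entity_density(text: str) -> float:
    """Rough proxy: share of tokens that are mid-sentence capitalized
    words or contain digits. (Assumption — the real check likely uses
    proper NER, not capitalization heuristics.)"""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    entity_like = total = 0
    for sentence in sentences:
        tokens = [t.strip(".,;:()\"'") for t in sentence.split()]
        for i, tok in enumerate(t for t in tokens if t):
            total += 1
            if any(c.isdigit() for c in tok):
                entity_like += 1          # numbers count as specific terms
            elif i > 0 and tok[0].isupper():
                entity_like += 1          # mid-sentence capitals ≈ proper nouns
    return entity_like / total if total else 0.0

text = "Anthropic released Claude in March 2023. The model improved."
print(round(entity_density(text), 2))  # 3 entity-like tokens out of 9 → 0.33
```

Even this crude proxy illustrates the improvement lever: swapping "a major AI lab" for "Anthropic" or "recently" for "March 2023" raises the ratio.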
Schema & Structured Data (25%)
This category validates the machine-readable markup that connects your content to the knowledge graph.
Key Checks
| Check | Priority | What It Measures |
|---|---|---|
| JSON-LD present | High | Whether valid JSON-LD markup exists on each page |
| @id attributes | Medium | Whether entities have unique, resolvable identifiers |
| sameAs links | High | Whether your organization links to Wikipedia, Wikidata, etc. |
| knowsAbout | Medium | Whether expertise areas are declared in schema |
| dateModified | High | Whether pages have recent modification dates (within 60 days) |
| Topic schema | Low | Whether about/mentions properties describe content topics |
How to Improve
- Add JSON-LD to every page with at minimum: @type, headline, author, datePublished, dateModified
- Add sameAs links to your Organization schema pointing to Wikipedia, Wikidata, and LinkedIn
- Add knowsAbout to Person and Organization schemas listing your expertise areas
- Keep dateModified current — update it whenever you make meaningful content changes
- Add @id to your primary entities for cross-page disambiguation
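Pulling the bullets above together, a minimal Article JSON-LD block might look like the following. All names, URLs, and identifiers are placeholders; point sameAs at your organization's real Wikipedia/Wikidata/LinkedIn pages:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "@id": "https://example.com/guide#article",
  "headline": "Example Guide",
  "datePublished": "2025-01-15",
  "dateModified": "2025-06-01",
  "author": {
    "@type": "Person",
    "@id": "https://example.com/about#jane-doe",
    "name": "Jane Doe",
    "knowsAbout": ["Topic A", "Topic B"]
  },
  "publisher": {
    "@type": "Organization",
    "@id": "https://example.com/#org",
    "name": "Example Co",
    "sameAs": [
      "https://en.wikipedia.org/wiki/Example_Co",
      "https://www.linkedin.com/company/example-co"
    ]
  }
}
```

Reusing the same `@id` for an entity on every page where it appears is what enables cross-page disambiguation: crawlers can tell that the Jane Doe on one article is the same person as on another.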
Agent Readiness (15%)
This category checks whether AI agents can interact with your site programmatically.
Key Checks
| Check | Priority | What It Measures |
|---|---|---|
| MCP server | Medium | Whether an MCP endpoint is available for AI tool access |
| AI crawler configs | Low | Whether fine-grained crawler-specific rules exist |
How to Improve
- Set up an MCP server if you have API-accessible data that AI tools would benefit from
- Configure crawler-specific rules in robots.txt for granular AI crawler control
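Crawler-specific rules are ordinary robots.txt groups keyed by user-agent. The sketch below uses the crawler tokens named in the checks earlier on this page; verify current tokens against each vendor's crawler documentation, and note that `Crawl-delay` is a nonstandard directive honored by only some crawlers:

```text
# robots.txt — per-crawler groups for granular AI crawler control

User-agent: GPTBot
Allow: /
Disallow: /internal/

User-agent: ClaudeBot
Allow: /
Disallow: /internal/

User-agent: PerplexityBot
Crawl-delay: 5
Allow: /
```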
Maturity Levels
Your composite score maps to one of five maturity levels:
- Leading (81-100) — Your site is well-optimized across all four pillars. Focus on maintaining scores and monitoring for changes in AI system behavior.
- Optimized (61-80) — Strong foundation. Review category breakdowns to identify the one or two areas that would push you into Leading.
- Structured (41-60) — Core elements are present but gaps exist. Focus on high-priority findings in Technical Accessibility and Content Engineering first.
- Reactive (21-40) — Minimal optimization. Start with the foundation: llms.txt, robots.txt, and basic JSON-LD.
- Unaware (0-20) — No AI optimization in place. Begin with the AEO/GEO guide for a structured approach.
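The band boundaries above reduce to a simple lookup, sketched here:

```python
def maturity_level(score: float) -> str:
    """Map a 0-100 composite score to its maturity level,
    using the band boundaries listed above."""
    bands = [(81, "Leading"), (61, "Optimized"), (41, "Structured"),
             (21, "Reactive"), (0, "Unaware")]
    for floor, label in bands:
        if score >= floor:
            return label
    return "Unaware"

print(maturity_level(67))  # Optimized
```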
