MCP + AI-Assisted Website Intelligence

Claude Code SEO Audit: The Developer's Playbook

A practical walkthrough of using Evergreen's MCP Server inside Claude Code to audit a website. Configure the connection, run real queries, and build an AI-assisted audit workflow.

Published April 15, 2026
12 min read


Claude Code can read your files, write your code, and run your tests. What it can't do — without a data source — is tell you that your site has 47 pages missing meta descriptions, that your blog section's Lighthouse scores dropped 15 points last week, or that three of your highest-traffic pages are returning soft 404s.

That's not an intelligence problem. It's a data problem. Claude Code is a capable reasoning engine with no website data to reason about. Model Context Protocol (MCP) fixes that. It connects Claude Code to external tools that provide the data — and Evergreen's MCP Server exposes your site's crawl data, content health metrics, Lighthouse scores, and search visibility data as queryable tools.

This guide walks through the full setup: configuring Evergreen's MCP Server in Claude Code, running your first queries, and building a practical audit workflow that combines AI reasoning with real website intelligence.

If you've used Claude Code for development work and want to extend it into SEO analysis, this is for you.

What MCP is (and what it isn't)

Model Context Protocol is a specification that lets AI applications call external tools and retrieve structured data. Think of it as an API contract designed specifically for AI assistants — Claude Code, Cursor, Claude Desktop — to interact with outside systems.

MCP is not an AI model. It's not a plugin. It's a standardized way for AI tools to discover, call, and consume data from external services. When Claude Code connects to an MCP server, it learns what tools are available (e.g., "get all pages with missing meta descriptions") and can call them during a conversation.
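Under the hood, those tool calls are JSON-RPC 2.0 messages. Here's a sketch of roughly what a client like Claude Code sends when it invokes a tool — the tool name and arguments are illustrative, not Evergreen's documented schema:

```javascript
// An MCP tool invocation is a JSON-RPC 2.0 request with method "tools/call".
// The params carry the tool name and its arguments (illustrative values here).
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'get_pages_with_issues',
    arguments: { issue_type: 'missing_meta_description' },
  },
};

console.log(request.method); // 'tools/call'
```

You never write these messages yourself — Claude Code constructs them from your natural-language prompt — but knowing the shape helps when debugging a connection.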

Evergreen's MCP Server exposes your website intelligence data — everything from the content audit table to Lighthouse performance scores — as MCP tools. Claude Code can query this data, analyze it, cross-reference it, and generate actionable recommendations based on your actual site, not generic advice.

For deeper coverage of how MCP fits into the broader AI-assisted SEO landscape, see Agentic SEO workflows: how AI agents transform site audits.

Prerequisites

You need three things before starting:

1. An Evergreen account with a crawled site. The free plan covers crawling sites up to 500 pages, though MCP Server access itself requires the Pro plan (see the FAQ below). Sign up, add your site, and run the initial crawl. The MCP Server needs data to expose — it queries your existing crawl results.

You'll need an Evergreen account — the free tier works for this → Sign up

2. Claude Code installed. Claude Code is Anthropic's AI coding assistant that runs in your terminal. If you're reading this article, you probably already have it. If not, install it from Anthropic's documentation.

3. Your Evergreen API key. Available in your Evergreen project settings under Integrations → MCP. The key format is evg_xxxxxxxxxxxx.

How to configure Evergreen's MCP Server in Claude Code

Claude Code discovers MCP servers through a configuration file. The most common setup is a project-scoped .mcp.json file in your repository root; Claude Desktop uses claude_desktop_config.json instead. Wherever the file lives, the entry format is the same.

Add the Evergreen MCP server to the mcpServers section:

{
  "mcpServers": {
    "evergreen": {
      "command": "npx",
      "args": ["-y", "@evergreen/mcp-server"],
      "env": {
        "EVERGREEN_API_KEY": "evg_xxxxxxxxxxxx",
        "EVERGREEN_PROJECT_ID": "your-project-id"
      }
    }
  }
}

Replace evg_xxxxxxxxxxxx with your actual API key and your-project-id with the project identifier from your Evergreen dashboard.

Restart Claude Code after saving the configuration. On the next launch, Claude Code will discover the Evergreen tools and make them available in your conversations.

Verifying the connection

Ask Claude Code to list the available tools:

What Evergreen tools do I have access to?

Claude Code should respond with a list of available MCP tools — things like get_audit_summary, get_pages_with_issues, get_lighthouse_scores, get_sitemap_structure, and others. If it doesn't see any Evergreen tools, check your configuration file path, API key, and project ID.

The tool surface: what you can query

Evergreen's MCP Server exposes tools that map to the core product features. Here's what you can ask for:

Audit summary. High-level site health: total pages crawled, pages with issues, average Lighthouse score, indexability status distribution.

Pages with specific issues. Filter the audit table by problem type: missing meta descriptions, missing H1s, broken links, noindex pages with traffic, thin content, duplicate titles.

Lighthouse data. Per-page performance scores, Core Web Vitals breakdown, worst-performing pages, score distribution across the site.

Site structure. The visual sitemap hierarchy as structured data — page depth, parent-child relationships, internal link counts.

Search visibility. If Google Search Console (GSC) is connected: impressions, clicks, average position, and click-through rate per page.

Traffic data. If Google Analytics 4 (GA4) is connected: sessions, engagement rate, and landing page performance.

Each tool returns structured data that Claude Code can parse, analyze, and use to generate recommendations. The data reflects your most recent crawl — not a live fetch of your site.
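To make "structured data" concrete, here's a sketch of the kind of payload a summary tool might return and how directly it can be aggregated. The field names are assumptions for illustration, not Evergreen's actual response schema:

```javascript
// Hypothetical shape of an audit-summary response (field names are assumed).
const summary = {
  total_pages: 312,
  avg_lighthouse_performance: 68,
  issues: {
    broken_links: 8,
    missing_meta_description: 11,
    noindex_with_traffic: 4,
  },
};

// Because the data is structured, Claude Code can aggregate it reliably
// instead of parsing prose from a dashboard screenshot.
const criticalCount = Object.values(summary.issues).reduce((a, b) => a + b, 0);
console.log(`${criticalCount} pages with critical issues`); // '23 pages with critical issues'
```

That reliability is the point of MCP's structured contract: numbers stay numbers, so the model's arithmetic and filtering operate on real values.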

Worked example: "Audit my site and tell me what to fix this week"

Here's a full worked example of an AI-assisted SEO audit using Claude Code and Evergreen's MCP Server. This isn't a toy demo — it's the actual workflow.

Turn 1: Get the overview

Look at my site's audit data from Evergreen and give me a summary of the
overall health. How many pages do I have, what's the average Lighthouse
score, and how many pages have critical issues?

Claude Code calls the get_audit_summary tool and returns something like:

Your site has 312 pages. Average Lighthouse performance score is 68. 23 pages have critical issues: 8 pages are returning 404 errors, 11 pages have missing meta descriptions on pages with >100 monthly sessions, and 4 pages are marked noindex but receiving organic traffic from GSC.

This is the equivalent of opening Evergreen's dashboard and reading the summary — but now it's in your terminal, in conversation context, and Claude Code can reason about it.

Turn 2: Prioritize by impact

Of those 23 pages with critical issues, which ones should I fix first?
Prioritize by traffic impact.

Claude Code calls get_pages_with_issues with a filter for critical issues, cross-references with traffic data, and returns a prioritized list:

Here are the highest-impact issues, sorted by monthly sessions:

  1. /pricing — 404 error, 2,400 sessions/month. This is urgent.
  2. /blog/getting-started — noindex tag, 890 sessions/month. Likely accidental.
  3. /features/visual-sitemap — missing meta description, 650 sessions/month.
  4. /blog/content-audit-guide — missing meta description, 520 sessions/month.

Notice what happened: Claude Code didn't just list issues — it prioritized them by combining audit data with traffic data. That's the cross-correlation that makes this workflow genuinely useful rather than a novelty.
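The cross-correlation is easy to picture in code. Here's a sketch of the join-and-sort Claude Code effectively performs in this turn — the field names and data shapes are illustrative, not the MCP tools' real schema:

```javascript
// Join issue data with traffic data, then sort descending by monthly sessions
// so the highest-impact problems surface first. (Illustrative shapes.)
const issues = [
  { url: '/features/visual-sitemap', issue: 'missing meta description' },
  { url: '/pricing', issue: '404 error' },
  { url: '/blog/getting-started', issue: 'noindex tag' },
];
const monthlySessions = {
  '/pricing': 2400,
  '/blog/getting-started': 890,
  '/features/visual-sitemap': 650,
};

const prioritized = issues
  .map((i) => ({ ...i, sessions: monthlySessions[i.url] ?? 0 }))
  .sort((a, b) => b.sessions - a.sessions);

console.log(prioritized[0].url); // '/pricing' — the highest-traffic issue
```

A ten-line join, but doing it by hand means exporting two reports and reconciling them in a spreadsheet. In conversation, it's one sentence.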

Turn 3: Generate the fix

For the /pricing page returning 404, can you check what URL it should
redirect to? And for the noindex issue on /blog/getting-started, what
does the page's metadata look like?

Claude Code calls the relevant tools, retrieves page-level detail, and generates specific fix recommendations:

The /pricing page was likely moved. Your sitemap shows /pricing/plans as the current pricing page. A 301 redirect from /pricing to /pricing/plans would recover that traffic.

For /blog/getting-started, the page has <meta name="robots" content="noindex"> in the HTML head. The page title is "Getting Started with Evergreen" and it has 1,200 words of content. This looks like a valid page that should be indexed — the noindex tag is likely a leftover from development.
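If the site happens to be built on Next.js (an assumption — use your server's or CDN's redirect mechanism otherwise), the 301 fix is a few lines of config:

```javascript
// next.config.js — a sketch of the /pricing fix, assuming a Next.js site.
const nextConfig = {
  async redirects() {
    return [
      {
        source: '/pricing',
        destination: '/pricing/plans',
        permanent: true, // 301 — signals search engines to transfer ranking equity
      },
    ];
  },
};

module.exports = nextConfig;
```

The noindex fix is even smaller: remove the `<meta name="robots" content="noindex">` tag from the page's head and redeploy.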

The workflow pattern

The three-turn workflow above — overview, prioritize, fix — is the core pattern for AI-assisted SEO auditing. It works because each turn narrows the scope:

  1. Broad scan — what's the overall state?
  2. Filtered view — what matters most?
  3. Specific action — what exactly do I do about it?

This mirrors how experienced SEOs work manually, but it compresses an hour of clicking through dashboards into a five-minute conversation.

Five more queries worth running

Once you're comfortable with the basic workflow, these queries demonstrate the range of what's possible.

Content thin spots

Find all pages with fewer than 300 words of body content that have at
least 50 monthly sessions from GSC. These might need expansion.

Internal linking gaps

Which pages on my site have zero inbound internal links? Are any of
them getting organic traffic despite being orphaned?

Lighthouse outliers

Show me the 10 pages with the worst Lighthouse performance scores.
For each one, what's the primary bottleneck — LCP, INP, or CLS?

Meta description audit

How many pages are missing meta descriptions? Of those, which ones
have the most search impressions? Those are the ones Google is
generating descriptions for — and they might not be good.

For a dedicated guide on finding and fixing meta description issues at scale, see How to find pages missing meta descriptions.

Before/after comparison

Compare my current crawl data to the previous one. What changed?
Any new 404s, new noindex pages, or significant Lighthouse score
changes?
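Conceptually, this query is a diff of two crawl snapshots. A minimal sketch, assuming each snapshot is a map of URL to HTTP status code (the data shape is an assumption for illustration):

```javascript
// Diff two crawl snapshots to surface newly broken pages.
const previousCrawl = { '/pricing': 200, '/about': 200, '/blog/intro': 200 };
const currentCrawl  = { '/pricing': 404, '/about': 200, '/blog/intro': 200 };

// A "new 404" is any URL that errors now but didn't before.
const newErrors = Object.keys(currentCrawl).filter(
  (url) => currentCrawl[url] >= 400 && previousCrawl[url] < 400
);

console.log(newErrors); // [ '/pricing' ]
```

The same pattern extends to noindex flags and Lighthouse scores: snapshot, diff, report what changed.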

The limits of AI-assisted SEO auditing

This section is the most important one in the article. The workflow described above is genuinely useful — but it has hard limits that you need to understand before relying on it.

Claude Code reasons, it doesn't verify

When Claude Code says "the noindex tag is likely a leftover from development," it's making an inference based on the data. It could be wrong. The noindex tag might be intentional. Always verify recommendations against your own knowledge of the site.

The data is only as fresh as your last crawl

Evergreen's MCP Server returns data from the most recent crawl. If you crawled last week and deployed changes today, the data is stale. On the free plan, crawls are manual. On Pro, daily syncs keep the data current. Either way, be aware of the freshness window.

Context windows have limits

If your site has 5,000 pages, Claude Code can't hold all of them in context simultaneously. The MCP tools return filtered, summarized data — not raw dumps of every page. This is a design choice, not a limitation: the tools are built to answer specific questions, not to replace the dashboard.

MCP is not agentic (yet)

The workflow above is interactive: you ask, Claude Code retrieves, you ask again. Claude Code doesn't autonomously decide to check your site every morning and flag issues. That's a future state. Today, MCP is a powerful query interface, not an autonomous agent. For more on where agentic workflows are heading, see Agentic SEO workflows.

It's a complement, not a replacement

Claude Code with Evergreen's MCP Server does not replace the Evergreen dashboard, a dedicated SEO review process, or the judgment of an experienced SEO professional. It's a faster way to ask questions and get data-backed answers. The decisions are still yours.

How this fits into a developer workflow

The real value of MCP-based SEO auditing isn't that it makes audits faster (though it does). It's that it puts audit data where developers already work — in the terminal, in the AI assistant they're already using for code.

Here's how this looks in practice for a developer maintaining a Next.js site:

Monday morning. Open Claude Code. Ask for a quick audit summary. See if anything broke over the weekend.

During a PR review. Ask Claude Code to check whether the pages affected by the PR have any existing SEO issues. Fix them in the same PR.

Before a deploy. Ask for the current Lighthouse baseline for the pages you're changing. After deploy, compare against the new scores.

During a sprint. When the SEO team files a ticket about "broken links on the blog," ask Claude Code to pull the specific pages and links instead of digging through the dashboard.
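The pre-deploy comparison above can be sketched as a simple score diff. The threshold and data shapes here are illustrative choices, not a prescribed policy:

```javascript
// Flag Lighthouse performance regressions between two deploys.
const scoresBefore = { '/': 82, '/pricing/plans': 74, '/blog': 68 };
const scoresAfter  = { '/': 80, '/pricing/plans': 59, '/blog': 70 };

const DROP_THRESHOLD = 10; // ignore small run-to-run noise

const regressions = Object.keys(scoresBefore).filter(
  (url) => scoresAfter[url] < scoresBefore[url] - DROP_THRESHOLD
);

console.log(regressions); // [ '/pricing/plans' ] — dropped 15 points
```

Asking Claude Code to run this comparison conversationally gives you the same answer without writing the script — but the script shows what the question actually computes.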

The point is integration into existing workflow — not a new workflow to learn.

Beyond Claude Code: Cursor and Claude Desktop

The MCP configuration pattern works across any MCP-compatible client. If you use Cursor as your primary editor, the configuration is similar — add the Evergreen MCP server to your Cursor settings and access the same tools from within your editor.

Claude Desktop supports MCP as well, which is useful for non-developers (content strategists, agency operators) who want the same query-based audit workflow without a terminal.

The tools and data are identical across all clients. The difference is the interface — terminal, editor, or desktop app.

What Evergreen's MCP Server does differently

Most MCP servers in the SEO space — DataForSEO, Ahrefs, Semrush — wrap third-party data. They give you keyword volumes, backlink profiles, and SERP data. That's useful, but it's data about the web in general, not data about your site specifically.

Evergreen's MCP Server exposes your own site's data: the pages you've crawled, the metadata you've published, the Lighthouse scores you've earned, the traffic your pages receive. It's the difference between asking "what keywords should I target?" and asking "what's broken on my site right now?"

You can vibe-code a crawler. You can't vibe-code institutional memory. The MCP Server makes that institutional memory — continuously maintained, historically tracked, cross-correlated — available to every AI tool in your stack.

Connect Evergreen to Claude Code in two minutes → Start free

Frequently asked questions

Does the MCP Server work with the free Evergreen plan?

MCP access requires the Pro plan ($49/mo). The free plan lets you crawl and audit a site, but MCP Server access is a Pro feature. You can start on the free plan to verify Evergreen works for your site, then upgrade when you're ready for MCP integration.

Can I use the MCP Server in CI/CD pipelines?

Not directly — MCP is designed for interactive AI assistants, not headless automation. For CI/CD integration, Evergreen's API is the better fit. However, you could use Claude Code in a scripted workflow that queries the MCP Server as part of a pre-deploy check.
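A minimal sketch of such a pre-deploy gate — the summary shape is a hypothetical assumption, and in a real pipeline you'd fetch it from Evergreen's API rather than hardcode it:

```javascript
// Hypothetical CI gate: block the deploy if the audit reports critical issues.
// The summary object here is a stand-in for an API response.
function shouldBlockDeploy(summary, maxCritical = 0) {
  return summary.critical_issues > maxCritical;
}

console.log(shouldBlockDeploy({ critical_issues: 23 })); // true
console.log(shouldBlockDeploy({ critical_issues: 0 }));  // false
```

In CI, a `true` result would translate to a non-zero exit code that fails the build.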

How much data does a single MCP query return?

It depends on the tool. Summary tools return aggregate data (total pages, average scores, issue counts). Detail tools return per-page data, typically filtered to the specific issue you're querying. The tools are designed to return actionable amounts of data, not raw dumps.

Is my site data sent to Anthropic when I use the MCP Server?

The MCP Server runs locally (via npx) and connects directly to Evergreen's API. Your audit data passes through the MCP Server to Claude Code for analysis. Anthropic's standard data handling policies apply to the conversation itself, but the raw audit data stays between Evergreen and your local MCP Server instance.

Your next step: see your site data in Claude Code tonight → Create free account
