The Technical SEO Audit Guide

How to Run a Bulk Lighthouse Test on Your Entire Site

Stop testing one page at a time. Run Lighthouse across your entire site to find the pages dragging down performance — and fix them systematically.

Published April 15, 2026
8 min read

Lighthouse tests one page at a time. That's fine if your site has five pages. It's a problem if your site has 500 — or 5,000. The homepage might score 95 while your blog archive scores 38, and you'd never know unless you tested both.

The average score is a lie. What matters is the distribution: how many pages are fast, how many are slow, and which slow pages actually receive traffic. Site-wide Lighthouse testing surfaces that distribution, and once you see it, you can prioritize performance work based on impact rather than guesswork.

This guide covers three ways to run bulk Lighthouse tests — manual, CLI, and SaaS — and when each makes sense.

Why single-page Lighthouse tests mislead

When someone says "our Lighthouse score is 92," they almost always mean "the homepage score is 92." That number tells you very little about the site. Here's why:

Template variance. A site with four page templates (homepage, listing page, detail page, blog post) has four meaningfully different performance profiles. The homepage is almost always the fastest because it gets the most optimization attention. The blog post template — loaded with third-party embeds, unoptimized images, and tracking scripts — is almost always the slowest.

Third-party script loading. Many third-party scripts (chat widgets, analytics, A/B testing tools) load conditionally. They may not fire on the homepage but fire on every product page. Single-page tests miss this.

Content weight. A page with three images and 500 words performs differently from a page with 40 images and 5,000 words, even if they share a template. Bulk testing shows you which specific pages have content-driven performance problems.

The "average vs worst" trap. If 90% of your pages score above 80 and 10% score below 30, your average is fine but your tail is terrible. Those slow pages may be the ones with the most organic traffic. Site-wide testing exposes the tail.
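
To make the trap concrete, here's a minimal sketch (with made-up scores) of how a healthy-looking average can hide a bad tail:

```python
from statistics import mean

# Hypothetical per-page Lighthouse performance scores:
# 90% of pages score well, 10% score badly.
scores = [85] * 90 + [25] * 10

average = mean(scores)                      # looks healthy
slow_pages = [s for s in scores if s < 50]  # the tail that hurts

print(f"average: {average:.0f}")            # 79
print(f"pages under 50: {len(slow_pages)}") # 10
```

An average of 79 reads as "mostly fine," yet one page in ten is badly broken — and those ten may be the pages your organic traffic lands on.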

Method 1: Manual Lighthouse (single page)

Best for: Quick spot checks. Debugging a specific page. Sanity-checking after a deployment.

Open Chrome DevTools → Lighthouse tab → select categories → run. Or use PageSpeed Insights for a web-based version that includes both lab data and field data (CrUX).
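
If you want scripted spot checks, PageSpeed Insights also exposes its results as JSON via a public API. The sketch below only shows parsing the relevant field of a v5-style response — the payload here is an illustrative stub, not real API output:

```python
# Minimal parser for a PageSpeed Insights v5-style response.
# `payload` is an illustrative stub, not a real API response.
payload = {
    "lighthouseResult": {
        "categories": {"performance": {"score": 0.62}}
    }
}

def performance_score(psi_json: dict) -> int:
    """Convert the 0-1 category score to the familiar 0-100 scale."""
    raw = psi_json["lighthouseResult"]["categories"]["performance"]["score"]
    return round(raw * 100)

print(performance_score(payload))  # 62
```

The same parsing works whether the JSON comes from the API or from a saved Lighthouse report, since both nest category scores on a 0–1 scale.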

Limitations. One page at a time. No historical comparison. No way to see distribution across the site. Results vary between runs due to network conditions, CPU load, and Chrome extension interference.

When to use it. After fixing a specific performance issue, to verify the fix worked on that page. Never use it as your only performance measurement.

Method 2: Unlighthouse CLI

Best for: Developers comfortable with the command line. One-time audits of sites you control. CI/CD integration.

Unlighthouse is an open-source CLI tool that crawls a site and runs Lighthouse on every discovered page. It generates an HTML report with scores sorted by page.

# Install and run
npx unlighthouse --site https://example.com

The output is a local HTML report with per-page scores, filterable by category (performance, accessibility, SEO, best practices).

Strengths. Free. Open source. Runs locally — no data leaves your machine. Generates a comprehensive report in one command. Supports custom Lighthouse configurations.

Limitations. Requires Node.js. Runs on your machine, so results depend on your CPU and network. No persistent history — each run generates a new report with no comparison to previous runs. No visual sitemap overlay. For large sites (1,000+ pages), runs can take hours.

When to use it. When you need a one-time, comprehensive performance snapshot and you're comfortable with the terminal.

Method 3: SaaS platforms (continuous monitoring)

Best for: Ongoing performance monitoring. Teams that need historical trends. Agencies managing multiple client sites.

Several SaaS tools run Lighthouse at scale with persistent history, alerts, and dashboards:

| Tool | What it does well | Limitation |
| --- | --- | --- |
| DebugBear | Detailed CWV tracking, real user monitoring, CI integration | Pricing scales per page count |
| Calibre | Performance budgets, custom metrics, team workflows | Steeper learning curve |
| Treo | CrUX data visualization, competitive benchmarking | Focused on field data, less lab data |
| Evergreen | Site-wide Lighthouse with visual sitemap overlay, audit table integration, shareable reports | Newer platform |

The advantage of SaaS is persistence. You see how scores change over time, correlate performance drops with deployments, and set alerts for regressions. The disadvantage is cost, though for agencies the time saved on manual testing and report generation typically covers the subscription.

How to interpret site-wide Lighthouse results

Raw scores are not the end goal. Here's what to actually do with the data once you have it.

Sort by traffic, not by score

A page with a Lighthouse score of 30 and zero traffic is a low priority. A page with a score of 65 and 10,000 monthly sessions is urgent. Sort your results by traffic first, then filter to pages below your performance threshold.
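
One way to implement that sort, assuming you've already exported per-URL scores and monthly sessions into two dicts (all names and numbers here are illustrative):

```python
# Illustrative data: Lighthouse performance scores and monthly sessions per URL.
scores = {"/": 95, "/blog/old-post": 30, "/pricing": 65, "/about": 40}
sessions = {"/": 12000, "/blog/old-post": 0, "/pricing": 10000, "/about": 50}

THRESHOLD = 70  # pages scoring below this need attention

# Filter to slow pages, then sort by traffic so high-impact pages come first.
priorities = sorted(
    (url for url, s in scores.items() if s < THRESHOLD),
    key=lambda url: sessions.get(url, 0),
    reverse=True,
)
print(priorities)  # ['/pricing', '/about', '/blog/old-post']
```

Note that `/pricing` outranks `/blog/old-post` even though its score is twice as high — traffic, not raw score, drives the ordering.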

Identify template-level patterns

If every blog post scores below 50 but every product page scores above 80, the problem is the blog post template, not individual blog posts. Fix the template and every page using it improves. Template-level fixes have the highest leverage.

Separate what you control from what you don't

Third-party scripts (analytics, chat widgets, tag managers) are often the biggest performance offenders. You may not be able to remove them, but you can:

  • Load them asynchronously
  • Defer non-critical scripts until after interaction
  • Use requestIdleCallback for low-priority initialization
  • Consider lighter alternatives (Plausible instead of GA4, for example)

Track Core Web Vitals specifically

Of the four Lighthouse categories (Performance, Accessibility, SEO, Best Practices), the Performance score is the most actionable because it maps to Google's Core Web Vitals ranking signals. Focus on:

  • LCP (Largest Contentful Paint): Under 2.5 seconds. Usually an image optimization or server response time problem.
  • INP (Interaction to Next Paint): Under 200 milliseconds. Usually a JavaScript execution problem.
  • CLS (Cumulative Layout Shift): Under 0.1. Usually a missing dimension attribute or dynamic content injection problem.
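
The three thresholds above can be encoded directly as a quick pass/fail check — a small sketch, with illustrative metric names and values:

```python
# Core Web Vitals "good" thresholds from the list above.
THRESHOLDS = {"lcp_s": 2.5, "inp_ms": 200, "cls": 0.1}

def failing_vitals(metrics: dict) -> list[str]:
    """Return the names of metrics that exceed their threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0) > limit]

page = {"lcp_s": 3.1, "inp_ms": 140, "cls": 0.24}  # illustrative lab values
print(failing_vitals(page))  # ['lcp_s', 'cls']
```

Running a check like this over every page in a bulk audit turns raw scores into a concrete fix list per metric.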

Watch for score variance

Lighthouse scores fluctuate between runs. A score of 72 on one run and 68 on the next is noise, not signal. Look at trends over three or more runs before drawing conclusions. If a page consistently scores below your threshold across multiple runs, it's a real problem.
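
A simple way to dampen that run-to-run noise is to test each page several times and keep the median rather than any single score:

```python
from statistics import median

# Scores for one page across five runs (illustrative numbers).
runs = [72, 68, 70, 74, 69]

# The median is less sensitive to a single outlier run than the mean.
stable_score = median(runs)
print(stable_score)  # 70
```

Comparing medians across audit cycles gives you a trend you can trust, where comparing two individual runs mostly measures noise.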

A practical workflow for bulk Lighthouse auditing

Here's the workflow that produces the most actionable results with the least wasted effort:

Step 1 — Crawl and score. Run a full-site Lighthouse audit. Record scores for every page.

Step 2 — Sort by impact. Cross-reference performance scores with traffic data (GA4 sessions or Search Console clicks). Sort by low score + high traffic — these are your highest-impact fixes.

Step 3 — Diagnose at the template level. Group pages by template type. If a template is consistently slow, fix the template rather than individual pages.
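
A rough way to automate that grouping is to bucket URLs by their first path segment as a proxy for template, then average the scores per bucket (URLs and scores here are illustrative):

```python
from collections import defaultdict
from statistics import mean
from urllib.parse import urlparse

scores = {  # illustrative per-page performance scores
    "https://example.com/blog/a": 42,
    "https://example.com/blog/b": 48,
    "https://example.com/products/x": 88,
    "https://example.com/products/y": 84,
}

# Group by the first path segment as a rough template proxy.
groups = defaultdict(list)
for url, score in scores.items():
    segment = urlparse(url).path.strip("/").split("/")[0] or "(root)"
    groups[segment].append(score)

for template, vals in sorted(groups.items()):
    print(f"{template}: avg {mean(vals):.0f} across {len(vals)} pages")
```

If one group averages far below the rest — here, everything under /blog/ — the shared template is the first place to look.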

Step 4 — Fix and re-test. After implementing fixes, re-run Lighthouse on the affected pages. Compare against the previous scores to verify improvement.

Step 5 — Set up monitoring. Schedule automated re-testing on a daily or weekly cadence. Set alert thresholds so you catch regressions before they affect rankings.

How Evergreen handles bulk Lighthouse testing

Evergreen runs Lighthouse across your entire site as part of its standard crawl. Every page gets a performance score, and the results appear in the audit table alongside your other SEO data — metadata, indexability, internal links, content metrics.

The audit table lets you sort and filter Lighthouse results the same way you filter any other audit data. Sort by worst-performing pages, filter to pages with traffic above a threshold, or isolate specific CWV metrics.

The visual sitemap overlays performance data onto your site structure, so you can see at a glance which sections of your site are fast and which are slow. This is particularly useful for identifying template-level patterns — if an entire branch of the sitemap is red, the shared template is the likely cause.

Shareable report URLs include Lighthouse data, so you can send performance results to clients or developers without exporting CSVs or building slide decks.

On the Pro plan, daily syncs re-run Lighthouse automatically, so you see performance trends over time without manual re-testing.

Run site-wide Lighthouse in Evergreen → Start free

Frequently asked questions

How long does a bulk Lighthouse test take?

It depends on the site size and the tool. Unlighthouse on a 200-page site typically takes 15–30 minutes on a modern machine. SaaS tools vary, but most complete a 500-page site within an hour. Evergreen runs Lighthouse as part of its crawl, so results are available when the crawl finishes.

Are bulk Lighthouse scores different from individual test scores?

The methodology is identical — the same Lighthouse engine runs on each page. The difference is operational: bulk testing reveals the distribution across your site, which single-page testing cannot. Individual scores may vary slightly between runs due to environmental factors, but the relative ranking of pages stays consistent.

Should I aim for a perfect 100 score on every page?

No. A perfect score is possible on a static HTML page with no third-party scripts, but rare on a real production site. The goal is to meet Core Web Vitals thresholds (LCP < 2.5s, INP < 200ms, CLS < 0.1) on the pages that matter most — the ones receiving traffic and generating conversions. Chasing 100 on every page yields diminishing returns.

How often should I re-run site-wide Lighthouse tests?

For actively maintained sites: weekly at minimum, daily if your deployment cadence is frequent. For stable sites with infrequent changes: monthly is sufficient. The real answer is "after every deployment that touches templates, scripts, or infrastructure" — and automated monitoring handles that.

Your next step: see your site-wide performance data → Create free account

Related Topics in The Technical SEO Audit Guide

The Technical SEO Checklist for 2026

A practical technical SEO checklist covering crawlability, indexation, Core Web Vitals, structured data, JavaScript rendering, and AI search visibility — updated for 2026.

How to Find and Fix All Broken Links on Your Site

A practical guide to finding, prioritizing, and fixing broken links across your website to improve user experience and SEO performance.

The Complete Website Audit Checklist for Agencies (2026)

A 25-point website audit checklist built for agencies managing multiple client sites. Covers structure, content, performance, and reporting workflows.

Next.js SEO Audit Checklist for 2026

An auditor's checklist for Next.js 14+ sites built on the App Router. Covers metadata, rendering strategies, dynamic routes, and the technical pitfalls that don't show up in generic SEO guides.

How to Find Noindex Pages Blocking Your Rankings

Accidental noindex tags silently remove pages from Google. Here's how to find every noindex directive on your site — and tell the intentional ones from the mistakes.

Technical SEO Audit Guide for Headless Websites

Headless websites separate content from presentation, and that separation introduces SEO audit challenges that monolithic sites don't have. This guide covers the methodology for auditing any headless stack.

The Comprehensive Astro SEO Checklist

Astro ships fast HTML by default, but fast isn't the same as optimized. This checklist covers every SEO consideration specific to Astro 4.x+ — from Islands to View Transitions to content collections.

Lighthouse Score for Your Entire Site: Tools and Methods

Lighthouse tests one page at a time. Here are five ways to get scores for every page on your site — from free CLI tools to SaaS dashboards — and when each approach makes sense.

Automated SEO Monitoring: Set Up Daily Site Audits

One-off audits find problems after they've already cost you traffic. Continuous monitoring finds them as they happen. Here's how to set up daily automated SEO monitoring that catches regressions before rankings suffer.

Shareable SEO Reports: How to Send Audits Clients Actually Read

Most SEO reports are PDFs that clients download, glance at, and forget. Shareable URL-based reports stay current, require no login, and get acted on. Here's why and how.

JavaScript Rendering Audit Checklist

A checklist for auditing JavaScript-rendered pages: crawl accessibility, metadata after render, lazy-loaded content, and the tools to verify what Google actually sees.