Lighthouse Score for Your Entire Site: Tools and Methods
Lighthouse tests one page at a time. Here are five ways to get scores for every page on your site — from free CLI tools to SaaS dashboards — and when each approach makes sense.
Lighthouse scores one page per run. If you want scores for every page on a 500-page site, you need tooling that automates the process — crawling the site, running Lighthouse on each URL, and aggregating the results into something you can actually act on.
This guide compares five approaches to getting site-wide Lighthouse scores, from manual one-offs to automated SaaS. If you've already read our guide on how to run bulk Lighthouse tests, this is the companion piece — that covers the methodology, this covers the tooling.
Why one-page-at-a-time doesn't work
You already know Lighthouse exists. You've probably run it dozens of times via Chrome DevTools or PageSpeed Insights. The problem isn't running it — it's running it at scale.
A site with four page templates has four distinct performance profiles. The homepage is nearly always the fastest because it receives the most optimization attention. Blog posts with embedded videos, image galleries, and third-party comment widgets are nearly always the slowest. Testing only the homepage tells you how the homepage performs, nothing more.
The useful metric isn't "what is our Lighthouse score" — it's "what is the distribution of Lighthouse scores across every page, weighted by traffic." That requires site-wide testing, which means tooling.
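The weighting point can be made concrete with a small sketch. The CSV below is invented data in the form `path,score,sessions` — on real sites you'd export scores from your audit tool and sessions from analytics:

```shell
# Invented example data: path, Lighthouse performance score, monthly sessions.
cat > pages.csv <<'EOF'
/,95,1000
/blog/old-post,42,5000
/about,88,500
EOF

# Unweighted mean: (95 + 42 + 88) / 3 = 75 -- looks healthy.
awk -F, '{s += $2; n++} END {printf "unweighted: %.1f\n", s / n}' pages.csv

# Traffic-weighted mean: most sessions land on the slow page,
# so the real user experience is much worse than the plain average suggests.
awk -F, '{s += $2 * $3; t += $3} END {printf "weighted: %.1f\n", s / t}' pages.csv
```

Here the unweighted mean is 75, but the traffic-weighted mean drops to 53.7 because the slowest page receives the most visits — exactly the kind of gap single-page testing hides.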
Tool 1: Manual Lighthouse (Chrome DevTools and PageSpeed Insights)
What it is. The built-in Lighthouse auditor in Chrome DevTools, or the web-based PageSpeed Insights interface. One URL at a time.
How to use it. Open DevTools → Lighthouse tab → select categories → run audit. Or paste a URL into PageSpeed Insights. Both use the same Lighthouse engine. PageSpeed Insights additionally includes CrUX (Chrome User Experience Report) field data when available.
Strengths. Free. No setup. Immediate results. PageSpeed Insights includes real-user data alongside lab data. Good for verifying a fix on a specific page.
Limitations. One page at a time. No persistent history. No way to compare across pages or track trends. Results fluctuate between runs due to CPU load, network conditions, and browser extension interference. Functionally useless for site-wide assessment.
Best for. Spot-checking a single page after deploying a fix. Debugging a specific performance issue. Not for site-wide scoring.
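PageSpeed Insights also has a scriptable API, which is handy when a one-off check needs to live in a script rather than a browser tab. The endpoint and JSON path below are the documented v5 ones, but the URL tested is a placeholder, and the second half of the sketch runs offline against a sample-shaped response:

```shell
# Real call (needs network; URL is a placeholder):
#   curl -s "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://example.com&category=performance" > psi.json
#
# The performance score is a 0-1 fraction at
# .lighthouseResult.categories.performance.score. Offline illustration of
# the extraction, using a sample-shaped response and Python's stdlib:
cat > psi.json <<'EOF'
{"lighthouseResult": {"categories": {"performance": {"score": 0.42}}}}
EOF
python3 -c 'import json; d = json.load(open("psi.json")); print(round(d["lighthouseResult"]["categories"]["performance"]["score"] * 100))'
```

The multiply-by-100 converts the API's fraction into the familiar 0–100 score.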
Tool 2: Unlighthouse (open-source CLI)
What it is. Unlighthouse is an open-source Node.js CLI that crawls a site and runs Lighthouse on every discovered URL. It outputs an HTML report with per-page scores.
How to use it.
```shell
npx unlighthouse --site https://example.com
```
The report opens in your browser with scores for every page, filterable by Lighthouse category (Performance, Accessibility, SEO, Best Practices).
Strengths. Free and open source. Runs entirely on your machine — no data leaves your environment. Supports custom Lighthouse configurations. One command produces a comprehensive report. Good CI/CD integration story.
Limitations. Requires Node.js. Performance depends on your machine's CPU and network speed — a 1,000-page site can take hours. No persistent history between runs. No visual sitemap overlay. Can struggle with JavaScript-rendered sites that require authentication.
Best for. Developers comfortable with the terminal who need a one-time, comprehensive audit. CI/CD pipelines that gate deployments on performance budgets.
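For the CI/CD case, Unlighthouse ships a companion CI binary that can fail a build when scores drop below a budget — something like `npx unlighthouse-ci --site https://example.com --budget 75` (flag names from memory; check the current docs). The gating idea itself is simple, shown here hand-rolled over an invented `scores.csv` of `path,performance-score` pairs:

```shell
# Invented scores for illustration: path, performance score.
cat > scores.csv <<'EOF'
/,92
/blog/old-post,42
/about,88
EOF

# List every page under the budget threshold of 75.
below=$(awk -F, '$2 < 75 {print $1}' scores.csv)
if [ -n "$below" ]; then
  echo "Pages below budget:"
  echo "$below"
  # exit 1   # uncomment in CI to fail the build on a breach
fi
```

In a real pipeline the CSV would come from the tool's report output, and the `exit 1` would gate the deployment.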
Tool 3: lighthouse-batch (lightweight CLI)
What it is. A simpler CLI tool that runs Lighthouse against a list of URLs you provide, rather than crawling to discover them. Less automated than Unlighthouse, but more controllable.
How to use it.
```shell
npx lighthouse-batch -s https://example.com/,https://example.com/blog/,https://example.com/about/
```
Or pass a file of URLs:
```shell
npx lighthouse-batch -f urls.txt
```
Strengths. Precise control over which pages get tested. Faster than crawl-based tools when you only care about a subset of pages. Lightweight.
Limitations. Requires you to maintain a URL list manually. No crawl discovery — if you don't list a URL, it doesn't get tested. No built-in report visualization.
Best for. Testing a specific set of high-priority pages (top 50 by traffic, key landing pages, template representatives) rather than the entire site.
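One way to soften the manual-list limitation: generate `urls.txt` from your sitemap instead of maintaining it by hand. The sitemap below is a local sample — in practice you'd fetch the real one first, e.g. `curl -s https://example.com/sitemap.xml > sitemap.xml`:

```shell
# Local sample standing in for a fetched sitemap.
cat > sitemap.xml <<'EOF'
<?xml version="1.0"?>
<urlset>
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/blog/</loc></url>
</urlset>
EOF

# Pull every <loc> value into urls.txt, one URL per line.
grep -o '<loc>[^<]*</loc>' sitemap.xml \
  | sed -e 's/<loc>//' -e 's|</loc>||' > urls.txt
cat urls.txt
# npx lighthouse-batch -f urls.txt   # then run the batch against the list
```

A quick filter step (e.g. `grep '/blog/' urls.txt`) turns the full sitemap into the curated subset this tool is best at.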
Tool 4: SaaS performance monitoring platforms
What they are. Paid services that run Lighthouse at scale with persistent history, dashboards, alerts, and team collaboration. The notable options:
| Platform | Standout feature | Starting price |
|---|---|---|
| DebugBear | Real-user monitoring + lab testing combined, CI integration | From ~$40/mo |
| Calibre | Performance budgets, custom metrics, team workflows | From ~$50/mo |
| SpeedCurve | Competitive benchmarking, filmstrip comparisons | From ~$55/mo |
| Treo | CrUX data visualization, competitive field data analysis | Free tier available |
Strengths. Persistent history lets you track trends over weeks and months. Alerting notifies you of regressions before they affect rankings. Dashboards are shareable with non-technical stakeholders. Real-user monitoring (where available) complements lab data.
Limitations. Cost scales with page count and test frequency. Each platform has its own learning curve. You're sending your URLs to a third party. Most focus on performance specifically — they don't combine Lighthouse data with SEO audit data, content metrics, or site structure.
Best for. Teams that need ongoing performance monitoring with historical trends, deployment regression detection, and stakeholder-friendly reporting.
Tool 5: Integrated audit platforms
What they are. Tools that run Lighthouse as part of a broader site audit — combining performance data with SEO attributes, content metrics, and site structure in a single view. Evergreen falls in this category.
How they differ from dedicated performance tools. A dedicated performance platform tells you that /blog/old-post has a Lighthouse score of 42. An integrated audit platform tells you that /blog/old-post has a score of 42, a missing meta description, 3 internal links, 200 words of content, and 0 organic sessions in the last 30 days. The performance data gains context.
Strengths. You can sort and filter Lighthouse scores alongside other audit data. "Show me pages scoring below 50 that have more than 100 organic sessions" is a single filter. The visual sitemap overlay shows which site sections are slow at a glance. Shareable reports include performance data alongside everything else.
Limitations. Less granular performance diagnostics than a dedicated tool like DebugBear. No real-user monitoring (lab data only). You're trading depth of performance analysis for breadth of site intelligence.
Best for. Teams running content audits, technical SEO audits, or agency client reports that need performance data in context rather than in isolation. Teams that want to answer "which slow pages actually matter?" rather than "which pages are slow?"
How to choose the right approach
The decision depends on three factors: what you need the data for, how often you need it, and how many people will consume it.
| Need | Best tool |
|---|---|
| Verify a fix on one page | Manual Lighthouse / PageSpeed Insights |
| One-time full-site audit (developer) | Unlighthouse CLI |
| Test a curated list of priority pages | lighthouse-batch |
| Ongoing performance monitoring with trends | DebugBear, Calibre, or SpeedCurve |
| Performance data combined with SEO audit data | Evergreen or similar integrated platform |
| CI/CD performance gating | Unlighthouse or DebugBear CI integration |
| Agency client reporting | Integrated platform with shareable reports |
For most teams doing regular website audits, the integrated approach saves the most time — you don't need to cross-reference performance data from one tool with SEO data from another. For teams with dedicated performance engineering functions, a specialized platform like DebugBear or Calibre provides deeper diagnostics.
How Evergreen handles site-wide Lighthouse scoring
Evergreen runs Lighthouse across every page during its standard crawl. The results appear in the audit table as sortable, filterable columns — Performance score, LCP, INP, CLS, plus the Accessibility, SEO, and Best Practices category scores.
You can filter to pages below your performance threshold, sort by organic traffic to prioritize fixes by impact, and group by URL directory to identify template-level patterns. The visual sitemap shows performance distribution across your site structure — red branches indicate slow sections worth investigating at the template level.
Shareable report URLs include Lighthouse data alongside all other audit metrics, so client-facing performance reports don't require a separate tool or a slide deck.
On the Pro plan, daily syncs re-run Lighthouse automatically. You see score trends over time and catch regressions within 24 hours of deployment.
Site-wide scores in one dashboard → Start free
Frequently asked questions
Do all these tools use the same Lighthouse engine?
Yes. Unlighthouse, lighthouse-batch, DebugBear, Calibre, and Evergreen all run the same open-source Lighthouse engine maintained by Google. Scores may differ slightly between tools due to environmental factors (CPU, network, geographic location of the test runner), but the methodology is identical.
Should I use lab data or field data for site-wide scoring?
Both, but for different purposes. Lab data (what Lighthouse produces) is consistent and testable — good for debugging and comparing pages. Field data (CrUX, available via PageSpeed Insights and some SaaS tools) reflects real user experience — good for understanding actual performance. If you can only pick one for site-wide analysis, start with lab data because it's available for every page, while field data requires sufficient real traffic.
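When you do have traffic, field data is also scriptable via the CrUX API. The endpoint is the documented one, but the API key is a placeholder and the response below is only sample-shaped; the extraction runs offline:

```shell
# Real call (needs an API key; key is a placeholder):
#   curl -s -X POST \
#     "https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_KEY" \
#     -d '{"url": "https://example.com/"}'
#
# Offline: extract the 75th-percentile LCP (in ms) from a sample-shaped
# response using Python's stdlib.
cat > crux.json <<'EOF'
{"record": {"metrics": {"largest_contentful_paint": {"percentiles": {"p75": 2700}}}}}
EOF
python3 -c 'import json; d = json.load(open("crux.json")); print(d["record"]["metrics"]["largest_contentful_paint"]["percentiles"]["p75"])'
```

The p75 value is what Google's Core Web Vitals assessment uses, which is why it's the number worth pulling.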
How often should I re-run site-wide Lighthouse tests?
At minimum, after every deployment that touches templates, scripts, or infrastructure. For actively maintained sites, weekly or daily automated runs catch regressions before they compound. For the full methodology on interpreting and acting on the results, see the bulk Lighthouse testing guide.
Can Lighthouse scores affect my Google rankings?
Core Web Vitals — LCP, INP, and CLS — are confirmed Google ranking signals, though they're a lightweight factor compared to content relevance and backlinks. A page with great content and mediocre CWV will outrank a page with perfect CWV and thin content. But among pages with comparable content quality, CWV can be the tiebreaker. The Next.js SEO audit checklist covers framework-specific performance considerations.
Your next step: see your site-wide Lighthouse data in context → Create free account
Related Topics in The Technical SEO Audit Guide
The Technical SEO Checklist for 2026
A practical technical SEO checklist covering crawlability, indexation, Core Web Vitals, structured data, JavaScript rendering, and AI search visibility — updated for 2026.
How to Find and Fix All Broken Links on Your Site
A practical guide to finding, prioritizing, and fixing broken links across your website to improve user experience and SEO performance.
The Complete Website Audit Checklist for Agencies (2026)
A 25-point website audit checklist built for agencies managing multiple client sites. Covers structure, content, performance, and reporting workflows.
How to Run a Bulk Lighthouse Test on Your Entire Site
Stop testing one page at a time. Run Lighthouse across your entire site to find the pages dragging down performance — and fix them systematically.
Next.js SEO Audit Checklist for 2026
An auditor's checklist for Next.js 14+ sites built on the App Router. Covers metadata, rendering strategies, dynamic routes, and the technical pitfalls that don't show up in generic SEO guides.
How to Find Noindex Pages Blocking Your Rankings
Accidental noindex tags silently remove pages from Google. Here's how to find every noindex directive on your site — and tell the intentional ones from the mistakes.
Technical SEO Audit Guide for Headless Websites
Headless websites separate content from presentation, and that separation introduces SEO audit challenges that monolithic sites don't have. This guide covers the methodology for auditing any headless stack.
The Comprehensive Astro SEO Checklist
Astro ships fast HTML by default, but fast isn't the same as optimized. This checklist covers every SEO consideration specific to Astro 4.x+ — from Islands to View Transitions to content collections.
Automated SEO Monitoring: Set Up Daily Site Audits
One-off audits find problems after they've already cost you traffic. Continuous monitoring finds them as they happen. Here's how to set up daily automated SEO monitoring that catches regressions before rankings suffer.
Shareable SEO Reports: How to Send Audits Clients Actually Read
Most SEO reports are PDFs that clients download, glance at, and forget. Shareable URL-based reports stay current, require no login, and get acted on. Here's why and how.
JavaScript Rendering Audit Checklist
A checklist for auditing JavaScript-rendered pages: crawl accessibility, metadata after render, lazy-loaded content, and the tools to verify what Google actually sees.
