JavaScript Rendering Audit Checklist
A checklist for auditing JavaScript-rendered pages: crawl accessibility, metadata after render, lazy-loaded content, and the tools to verify what Google actually sees.
JavaScript rendering is not an SEO problem. JavaScript rendering that fails silently is an SEO problem. The distinction matters because "JavaScript is bad for SEO" is a myth that leads teams to avoid modern frameworks entirely, while "JavaScript rendering just works" is a myth that leads teams to ship client-rendered pages without verifying that search engines can actually see the content.
The reality is mechanical: Google renders JavaScript using a headless Chromium instance. It can see most JavaScript-generated content. But the rendering is asynchronous, resource-constrained, and subject to failure modes that don't exist for server-rendered HTML. This checklist covers what to verify, how to verify it, and what to fix when things break.
If you've already read the SSR vs CSR vs ISR rendering guide, you understand the rendering strategy choices. This checklist is the operational follow-through — the specific checks to run on pages that depend on JavaScript for content rendering.
For the broader technical SEO context, the Technical SEO Audit guide covers the full audit methodology.
Prerequisites
- Access to the site's rendered pages (both with and without JavaScript)
- Chrome DevTools or a Chromium-based browser
- Google Search Console access (for URL Inspection tool)
- An Evergreen account for site-wide crawl comparison (free tier: 500 pages)
- Optional: Puppeteer or Playwright for automated rendering checks
The checklist
1. Compare initial HTML vs rendered DOM
What to check: Fetch the page's raw HTML response (before JavaScript executes) and compare it to the fully-rendered DOM (after JavaScript executes). Any content that appears only after JavaScript execution depends on Google's rendering pipeline — which is asynchronous and not guaranteed.
How to check:
# Fetch raw HTML (no JavaScript execution)
curl -s https://example.com/page | grep -i "<title>\|<meta name=\"description\"\|<h1>"
Then open the same URL in Chrome, wait for the page to fully load, and check the rendered DOM via DevTools (Elements panel). Compare the title tag, meta description, H1, and primary body content.
What you're looking for: Content that exists in the rendered DOM but not in the initial HTML. This content depends on client-side JavaScript. If it's critical for SEO (title, description, H1, primary content), it should be moved to the server-rendered response.
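The manual comparison above can be scripted. A minimal sketch in plain Node (the function names and sample inputs are illustrative, not from any library): it extracts the SEO-critical tags from two HTML strings, one fetched with curl and one saved from the rendered DOM, and reports which exist only after rendering.

```javascript
// Sketch: pull SEO-critical tags out of an HTML string with simple
// regexes. Good enough for an audit pass, not a full HTML parser.
function extractSeoTags(html) {
  const get = (re) => {
    const m = html.match(re)
    return m ? m[1].trim() : null
  }
  return {
    title: get(/<title[^>]*>([^<]*)<\/title>/i),
    description: get(/<meta\s+name="description"\s+content="([^"]*)"/i),
    h1: get(/<h1[^>]*>([^<]*)<\/h1>/i),
  }
}

// Report tags that exist only in the rendered DOM, i.e. tags that
// depend on Google's render phase.
function findRenderingDependencies(rawHtml, renderedHtml) {
  const raw = extractSeoTags(rawHtml)
  const rendered = extractSeoTags(renderedHtml)
  return Object.keys(rendered).filter((key) => rendered[key] && !raw[key])
}
```

Anything this reports (for example, a title present only in the rendered DOM) is content Google can only see if the render phase succeeds.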
The Evergreen shortcut: The content audit table captures the initial HTML response for every crawled page. If the audit shows missing titles or empty H1s on pages that look fine in a browser, you've found a rendering dependency.
2. Check metadata in the initial HTML response
What to check: Title tags, meta descriptions, canonical tags, Open Graph tags, and structured data (JSON-LD) should all be present in the initial HTML response, not injected by JavaScript.
Why this matters: Google processes metadata from the initial HTML during the crawl phase. Metadata injected by JavaScript is only visible during the render phase, which happens later and sometimes not at all. Pages that rely on JavaScript to inject their <title> tag may appear in search results with missing or incorrect titles until (or unless) Google renders them.
Common framework-specific issues:
- React (client-side rendered): react-helmet or react-helmet-async injects metadata client-side. Google may or may not render it. Use server-side rendering for metadata.
- Next.js App Router: generateMetadata runs on the server — metadata is in the initial HTML. But useEffect-based metadata updates are client-side only.
- Vue/Nuxt: useHead with @unhead/vue generates server-side metadata in SSR mode. In SPA mode, metadata is client-side.
- Astro: Metadata is static HTML by default. No rendering dependency unless you're using client-side islands for metadata (which you shouldn't).
The test: Disable JavaScript in your browser (chrome://settings/content/javascript) and reload the page. If the title bar shows "React App" or an empty title, the metadata is JavaScript-dependent. The Next.js SEO audit checklist covers framework-specific metadata patterns.
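The disabled-JavaScript test can also be run against the raw response in code. A sketch that flags boilerplate titles in the initial HTML; the placeholder list is an assumption ("React App" is the Create React App scaffold default, "Vite App" the Vite default), so extend it for your stack.

```javascript
// Sketch: flag raw HTML whose <title> is missing, empty, or still a
// framework scaffold default. PLACEHOLDER_TITLES is an assumed list.
const PLACEHOLDER_TITLES = ["react app", "vite app", "vue app"]

function hasPlaceholderTitle(rawHtml) {
  const match = rawHtml.match(/<title[^>]*>([^<]*)<\/title>/i)
  if (!match) return true // no title tag in the initial HTML at all
  const title = match[1].trim().toLowerCase()
  return title === "" || PLACEHOLDER_TITLES.includes(title)
}
```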
3. Verify structured data renders in initial HTML
What to check: JSON-LD structured data should be embedded in a <script type="application/ld+json"> tag in the initial HTML response, not generated by JavaScript at runtime.
How to check: View the page source (not the rendered DOM — the actual source). Search for application/ld+json. If it's present, structured data is server-rendered. If it's absent in the source but present in the rendered DOM, it's JavaScript-dependent.
Verify with Google: Use the Rich Results Test to validate structured data. This tool renders JavaScript, so it will show you what Google sees after rendering — but compare the results with the source HTML to know if there's a rendering dependency.
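This source-vs-rendered comparison can be scripted too. A sketch (helper name is illustrative) that pulls JSON-LD blocks out of an HTML string and parses them, so you can run it against the raw curl response and the rendered DOM separately:

```javascript
// Sketch: extract and parse all JSON-LD blocks from an HTML string.
// An empty result on the raw HTML but not on the rendered DOM means
// structured data is JavaScript-dependent.
function extractJsonLd(html) {
  const re = /<script[^>]*type="application\/ld\+json"[^>]*>([\s\S]*?)<\/script>/gi
  const blocks = []
  let match
  while ((match = re.exec(html)) !== null) {
    try {
      blocks.push(JSON.parse(match[1]))
    } catch (e) {
      // Malformed JSON-LD is itself an audit finding.
      blocks.push({ parseError: true, raw: match[1].trim() })
    }
  }
  return blocks
}
```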
4. Audit lazy-loaded content
What to check: Content that loads when the user scrolls (lazy loading, infinite scroll, "load more" buttons) is not visible to Googlebot unless it's in the initial HTML or triggered automatically without user interaction.
The critical distinction:
- Image lazy loading (loading="lazy") is supported by Googlebot and is fine for SEO. Google's crawler triggers lazy-loaded images.
- Content lazy loading (loading additional text, sections, or pages on scroll or click) is not triggered by Googlebot. Content behind "Read more" buttons, collapsed accordion panels, tabbed interfaces, or infinite scroll is invisible to crawlers unless the HTML is present in the initial response.
What to check for:
- Infinite scroll: Does the initial HTML contain only the first batch of items? If so, items beyond the first batch are invisible to search engines. Implement paginated archive pages (/blog/page/2/) as a fallback for infinite scroll.
- Accordion content: Is the text inside collapsed panels present in the HTML (just visually hidden) or dynamically loaded when expanded? Hidden-but-present is fine. Dynamically loaded is not.
- Tab navigation: Same test as accordions. Content in non-active tabs should be in the HTML, just not displayed.
- "Load more" buttons: Content behind these is invisible to Googlebot. Provide a paginated alternative.
5. Check for client-side routing and navigation issues
What to check: Single-page applications (SPAs) that use client-side routing (React Router, Vue Router, Next.js client navigation) change the URL in the browser without making a new server request. This can create issues if the initial HTML response for each route is different from what the client-side router renders.
The test: Copy a URL from inside the SPA and paste it into a new browser window (or use curl). Does the server return the correct page content for that URL? Or does it return a generic shell that the client-side router then interprets?
If every URL returns the same HTML shell and relies on client-side JavaScript to render the correct page content, every page depends on JavaScript rendering for search engines.
The fix: Ensure your server returns route-specific HTML for every URL. This is the default behavior in SSR frameworks (Next.js, Nuxt, Remix). In pure client-side SPAs, it requires either SSR/SSG or pre-rendering specific routes.
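A rough way to script the shell test: fetch two different routes with curl and compare the responses. If the server returns essentially identical HTML for distinct URLs, every route depends on client-side rendering. A sketch, assuming the two HTML strings were fetched beforehand:

```javascript
// Sketch: two distinct routes returning essentially the same HTML
// usually indicates an SPA shell. Whitespace is normalized so trivial
// formatting differences don't mask the problem.
function looksLikeSpaShell(htmlRouteA, htmlRouteB) {
  const normalize = (html) => html.replace(/\s+/g, " ").trim()
  return normalize(htmlRouteA) === normalize(htmlRouteB)
}
```

In practice you would compare three or four routes, since two pages can legitimately share a template; identical responses across every sampled route is the strong signal.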
6. Check for JavaScript errors that block rendering
What to check: JavaScript errors during page load can prevent content from rendering. Google's renderer abandons rendering on fatal errors, which means the page content never appears.
How to check:
- Open Chrome DevTools Console panel
- Load the page
- Note any red error messages
- Test with throttled network (DevTools → Network → Slow 3G) to simulate Google's rendering environment
Common error categories:
- Failed API calls: If page content depends on an API that's rate-limited, geo-restricted, or requires authentication, the API call will fail during Google's render — and the content won't appear. Use server-side data fetching for content that should be indexed.
- Third-party script failures: A failing analytics script or chat widget can crash the JavaScript execution context, preventing all subsequent scripts from running — including the ones that render your content. Isolate third-party scripts from content-rendering code.
- Environment-dependent code: Code that checks window.localStorage, document.cookie, or user-agent strings may behave differently in Google's headless Chromium than in a desktop browser. Test your rendering in a headless browser (Puppeteer) to simulate Google's environment.
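The environment-dependent case is the easiest to defend against in code. A minimal sketch of a guard (the function name is illustrative) that keeps browser-only API access from throwing in headless or server contexts:

```javascript
// Sketch: guard browser-only APIs so rendering code doesn't throw in
// headless Chromium, SSR, or tests. Falls back to null instead of
// crashing the script that renders your content.
function safeLocalStorageGet(key) {
  try {
    if (typeof window !== "undefined" && window.localStorage) {
      return window.localStorage.getItem(key)
    }
  } catch (e) {
    // localStorage access can throw in some privacy modes; treat as absent.
  }
  return null
}
```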
7. Verify internal links are crawlable
What to check: Internal links generated by JavaScript (React <Link> components, Vue <router-link>, etc.) must render as standard <a href="..."> tags in the HTML. Links that use JavaScript click handlers (onclick="navigate()") or programmatic navigation (router.push()) without accompanying <a> tags are invisible to crawlers.
How to check: View page source (not the rendered DOM) and search for the URLs of pages you expect to be linked. If the <a href> tags are present in the source, links are crawlable. If they only appear after JavaScript execution, they depend on Google's rendering.
Framework-specific notes:
- Next.js <Link>: Renders as a standard <a> tag in SSR. Crawlable by default.
- React Router <Link>: In SSR mode, renders as <a>. In CSR-only mode, only exists after JavaScript execution.
- Astro <a>: Standard HTML. No JavaScript dependency.
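The source check can be scripted against the raw HTML. A sketch (regex-based, good enough for an audit pass but not a full parser) that lists every <a href> target so you can diff it against the links visible in the browser:

```javascript
// Sketch: collect every href from an HTML string. Run against the raw
// response; links missing here but visible in the browser depend on
// JavaScript rendering. Note: onclick handlers are deliberately ignored.
function extractHrefs(rawHtml) {
  const re = /<a\s[^>]*href="([^"]+)"/gi
  const hrefs = []
  let match
  while ((match = re.exec(rawHtml)) !== null) {
    hrefs.push(match[1])
  }
  return hrefs
}
```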
The technical SEO checklist covers internal link auditing in broader detail, and the find noindex pages guide covers related indexation issues.
8. Audit resource loading and render-blocking
What to check: JavaScript files required for content rendering must load successfully and not be blocked by robots.txt, Content Security Policy, or network errors.
How to check:
- Verify your robots.txt doesn't block JavaScript or CSS files. A group like "User-agent: *" followed by "Disallow: /js/" would block Google from rendering your pages.
- Check the page's network requests in DevTools. If JavaScript bundles are loaded from different domains (CDN, third-party), those domains must also not be blocked by robots.txt.
- Verify Content Security Policy headers don't prevent script execution in headless browser contexts.
The Google Search Console check: The URL Inspection tool shows a rendered screenshot and lists any resources that couldn't be loaded. If critical JavaScript files are listed as blocked, Google can't render your page.
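The robots.txt check can be sketched in code. This is a deliberately simplified matcher: it handles only prefix-matching Disallow rules inside "User-agent: *" groups, with no Allow precedence, wildcards, or $ anchors from the full robots exclusion standard, so treat it as a first-pass filter, not a compliant parser.

```javascript
// Sketch: prefix-only Disallow matching for "User-agent: *" groups.
// Intentionally incomplete relative to the robots.txt standard
// (no Allow rules, no * or $ pattern support).
function isBlockedByRobots(robotsTxt, path) {
  let appliesToAll = false
  for (const rawLine of robotsTxt.split("\n")) {
    const [field, ...rest] = rawLine.trim().split(":")
    const value = rest.join(":").trim()
    if (/^user-agent$/i.test(field)) {
      appliesToAll = value === "*"
    } else if (appliesToAll && /^disallow$/i.test(field) && value) {
      if (path.startsWith(value)) return true
    }
  }
  return false
}
```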
9. Test rendering speed and timeouts
What to check: Google's renderer has a timeout — pages that take too long to render may be indexed with incomplete content. While Google hasn't published the exact timeout, testing suggests it's approximately 5–10 seconds for the initial render.
How to check: Use Puppeteer or Playwright to measure time from page load to complete content rendering:
// Run as an ESM module so top-level await works
import puppeteer from "puppeteer"
const browser = await puppeteer.launch()
const page = await browser.newPage()
const start = Date.now()
await page.goto("https://example.com/page", {
  waitUntil: "networkidle0",
})
const renderTime = Date.now() - start
console.log(`Render time: ${renderTime}ms`)
await browser.close()
Pages that take longer than 5 seconds to render their primary content are at risk of incomplete indexation. The bulk Lighthouse testing guide covers site-wide performance measurement.
Interpreting the results
After running through the checklist, your pages fall into one of four categories:
No rendering dependency. All content, metadata, and structured data are in the initial HTML. No JavaScript execution required. This is the ideal state. No action needed.
Non-critical rendering dependency. Interactive features, analytics, or UI enhancements depend on JavaScript, but primary content and metadata are in the initial HTML. This is acceptable and normal. No SEO action needed.
Critical rendering dependency (fixable). Primary content or metadata depends on JavaScript rendering, but the fix is straightforward — move metadata generation to the server, convert client-side data fetching to server-side, or add pre-rendering. Prioritize pages with organic traffic.
Architectural rendering dependency. The entire site is a client-side SPA with no server rendering. Fixing individual pages isn't sufficient — the rendering architecture needs to change. This is a significant engineering effort but necessary if SEO is a traffic goal. The SSR vs CSR vs ISR guide covers the architectural options.
How Evergreen catches rendering issues at scale
Running this checklist manually is feasible for individual pages. Running it across a 500-page site requires tooling.
In Evergreen, the crawler captures the initial HTML response — the same response Googlebot receives in the crawl phase. The content audit table shows metadata, H1 tags, internal links, and content attributes based on that initial HTML. If a page shows "missing title" in the audit table but displays a correct title in the browser, the title depends on JavaScript rendering.
This comparison — audit data vs browser reality — is the fastest way to identify rendering dependencies across an entire site. Filter the audit table to pages with missing metadata, cross-reference with pages that have organic traffic in your GA4 and Search Console data, and you have a prioritized list of rendering issues to fix.
Audit JS-rendered pages at scale. Start free →
Related resources
- Technical SEO Audit: the complete guide — the parent pillar for technical audit methodology
- SSR vs CSR vs ISR: how rendering impacts SEO — rendering strategy choices and their SEO implications
- Next.js SEO audit checklist — framework-specific rendering audit guidance
- Bulk Lighthouse testing — site-wide performance measurement
- Technical SEO checklist — the broader technical audit reference
