Bulk Lighthouse Testing Guide | Unlighthouse

How to run Lighthouse audits across your entire website. Test hundreds of pages automatically.
Harlan Wilton · 5 min read

Run Lighthouse audits on every page of your site, not just the homepage.

The Problem

Standard Lighthouse tools test one page at a time. But your site has hundreds—maybe thousands—of pages, and the 2024 Web Almanac specifically added secondary page analysis because homepage performance is often not representative of the entire site.

Your homepage might score 100. What about:

  • Product pages with heavy images?
  • Blog posts with embedded videos?
  • Category pages with infinite scroll?
  • User-generated content pages?

Each page type can have dramatically different performance characteristics.

The Solution: Unlighthouse

Unlighthouse crawls your site and runs Lighthouse on every page automatically.

npx unlighthouse --site https://your-site.com

How It Works

  1. Crawl — Discovers all pages from your sitemap and internal links
  2. Audit — Runs Lighthouse on each page in parallel
  3. Report — Interactive report sorted by worst scores

Installation

No installation required. Use npx:

npx unlighthouse --site https://your-site.com

Or install globally:

npm install -g unlighthouse
unlighthouse --site https://your-site.com

Configuration

Create unlighthouse.config.ts:

export default {
  site: 'https://your-site.com',
  scanner: {
    // Crawl settings: cap the number of routes and Lighthouse runs per route
    maxRoutes: 200,
    samples: 1,
  },
  // Options passed through to Lighthouse itself
  lighthouseOptions: {
    throttling: {
      cpuSlowdownMultiplier: 4,
    },
  },
}
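With this file in your project root, running npx unlighthouse from the same directory should pick the configuration up automatically, so the --site flag and friends become optional.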

Common Options

Option      Description                 Default
--site      URL to scan                 Required
--urls      Specific URLs to test       All discovered
--samples   Runs per URL                1
--throttle  Simulate slow connection    true
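For example, combining the flags from the table to take three samples per page (three is an arbitrary choice; more samples mean longer scans):

npx unlighthouse --site https://your-site.com --samples 3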

Filtering Pages

Test specific sections:

npx unlighthouse --site https://your-site.com --urls "/blog/**"

npx unlighthouse --site https://your-site.com --exclude "/admin/**"
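The same filtering can live in the config file instead of being repeated on the command line. A minimal sketch, assuming the scanner's include and exclude options accept path patterns like the ones above (check the Unlighthouse scanner docs for the exact matching rules):

// unlighthouse.config.ts
export default {
  site: 'https://your-site.com',
  scanner: {
    // only crawl and audit blog routes...
    include: ['/blog/**'],
    // ...and never touch admin pages
    exclude: ['/admin/**'],
  },
}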

Reading Results

The report shows:

  • Score distribution — How many pages are good/needs work/poor
  • Worst pages — Sorted by score, fix these first
  • Issue breakdown — Common problems across your site

Understanding Lab vs Field Data

Critical distinction: Google only uses field data (CrUX) for search ranking, not lab scores from Lighthouse.

Aspect            Lab Data (Lighthouse)   Field Data (CrUX)
Source            Simulated tests         Real user visits
Used for ranking  No                      Yes
Best for          Debugging issues        Measuring actual UX
Update frequency  Immediate               28-day rolling average

Lab testing with Unlighthouse helps you find and fix issues. Then monitor field data to verify improvements reached real users.
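To spot-check field data after shipping a fix, you can query the CrUX API directly. A rough TypeScript sketch, assuming you have a CrUX API key and that the page gets enough traffic to appear in the dataset; the response handling is simplified:

// crux-check.ts: fetch the field LCP p75 for one URL (needs a CrUX API key)
const CRUX_ENDPOINT = 'https://chromeuxreport.googleapis.com/v1/records:queryRecord'

async function fetchFieldLcp(url: string, apiKey: string): Promise<number | undefined> {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url, formFactor: 'PHONE', metrics: ['largest_contentful_paint'] }),
  })
  if (!res.ok)
    throw new Error(`CrUX request failed: ${res.status}`)
  const data = await res.json()
  // p75 LCP in milliseconds across real Chrome users, 28-day rolling window
  return data.record?.metrics?.largest_contentful_paint?.percentiles?.p75
}

fetchFieldLcp('https://your-site.com/', process.env.CRUX_API_KEY ?? '')
  .then(p75 => console.log('Field LCP p75 (ms):', p75))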

Lighthouse Scoring Methodology

Lighthouse performance scoring weights metrics differently:

Metric                           Weight
Total Blocking Time (TBT)        30%
Largest Contentful Paint (LCP)   25%
Cumulative Layout Shift (CLS)    25%
First Contentful Paint (FCP)     10%
Speed Index                      10%

Each metric is scored against real-world performance data from HTTP Archive, and the weighted total falls into three bands:

  • 0-49: Poor (red)
  • 50-89: Needs Improvement (orange)
  • 90-100: Good (green)
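To make the weighting concrete, here is a small TypeScript sketch of how the weighted total comes together. Real Lighthouse first converts each raw metric value to a 0-1 score via log-normal curves calibrated on HTTP Archive data; the per-metric scores below are made-up values for illustration:

// Performance category weights from the table above (Lighthouse 10+)
const WEIGHTS = {
  totalBlockingTime: 0.30,
  largestContentfulPaint: 0.25,
  cumulativeLayoutShift: 0.25,
  firstContentfulPaint: 0.10,
  speedIndex: 0.10,
} as const

// Hypothetical per-metric scores, each already normalized to 0-1
const metricScores: Record<keyof typeof WEIGHTS, number> = {
  totalBlockingTime: 0.90,
  largestContentfulPaint: 0.80,
  cumulativeLayoutShift: 1.00,
  firstContentfulPaint: 0.90,
  speedIndex: 0.80,
}

// Weighted sum, scaled to the familiar 0-100 score
const performanceScore = Math.round(
  (Object.keys(WEIGHTS) as (keyof typeof WEIGHTS)[])
    .reduce((sum, metric) => sum + WEIGHTS[metric] * metricScores[metric], 0) * 100,
)

console.log(performanceScore) // 89 with the values above: "Needs Improvement"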

Reducing Score Variability

Lighthouse scores can vary between runs. Google's guidance:

  • Run multiple times — The median of 5 runs is twice as stable as 1 run
  • Use consistent hardware — Don't run concurrent tests on the same machine
  • Isolate from third-parties — External scripts add variance
  • Scale horizontally — Run on multiple smaller instances rather than one large one

Unlighthouse can apply this for you: set the samples option (default 1) to run Lighthouse several times per route and combine the results into a more stable score.
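For example, a config that takes three samples per route (3 is an arbitrary trade-off between stability and scan time):

// unlighthouse.config.ts
export default {
  site: 'https://your-site.com',
  scanner: {
    // run Lighthouse 3 times per route for a more stable score
    samples: 3,
  },
}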

CI/CD Integration

Run on every deploy to catch regressions:

name: Lighthouse Audit
on: [push]

env:
  SITE_URL: https://your-site.com

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Unlighthouse
        run: npx unlighthouse-ci --site ${{ env.SITE_URL }}

Set performance budgets to fail builds when scores drop:

// unlighthouse.config.ts
export default {
  site: 'https://your-site.com',
  ci: {
    budget: {
      performance: 80,
      accessibility: 90,
    },
  },
}

Full CI/CD guide

CLI vs Cloud

Feature               CLI       Cloud
Bulk scanning         Yes       Yes
Local setup           Required  Not required
Historical data       No        Yes
Scheduled scans       No        Yes
Team sharing          No        Yes
Alerts on regression  No        Yes

Start with the CLI. When you need history, scheduling, and team features, upgrade to Cloud.

Alternatives Comparison

How Unlighthouse compares to other tools:

Tool                Bulk Testing     Pricing     Notes
Unlighthouse        Unlimited pages  Free (CLI)  Open source, self-hosted
PageSpeed Insights  1 page/request   Free        25,000 requests/day API limit
DebugBear           10,000/mo        $99/mo      Managed service
Calibre             5 sites          $150/mo     Managed service
SpeedCurve          Varies           $20-500/mo  RUM + synthetic

Current Web Performance Stats

Context for your scores from HTTP Archive 2024:

Metric                   Mobile Pass Rate  Desktop Pass Rate
Core Web Vitals (all 3)  48%               54%
LCP                      59%               72%
CLS                      79%               72%
INP                      74%               97%
If you're beating these numbers, you're ahead of most of the web.

Best Practices

  1. Test representative pages — Don't just test the homepage
  2. Run multiple samples — Average results for stability
  3. Test on realistic hardware — Use CPU throttling to simulate mobile
  4. Monitor trends — One-time audits aren't enough
  5. Fix worst pages first — Greatest impact for least effort
  6. Validate with field data — Lab scores don't affect ranking

Next Steps

  1. Run your first scan: npx unlighthouse --site https://your-site.com
  2. Fix the worst pages first
  3. Set up CI/CD to catch regressions
  4. Try Cloud for ongoing monitoring