Bulk Lighthouse Testing Guide | Unlighthouse
Run Lighthouse audits on every page of your site, not just the homepage.
The Problem
Standard Lighthouse tools test one page at a time, but your site has hundreds or even thousands of pages. The 2024 Web Almanac added secondary-page analysis specifically because homepage performance is often not representative of the entire site.
Your homepage might score 100. What about:
- Product pages with heavy images?
- Blog posts with embedded videos?
- Category pages with infinite scroll?
- User-generated content pages?
Each page type can have dramatically different performance characteristics.
The Solution: Unlighthouse
Unlighthouse crawls your site and runs Lighthouse on every page automatically.
```sh
npx unlighthouse --site https://your-site.com
```
How It Works
- Crawl — Discovers all pages from your sitemap and internal links
- Audit — Runs Lighthouse on each page in parallel
- Report — Interactive report sorted by worst scores
Installation
No installation required. Use npx:
```sh
npx unlighthouse --site https://your-site.com
```
Or install globally:
```sh
npm install -g unlighthouse
unlighthouse --site https://your-site.com
```
Configuration
Create unlighthouse.config.ts:
```ts
export default {
  site: 'https://your-site.com',
  scanner: {
    // Crawl settings
    maxRoutes: 200,
    samples: 1,
  },
  lighthouse: {
    // Lighthouse settings
    throttling: {
      cpuSlowdownMultiplier: 4,
    },
  },
}
```
Common Options
| Option | Description | Default |
|---|---|---|
| --site | URL to scan | Required |
| --urls | Specific URLs to test | All discovered |
| --samples | Runs per URL | 1 |
| --throttle | Simulate slow connection | true |
Filtering Pages
Test specific sections:
```sh
npx unlighthouse --site https://your-site.com --urls "/blog/**"
npx unlighthouse --site https://your-site.com --exclude "/admin/**"
```
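The same filtering can also live in `unlighthouse.config.ts` so it applies to every scan. This is a sketch assuming the scanner options accept `include`/`exclude` glob arrays; check the Unlighthouse configuration docs for your version:

```typescript
// unlighthouse.config.ts — sketch; include/exclude option names are assumptions
export default {
  site: 'https://your-site.com',
  scanner: {
    include: ['/blog/**'],  // only audit blog routes
    exclude: ['/admin/**'], // never crawl the admin area
  },
}
```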
Reading Results
The report shows:
- Score distribution — How many pages are good/needs work/poor
- Worst pages — Sorted by score, fix these first
- Issue breakdown — Common problems across your site
Understanding Lab vs Field Data
Critical distinction: Google only uses field data (CrUX) for search ranking, not lab scores from Lighthouse.
| Aspect | Lab Data (Lighthouse) | Field Data (CrUX) |
|---|---|---|
| Source | Simulated tests | Real user visits |
| Used for ranking | No | Yes |
| Best for | Debugging issues | Measuring actual UX |
| Update frequency | Immediate | 28-day rolling average |
Lab testing with Unlighthouse helps you find and fix issues. Then monitor field data to verify improvements reached real users.
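Field data can be checked programmatically via the Chrome UX Report API. The sketch below builds and sends a `queryRecord` request; `apiKey` is a placeholder you must supply, and the response shape should be verified against the CrUX API docs:

```typescript
// Query the Chrome UX Report (CrUX) API for field data on an origin.
const CRUX_ENDPOINT =
  'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

interface CruxQuery {
  origin: string;
  formFactor: 'PHONE' | 'DESKTOP' | 'TABLET';
}

// Build the POST body; mobile (PHONE) is the stricter default to check.
function buildCruxRequest(
  origin: string,
  formFactor: CruxQuery['formFactor'] = 'PHONE',
): CruxQuery {
  return { origin, formFactor };
}

// Placeholder fetch wrapper — the record.metrics p75 values are what
// Google uses for ranking, not your Lighthouse lab scores.
async function fetchFieldData(apiKey: string, origin: string) {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildCruxRequest(origin)),
  });
  return res.json();
}
```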
Lighthouse Scoring Methodology
Lighthouse performance scoring weights metrics differently:
| Metric | Weight |
|---|---|
| Total Blocking Time (TBT) | 30% |
| Largest Contentful Paint (LCP) | 25% |
| Cumulative Layout Shift (CLS) | 25% |
| First Contentful Paint (FCP) | 10% |
| Speed Index | 10% |
The scoring curves are calibrated against real-website performance data from HTTP Archive. The score bands:
- 0-49: Poor (red)
- 50-89: Needs Improvement (orange)
- 90-100: Good (green)
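The weighting can be sketched as a small function. This is a simplification: real Lighthouse first maps each raw metric value onto a log-normal curve to get a 0-100 metric score, which is omitted here:

```typescript
// Weighted composite from per-metric scores (each 0-100).
// Weights mirror the table above; per-metric scoring curves are omitted.
function weightedScore(s: {
  tbt: number;
  lcp: number;
  cls: number;
  fcp: number;
  si: number;
}): number {
  return Math.round(
    0.3 * s.tbt + 0.25 * s.lcp + 0.25 * s.cls + 0.1 * s.fcp + 0.1 * s.si,
  );
}

// A page that is strong everywhere except TBT still loses a lot of points,
// because TBT carries the single largest weight:
weightedScore({ tbt: 40, lcp: 90, cls: 100, fcp: 95, si: 90 }); // 78
```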
Reducing Score Variability
Lighthouse scores can vary between runs. Google's guidance:
- Run multiple times — The median of 5 runs is twice as stable as 1 run
- Use consistent hardware — Don't run concurrent tests on the same machine
- Isolate from third-parties — External scripts add variance
- Scale horizontally — Run on multiple smaller instances rather than one large one
Unlighthouse's `samples` option runs each route multiple times and combines the results for more stable scores. The default is a single run, so raise it when you need stability.
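Taking the median of several runs is straightforward to apply to your own result sets; a minimal sketch:

```typescript
// Median of several Lighthouse scores — more robust to outlier runs
// than a single run or a plain average.
function median(runs: number[]): number {
  const sorted = [...runs].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 !== 0
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Five runs of the same page: the outliers (79 and 95) barely move the median.
median([88, 79, 91, 95, 87]); // 88
```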
CI/CD Integration
Run on every deploy to catch regressions:
```yaml
name: Lighthouse Audit
on: [push]
jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Unlighthouse
        run: npx unlighthouse-ci --site ${{ env.SITE_URL }}
```
Set performance budgets to fail builds when scores drop:
```ts
// unlighthouse.config.ts
export default {
  site: 'https://your-site.com',
  ci: {
    budget: {
      performance: 80,
      accessibility: 90,
    },
  },
}
```
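The budget check itself is simple to reason about: compare each category's score against its minimum and fail the build if any falls short. A sketch of that logic, not Unlighthouse's internal implementation:

```typescript
// Return the categories whose scores fall below the budget minimums.
type Scores = Record<string, number>;

function failingCategories(budget: Scores, scores: Scores): string[] {
  return Object.keys(budget).filter(
    (category) => (scores[category] ?? 0) < budget[category],
  );
}

// Performance 75 < 80 fails; accessibility 95 >= 90 passes.
failingCategories(
  { performance: 80, accessibility: 90 },
  { performance: 75, accessibility: 95 },
); // ['performance']
```

A CI wrapper would exit non-zero whenever this list is non-empty, which is what makes a score regression block the deploy.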
For CI/CD automation with performance budgets, see the Lighthouse CI guide or platform-specific guides for GitHub Actions and GitLab CI.
Alternatives Comparison
How Unlighthouse compares to other tools:
| Tool | Bulk Testing | Pricing | Notes |
|---|---|---|---|
| Unlighthouse | Unlimited pages | Free (CLI) | Open source, self-hosted |
| PageSpeed Insights | 1 page/request | Free | 25,000 requests/day API limit |
| DebugBear | 10,000/mo | $99/mo | Managed service |
| Calibre | 5 sites | $150/mo | Managed service |
| SpeedCurve | Varies | $20-500/mo | RUM + synthetic |
Current Web Performance Stats
Context for your scores from HTTP Archive 2024:
| Metric | Mobile Pass Rate | Desktop Pass Rate |
|---|---|---|
| Core Web Vitals (all 3) | 48% | 54% |
| LCP | 59% | 72% |
| CLS | 79% | 72% |
| INP | 74% | 97% |
If you're beating these numbers, you're ahead of most of the web.
Best Practices
- Test representative pages — Don't just test the homepage
- Run multiple samples — Average results for stability
- Test on realistic hardware — Use CPU throttling to simulate mobile
- Monitor trends — One-time audits aren't enough
- Fix worst pages first — Greatest impact for least effort
- Validate with field data — Lab scores don't affect ranking
Next Steps
- Run your first scan: `npx unlighthouse --site https://your-site.com`
- Fix the worst pages first
- Set up CI/CD to catch regressions
Related
- Core Web Vitals Guide
- Lighthouse CI — Automate audits in CI/CD pipelines
- LCP Finder
- CLS Debugger
- INP Analyzer
- Fix LCP
- Fix CLS
- Fix INP