# Playwright Lighthouse in GitHub Actions

Run Lighthouse audits automatically on every pull request. This guide covers GitHub Actions setup with Playwright-based Lighthouse integration.

## Basic Workflow (Script)

Create `.github/workflows/lighthouse.yml`:
```yaml
name: Lighthouse Audit

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright browsers
        run: npx playwright install chromium --with-deps
      - name: Run Lighthouse audit
        run: node scripts/lighthouse-audit.js
      - name: Upload report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: lighthouse-report
          path: lighthouse-report.html
          retention-days: 14
```
Create `scripts/lighthouse-audit.js`:
```js
import { writeFileSync } from 'node:fs'
import lighthouse from 'lighthouse'
import { chromium } from 'playwright'

const PORT = 9222
const URL = process.env.AUDIT_URL || 'https://example.com'
const THRESHOLDS = {
  performance: 80,
  accessibility: 90,
  'best-practices': 80,
  seo: 80,
}

async function audit() {
  // Launch Chromium with a debugging port so Lighthouse can attach to it
  const browser = await chromium.launch({
    args: [`--remote-debugging-port=${PORT}`],
  })
  const page = await browser.newPage()
  await page.goto(URL, { waitUntil: 'networkidle' })

  const result = await lighthouse(URL, {
    port: PORT,
    output: 'html',
    logLevel: 'error',
  })
  writeFileSync('lighthouse-report.html', result.report)

  const { categories } = result.lhr
  const scores = {
    performance: Math.round(categories.performance.score * 100),
    accessibility: Math.round(categories.accessibility.score * 100),
    bestPractices: Math.round(categories['best-practices'].score * 100),
    seo: Math.round(categories.seo.score * 100),
  }

  // Save scores for PR comment
  writeFileSync('lighthouse-results.json', JSON.stringify(scores, null, 2))

  let failed = false
  for (const [key, threshold] of Object.entries(THRESHOLDS)) {
    const score = scores[key === 'best-practices' ? 'bestPractices' : key]
    const status = score >= threshold ? '✅' : '❌'
    console.log(`${status} ${key}: ${score} (threshold: ${threshold})`)
    if (score < threshold) failed = true
  }

  await browser.close()

  if (failed) {
    console.error('\nLighthouse audit failed to meet thresholds')
    process.exit(1)
  }
}

audit()
```
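To try the script locally before committing the workflow, point it at any running site through the environment variable it already reads, for example `AUDIT_URL=http://localhost:3000 node scripts/lighthouse-audit.js` (the local URL here is just an example).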
## Testing Preview Deployments

For Vercel, Netlify, or Cloudflare Pages preview URLs (using the script approach), pass the deployment URL to the script. Note that `github.event.deployment_status.target_url` is only populated when the workflow is triggered by a `deployment_status` event:
```yaml
# ... (same as above)
- name: Run Lighthouse
  run: node scripts/lighthouse-audit.js
  env:
    AUDIT_URL: ${{ github.event.deployment_status.target_url }}
```
## Testing Preview Deployments (Playwright Test)

If you are using the `playwright.config.ts` setup, override the `baseURL`:
```yaml
- name: Run Playwright Lighthouse
  run: npx playwright test
  env:
    PLAYWRIGHT_TEST_BASE_URL: ${{ github.event.deployment_status.target_url }}
```
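Playwright Test does not pick up `PLAYWRIGHT_TEST_BASE_URL` on its own; the config has to read the variable and feed it into `use.baseURL`. A minimal sketch, where the local fallback URL is an assumption (it matches the `npm run preview` server used later in this guide):

```ts
// playwright.config.ts — minimal sketch
import { defineConfig } from '@playwright/test'

export default defineConfig({
  use: {
    // Use the preview deployment URL when CI provides one,
    // otherwise fall back to a locally running server
    baseURL: process.env.PLAYWRIGHT_TEST_BASE_URL || 'http://localhost:4173',
  },
})
```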
## Testing Multiple URLs
Audit several pages in one workflow:
```js
// scripts/lighthouse-audit.js
// (imports and PORT are the same as in the basic script above)
const URLS = [
  'https://example.com/',
  'https://example.com/pricing',
  'https://example.com/docs',
]

async function auditAll() {
  const browser = await chromium.launch({
    args: [`--remote-debugging-port=${PORT}`],
  })
  const results = []

  for (const url of URLS) {
    const page = await browser.newPage()
    await page.goto(url, { waitUntil: 'networkidle' })
    const result = await lighthouse(url, {
      port: PORT,
      logLevel: 'error',
    })
    results.push({
      url,
      performance: Math.round(result.lhr.categories.performance.score * 100),
      accessibility: Math.round(result.lhr.categories.accessibility.score * 100),
    })
    await page.close()
  }

  await browser.close()
  console.table(results)

  // Fail if any page is below threshold
  const failed = results.some(r => r.performance < 80)
  if (failed) process.exit(1)
}

auditAll()
```
## With Authentication
For protected pages, use saved auth state:
```yaml
- name: Run Lighthouse on authenticated pages
  run: node scripts/lighthouse-auth-audit.js
  env:
    TEST_USER_EMAIL: ${{ secrets.TEST_USER_EMAIL }}
    TEST_USER_PASSWORD: ${{ secrets.TEST_USER_PASSWORD }}
```
See Authentication Guide for the full script.
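The core idea is that Lighthouse opens its own tab in the Chromium instance it attaches to, so the login state has to live in that instance's default profile. A rough sketch of one way to do that, assuming a cookie-based login form and the secrets from the step above; the login path, selectors, and audited URL are placeholders, not the guide's full script:

```js
// scripts/lighthouse-auth-audit.js — illustrative sketch only
import lighthouse from 'lighthouse'
import { chromium } from 'playwright'

const PORT = 9222

// A persistent context uses Chromium's default profile, so cookies set here
// are visible to the tab Lighthouse opens over the debugging port
const context = await chromium.launchPersistentContext('./.auth-profile', {
  args: [`--remote-debugging-port=${PORT}`],
})
const page = await context.newPage()

// Log in through the UI using the secrets passed in by the workflow
await page.goto('https://example.com/login')
await page.fill('input[name="email"]', process.env.TEST_USER_EMAIL)
await page.fill('input[name="password"]', process.env.TEST_USER_PASSWORD)
await page.click('button[type="submit"]')
await page.waitForURL('**/dashboard')

// Audit a protected page with the authenticated session
const result = await lighthouse('https://example.com/dashboard', {
  port: PORT,
  logLevel: 'error',
})
console.log('Performance:', Math.round(result.lhr.categories.performance.score * 100))

await context.close()
```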
## Reducing Variance

Lighthouse scores vary between runs. Run the audit several times and take the median:
```js
async function auditWithMedian(url, runs = 3) {
  const scores = []
  for (let i = 0; i < runs; i++) {
    const result = await lighthouse(url, { port: PORT, logLevel: 'error' })
    scores.push(result.lhr.categories.performance.score * 100)
  }
  scores.sort((a, b) => a - b)
  return scores[Math.floor(scores.length / 2)] // Median
}
```
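You can then feed the median into the same threshold check used earlier, for example comparing `await auditWithMedian(URL)` against `THRESHOLDS.performance` instead of a single run's score. Each extra run adds a full audit's worth of time, so reserve this for pages whose scores hover near the threshold.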
## PR Comments with Results
Post Lighthouse scores as a PR comment:
```yaml
- name: Comment PR
  if: github.event_name == 'pull_request'
  uses: actions/github-script@v7
  with:
    script: |
      const fs = require('fs')
      const results = JSON.parse(fs.readFileSync('lighthouse-results.json', 'utf8'))
      const body = `## Lighthouse Results
      | Metric | Score |
      |--------|-------|
      | Performance | ${results.performance} |
      | Accessibility | ${results.accessibility} |
      | Best Practices | ${results.bestPractices} |
      | SEO | ${results.seo} |`
      await github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body
      })
```
## Caching Playwright Browsers
Speed up workflows by caching Chromium:
```yaml
- name: Cache Playwright browsers
  uses: actions/cache@v4
  with:
    path: ~/.cache/ms-playwright
    key: playwright-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}

- name: Install Playwright (use cache)
  run: npx playwright install chromium --with-deps
```
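As written, the install step still runs on every build (`npx playwright install` skips browsers that are already present in the cache). You can make the behaviour explicit by giving the cache step an `id` and branching on the cache hit. A sketch; the `id` and step names are additions:

```yaml
- name: Cache Playwright browsers
  id: playwright-cache
  uses: actions/cache@v4
  with:
    path: ~/.cache/ms-playwright
    key: playwright-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}

# Full install (browser download + OS deps) only on a cache miss
- name: Install Playwright browsers
  if: steps.playwright-cache.outputs.cache-hit != 'true'
  run: npx playwright install chromium --with-deps

# OS-level dependencies are not cached, so install them even on a hit
- name: Install system dependencies
  if: steps.playwright-cache.outputs.cache-hit == 'true'
  run: npx playwright install-deps chromium
```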
## Full Production Workflow

Use Playwright's `webServer` configuration to handle starting and stopping your app automatically.

1. Configure `playwright.config.ts`:
```ts
import { defineConfig } from '@playwright/test'

export default defineConfig({
  webServer: {
    command: 'npm run preview',
    port: 4173,
    timeout: 120 * 1000,
    reuseExistingServer: !process.env.CI,
  },
})
```
2. Update GitHub Actions Workflow:
```yaml
name: Lighthouse CI

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm
      - run: npm ci
      - run: npx playwright install chromium --with-deps
      - run: npm run build

      # Playwright starts the server automatically!
      - name: Run Lighthouse
        run: npx playwright test
```
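The `npx playwright test` step assumes at least one spec that drives Lighthouse through the debugging-port approach shown earlier. A minimal sketch of such a spec; the file name and threshold are illustrative choices:

```ts
// tests/lighthouse.spec.ts — illustrative sketch
import { test, expect } from '@playwright/test'
import { chromium } from 'playwright'
import lighthouse from 'lighthouse'

const PORT = 9222

test('home page meets the performance threshold', async ({ baseURL }) => {
  // baseURL comes from the config; the fallback matches the webServer port above
  const url = baseURL ?? 'http://localhost:4173'

  const browser = await chromium.launch({ args: [`--remote-debugging-port=${PORT}`] })
  const page = await browser.newPage()
  await page.goto(url, { waitUntil: 'networkidle' })

  const result = await lighthouse(url, { port: PORT, logLevel: 'error' })
  await browser.close()

  const perfScore = Math.round((result?.lhr.categories.performance.score ?? 0) * 100)
  expect(perfScore).toBeGreaterThanOrEqual(80)
})
```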
## Scaling with Sharding
Lighthouse audits are slow. For large sites, run tests in parallel across multiple machines using Playwright Sharding.
```yaml
jobs:
  lighthouse:
    strategy:
      fail-fast: false
      matrix:
        shardIndex: [1, 2, 3, 4]
        shardTotal: [4]
    runs-on: ubuntu-latest
    steps:
      # ... setup steps ...
      - name: Run Lighthouse (Shard ${{ matrix.shardIndex }}/${{ matrix.shardTotal }})
        run: npx playwright test --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }}
      - name: Upload Blob Report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: blob-report-${{ matrix.shardIndex }}
          path: blob-report
          retention-days: 1
```
You can then merge these reports in a separate job using `npx playwright merge-reports`.
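A sketch of that merge job, assuming the shards were run with Playwright's blob reporter (for example `npx playwright test --reporter=blob`, or `blob` configured as the CI reporter) and that a single HTML report is the desired output:

```yaml
# Runs after all shards finish; lives under the same `jobs:` key as the lighthouse job
merge-reports:
  if: always()
  needs: [lighthouse]
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: 22
        cache: npm
    - run: npm ci
    - name: Download blob reports
      uses: actions/download-artifact@v4
      with:
        path: all-blob-reports
        pattern: blob-report-*
        merge-multiple: true
    - name: Merge into HTML report
      run: npx playwright merge-reports --reporter html ./all-blob-reports
    - name: Upload merged report
      uses: actions/upload-artifact@v4
      with:
        name: lighthouse-merged-report
        path: playwright-report
        retention-days: 14
```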
## When to Use Lighthouse CI Instead
For advanced CI features, Lighthouse CI provides:
- Historical tracking across builds
- GitHub status checks (not just comments)
- Baseline comparisons
- Built-in assertion presets
The Playwright approach works well for simpler setups or when you need custom control over the browser context (like authentication).
## Related
- Lighthouse CI with GitHub Actions — Dedicated CI tooling
- Authentication — Test protected pages
- Troubleshooting — Fix CI-specific issues