Lighthouse with Playwright: Performance Testing Guide

Run Google Lighthouse audits with Playwright for automated performance, accessibility, and SEO testing. Complete setup guide with code examples.
Harlan Wilton

Run Lighthouse audits programmatically using Playwright. This guide covers direct integration without wrapper libraries — giving you full control over the audit process.

Requirements: Node.js 22+, Playwright installed. Lighthouse connects via Chrome DevTools Protocol on a remote debugging port.

What You'll Build

By the end of this guide, you'll have:

  • Lighthouse running inside your Playwright test suite
  • Performance, accessibility, SEO, and best practices scores
  • HTML reports generated automatically
  • A foundation for CI/CD integration

How It Works

Playwright launches Chromium with a remote debugging port. Lighthouse connects to that port and runs its audits on the page Playwright has navigated to.

Playwright → launches Chrome with --remote-debugging-port=9222
Lighthouse → connects to port 9222 → runs audits → returns results

This is the same approach the playwright-lighthouse npm package uses internally, but doing it directly gives you more control and fewer dependencies.
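
To see the mechanism in isolation, here's a minimal sketch (assuming port 9222 is free and Node 18+ for the global fetch) that launches Chromium with the debugging flag and queries Chrome's /json/version endpoint, the same DevTools Protocol interface Lighthouse attaches to:

import { chromium } from 'playwright'

const PORT = 9222

// Launch Chromium with the DevTools Protocol exposed on a known port
const browser = await chromium.launch({
  args: [`--remote-debugging-port=${PORT}`],
})

// Chrome serves CDP metadata over HTTP on that port; Lighthouse connects the same way
const response = await fetch(`http://localhost:${PORT}/json/version`)
console.log(await response.json()) // includes the browser version and a webSocketDebuggerUrl

await browser.close()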

Setup

Install the required packages:

npm install -D playwright lighthouse
npx playwright install chromium

Basic Integration

Here's a complete working example:

import { writeFileSync } from 'node:fs'
import lighthouse from 'lighthouse'
import { chromium } from 'playwright'

const PORT = 9222

async function runLighthouseAudit(url) {
  // Launch Chrome with remote debugging enabled
  const browser = await chromium.launch({
    args: [`--remote-debugging-port=${PORT}`],
  })

  const page = await browser.newPage()
  await page.goto(url, { waitUntil: 'networkidle' })

  // Run Lighthouse audit
  const result = await lighthouse(url, {
    port: PORT,
    output: 'html',
    logLevel: 'error',
  })

  // Save HTML report
  writeFileSync('lighthouse-report.html', result.report)

  // Extract scores
  const { categories } = result.lhr
  console.log('Performance:', Math.round(categories.performance.score * 100))
  console.log('Accessibility:', Math.round(categories.accessibility.score * 100))
  console.log('Best Practices:', Math.round(categories['best-practices'].score * 100))
  console.log('SEO:', Math.round(categories.seo.score * 100))

  await browser.close()
  return result.lhr
}

runLighthouseAudit('https://example.com')
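
To try this standalone, save the script with an .mjs extension (or set "type": "module" in package.json) since it uses ES module imports, then run it with Node. The filename here is just an example:

node run-lighthouse.mjs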

Using with Playwright Test

Integrate into your existing Playwright test suite:

import { chromium, expect, test } from '@playwright/test'
import lighthouse from 'lighthouse'

const PORT = 9222

test.describe('Lighthouse Audits', () => {
  test('homepage meets performance threshold', async () => {
    const browser = await chromium.launch({
      args: [`--remote-debugging-port=${PORT}`],
    })

    const page = await browser.newPage()
    await page.goto('https://example.com', { waitUntil: 'networkidle' })

    const result = await lighthouse('https://example.com', {
      port: PORT,
      logLevel: 'error',
    })

    const perfScore = result.lhr.categories.performance.score * 100

    await browser.close()

    // Assert minimum performance score
    expect(perfScore).toBeGreaterThanOrEqual(80)
  })
})

Important: Run Lighthouse tests with a single worker (--workers=1). Multiple concurrent Lighthouse audits on the same debugging port will conflict.
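
If you'd rather enforce that in configuration than remember the CLI flag, a minimal playwright.config.ts sketch could look like this (adapt it to your existing config):

// playwright.config.ts
import { defineConfig } from '@playwright/test'

export default defineConfig({
  // Lighthouse audits share one debugging port, so run tests serially
  workers: 1,
})

Note that workers applies to the whole test run; if you mix Lighthouse audits with regular tests, keeping the audits in a separate config (or passing --workers=1 only for that run) avoids slowing everything else down.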

Advanced: Custom Test Fixture

For a cleaner, reusable pattern, extend Playwright's test object with a custom fixture that handles the setup and teardown logic.

// fixtures/lighthouse.ts
import { writeFileSync } from 'node:fs'
import { test as base } from '@playwright/test'
import lighthouse from 'lighthouse'
import { chromium } from 'playwright'

export const test = base.extend<{ lighthouse: (url: string) => Promise<number> }>({
  lighthouse: async ({}, use, testInfo) => {
    const port = 9222
    const browser = await chromium.launch({
      args: [`--remote-debugging-port=${port}`],
    })

    await use(async (url) => {
      const page = await browser.newPage()
      await page.goto(url, { waitUntil: 'networkidle' })

      const result = await lighthouse(url, { port, output: 'html', logLevel: 'error' })
      const score = result.lhr.categories.performance.score * 100

      // Attach report to Playwright test results
      const reportPath = `lighthouse-report-${testInfo.title.replace(/\s+/g, '-')}.html`
      writeFileSync(reportPath, result.report)
      await testInfo.attach('lighthouse-report', { path: reportPath, contentType: 'text/html' })

      await page.close()
      return score
    })

    await browser.close()
  }
})

export { expect } from '@playwright/test'

Usage in tests:

import { expect, test } from './fixtures/lighthouse'

test('homepage core web vitals', async ({ lighthouse }) => {
  const score = await lighthouse('https://example.com')
  expect(score).toBeGreaterThanOrEqual(90)
})

Soft Assertions

Use expect.soft so one failing metric doesn't stop the test before the remaining checks run. This is useful when auditing multiple pages or metrics.

const { categories } = result.lhr

// Test continues even if performance fails
expect.soft(categories.performance.score).toBeGreaterThan(0.9)
expect.soft(categories.accessibility.score).toBeGreaterThan(0.9)
expect.soft(categories.seo.score).toBeGreaterThan(0.9)

Configuration Options

Customize the audit with Lighthouse flags:

const result = await lighthouse(url, {
  port: PORT,
  output: ['html', 'json'], // Multiple output formats
  logLevel: 'error',
  onlyCategories: ['performance', 'accessibility'], // Skip SEO/best-practices
  formFactor: 'desktop', // 'mobile' (default) or 'desktop'
  throttling: {
    cpuSlowdownMultiplier: 1, // Disable CPU throttling
  },
  screenEmulation: {
    disabled: true, // Use actual viewport
  },
})

Common Configuration Presets

Mobile (default):

{
  formFactor: 'mobile',
  screenEmulation: { mobile: true, width: 412, height: 823 },
  throttling: { cpuSlowdownMultiplier: 4 }
}

Desktop:

{
  formFactor: 'desktop',
  screenEmulation: { mobile: false, width: 1350, height: 940 },
  throttling: { cpuSlowdownMultiplier: 1 }
}
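
To avoid repeating these settings in every call, you could fold them into a small helper. This is only a sketch: the presets object and auditWithPreset name are illustrative, and it assumes the lighthouse import and PORT constant from the basic example, passing the preset through the same flags object shown above.

// Illustrative presets matching the values above
const presets = {
  mobile: {
    formFactor: 'mobile',
    screenEmulation: { mobile: true, width: 412, height: 823 },
    throttling: { cpuSlowdownMultiplier: 4 },
  },
  desktop: {
    formFactor: 'desktop',
    screenEmulation: { mobile: false, width: 1350, height: 940 },
    throttling: { cpuSlowdownMultiplier: 1 },
  },
}

async function auditWithPreset(url, preset = 'mobile') {
  return lighthouse(url, {
    port: PORT,
    output: 'html',
    logLevel: 'error',
    ...presets[preset],
  })
}

// const { lhr } = await auditWithPreset('https://example.com', 'desktop')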

Extracting Specific Metrics

Access individual Core Web Vitals and other metrics:

const { audits } = result.lhr

// Core Web Vitals
const lcp = audits['largest-contentful-paint'].numericValue // ms
const cls = audits['cumulative-layout-shift'].numericValue // score
const tbt = audits['total-blocking-time'].numericValue // ms (proxy for INP)

// Other useful metrics
const fcp = audits['first-contentful-paint'].numericValue
const tti = audits.interactive.numericValue
const speedIndex = audits['speed-index'].numericValue

console.log(`LCP: ${lcp}ms, CLS: ${cls}, TBT: ${tbt}ms`)
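
If you assert on raw metric values rather than category scores, the thresholds below are illustrative (loosely based on the "good" Core Web Vitals ranges) and combine naturally with the soft assertions shown earlier:

const { audits } = result.lhr

// Illustrative thresholds, roughly the "good" Core Web Vitals boundaries
expect.soft(audits['largest-contentful-paint'].numericValue).toBeLessThan(2500) // ms
expect.soft(audits['cumulative-layout-shift'].numericValue).toBeLessThan(0.1)
expect.soft(audits['total-blocking-time'].numericValue).toBeLessThan(200) // ms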

When to Use Direct Integration vs Alternatives

Approach | Best For | Trade-offs
Direct integration (this guide) | Full control, minimal dependencies | More setup code
playwright-lighthouse package | Quick setup | Thin wrapper, less maintained
Unlighthouse | Site-wide audits | Automatic crawling, no per-page control
Lighthouse CI | CI/CD pipelines | Built for automation, historical tracking

For testing individual pages in your test suite, direct integration works well. For auditing an entire site, Unlighthouse crawls automatically and tests every page:

npx unlighthouse --site https://example.com

Common Issues

Running into problems? See Troubleshooting for solutions to:

  • Port conflicts and "address already in use" errors
  • Authentication state not persisting
  • Flaky or inconsistent scores
  • Chrome/Chromium version mismatches

Next Steps

  • Authenticated Pages: run Lighthouse on pages behind login.
  • CI/CD Integration: automate audits in GitHub Actions.
  • Troubleshooting: fix common integration issues.