Lighthouse with Playwright: Performance Testing Guide

Run Google Lighthouse audits with Playwright for automated performance, accessibility, and SEO testing. Complete setup guide with code examples.
Harlan Wilton
How do you run Lighthouse with Playwright? Launch Chromium with --remote-debugging-port=9222, navigate to the target URL, then pass that port to the lighthouse npm package. Lighthouse connects via CDP, runs audits, and returns scores plus a full HTML report you can attach to Playwright test results.

Run Lighthouse audits programmatically using Playwright. This guide covers direct integration without wrapper libraries - giving you full control over the audit process.

Requirements: Node.js 22+, Playwright installed. Lighthouse connects via Chrome DevTools Protocol on a remote debugging port.

What You'll Build

By the end of this guide, you'll have:

  • Lighthouse running inside your Playwright test suite
  • Performance, accessibility, SEO, and best practices scores
  • HTML reports generated automatically
  • A foundation for CI/CD integration

How It Works

Playwright launches Chromium with a remote debugging port. Lighthouse connects to that port and runs its audits on the page Playwright has navigated to.

Playwright → launches Chrome with --remote-debugging-port=9222
Lighthouse → connects to port 9222 → runs audits → returns results

This is the same approach the playwright-lighthouse npm package uses internally, but doing it directly gives you more control and fewer dependencies.
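Before handing the port to Lighthouse, you can confirm the debugging endpoint is actually listening by hitting Chrome's standard `/json/version` CDP HTTP route. The `cdpVersionUrl` and `assertDebuggerReady` names below are our own, not part of either library, and the check assumes Chrome is already running with the flag:

```typescript
// Build the CDP metadata URL Chrome exposes when launched with --remote-debugging-port
function cdpVersionUrl(port: number): string {
  return `http://127.0.0.1:${port}/json/version`
}

// Sanity-check the endpoint before running Lighthouse
async function assertDebuggerReady(port: number): Promise<void> {
  const res = await fetch(cdpVersionUrl(port))
  if (!res.ok)
    throw new Error(`No debugger listening on port ${port}`)
  const meta = await res.json()
  console.log('Connected to', meta.Browser) // e.g. "Chrome/133.0.6943.16"
}
```

If this fetch fails, Lighthouse would fail too, just with a less obvious error.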

Setup

Install the required packages:

npm install -D playwright lighthouse
npx playwright install chromium

Basic Integration

Here's a complete working example:

import { writeFileSync } from 'node:fs'
import lighthouse from 'lighthouse'
import { chromium } from 'playwright'

const PORT = 9222

async function runLighthouseAudit(url) {
  // Launch Chrome with remote debugging enabled
  const browser = await chromium.launch({
    args: [`--remote-debugging-port=${PORT}`],
  })

  const page = await browser.newPage()
  await page.goto(url, { waitUntil: 'networkidle' })

  // Run Lighthouse audit
  const result = await lighthouse(url, {
    port: PORT,
    output: 'html',
    logLevel: 'error',
  })

  // Save HTML report
  writeFileSync('lighthouse-report.html', result.report)

  // Extract scores
  const { categories } = result.lhr
  console.log('Performance:', Math.round(categories.performance.score * 100))
  console.log('Accessibility:', Math.round(categories.accessibility.score * 100))
  console.log('Best Practices:', Math.round(categories['best-practices'].score * 100))
  console.log('SEO:', Math.round(categories.seo.score * 100))

  await browser.close()
  return result.lhr
}

runLighthouseAudit('https://example.com')

Using with Playwright Test

Integrate into your existing Playwright test suite:

import { chromium, expect, test } from '@playwright/test'
import lighthouse from 'lighthouse'

const PORT = 9222

test.describe('Lighthouse Audits', () => {
  test('homepage meets performance threshold', async () => {
    const browser = await chromium.launch({
      args: [`--remote-debugging-port=${PORT}`],
    })

    const page = await browser.newPage()
    await page.goto('https://example.com', { waitUntil: 'networkidle' })

    const result = await lighthouse('https://example.com', {
      port: PORT,
      logLevel: 'error',
    })

    const perfScore = result.lhr.categories.performance.score * 100

    await browser.close()

    // Assert minimum performance score
    expect(perfScore).toBeGreaterThanOrEqual(80)
  })
})
Important: When every test shares a single debugging port, run with one worker (--workers=1). Concurrent Lighthouse audits on the same port will conflict; the parallel section below shows how to allocate a unique port per worker instead.
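If the whole suite stays on port 9222, that constraint belongs in the config rather than a CLI flag — a minimal sketch:

```typescript
// playwright.config.ts — serialize audits that share one debugging port
import { defineConfig } from '@playwright/test'

export default defineConfig({
  workers: 1, // one audit at a time, so port 9222 is never contested
})
```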

Advanced: Custom Test Fixture

For a cleaner, reusable pattern, extend Playwright's test object with a custom fixture that handles the setup and teardown logic.

// fixtures/lighthouse.ts
import { writeFileSync } from 'node:fs'
import { test as base } from '@playwright/test'
import lighthouse from 'lighthouse'
import { chromium } from 'playwright'

export const test = base.extend<{ lighthouse: (url: string) => Promise<number> }>({
  lighthouse: async ({ page: _page }, use, testInfo) => {
    const port = 9222
    const browser = await chromium.launch({
      args: [`--remote-debugging-port=${port}`],
    })

    await use(async (url: string) => {
      const page = await browser.newPage()
      await page.goto(url, { waitUntil: 'networkidle' })

      const result = await lighthouse(url, { port, output: 'html', logLevel: 'error' })
      if (!result)
        throw new Error('Lighthouse returned no result')
      const score = (result.lhr.categories.performance.score ?? 0) * 100

      // Attach report to Playwright test results
      const reportPath = `lighthouse-report-${testInfo.title.replace(/\s+/g, '-')}.html`
      writeFileSync(reportPath, result.report as string)
      await testInfo.attach('lighthouse-report', { path: reportPath, contentType: 'text/html' })

      await page.close()
      return score
    })

    await browser.close()
  }
})

export { expect } from '@playwright/test'

Usage in tests:

import { expect, test } from './fixtures/lighthouse'

test('homepage core web vitals', async ({ lighthouse }) => {
  const score = await lighthouse('https://example.com')
  expect(score).toBeGreaterThanOrEqual(90)
})

Soft Assertions

Use expect.soft to prevent a single metric failure from terminating the entire test execution. This is useful when auditing multiple pages or metrics.

const { categories } = result.lhr

// Test continues even if performance fails
expect.soft(categories.performance.score).toBeGreaterThan(0.9)
expect.soft(categories.accessibility.score).toBeGreaterThan(0.9)
expect.soft(categories.seo.score).toBeGreaterThan(0.9)

Configuration Options

Customize the audit with Lighthouse flags:

const result = await lighthouse(url, {
  port: PORT,
  output: ['html', 'json'], // Multiple output formats
  logLevel: 'error',
  onlyCategories: ['performance', 'accessibility'], // Skip SEO/best-practices
  formFactor: 'desktop', // 'mobile' (default) or 'desktop'
  throttling: {
    cpuSlowdownMultiplier: 1, // Disable CPU throttling
  },
  screenEmulation: {
    disabled: true, // Use actual viewport
  },
})

Common Configuration Presets

Mobile (default):

const mobileConfig = {
  formFactor: 'mobile',
  screenEmulation: { mobile: true, width: 412, height: 823 },
  throttling: { cpuSlowdownMultiplier: 4 }
}

Desktop:

const desktopConfig = {
  formFactor: 'desktop',
  screenEmulation: { mobile: false, width: 1350, height: 940 },
  throttling: { cpuSlowdownMultiplier: 1 }
}

Extracting Specific Metrics

Access individual Core Web Vitals and other metrics:

const { audits } = result.lhr

// Core Web Vitals
const lcp = audits['largest-contentful-paint'].numericValue // ms
const cls = audits['cumulative-layout-shift'].numericValue // score
const tbt = audits['total-blocking-time'].numericValue // ms (proxy for INP)

// Other useful metrics
const fcp = audits['first-contentful-paint'].numericValue
const tti = audits.interactive.numericValue
const speedIndex = audits['speed-index'].numericValue

console.log(`LCP: ${lcp}ms, CLS: ${cls}, TBT: ${tbt}ms`)
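With those numericValues in hand, you can grade them against the published "good" thresholds: LCP ≤ 2500 ms and CLS ≤ 0.1 per web.dev, and roughly ≤ 200 ms for TBT per Lighthouse's scoring curve. The `gradeVitals` helper below is our own sketch, not part of the Lighthouse API:

```typescript
interface VitalsGrade {
  lcp: boolean
  cls: boolean
  tbt: boolean
}

// Grade lab metrics against the "good" thresholds
// (LCP/CLS per web.dev Core Web Vitals, TBT per Lighthouse scoring)
function gradeVitals(metrics: { lcp: number, cls: number, tbt: number }): VitalsGrade {
  return {
    lcp: metrics.lcp <= 2500, // ms
    cls: metrics.cls <= 0.1, // unitless layout-shift score
    tbt: metrics.tbt <= 200, // ms
  }
}
```

Feed it the audit values extracted above and assert on the booleans, for example with expect.soft.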

When to Use Direct Integration vs Alternatives

| Approach | Best For | Trade-offs |
| --- | --- | --- |
| Direct integration (this guide) | Full control, minimal dependencies | More setup code |
| playwright-lighthouse package | Quick setup | Thin wrapper, less maintained |
| Unlighthouse | Site-wide audits | Automatic crawling, no per-page control |
| Lighthouse CI | CI/CD pipelines | Built for automation, historical tracking |

For testing individual pages in your test suite, direct integration works well. For auditing an entire site, Unlighthouse crawls automatically and tests every page:

npx unlighthouse --site https://example.com

Playwright vs Puppeteer for Lighthouse

Both tools drive Chromium via CDP, so either can host a Lighthouse audit. Playwright wins on cross-browser context isolation, typed fixtures, built-in test runner, and active maintenance. Puppeteer stays closer to Chrome internals but lacks a first-class test runner.

| Feature | Playwright | Puppeteer |
| --- | --- | --- |
| Test runner | Built in (@playwright/test) | Bring your own (Jest, Mocha) |
| Auth persistence | storageState API | Manual cookie handling |
| Parallel workers | Native, configurable | Manual orchestration |
| TypeScript types | First-class | Community types |
| Maintenance cadence | Monthly releases | Slower cadence |

Picking Playwright means your Lighthouse audits live alongside E2E tests, share fixtures, and run on the same CI workers:

// Playwright: Lighthouse shares the same fixture system as E2E tests
import { expect, test } from './fixtures/lighthouse'

test('pricing page performance', async ({ lighthouse, page }) => {
  await page.goto('/pricing')
  const score = await lighthouse(page.url())
  expect(score).toBeGreaterThanOrEqual(85)
})

Running Multiple Pages in Parallel

Playwright Test's worker model runs multiple audits concurrently, provided each worker uses a unique debugging port. Sharing port 9222 across workers causes Target closed errors.

// playwright.config.ts
import { defineConfig } from '@playwright/test'

export default defineConfig({
  workers: 4, // 4 parallel Lighthouse audits
  fullyParallel: true,
})

Generate a per-worker port inside the fixture so audits never collide:

// fixtures/lighthouse.ts
import type { Result } from 'lighthouse'
import { test as base } from '@playwright/test'
import lighthouse from 'lighthouse'
import { chromium } from 'playwright'

export const test = base.extend<{ lighthouse: (url: string) => Promise<Result> }>({
  lighthouse: async ({ page: _page }, use, testInfo) => {
    // Each worker gets a unique port
    const port = 9222 + testInfo.workerIndex

    const browser = await chromium.launch({
      args: [`--remote-debugging-port=${port}`],
    })

    await use(async (url: string) => {
      const page = await browser.newPage()
      await page.goto(url, { waitUntil: 'networkidle' })
      const result = await lighthouse(url, { port, logLevel: 'error' })
      if (!result)
        throw new Error('Lighthouse returned no result')
      await page.close()
      return result.lhr
    })

    await browser.close()
  },
})

Drive the audits from a parameterised test to fan out across pages:

import { expect, test } from './fixtures/lighthouse'

const urls = [
  '/',
  '/pricing',
  '/docs',
  '/blog',
]

for (const url of urls) {
  test(`audit ${url}`, async ({ lighthouse }) => {
    const lhr = await lighthouse(`https://example.com${url}`)
    expect(lhr.categories.performance.score ?? 0).toBeGreaterThan(0.8)
  })
}

Typed Results with TypeScript

The lighthouse package ships its own types. Import them to get autocomplete on categories, audits, and numeric values.

import type { Flags, Result } from 'lighthouse'
import lighthouse from 'lighthouse'

interface AuditScores {
  performance: number
  accessibility: number
  bestPractices: number
  seo: number
  lcp: number
  cls: number
  tbt: number
}

async function runTypedAudit(url: string, port: number): Promise<AuditScores> {
  const flags: Flags = {
    port,
    logLevel: 'error',
    output: 'json',
  }

  const runnerResult = await lighthouse(url, flags)
  if (!runnerResult)
    throw new Error('Lighthouse returned no result')

  const lhr: Result = runnerResult.lhr

  return {
    performance: Math.round((lhr.categories.performance.score ?? 0) * 100),
    accessibility: Math.round((lhr.categories.accessibility.score ?? 0) * 100),
    bestPractices: Math.round((lhr.categories['best-practices'].score ?? 0) * 100),
    seo: Math.round((lhr.categories.seo.score ?? 0) * 100),
    lcp: lhr.audits['largest-contentful-paint'].numericValue ?? 0,
    cls: lhr.audits['cumulative-layout-shift'].numericValue ?? 0,
    tbt: lhr.audits['total-blocking-time'].numericValue ?? 0,
  }
}

For custom config presets, type the config object with LH.Config:

import type { Config } from 'lighthouse'

const desktopConfig: Config = {
  extends: 'lighthouse:default',
  settings: {
    formFactor: 'desktop',
    screenEmulation: { mobile: false, width: 1350, height: 940, deviceScaleFactor: 1, disabled: false },
    throttling: { cpuSlowdownMultiplier: 1, rttMs: 40, throughputKbps: 10 * 1024 },
  },
}

2026 Update: Playwright 1.50 + Lighthouse 13

Playwright 1.50 (January 2025) bumps bundled Chromium to 133 and requires Node.js 22 minimum. Lighthouse 13 drops CommonJS, so ensure your test files use ESM ("type": "module" in package.json or .mjs extensions).

{
  "type": "module",
  "engines": {
    "node": ">=22.0.0"
  },
  "devDependencies": {
    "@playwright/test": "^1.50.0",
    "lighthouse": "^13.0.0",
    "playwright": "^1.50.0"
  }
}

Lighthouse 13 also promotes INP from a TBT-based forecast to a first-class audit. Read it directly via the new audit key, falling back to TBT on older versions:

// Lighthouse 13+: INP is a first-class audit
const inp = lhr.audits['interaction-to-next-paint']?.numericValue

// Fallback to TBT on older versions
const tbt = lhr.audits['total-blocking-time']?.numericValue
const responsivenessMs = inp ?? tbt
Playwright 1.50 ships test.step.skip() and stable trace viewer URLs. If you upgrade from an older release, re-run npx playwright install so the bundled browsers match the new version.

FAQ

Can playwright-lighthouse work with Playwright Test?

Yes. The playwright-lighthouse npm package wraps the same CDP pattern shown above, exposing a playAudit() helper. Pass your Playwright page and a port:

import { test } from '@playwright/test'
import { playAudit } from 'playwright-lighthouse'

test('audit with wrapper', async ({ page }) => {
  await page.goto('https://example.com')
  await playAudit({
    page,
    port: 9222,
    thresholds: { performance: 80, accessibility: 90 },
  })
})

The wrapper hides the setup behind playAudit(), so you give up control over fixtures and worker ports. For parallel suites, prefer the fixture pattern above.

Why use Playwright instead of puppeteer-lighthouse?

Playwright gives you a built-in test runner, storageState auth, sharding, and TypeScript-first APIs. Puppeteer with Lighthouse works, but you wire up the test harness yourself. If you already run Playwright for E2E, adding Lighthouse as a fixture reuses your existing worker configuration and CI pipeline.

// One config, both E2E and Lighthouse audits
export default defineConfig({
  projects: [
    { name: 'e2e', testMatch: /.*\.spec\.ts/ },
    { name: 'lighthouse', testMatch: /.*\.lh\.ts/, workers: 2 },
  ],
})

How do I export Lighthouse results in Playwright?

Use testInfo.attach() to bundle HTML and JSON reports into the Playwright HTML report. Attached files appear under each test in the trace viewer:

test('export lighthouse report', async ({ lighthouse }, testInfo) => {
  const result = await lighthouse('https://example.com')

  // Attach HTML report
  await testInfo.attach('lighthouse.html', {
    body: result.report,
    contentType: 'text/html',
  })

  // Attach JSON for downstream tooling
  await testInfo.attach('lighthouse.json', {
    body: JSON.stringify(result.lhr, null, 2),
    contentType: 'application/json',
  })
})

Open the JSON in the Lighthouse Report Viewer to inspect audits visually without regenerating.

Does this work with Playwright's WebKit or Firefox projects?

No. Lighthouse requires Chrome DevTools Protocol, which only Chromium supports. Keep Lighthouse audits on a dedicated chromium project in playwright.config.ts, while E2E suites can still run across all three engines.

projects: [
  { name: 'chromium-lighthouse', use: { browserName: 'chromium' } },
  { name: 'webkit-e2e', use: { browserName: 'webkit' } },
  { name: 'firefox-e2e', use: { browserName: 'firefox' } },
]

Common Issues

Running into problems? See Troubleshooting for solutions to:

  • Port conflicts and "address already in use" errors
  • Authentication state not persisting
  • Flaky or inconsistent scores
  • Chrome/Chromium version mismatches
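For the port-conflict case specifically, you can ask the OS for an unused port instead of hardcoding 9222 — a sketch using only node:net (the `getFreePort` name is ours):

```typescript
import { createServer } from 'node:net'

// Bind to port 0 so the OS assigns an unused port, then release it
// for Chrome to claim via --remote-debugging-port
function getFreePort(): Promise<number> {
  return new Promise((resolve, reject) => {
    const server = createServer()
    server.unref()
    server.on('error', reject)
    server.listen(0, () => {
      const { port } = server.address() as { port: number }
      server.close(() => resolve(port))
    })
  })
}
```

Pass the result to both chromium.launch and the lighthouse port flag. Note there is a small race window between releasing the port and Chrome binding it, which is acceptable for test setup but not a hard guarantee.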

Next Steps

Authenticated Pages

Run Lighthouse on pages behind login.

CI/CD Integration

Automate audits in GitHub Actions.

Troubleshooting

Fix common integration issues.