PageSpeed Insights vs Lighthouse: What's the Difference?

Why your Lighthouse score differs from PageSpeed Insights. Understand lab vs field data, scoring differences, and when to use each tool.
Harlan Wilton · 6 min read

Both PageSpeed Insights and Lighthouse come from Google. Both give you a 0-100 performance score. Yet they return different numbers for the same page. Here's why, and which one to trust.

The two tools

Lighthouse

Open-source auditing tool. Runs locally in Chrome DevTools, CLI, or CI. Lab data only — controlled, reproducible synthetic tests on your machine. You choose device emulation, throttling mode, and network conditions. No Google account needed.

```shell
npx lighthouse https://example.com --output=json
```

Currently on version 13, which removed the PWA category.
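Once you have the JSON report, the score is straightforward to extract programmatically. A minimal Node sketch, assuming a report produced by the command above — `categories.performance.score` is the standard Lighthouse JSON path, stored as a 0-1 fraction:

```javascript
// Pull the 0-100 performance score out of a Lighthouse JSON report.
// Lighthouse stores category scores as fractions between 0 and 1.
function performanceScore(report) {
  return Math.round(report.categories.performance.score * 100);
}

// Minimal report fragment for illustration; a real report has far more fields.
const report = { categories: { performance: { score: 0.92 } } };
console.log(performanceScore(report)); // 92
```

In CI you would `JSON.parse` the file Lighthouse wrote and fail the build if the score drops below a threshold.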

PageSpeed Insights

Google's web tool at pagespeed.web.dev. Runs Lighthouse on Google's servers from data centers in Oregon, South Carolina, the Netherlands, or Taiwan. Lab + field data — overlays real-user metrics from CrUX (a 28-day rolling window, reported at P75). Paste a URL and go.

Also runs Lighthouse 13. Provides an API with 25,000 free requests/day.
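The API is a simple GET endpoint. A sketch of building a request URL for it — the `runPagespeed` v5 endpoint and the `url`, `strategy`, and `key` parameters are the documented public API; the key itself is a placeholder you would create in Google Cloud:

```javascript
// Build a PageSpeed Insights v5 API request URL.
// Pass an API key to use the 25k/day quota; omit it for casual testing.
function psiRequestUrl(pageUrl, strategy = 'mobile', apiKey) {
  const endpoint = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';
  const params = new URLSearchParams({ url: pageUrl, strategy });
  if (apiKey) params.set('key', apiKey);
  return `${endpoint}?${params}`;
}

console.log(psiRequestUrl('https://example.com'));
```

Fetch that URL and the response contains both `lighthouseResult` (lab) and `loadingExperience` (CrUX field) sections.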

[Example Lighthouse report in Chrome DevTools: Performance 92 — First Contentful Paint 1.2 s, Largest Contentful Paint 2.1 s, Total Blocking Time 120 ms, Cumulative Layout Shift 0.05, Speed Index 2.8 s]

Key differences

| | Lighthouse (CLI/DevTools) | PageSpeed Insights | Edge |
|---|---|---|---|
| Data type | Lab only | Lab + field (CrUX) | PSI |
| Where it runs | Your machine | Google servers | Lighthouse |
| INP | Not measured (TBT proxy) | Real field data from CrUX | PSI |
| Throttling | Simulated or applied (your choice) | Simulated only | Lighthouse |
| Caching | Cold or warm (configurable) | Always cold cache | Lighthouse |
| Network | Your connection (throttled) | Google's network | |
| Score consistency | High (controlled) | Varies (shared infra) | Lighthouse |
| API access | Local only | 25k free requests/day | PSI |
| Site-wide | One URL at a time | One URL at a time | |

Why your scores differ

A 10-15 point gap between PSI and local Lighthouse is normal. Score variance of ±5 points happens even across identical PSI runs. Here's what causes larger gaps:

Server location

PSI routes your test to whichever Google data center is closest to your IP — not closest to your server. As research on PSI server locations shows, if your site lacks a CDN, a test routed from Taiwan to a US-only origin adds hundreds of milliseconds to TTFB, inflating LCP.

Local Lighthouse connects directly from your machine, which might be on the same network as your server or using a nearby CDN edge.

Throttling method

PSI uses simulated throttling exclusively — it runs the page at full speed then mathematically adjusts the metrics. Lighthouse CLI lets you choose between simulated and applied (real network) throttling. Applied throttling produces lower scores but more realistic results.

Hardware differences

Google's servers have different CPU/RAM profiles than your laptop. Simulated throttling tries to normalize this, but CPU-heavy pages (lots of JavaScript parsing) can score differently depending on the underlying hardware.

Caching state

PSI always runs a cold-cache test. Your local Lighthouse might inadvertently benefit from cached service worker responses or browser cache, especially if you've visited the page recently.

Extensions and background processes

Chrome extensions inject scripts and CSS. Local Lighthouse in DevTools runs in a separate profile, but other browser activity still affects CPU availability. PSI has none of these contamination issues.

Lab data vs field data

This is the most important distinction between the two tools.

Lab data (Lighthouse): your machine → Lighthouse → a synthetic score from a single page load.

Field data (CrUX/PSI): real Chrome users → the CrUX dataset → PSI's field metrics, aggregated over 1,000+ visits.

Lab data (Lighthouse) tests a single page load under controlled conditions. It's reproducible, fast, and catches regressions — but it represents one synthetic user, not your actual audience.

Field data (CrUX in PSI) aggregates real Chrome users visiting your site over a 28-day rolling window. It uses the 75th percentile (P75), catching the experience of users with slow devices and poor connections — not just the median.

| | Lab (Lighthouse) | Field (CrUX/PSI) |
|---|---|---|
| Source | Simulated page load | Real Chrome users |
| Sample | 1 test run | Thousands of visits (P75) |
| Time window | Point-in-time | 28-day rolling average |
| Traffic requirement | None | ~1,000 page loads in 28 days |
| INP | Not available | Yes |
| Consistency | High | Varies with real conditions |

A site can score 95 in Lighthouse but show "Poor" LCP in CrUX field data. This happens when your real users are on slow mobile networks that lab throttling doesn't fully simulate.
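To see why P75 matters, here is an illustrative sketch with made-up LCP samples. It uses a simple nearest-rank percentile; CrUX's exact aggregation differs, but the point stands: the median can look healthy while the 75th percentile fails the threshold.

```javascript
// Nearest-rank percentile: sort, then index into the sorted array.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Hypothetical LCP samples in seconds: most users fast, a slow-device tail.
const lcp = [1.2, 1.4, 1.5, 1.6, 1.8, 2.0, 2.4, 3.1, 4.2, 5.0];
console.log(percentile(lcp, 50)); // 1.8 — median looks "Good" (≤2.5s)
console.log(percentile(lcp, 75)); // 3.1 — P75 exposes the slow tail
```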

How Lighthouse scores performance

Both tools use the same scoring weights. Understanding them helps explain why the same page gets different numbers:

| Metric | Weight | Good | Poor |
|---|---|---|---|
| Total Blocking Time (TBT) | 30% | ≤200 ms | >600 ms |
| Largest Contentful Paint (LCP, CWV) | 25% | ≤2.5 s | >4.0 s |
| Cumulative Layout Shift (CLS, CWV) | 25% | ≤0.10 | >0.25 |
| First Contentful Paint (FCP) | 10% | ≤1.8 s | >3.0 s |
| Speed Index (SI) | 10% | ≤3.4 s | >5.8 s |

TBT dominates at 30%. Since PSI and local Lighthouse run on different hardware with different CPU profiles, TBT-heavy pages see the biggest score gaps between the two tools.
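A sketch of how those weights combine. Lighthouse first maps each raw metric through a log-normal scoring curve to a 0-1 value (omitted here); the final score is just the weighted average. The weights below are the ones from the table above:

```javascript
// Lighthouse performance score = weighted average of per-metric 0-1 scores.
const WEIGHTS = { tbt: 0.30, lcp: 0.25, cls: 0.25, fcp: 0.10, si: 0.10 };

function overallScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += weight * metricScores[metric];
  }
  return Math.round(total * 100);
}

// Hypothetical per-metric scores: everything perfect except TBT at 0.5.
console.log(overallScore({ tbt: 0.5, lcp: 1, cls: 1, fcp: 1, si: 1 })); // 85
```

Halving the TBT score alone costs 15 points; halving FCP would cost only 5. That asymmetry is why hardware-sensitive, JavaScript-heavy pages swing the most between tools.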

INP: the biggest gap between PSI and Lighthouse

Interaction to Next Paint (INP) replaced First Input Delay as a Core Web Vital in March 2024. It measures responsiveness across the entire page lifecycle — every tap, click, and keypress.

Lighthouse cannot measure INP. It uses Total Blocking Time (TBT) as a lab proxy, but TBT only measures main-thread blockage during initial load. A page can achieve 0ms TBT and still fail INP in the field because JavaScript freezes the page when users interact after load. Read more about the INP vs TBT gap.

PSI shows real INP field data from CrUX. This is the single biggest reason to check PSI even if you primarily use Lighthouse CLI — it's the only place you'll see actual interaction responsiveness without adding your own Real User Monitoring.

When to use which

| Scenario | Use |
|---|---|
| Development | Lighthouse CLI — fast feedback, consistent scores, CI integration |
| Production monitoring | PSI / CrUX — real user data, INP, field metrics |
| CI/CD checks | Lighthouse CI — automated regression detection |
| Debugging score drops | Lighthouse CLI — isolate variables, test specific throttling |
| Checking INP | PSI — only source of real INP data without RUM |
| Site-wide audits | Unlighthouse — crawl every page, spot patterns across routes |
| Stakeholder reports | PSI — field data carries more weight, reflects real users |

How Unlighthouse helps

Both Lighthouse and PSI test one URL at a time. Performance issues rarely affect just one page — a slow font, a heavy third-party script, or a layout shift pattern can be site-wide.

Unlighthouse crawls your entire site and runs Lighthouse on every discovered page. Instead of spot-checking your homepage, you catch the /checkout page with a 3.2s LCP or the /blog layout causing CLS on 40 posts.

```shell
npx unlighthouse --site https://your-site.com
```

Combine with PSI for field data on your critical pages, and Lighthouse CI for automated regression checks in your pipeline. Each tool fills a different gap.
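For repeated scans, Unlighthouse also reads a config file instead of CLI flags. A minimal sketch — the `site` and `scanner` options reflect my reading of Unlighthouse's config schema, so treat the exact field names as assumptions and check the docs:

```javascript
// unlighthouse.config.ts — minimal configuration sketch (field names assumed).
export default {
  site: 'https://your-site.com',
  scanner: {
    // Run each route multiple times to smooth out score variance.
    samples: 3,
    // Emulate 'mobile' (the Lighthouse default) or 'desktop'.
    device: 'mobile',
  },
}
```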


FAQ

Why is my PageSpeed Insights score different from Lighthouse?

PSI runs Lighthouse on Google's servers with different hardware, network conditions, and server locations than your local machine. PSI also overlays CrUX field data that can affect the reported metrics. A gap of ±5-15 points between the two is normal.

Which score should I trust — PSI or Lighthouse?

For production sites with enough traffic: trust PSI's field data. It reflects what real users experience. For development and debugging: trust local Lighthouse, where you control variables and get consistent results.

Does PageSpeed Insights use Lighthouse?

Yes. PSI runs Lighthouse on Google's servers and adds CrUX field data on top. The lab scores in PSI come from the same Lighthouse engine you run locally.

Why does my score change every time I run PageSpeed Insights?

PSI runs on shared Google infrastructure where network conditions, server load, and routing vary between runs. Score variance of ±5 points is normal. Run 3-5 tests and use the median for accurate comparisons.
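The "run several tests, take the median" advice is trivial to script once you have scores in hand (from the PSI API, for instance):

```javascript
// Median of several PSI runs: robust to a single outlier run.
function median(scores) {
  const s = [...scores].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Five hypothetical PSI runs of the same URL:
console.log(median([88, 92, 85, 90, 94])); // 90
```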

What is field data in PageSpeed Insights?

Field data comes from the Chrome User Experience Report (CrUX) — real metrics from Chrome users who visited your site over the past 28 days, reported at the 75th percentile. A URL needs roughly 1,000 page loads in 28 days before field data appears.

Should I optimize for Lighthouse or PageSpeed Insights?

Optimize for real users. Use Lighthouse to diagnose and fix lab issues during development. Use PSI field data to verify improvements reach your actual audience. Don't chase a perfect 100 — diminishing returns hit hard above 90.

Can I get a 100 on PageSpeed Insights?

In lab data, yes. But field data may still show issues because real users have diverse devices, network conditions, and geographic locations that lab tests can't replicate. A 100 lab score with "Poor" field CWV means your real users are still suffering.