Core Web Vitals are Google's metrics for measuring user experience. They became an official Google ranking factor in June 2021 and directly affect both your search visibility and user satisfaction.
As of 2024, only 48% of mobile sites pass all Core Web Vitals, meaning optimizing these metrics gives you a competitive advantage over half the web.
Core Web Vitals are three specific metrics that measure loading performance, visual stability, and interactivity:
| Metric | Full Name | Measures | Good | Poor |
|---|---|---|---|---|
| LCP | Largest Contentful Paint | Loading speed | ≤2.5s | >4.0s |
| CLS | Cumulative Layout Shift | Visual stability | ≤0.1 | >0.25 |
| INP | Interaction to Next Paint | Responsiveness | ≤200ms | >500ms |
These thresholds are assessed at the 75th percentile of page loads: at least 75% of visits must experience a "good" score for the metric to pass.
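As a sketch of what p75 evaluation means in practice (a simplified nearest-rank percentile; CrUX's actual aggregation differs), assuming a page with ten recorded loads:

```javascript
// Nearest-rank percentile: sort the observed values and take the value
// at the p-th percentile position.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// A page passes LCP only if the 75th-percentile load is within 2.5s.
function passesLcp(lcpSamplesMs) {
  return percentile(lcpSamplesMs, 75) <= 2500;
}

// Hypothetical samples: two loads are very slow, but the p75 load
// is 2,400ms, so the page still passes.
const samples = [1200, 1400, 1500, 1800, 2000, 2100, 2300, 2400, 3900, 5200];
console.log(passesLcp(samples)); // true
```

Note the implication: a minority of very slow loads will not fail the metric, but anything past the 75th percentile will.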
Important: INP replaced FID as a Core Web Vital on March 12, 2024. If you're still optimizing for First Input Delay, you're targeting an outdated metric.
LCP measures how long it takes for the largest visible content element to render. It's the hardest Core Web Vital to pass—only 59% of mobile pages achieve good LCP.
| LCP Time | Rating | What It Means |
|---|---|---|
| ≤2.5s | Good | Users perceive the page as fast |
| 2.5-4.0s | Needs Improvement | Noticeable delay, some users may bounce |
| >4.0s | Poor | Significant user frustration, high abandonment |
The LCP element is the largest image or text block visible in the initial viewport. The most common causes of poor LCP:
| Issue | Impact | Prevalence |
|---|---|---|
| Slow server response (TTFB) | High | Sites with poor LCP average 2,270ms TTFB |
| Render-blocking resources | High | CSS and JS that delay first paint |
| Large unoptimized images | High | Uncompressed or oversized images |
| Client-side rendering delays | Medium | JavaScript-heavy frameworks |
| LCP image not prioritized | Medium | Missing fetchpriority or preload |
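As a sketch of the last fix (the file path and dimensions here are hypothetical), the LCP image can be preloaded and given high fetch priority:

```html
<!-- In <head>: hint the browser to fetch the hero image early -->
<link rel="preload" as="image" href="/hero.jpg" fetchpriority="high">

<!-- The LCP element itself: high priority, never lazy-loaded -->
<img src="/hero.jpg" alt="Hero" width="1200" height="600" fetchpriority="high">
```

Avoid `loading="lazy"` on the LCP image; lazy-loading it delays the fetch until layout, which directly worsens LCP.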
→ Complete LCP Guide | All LCP Fixes
CLS measures unexpected layout shifts during the page lifecycle. It's the easiest Core Web Vital to pass—79% of mobile sites achieve good CLS.
| CLS Score | Rating | What It Means |
|---|---|---|
| ≤0.1 | Good | Stable layout, minimal shifts |
| 0.1-0.25 | Needs Improvement | Noticeable movement, may frustrate users |
| >0.25 | Poor | Significant layout instability |
CLS is calculated as: Impact Fraction × Distance Fraction. A shift affecting 50% of the viewport that moves 25% of the viewport height = 0.5 × 0.25 = 0.125 CLS.
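The formula can be expressed directly (a minimal sketch; reported CLS is actually the sum of shift scores in the worst "session window" of shifts close together in time):

```javascript
// A single layout shift's score is impact fraction × distance fraction.
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// Sum the shift scores within one session window; the reported CLS
// is the score of the worst such window over the page's lifetime.
function windowScore(shifts) {
  return shifts.reduce(
    (sum, s) => sum + layoutShiftScore(s.impact, s.distance),
    0
  );
}

// The example from the text: 50% of the viewport shifts by 25% of its height.
console.log(layoutShiftScore(0.5, 0.25)); // 0.125
```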
| Cause | Impact | Prevalence |
|---|---|---|
| Images without dimensions | High | 66% of mobile pages have unsized images |
| Ads and embeds | High | Third-party content without reserved space |
| Dynamically injected content | Medium | Banners, notifications, modals |
| Web fonts causing FOIT/FOUT | Medium | Font swapping changes text size |
| CSS animations | Low | Animations that trigger layout |
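The two highest-impact fixes above, sketched in markup (the file name and the 250px ad slot are hypothetical examples):

```html
<!-- Explicit dimensions let the browser reserve space before the image loads -->
<img src="/product.jpg" alt="Product" width="800" height="600">

<!-- Reserve a fixed slot for third-party content so it can't push the page down -->
<div style="min-height: 250px">
  <!-- ad script injects here -->
</div>
```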
→ Complete CLS Guide | All CLS Fixes
INP measures how quickly your page responds to user interactions. Unlike FID (which only measured the first interaction), INP tracks all interactions because 90% of user time on a page is spent after initial load.
| INP Time | Rating | What It Means |
|---|---|---|
| ≤200ms | Good | Interactions feel instant |
| 200-500ms | Needs Improvement | Noticeable lag on interactions |
| >500ms | Poor | Sluggish, frustrating experience |
INP measures the full interaction lifecycle: input delay, processing time, and presentation delay (the time until the next frame is painted).
The reported INP is typically the worst interaction (or 98th percentile for pages with many interactions).
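That selection logic can be sketched as follows (a simplification of Chrome's actual behavior, which discards roughly one outlier interaction per 50):

```javascript
// Sketch: the reported INP is the slowest interaction, approaching the
// 98th percentile as the interaction count grows.
function reportedInp(interactionDurationsMs) {
  const sorted = [...interactionDurationsMs].sort((a, b) => a - b);
  if (sorted.length === 0) return 0;
  // Ignore roughly one outlier per 50 interactions; for a small number
  // of interactions this is simply the single worst one.
  const outliersToIgnore = Math.floor(sorted.length / 50);
  return sorted[sorted.length - 1 - outliersToIgnore];
}

// Hypothetical page with four interactions: the worst one is reported.
console.log(reportedInp([40, 90, 120, 310])); // 310
```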
| Issue | Impact | Lab Proxy |
|---|---|---|
| Long-running JavaScript | High | Total Blocking Time (TBT) |
| Heavy event handlers | High | Main thread work |
| Large DOM size | Medium | DOM nodes >1,400 |
| Third-party scripts | Medium | Blocking main thread |
| Hydration delays | Medium | Framework-specific |
While 93% of sites passed FID, only 74% pass INP. The desktop vs mobile gap is even more stark: 97% desktop vs 74% mobile pass INP.
→ Complete INP Guide | All INP Fixes
Beyond the three Core Web Vitals, several other metrics affect your Lighthouse Performance score and user experience.
FCP measures when the first text or image is painted. It's the earliest signal that the page is loading.
| FCP Time | Rating |
|---|---|
| ≤1.8s | Good |
| 1.8-3.0s | Needs Improvement |
| >3.0s | Poor |
Weight in Lighthouse: 10% of Performance score
FCP issues often share root causes with LCP—fix render-blocking resources and server response time to improve both.
Speed Index measures how quickly content is visually populated during page load. It captures the perceived loading experience.
| Speed Index | Rating |
|---|---|
| ≤3.4s | Good |
| 3.4-5.8s | Needs Improvement |
| >5.8s | Poor |
Weight in Lighthouse: 10% of Performance score
Speed Index improves when above-the-fold content loads progressively rather than all at once.
TTFB measures server responsiveness—the time from request to first byte of response. It's not a Core Web Vital but directly impacts LCP.
| TTFB | Rating |
|---|---|
| ≤200ms | Good |
| 200-600ms | Needs Improvement |
| >600ms | Poor |
Sites with poor LCP have an average TTFB of 2,270ms. Fix TTFB first if your server response exceeds 600ms.
TBT measures the total time the main thread was blocked by long tasks (>50ms) between FCP and Time to Interactive. It's a lab proxy for INP.
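The calculation can be sketched directly (the task durations here are hypothetical):

```javascript
// TBT sums only the portion of each main-thread task beyond the 50ms
// budget, between FCP and Time to Interactive.
function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs
    .filter((duration) => duration > 50)
    .reduce((sum, duration) => sum + (duration - 50), 0);
}

// Only the 120ms and 70ms tasks are "long": (120-50) + (70-50) = 90ms.
console.log(totalBlockingTime([120, 40, 70])); // 90
```

This is why breaking one 300ms task into six 50ms chunks eliminates its blocking time entirely, even though the total work is unchanged.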
| TBT | Rating |
|---|---|
| ≤200ms | Good |
| 200-600ms | Needs Improvement |
| >600ms | Poor |
Weight in Lighthouse: 30% of Performance score—the highest-weighted metric
TBT correlates with INP roughly twice as strongly as it did with FID, making it the best lab metric for predicting real-world interactivity.
The overall Lighthouse Performance score is a weighted average of five metrics:
| Metric | Weight | Primary Improvement |
|---|---|---|
| TBT | 30% | Reduce JavaScript, break up long tasks |
| LCP | 25% | Optimize images, improve TTFB |
| CLS | 25% | Set dimensions, reserve space |
| FCP | 10% | Remove render-blocking resources |
| SI | 10% | Progressive loading, critical CSS |
Key insight: LCP, CLS, and TBT account for 80% of your score. Focus there first.
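The weighted average can be sketched as follows (metric scores here are already normalized to 0-1; Lighthouse's curves for converting raw values to scores are omitted):

```javascript
// Lighthouse v10+ metric weights.
const WEIGHTS = { fcp: 0.10, si: 0.10, lcp: 0.25, tbt: 0.30, cls: 0.25 };

// metricScores: per-metric scores from 0 to 1.
function performanceScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += weight * metricScores[metric];
  }
  return Math.round(total * 100);
}

// Perfect LCP, CLS, and TBT with mediocre FCP and SI still scores 90.
console.log(performanceScore({ fcp: 0.5, si: 0.5, lcp: 1, tbt: 1, cls: 1 }));
// 90
```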
Lab tools test in controlled environments—useful for debugging but not used by Google for ranking.
Tools like Lighthouse (in Chrome DevTools or CI) and Unlighthouse generate lab data. To scan an entire site:

```shell
npx unlighthouse --site https://your-site.com
```
Field data from actual visitors, collected through the Chrome User Experience Report (CrUX), is what Google uses for ranking.
Important: CrUX aggregates a rolling 28-day window, so expect up to 28 days between shipping a fix and seeing improved field data.
| Aspect | Lab Data | Field Data |
|---|---|---|
| Device | Simulated throttling | Real user devices |
| Network | Fixed conditions | Variable connections |
| Interactions | Scripted or none | Real user behavior |
| Use for | Debugging, CI/CD | SEO ranking, real UX |
Your lab scores may differ significantly from field data. A page can score 95 in Lighthouse but fail Core Web Vitals in the field if real users have slower devices or networks.
Core Web Vitals are a confirmed Google ranking factor.
You must pass all three metrics to gain ranking advantage—there's no partial credit.
The Deloitte "Milliseconds Make Millions" study found that a 0.1 second improvement in site speed produced measurable lifts in conversion rate, average order value, and engagement across the retail, travel, and luxury sites studied.
Google research shows bounce probability increases 32% as load time goes from 1 to 3 seconds, and 90% as it goes from 1 to 5 seconds.
According to HTTP Archive 2024 data:
| Metric | Mobile Pass Rate | Desktop Pass Rate |
|---|---|---|
| LCP | 59% | 72% |
| CLS | 79% | 72% |
| INP | 74% | 97% |
| All Three | 48% | 54% |
The trend is improving: mobile CWV pass rates grew from 31% (2022) to 48% (2024). But over half the web still fails.
Most tools only test one page. Your homepage might score 100 while product pages, article templates, or checkout flows fail.
The 2024 Web Almanac specifically notes that homepage performance is often not representative of an entire site.
```shell
npx unlighthouse --site https://your-site.com
```