The Complete Core Web Vitals Guide
Core Web Vitals are Google's metrics for measuring user experience. They became an official Google ranking factor in June 2021 and directly affect both your search visibility and user satisfaction.
As of 2024, only 48% of mobile sites pass all Core Web Vitals, meaning optimizing these metrics gives you a competitive advantage over half the web.
What Are Core Web Vitals?
Core Web Vitals are three specific metrics that measure loading performance, visual stability, and interactivity:
| Metric | Full Name | Measures | Good | Poor |
|---|---|---|---|---|
| LCP | Largest Contentful Paint | Loading speed | ≤2.5s | >4.0s |
| CLS | Cumulative Layout Shift | Visual stability | ≤0.1 | >0.25 |
| INP | Interaction to Next Paint | Responsiveness | ≤200ms | >500ms |
These thresholds are measured at the 75th percentile of page loads: for a metric to pass, at least 75% of visits must fall in its "good" range.
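As a quick illustration of how that percentile check works (the sample values below are made up), here is a minimal sketch for LCP:

```js
// Hypothetical LCP values (in seconds) from individual page loads.
const lcpSamples = [1.2, 1.8, 2.1, 2.3, 2.4, 2.6, 3.1, 4.2];

// Nearest-rank 75th percentile: sort ascending, take the value at the 75% mark.
const sorted = [...lcpSamples].sort((a, b) => a - b);
const p75 = sorted[Math.ceil(sorted.length * 0.75) - 1];

console.log(`p75 LCP: ${p75}s`);            // 2.6s
console.log(p75 <= 2.5 ? 'Pass' : 'Fail');  // Fail, even though most loads were faster
```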
Important: INP replaced FID as a Core Web Vital on March 12, 2024. If you're still optimizing for First Input Delay, you're targeting an outdated metric.
Largest Contentful Paint (LCP)
LCP measures how long it takes for the largest visible content element to render. It's the hardest Core Web Vital to pass—only 59% of mobile pages achieve good LCP.
What is a Good LCP Score?
| LCP Time | Rating | What It Means |
|---|---|---|
| ≤2.5s | Good | Users perceive the page as fast |
| 2.5-4.0s | Needs Improvement | Noticeable delay, some users may bounce |
| >4.0s | Poor | Significant user frustration, high abandonment |
What Elements Can Be LCP?
The LCP element is the largest image or text block visible in the viewport:
- Images (including CSS background images)—73% of mobile pages have an image as their LCP element
- Video poster images (the thumbnail before playback)
- Block-level text elements (headings, paragraphs)
- SVG elements containing text or images
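To see which element the browser picked on a given page, you can paste a short PerformanceObserver snippet into the DevTools console (a diagnostic sketch, not production code):

```js
// Log every LCP candidate; the last entry reported is the final LCP element.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate:', entry.element, `${Math.round(entry.startTime)}ms`, entry.url || '(text)');
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```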
Common LCP Issues
| Issue | Impact | Details |
|---|---|---|
| Slow server response (TTFB) | High | Sites with poor LCP average 2,270ms TTFB |
| Render-blocking resources | High | CSS and JS that delay first paint |
| Large unoptimized images | High | Uncompressed or oversized images |
| Client-side rendering delays | Medium | JavaScript-heavy frameworks |
| LCP image not prioritized | Medium | Missing fetchpriority or preload |
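A quick way to see where LCP time goes is to compare when the LCP resource finished loading with when it was painted: a late load points at server response, image size, or priority, while a long gap between load and paint points at render-blocking resources or client-side rendering. A rough diagnostic sketch (renderTime can be unavailable for cross-origin images served without a Timing-Allow-Origin header):

```js
// Split LCP into "resource loaded" vs "element rendered" timestamps.
new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const lcp = entries[entries.length - 1];
  if (!lcp) return;
  const loadedAt = Math.round(lcp.loadTime);                      // 0 for text-only LCP
  const renderedAt = Math.round(lcp.renderTime || lcp.startTime);
  console.log({ loadedAt, renderedAt, renderDelay: renderedAt - loadedAt });
}).observe({ type: 'largest-contentful-paint', buffered: true });
```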
LCP Business Impact
- Vodafone improved LCP by 31% and saw 8% more sales
- NDTV halved their LCP and achieved 50% lower bounce rates
- Tokopedia reduced LCP by 55% and increased organic traffic by 23%
→ Complete LCP Guide | All LCP Fixes
Cumulative Layout Shift (CLS)
CLS measures unexpected layout shifts during the page lifecycle. It's the easiest Core Web Vital to pass—79% of mobile sites achieve good CLS.
What is a Good CLS Score?
| CLS Score | Rating | What It Means |
|---|---|---|
| ≤0.1 | Good | Stable layout, minimal shifts |
| 0.1-0.25 | Needs Improvement | Noticeable movement, may frustrate users |
| >0.25 | Poor | Significant layout instability |
Each layout shift is scored as Impact Fraction × Distance Fraction, and CLS is the largest burst (session window) of shift scores over the page's lifetime. A shift affecting 50% of the viewport that moves elements by 25% of the viewport height scores 0.5 × 0.25 = 0.125.
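In the browser, shifts are exposed through the Layout Instability API. The sketch below mirrors how CLS is aggregated: shifts not caused by recent input are summed into session windows (capped at 5 seconds, with gaps under 1 second), and the largest window is the score.

```js
// Approximate CLS: the worst session window of unexpected layout shifts.
let clsValue = 0;
let windowValue = 0;
let windowEntries = [];

new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (entry.hadRecentInput) continue; // shifts right after input are expected

    const first = windowEntries[0];
    const last = windowEntries[windowEntries.length - 1];

    // Start a new window after a 1s gap or once the window spans 5s.
    if (windowEntries.length > 0 &&
        (entry.startTime - last.startTime >= 1000 ||
         entry.startTime - first.startTime >= 5000)) {
      windowValue = 0;
      windowEntries = [];
    }

    windowEntries.push(entry);
    windowValue += entry.value;
    clsValue = Math.max(clsValue, windowValue);
  }
  console.log('CLS so far:', clsValue.toFixed(3));
}).observe({ type: 'layout-shift', buffered: true });
```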
What Causes Layout Shifts?
| Cause | Impact | Details |
|---|---|---|
| Images without dimensions | High | 66% of mobile pages have unsized images |
| Ads and embeds | High | Third-party content without reserved space |
| Dynamically injected content | Medium | Banners, notifications, modals |
| Web fonts causing FOIT/FOUT | Medium | Font swapping changes text size |
| CSS animations | Low | Animations that trigger layout |
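Each layout-shift entry also reports which nodes moved, which is usually enough to trace a shift back to one of the causes above. A console sketch:

```js
// Print the DOM nodes that moved in each unexpected shift.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (entry.hadRecentInput) continue;
    console.log(`Shift of ${entry.value.toFixed(4)} at ${Math.round(entry.startTime)}ms`);
    for (const source of entry.sources || []) {
      console.log('  moved:', source.node, source.previousRect, source.currentRect);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```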
CLS Business Impact
- redBus reduced CLS from 1.65 to 0 and saw 80-100% higher mobile conversions
- Yahoo! Japan fixed CLS and achieved 15% more page views per session
- AliExpress improved CLS and reduced bounce rate by 15%
→ Complete CLS Guide | All CLS Fixes
Interaction to Next Paint (INP)
INP measures how quickly your page responds to user interactions. Unlike FID (which only measured the first interaction), INP tracks all interactions because 90% of user time on a page is spent after initial load.
What is a Good INP Score?
| INP Time | Rating | What It Means |
|---|---|---|
| ≤200ms | Good | Interactions feel instant |
| 200-500ms | Needs Improvement | Noticeable lag on interactions |
| >500ms | Poor | Sluggish, frustrating experience |
How INP is Calculated
INP measures the full interaction lifecycle:
- Input delay — Time from user action to event handler start
- Processing time — Time to execute event handlers
- Presentation delay — Time to render the visual update
The reported INP is typically the worst interaction (or 98th percentile for pages with many interactions).
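The Event Timing API exposes these phases for each interaction. A diagnostic sketch that logs slow interactions and where their time went:

```js
// Break each interaction into input delay, processing, and presentation delay.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (!entry.interactionId) continue; // keep only discrete interactions
    console.log(entry.name, {
      total: Math.round(entry.duration),
      inputDelay: Math.round(entry.processingStart - entry.startTime),
      processing: Math.round(entry.processingEnd - entry.processingStart),
      presentation: Math.round(entry.duration - (entry.processingEnd - entry.startTime)),
    });
  }
}).observe({ type: 'event', durationThreshold: 16, buffered: true });
```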
Common INP Issues
| Issue | Impact | Lab Proxy |
|---|---|---|
| Long-running JavaScript | High | Total Blocking Time (TBT) |
| Heavy event handlers | High | Main thread work |
| Large DOM size | Medium | DOM nodes >1,400 |
| Third-party scripts | Medium | Blocking main thread |
| Hydration delays | Medium | Framework-specific |
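The most common fix for the first two issues is to break long handlers into smaller chunks and yield back to the main thread between them, so the browser can paint and respond to new input. A sketch of the pattern (processItem is a hypothetical placeholder for your own per-item work):

```js
// Yield to the main thread, preferring scheduler.yield() where it's supported.
function yieldToMain() {
  if (globalThis.scheduler && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function handleClick(items) {
  for (const item of items) {
    processItem(item);   // hypothetical per-item work
    await yieldToMain(); // let the browser paint and handle pending input
  }
}
```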
The INP Gap
While 93% of sites passed FID, only 74% pass INP. The desktop vs mobile gap is even more stark: 97% desktop vs 74% mobile pass INP.
INP Business Impact
- Trendyol reduced INP by 50% and saw 1% increase in click-through rate
- The Economic Times cut INP by 30% and achieved 50% lower bounce rate
- redBus improved INP by 75% alongside 7% increase in conversions
→ Complete INP Guide | All INP Fixes
Other Web Vitals
Beyond the three Core Web Vitals, several other metrics affect your Lighthouse Performance score and user experience.
First Contentful Paint (FCP)
FCP measures when the first text or image is painted. It's the earliest signal that the page is loading.
| FCP Time | Rating |
|---|---|
| ≤1.8s | Good |
| 1.8-3.0s | Needs Improvement |
| >3.0s | Poor |
Weight in Lighthouse: 10% of Performance score
FCP issues often share root causes with LCP—fix render-blocking resources and server response time to improve both.
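FCP is exposed through the Paint Timing API, so a quick console check looks like this:

```js
// Read FCP from the Paint Timing API.
new PerformanceObserver((entryList) => {
  const fcp = entryList.getEntriesByName('first-contentful-paint')[0];
  if (fcp) console.log('FCP:', Math.round(fcp.startTime), 'ms');
}).observe({ type: 'paint', buffered: true });
```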
Speed Index (SI)
Speed Index measures how quickly content is visually populated during page load. It captures the perceived loading experience.
| Speed Index | Rating |
|---|---|
| ≤3.4s | Good |
| 3.4-5.8s | Needs Improvement |
| >5.8s | Poor |
Weight in Lighthouse: 10% of Performance score
Speed Index improves when above-the-fold content loads progressively rather than all at once.
Time to First Byte (TTFB)
TTFB measures server responsiveness—the time from request to first byte of response. It's not a Core Web Vital but directly impacts LCP.
| TTFB | Rating |
|---|---|
| ≤200ms | Good |
| 200-600ms | Needs Improvement |
| >600ms | Poor |
Sites with poor LCP have an average TTFB of 2,270ms. Fix TTFB first if your server response exceeds 600ms.
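TTFB is easy to read from the Navigation Timing API:

```js
// responseStart is measured from the start of the navigation, so it includes
// redirects, DNS lookup, and connection time as well as server processing.
const nav = performance.getEntriesByType('navigation')[0];
if (nav) console.log('TTFB:', Math.round(nav.responseStart), 'ms');
```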
Total Blocking Time (TBT)
TBT measures the total time the main thread was blocked by long tasks (>50ms) between FCP and Time to Interactive. It's a lab proxy for INP.
| TBT | Rating |
|---|---|
| ≤200ms | Good |
| 200-600ms | Needs Improvement |
| >600ms | Poor |
Weight in Lighthouse: 30% of Performance score—the highest-weighted metric
TBT correlates twice as well with INP as FID did, making it the best lab metric for predicting real-world interactivity.
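Lighthouse computes TBT between FCP and Time to Interactive in the lab, but you can get a rough feel for main-thread blocking with the Long Tasks API (a simplified sketch, not how Lighthouse itself measures it):

```js
// Sum the blocking portion (anything beyond 50ms) of long main-thread tasks.
// Only tasks that finish after the observer is registered are reported.
let blockingTime = 0;
new PerformanceObserver((entryList) => {
  for (const task of entryList.getEntries()) {
    blockingTime += Math.max(0, task.duration - 50);
  }
  console.log('Blocking time so far:', Math.round(blockingTime), 'ms');
}).observe({ entryTypes: ['longtask'] });
```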
Lighthouse Performance Score Weights
The overall Lighthouse Performance score is a weighted average of five metrics:
| Metric | Weight | Primary Improvement |
|---|---|---|
| TBT | 30% | Reduce JavaScript, break up long tasks |
| LCP | 25% | Optimize images, improve TTFB |
| CLS | 25% | Set dimensions, reserve space |
| FCP | 10% | Remove render-blocking resources |
| SI | 10% | Progressive loading, critical CSS |
Key insight: LCP, CLS, and TBT account for 80% of your score. Focus there first.
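As a simplified illustration of how the weights combine (real Lighthouse first maps each raw metric value onto a 0-100 score using a log-normal curve), with made-up per-metric scores:

```js
const weights = { TBT: 0.30, LCP: 0.25, CLS: 0.25, FCP: 0.10, SI: 0.10 };
const metricScores = { TBT: 60, LCP: 80, CLS: 100, FCP: 90, SI: 85 }; // hypothetical

const performanceScore = Object.entries(weights)
  .reduce((total, [metric, weight]) => total + metricScores[metric] * weight, 0);

console.log(Math.round(performanceScore)); // 81, dragged down mostly by TBT
```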
How to Measure Core Web Vitals
Lab Data (Synthetic Testing)
Lab tools test in controlled environments—useful for debugging but not used by Google for ranking.
Tools:
- Core Web Vitals Checker — Test any page with lab and field data
- LCP Finder — Identify your LCP element and optimization opportunities
- CLS Debugger — Find and fix layout shift sources
- INP Analyzer — Diagnose interaction responsiveness issues
- Chrome DevTools — Performance panel shows LCP, CLS, and long tasks
- Lighthouse — Full performance audit with actionable recommendations
- WebPageTest — Filmstrip view and waterfall analysis
For CI/CD integration, Lighthouse CI runs audits on every commit and fails builds when metrics regress. To audit every page of a site in a single run, Unlighthouse crawls the site and runs Lighthouse against each URL:

```bash
npx unlighthouse --site https://your-site.com
```
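If you use Lighthouse CI, a minimal lighthouserc.js that fails the build on regressions might look like the sketch below; the URL, run count, and thresholds are illustrative, not recommendations:

```js
// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      url: ['https://your-site.com/'],
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
      },
    },
  },
};
```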
Field Data (Real User Monitoring)
Field data from actual visitors is what Google uses for ranking:
- PageSpeed Insights — Shows CrUX data for your URL and origin
- Search Console — Core Web Vitals report with URL-level groupings
- Chrome UX Report (CrUX) — real user data aggregated over a rolling 28-day window
Important: Because CrUX covers a trailing 28-day window, expect up to 28 days before your fixes are fully reflected in the data.
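To collect your own field data rather than waiting on CrUX, the usual approach is Google's open-source web-vitals library; the /analytics endpoint below is a placeholder for whatever your RUM backend expects:

```js
import { onCLS, onINP, onLCP } from 'web-vitals';

function sendToAnalytics(metric) {
  // metric.rating is 'good', 'needs-improvement', or 'poor'
  navigator.sendBeacon('/analytics', JSON.stringify({
    name: metric.name,   // 'CLS', 'INP', or 'LCP'
    value: metric.value,
    rating: metric.rating,
  }));
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```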
Lab vs Field Differences
| Aspect | Lab Data | Field Data |
|---|---|---|
| Device | Simulated throttling | Real user devices |
| Network | Fixed conditions | Variable connections |
| Interactions | Scripted or none | Real user behavior |
| Use for | Debugging, CI/CD | SEO ranking, real UX |
Your lab scores may differ significantly from field data. A page can score 95 in Lighthouse but fail Core Web Vitals in the field if real users have slower devices or networks.
Why Core Web Vitals Matter for SEO
Core Web Vitals are a confirmed Google ranking factor.
Ranking Studies
- Pages at position 1 are 10% more likely to pass CWV than position 9
- Sites failing Core Web Vitals rank 3.7 percentage points worse on average
- CWV act as a tie-breaker when content quality is similar
You must pass all three metrics to gain ranking advantage—there's no partial credit.
User Experience & Conversions
The Deloitte "Milliseconds Make Millions" study found a 0.1 second improvement in site speed can:
- Increase retail conversions by 8.4%
- Boost average order value by 9.2%
- Increase travel conversions by 10.1%
Google research shows the probability of a bounce increases 32% as load time goes from 1 second to 3 seconds, and 90% as it goes from 1 second to 5 seconds.
Current State of the Web (2024)
According to HTTP Archive 2024 data:
| Metric | Mobile Pass Rate | Desktop Pass Rate |
|---|---|---|
| LCP | 59% | 72% |
| CLS | 79% | 72% |
| INP | 74% | 97% |
| All Three | 48% | 54% |
The trend is improving: mobile CWV pass rates grew from 31% (2022) to 48% (2024). But over half the web still fails.
Test Your Entire Site
Most tools only test one page. Your homepage might score 100, but what about:
- Product pages with heavy images?
- Blog posts with embedded videos?
- Dynamic pages with JavaScript widgets?
The 2024 Web Almanac specifically notes that homepage performance is often not representative of an entire site.
```bash
npx unlighthouse --site https://your-site.com
```
Next Steps
- Audit — Run Unlighthouse to find issues across all pages
- Prioritize — Fix the highest-impact issues first (TBT, LCP, CLS account for 80% of score)
- Automate — Set up Lighthouse CI to catch regressions before they reach production
- Monitor — Set up continuous monitoring to track real user metrics
Related
- Core Web Vitals Checker — Test any page
- Lighthouse CI Guide — Automate testing in CI/CD
- Bulk Lighthouse Testing — Test your entire site
- Largest Contentful Paint (LCP) Guide
- Cumulative Layout Shift (CLS) Guide
- Interaction to Next Paint (INP) Guide
- Lighthouse Score Calculator
- LCP Definition | CLS Definition | INP Definition