LCP is the hardest Core Web Vital to pass. Only 59% of mobile pages achieve good LCP compared to 74% for INP and 72% for CLS.
Why? LCP depends on everything—server response, network, image optimization, CSS, JavaScript, and rendering. One bottleneck anywhere in the chain kills your score.
Largest Contentful Paint marks when the main content finishes loading. It answers the user's question: "Is this page actually showing me what I came for?"
The element considered "largest" can change as the page loads. The browser stops reporting new LCP candidates after the first user interaction (tap, keypress, or scroll); the last candidate reported becomes the final LCP.
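You can watch candidates arrive with a `PerformanceObserver`. A minimal sketch; the `finalCandidate` helper is a hypothetical name, and the guard lets the snippet run harmlessly outside a browser:

```js
// Pick the final LCP candidate: the last entry the browser reported.
// (hypothetical helper name, for illustration)
function finalCandidate(entries) {
  return entries.length ? entries[entries.length - 1] : null
}

// Browser-only wiring: skip if LCP entries aren't supported here.
const lcpSupported =
  typeof PerformanceObserver !== 'undefined' &&
  (PerformanceObserver.supportedEntryTypes || []).includes('largest-contentful-paint')

if (lcpSupported) {
  const observer = new PerformanceObserver((list) => {
    const entry = finalCandidate(list.getEntries())
    if (entry) console.log('LCP candidate at', entry.startTime, 'ms:', entry.element)
  })
  // buffered: true replays candidates that fired before we subscribed
  observer.observe({ type: 'largest-contentful-paint', buffered: true })
}
```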
LCP Thresholds:
| Device | Good | Needs Improvement | Poor |
|---|---|---|---|
| Mobile | ≤2.5s | 2.5–4.0s | >4.0s |
| Desktop | ≤1.2s | 1.2–2.4s | >2.4s |
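The mobile cut-offs from the table can be encoded as a tiny classifier (a hypothetical helper, useful when bucketing field samples yourself):

```js
// Classify a mobile LCP value (ms) against the thresholds above.
// Cut-offs: good <= 2500 ms, needs improvement <= 4000 ms, else poor.
function rateLcp(ms) {
  if (ms <= 2500) return 'good'
  if (ms <= 4000) return 'needs-improvement'
  return 'poor'
}
```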
Score Weight: LCP accounts for 25% of your Lighthouse Performance score—the largest single metric weight.
What elements can be LCP:
- `<img>` elements—73% of mobile pages have an image as LCP
- `<video>` elements (poster image or first frame)
- Elements with a CSS `background-image` via `url()`

Users don't wait. When the main content takes too long, they assume the page is broken. The 2.5-second mobile threshold is based on research into user attention spans—after 3 seconds, users start losing patience and bouncing.
The visual impact is immediate: a blank or skeleton screen for 4+ seconds feels broken. A fully-rendered hero in under 2.5 seconds feels fast.
LCP is one of Google's Core Web Vitals ranking factors. Google uses field data (CrUX) from the 75th percentile—meaning 75% of your users need good LCP for it to count as "good" for ranking purposes.
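The 75th-percentile rule is easy to sketch: sort the field samples and take the value three quarters of the way up. A simplified nearest-rank version (CrUX's exact aggregation differs):

```js
// 75th percentile of LCP samples (ms), nearest-rank method.
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b)
  const idx = Math.ceil(0.75 * sorted.length) - 1
  return sorted[idx]
}
```

If `p75` of your samples is at or under 2500 ms, the page counts as "good" under the mobile threshold.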
Sites with good Core Web Vitals can appear in the "Top Stories" carousel and benefit from ranking tiebreakers. Google confirms this is a real but modest ranking factor.
Real-world case studies demonstrate significant revenue correlation:
| Company | LCP Improvement | Business Impact |
|---|---|---|
| Rakuten | Achieved good LCP | 61% higher conversion rate |
| Vodafone | 31% improvement | 8% more sales |
| Tokopedia | 55% (3.78s → 1.72s) | 23% better session duration |
| NDTV | 50% reduction | 50% lower bounce rate |
| Agrofy | 70% improvement | 76% less load abandonment |
The pattern is consistent: faster LCP correlates with better engagement and conversion.
Google recommends breaking LCP into four sub-parts to pinpoint exactly where time is being wasted:
| Sub-Part | Target % of LCP | What It Measures |
|---|---|---|
| TTFB | ~40% (1.0s) | Time for server to return first byte |
| Resource Load Delay | <10% (0.25s) | Time before LCP resource starts downloading |
| Resource Load Duration | ~40% (1.0s) | Time to download the LCP resource |
| Element Render Delay | <10% (0.25s) | Time from download complete to painted |
The critical insight: Sites with poor LCP have an average TTFB of 2,270ms. That's nearly the entire 2.5s mobile budget consumed before the browser even starts downloading images.
If your TTFB is over 800ms, fix that first. Everything else is wasted effort until the server responds faster.
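TTFB can be read straight from the Navigation Timing API. A sketch with a guard so it degrades outside the browser (the 800 ms cut-off is the one above):

```js
// TTFB = time from navigation start to first response byte.
function ttfbOf(navEntry) {
  return navEntry.responseStart - navEntry.startTime
}

// Browser-only: the navigation entry exists only for a real page load.
if (typeof performance !== 'undefined' && performance.getEntriesByType) {
  const [nav] = performance.getEntriesByType('navigation')
  if (nav) {
    const ttfb = ttfbOf(nav)
    console.log('TTFB:', ttfb, 'ms', ttfb > 800 ? '(fix this first)' : '')
  }
}
```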
Each sub-part has different causes:
- High TTFB: slow server processing, missing caching or CDN, redirects
- Resource load delay: the LCP image isn't discoverable in the initial HTML, is lazy-loaded, or has low fetch priority
- Resource load duration: oversized or unoptimized images, uncompressed responses, slow networks
- Element render delay: render-blocking CSS/JavaScript, client-side rendering
Measure first, then target the right sub-part.
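Given the timestamps, the four sub-parts fall out of simple subtraction. A sketch assuming you already have TTFB, the LCP resource's load start/end, and the final LCP time, all in ms relative to navigation start (the function name is hypothetical):

```js
// Split an LCP value into Google's four sub-parts.
function lcpSubParts({ ttfb, loadStart, loadEnd, lcpTime }) {
  return {
    ttfb,                                      // server response
    resourceLoadDelay: loadStart - ttfb,       // discovery/priority gap
    resourceLoadDuration: loadEnd - loadStart, // download time
    elementRenderDelay: lcpTime - loadEnd,     // download complete -> painted
  }
}
```

The four fields always sum back to the LCP value, which makes it easy to see which sub-part is eating the 2.5 s budget.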
Poor LCP stems from a short list of causes. Here are the most impactful:
| Issue | Impact | Difficulty | Fix Time |
|---|---|---|---|
| Slow Server Response (TTFB) | High | Medium | Hours–Days |
| Render-Blocking Resources | High | Medium | Hours |
| Redirects | High | Low | Minutes |
| LCP Image Not Prioritized | High | Low | Minutes |
| LCP Lazy-Loaded | High | Low | Minutes |
| Resource Load Delay | High | Medium | Hours |
| Client-Side Rendering | High | High | Hours–Days |
| Large Images | Medium | Low | Minutes |
| Lazy Loading Above Fold | Medium | Low | Minutes |
| Total Byte Weight | Medium | Medium | Hours |
| Unminified JavaScript | Medium | Low | Minutes |
| Unused JavaScript | Medium | High | Hours–Days |
→ Diagnose your specific issue
Lab tests give you controlled, reproducible results—but they represent synthetic conditions, not real users.
Chrome DevTools: Record a trace in the Performance panel; the LCP marker in the Timings track shows which element was chosen and when it painted.
Lighthouse: Run an audit in DevTools or via CLI. Check the LCP metric and related audits under "Opportunities" and "Diagnostics."
WebPageTest: Detailed waterfall analysis showing exactly when LCP happens relative to other resources. Use for deep debugging.
Field data is what Google uses for ranking. It represents actual user experience.
PageSpeed Insights: Enter your URL to see CrUX (Chrome User Experience Report) data. This is the same data Google uses for ranking decisions.
Google Search Console: The Core Web Vitals report shows which URLs pass or fail, grouped by similar pages.
Web Vitals JavaScript Library:
```js
// The attribution build is required for metric.attribution
import { onLCP } from 'web-vitals/attribution'

onLCP((metric) => {
  console.log('LCP:', metric.value, 'ms')
  console.log('Element:', metric.attribution.element)
  console.log('URL:', metric.attribution.url)
})
```
Send this data to your analytics for real user monitoring.
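A minimal sketch of shipping the metric to an endpoint. The `/analytics` URL and payload shape are placeholders; `navigator.sendBeacon` is used because it survives page unloads better than `fetch`:

```js
// Serialize the fields worth keeping from a web-vitals metric.
// (hypothetical payload shape; adjust to your analytics schema)
function toPayload(metric) {
  return JSON.stringify({
    name: metric.name,
    value: metric.value,
    element: metric.attribution && metric.attribution.element,
  })
}

// Browser-only: queue the beacon without blocking unload.
function report(metric) {
  if (typeof navigator !== 'undefined' && navigator.sendBeacon) {
    navigator.sendBeacon('/analytics', toPayload(metric)) // placeholder endpoint
  }
}
```

Pass `report` as the callback to `onLCP` to wire this into the snippet above.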
Common: your lab score is fine but field data fails. This usually means:
- Real users are on slower devices and networks than the lab simulation assumes
- Lab runs test one cached, idealized page state; real sessions vary
- Real traffic includes pages and regions you aren't testing
Always prioritize field data over lab scores.
Framework-specific optimizations for LCP:
Checking LCP page-by-page misses the big picture. Your homepage might pass while 200 blog posts fail. Product pages might score well except for ones with large hero images.
Unlighthouse scans your entire site and surfaces LCP scores for every page. The CLI is free and runs locally. Cloud adds scheduled monitoring and historical tracking to catch regressions before users do.