Pages over 4MB average more than 8 seconds to load on mobile connections. Every megabyte you trim directly improves your LCP.
Lighthouse flags pages where total network payload exceeds 2,667KB (the 10th percentile threshold). The median website transfers around 4MB—and that's already too much for most users on mobile networks.
Large payloads hurt LCP in multiple ways. First, more bytes mean longer download times. On a 4G connection averaging 10Mbps, downloading 4MB takes at least 3.2 seconds—before any parsing or rendering begins. Second, large pages consume user bandwidth and data plans, causing throttling on metered connections. Third, memory pressure from large payloads can slow down parsing and rendering, especially on low-end devices.
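The download-time arithmetic above generalizes into a quick back-of-envelope function (a sketch; real connections add RTT, TCP slow start, and HTTP overhead, so these are lower bounds):

```javascript
// Back-of-envelope download time: payload size over a link of a given speed.
// Ignores latency and protocol overhead, so real-world times are longer.
function downloadSeconds(megabytes, mbps) {
  const megabits = megabytes * 8; // 1 MB = 8 megabits
  return megabits / mbps;
}

console.log(downloadSeconds(4, 10)); // 3.2 — the 4MB-at-10Mbps figure above
console.log(downloadSeconds(1, 10)); // 0.8
```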
Why it hurts LCP: Your LCP element can't paint until it loads. If that element competes with megabytes of other resources, it waits in line. Even after the LCP resource downloads, large payloads cause processing delays that postpone the actual paint.
Look for the audit "Avoid enormous network payloads" under Diagnostics. It lists the heaviest network requests on the page along with the transfer size of each.
Run a test and check the total bytes transferred, and which resource types contribute most of them.
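You can also get the total straight from the page itself with the Resource Timing API (a sketch to paste into the DevTools console; note that transferSize reads 0 for cross-origin responses that don't send Timing-Allow-Origin):

```javascript
// Sum the over-the-wire bytes of every resource the page has loaded so far.
// transferSize is the compressed, on-network size, not the decoded size.
const resources = performance.getEntriesByType('resource');
const totalBytes = resources.reduce((sum, r) => sum + (r.transferSize || 0), 0);
console.log(`${resources.length} resources, ${(totalBytes / 1024).toFixed(0)} KB transferred`);
```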
Images typically account for 50%+ of page weight. Target: under 200KB per hero image.
<!-- Before: 2MB unoptimized PNG -->
<img src="hero.png" />
<!-- After: Responsive WebP with fallback -->
<picture>
  <source srcset="hero-400.webp 400w, hero-800.webp 800w, hero-1200.webp 1200w"
          sizes="(max-width: 600px) 100vw, 50vw"
          type="image/webp" />
  <img src="hero-800.jpg" alt="Hero" loading="eager" />
</picture>
Compress images with modern formats:
cwebp -q 80 hero.png -o hero.webp
npx sharp-cli --input "*.{jpg,png}" --output webp --webp
All text-based assets (HTML, CSS, JS, JSON, SVG) should use Brotli or gzip compression.
Nginx configuration:
brotli on;
brotli_types text/plain text/css application/javascript application/json image/svg+xml;
brotli_comp_level 6;
gzip on;
gzip_types text/plain text/css application/javascript application/json image/svg+xml;
gzip_min_length 256;
Verify compression in DevTools:
content-encoding: br or content-encoding: gzip

Compression typically reduces text assets by 60-80%.
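The same check can be scripted (a sketch using the built-in fetch of Node 18+, which negotiates gzip/Brotli and decompresses the body while leaving the response headers intact):

```javascript
// Report which content-encoding a server chose for a given URL.
// Returns e.g. 'br', 'gzip', or 'none (uncompressed)'.
async function checkEncoding(url) {
  const res = await fetch(url);
  return res.headers.get('content-encoding') ?? 'none (uncompressed)';
}

// Usage: checkEncoding('https://example.com').then(console.log)
```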
Audit your node_modules and remove packages you don't use.
npx cost-of-modules
npx webpack-bundle-analyzer stats.json
npx vite-bundle-visualizer
Replace heavy libraries with lighter alternatives:
- moment.js (300KB) → date-fns (tree-shakeable) or dayjs (2KB)
- lodash (70KB) → native ES6 methods or lodash-es (tree-shakeable)
- axios (13KB) → native fetch

Only load what's needed for the initial viewport.
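Many one-off lodash and moment calls map straight onto platform built-ins. A sketch of common swaps (assumes Node 17+ or a modern browser for structuredClone):

```javascript
// lodash.uniq → Set spread
const uniq = (arr) => [...new Set(arr)];

// lodash.cloneDeep → structuredClone (deep copy, no library)
const original = { tags: ['sale', 'new'] };
const copy = structuredClone(original);
copy.tags.push('hot'); // original.tags is untouched

// moment().format('YYYY-MM-DD') → built-in Date
const today = new Date().toISOString().slice(0, 10);

console.log(uniq([1, 1, 2, 3, 3])); // [ 1, 2, 3 ]
```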
<!-- Lazy load below-the-fold images (never the LCP image itself) -->
<img src="product.jpg" loading="lazy" />
// Dynamic import for heavy components
const HeavyChart = () => import('./HeavyChart.vue')
// Intersection Observer for custom lazy loading
const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      loadHeavyModule()
      observer.disconnect()
    }
  })
})
observer.observe(document.querySelector('#chart-container'))
Proper caching prevents re-downloading unchanged assets.
location ~* \.(js|css|png|jpg|webp|woff2)$ {
  expires 1y;
  add_header Cache-Control "public, immutable";
}

location ~* \.html$ {
  expires 5m;
  add_header Cache-Control "public, must-revalidate";
}
Reducing total bytes directly reduces download time. On a typical 4G connection (10Mbps), a 4MB page spends roughly 3.2 seconds downloading; trimmed to 1MB, that drops to about 0.8 seconds.
That's 2.4 seconds saved before any other optimization. The browser can start parsing, executing, and rendering sooner when there's less to download.
import Image from 'next/image'
// Automatic WebP/AVIF, responsive sizes, lazy loading
<Image src="/hero.jpg" alt="Hero" width={1200} height={600} priority />
next.config.js:

module.exports = {
  compress: true, // gzip by default
}
Use @nuxt/image for automatic image optimization:

<NuxtImg src="/hero.jpg" width="1200" format="webp" />
export default defineNuxtConfig({
  nitro: {
    compressPublicAssets: true,
  },
})
After implementing these changes, re-run Lighthouse and confirm the transfer size reported by the audit has dropped.
Target thresholds: keep the total payload under Lighthouse's 2,667KB warning threshold, and ideally well below the ~4MB median.
This audit often appears alongside related diagnostics such as "Properly size images", "Serve images in next-gen formats", and "Reduce unused JavaScript".
Page weight varies dramatically between templates—your homepage might be lean while product pages are bloated. Unlighthouse crawls your entire site and identifies the heaviest pages by total transfer size.