Server response time is the leading cause of poor LCP. Sites with poor LCP have an average TTFB of 2,270ms—consuming nearly all of the 2.5s budget before a single byte reaches the browser.
Time to First Byte (TTFB) measures the duration between the browser requesting a page and receiving the first byte of the response. Lighthouse flags this as "Reduce initial server response time" when your document takes longer than roughly 600ms to respond, and calculates potential savings against a 100ms target. The real-world thresholds are: Good under 200ms, Needs Improvement between 200ms and 600ms, and Poor above 600ms.
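You can also read TTFB straight from the Navigation Timing API in the field. A minimal sketch, assuming illustrative timing values (the helper name `ttfbBreakdown` and the sample numbers are not from any library):

```javascript
// Break a PerformanceNavigationTiming entry into TTFB phases.
// In the browser: const [entry] = performance.getEntriesByType('navigation')
function ttfbBreakdown(entry) {
  return {
    dns: entry.domainLookupEnd - entry.domainLookupStart,
    connect: entry.connectEnd - entry.connectStart,
    serverWait: entry.responseStart - entry.requestStart, // request sent -> first byte
    ttfb: entry.responseStart, // ms since navigation start
  }
}

// Hypothetical timings for illustration
console.log(ttfbBreakdown({
  domainLookupStart: 5, domainLookupEnd: 45,
  connectStart: 45, connectEnd: 145,
  requestStart: 150, responseStart: 560,
}))
// { dns: 40, connect: 100, serverWait: 410, ttfb: 560 }
```

The `serverWait` slice is usually where backend fixes pay off; `dns` and `connect` are network problems a CDN addresses.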
Every millisecond of server delay directly adds to your LCP. If your server takes 2 seconds to respond, you've already consumed 80% of your LCP budget before the browser even begins parsing HTML. The cascade effect is brutal: no HTML means no CSS discovery, no CSS means no render, and no render means no LCP.
The problem compounds across geographic distances. A user in Singapore requesting a page from a Virginia server pays 200ms+ of round-trip network latency on every request. Add database queries, server-side rendering, and cold starts, and you're looking at TTFB measurements that dwarf everything else in your performance waterfall.
```bash
# Measure TTFB for a single request
curl -w "TTFB: %{time_starttransfer}s\n" -o /dev/null -s https://your-site.com

# Take five samples to smooth out variance
for i in {1..5}; do curl -w "%{time_starttransfer}\n" -o /dev/null -s https://your-site.com; done

# Break TTFB into DNS, connect, and server time
curl -w "DNS: %{time_namelookup}s\nConnect: %{time_connect}s\nTTFB: %{time_starttransfer}s\n" -o /dev/null -s https://your-site.com
```
Lighthouse reports "Reduce initial server response time" under Diagnostics when the root document responds too slowly (the audit fails above roughly 600ms). It shows the actual response time and the potential savings toward FCP and LCP.
A CDN eliminates geographic latency by serving cached content from servers close to your users. This single change typically reduces TTFB by 100-500ms for global audiences.
```
# Netlify-style _headers file: cache every path at the CDN edge for one hour
/*
  Cache-Control: public, max-age=3600, s-maxage=3600
```

```js
// Vercel edge caching via headers
export const config = {
  runtime: 'edge',
}

export default function handler(req) {
  const html = '<h1>Hello</h1>' // placeholder: your rendered page markup
  return new Response(html, {
    headers: {
      'Content-Type': 'text/html',
      'Cache-Control': 'public, s-maxage=3600, stale-while-revalidate=86400',
    },
  })
}
```
For dynamic content, use stale-while-revalidate to serve cached content immediately while refreshing in the background:
```
Cache-Control: public, max-age=60, stale-while-revalidate=3600
```
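Conceptually, a cache applying this header makes a three-way decision based on the response's age. A sketch of that logic (the `cacheDecision` helper is illustrative, not a real cache API):

```javascript
// How a cache interprets max-age=60, stale-while-revalidate=3600
function cacheDecision(ageSeconds, maxAge, swrWindow) {
  if (ageSeconds <= maxAge) return 'fresh'             // serve from cache, no origin hit
  if (ageSeconds <= maxAge + swrWindow) return 'stale' // serve stale now, revalidate in background
  return 'miss'                                        // too old: fetch from origin synchronously
}

console.log(cacheDecision(30, 60, 3600))   // 'fresh'
console.log(cacheDecision(600, 60, 3600))  // 'stale'
console.log(cacheDecision(4000, 60, 3600)) // 'miss'
```

The "stale" branch is what keeps TTFB low: the user gets a cached response immediately while the refresh happens off the critical path.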
Slow database queries are the hidden TTFB killer. A single unindexed query can add 500ms+ to every page load.
```sql
-- Find slow queries in PostgreSQL (requires the pg_stat_statements extension)
-- Note: on PostgreSQL 13+ the columns are mean_exec_time / total_exec_time
SELECT query, calls, mean_time, total_time
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;

-- Add a partial index for common access patterns
CREATE INDEX idx_posts_user_published
ON posts(user_id, published_at)
WHERE status = 'published';
```
```js
// Use connection pooling - critical for serverless
import { Pool } from 'pg'

const pool = new Pool({
  max: 20,                       // cap concurrent connections
  idleTimeoutMillis: 30000,      // release idle connections after 30s
  connectionTimeoutMillis: 2000, // fail fast instead of queueing forever
})
```
When you can't cache HTML, move your compute closer to users with edge functions.
```js
// Cloudflare Workers - global edge deployment
export default {
  async fetch(request, env) {
    // Read pre-rendered data from KV, which is replicated to every edge location
    const data = await env.KV.get('page-data', 'json')
    return new Response(renderPage(data), { // renderPage: your template function
      headers: { 'Content-Type': 'text/html' },
    })
  },
}
```

```js
// Vercel Edge Functions
export const config = { runtime: 'edge' }

export default async function handler(req) {
  const data = await fetch('https://api.example.com/data')
  // renderTemplate: your HTML templating function
  return new Response(renderTemplate(await data.json()), {
    headers: { 'Content-Type': 'text/html' },
  })
}
```
TTFB reduction creates a direct 1:1 improvement in LCP because the browser cannot begin any rendering work until HTML arrives. Reducing TTFB from 600ms to 200ms gives the browser an extra 400ms to download CSS, discover images, and paint the LCP element. CDNs solve the physics problem (speed of light latency), while caching solves the compute problem (server processing time). Together, they address the two largest contributors to slow server response.
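The arithmetic is simple enough to sanity-check, assuming the 2.5s "good" LCP threshold as the budget:

```javascript
// Every ms of TTFB is subtracted from the LCP budget before rendering can start
const LCP_BUDGET_MS = 2500

function renderBudgetMs(ttfbMs) {
  return LCP_BUDGET_MS - ttfbMs // time left to fetch CSS, discover images, and paint
}

console.log(renderBudgetMs(600)) // 1900
console.log(renderBudgetMs(200)) // 2300, i.e. 400ms more rendering headroom
```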
Next.js: Use generateStaticParams for static generation. Add ISR with revalidate for dynamic content that changes infrequently:

```js
export const revalidate = 3600 // ISR: regenerate at most once per hour
```
Nuxt: Use routeRules for per-route caching strategies. Use prerenderRoutes for static paths. Deploy to Cloudflare or Vercel for edge rendering. Enable experimental.componentIslands for partial hydration:

```js
// nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    '/blog/**': { swr: 3600 },  // stale-while-revalidate window of one hour
    '/': { prerender: true },   // generated at build time
  },
})
```
After implementing changes, re-run your TTFB measurements and Lighthouse to confirm the gains. Expected improvement: reducing TTFB from 2000ms to 400ms should improve LCP by approximately 1600ms, often the difference between failing and passing Core Web Vitals.
Over-caching personalized content: Caching user-specific pages at the CDN edge causes users to see each other's data. Use Cache-Control: private for authenticated pages, or implement edge-side personalization with cookies.
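One way to make the public/private split explicit is a single helper that chooses the header. A sketch (the function name and values are illustrative):

```javascript
// Authenticated responses must never be stored in shared caches
function cacheControlFor(isAuthenticated) {
  return isAuthenticated
    ? 'private, no-store' // per-user content: never stored by any cache
    : 'public, s-maxage=3600, stale-while-revalidate=86400'
}

console.log(cacheControlFor(true))  // 'private, no-store'
console.log(cacheControlFor(false)) // 'public, s-maxage=3600, stale-while-revalidate=86400'
```

Centralizing the decision makes it much harder for a new route to accidentally ship personalized pages with a public cache policy.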
Ignoring cold starts: Serverless functions can add 500ms+ on cold start. Monitor cold start frequency separately from average TTFB. Use provisioned concurrency or edge functions with faster cold starts.
Missing cache invalidation: Aggressive caching without proper invalidation serves stale content. Implement cache tags or surrogate keys for targeted purging when content updates.
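Surrogate keys boil down to tagging each response with the content IDs it depends on, so a purge can target just those tags. A sketch (the header name varies by CDN: Fastly uses Surrogate-Key, Cloudflare uses Cache-Tag):

```javascript
// Attach cache tags so "purge everything mentioning post-42" becomes possible
function withCacheTags(headers, tags) {
  return { ...headers, 'Surrogate-Key': tags.join(' ') }
}

const h = withCacheTags(
  { 'Cache-Control': 'public, s-maxage=3600' },
  ['post-42', 'author-7']
)
console.log(h['Surrogate-Key']) // 'post-42 author-7'
```

When post 42 is edited, purging the `post-42` tag invalidates its page, the blog index, and any listing that included it, without flushing the whole cache.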
TTFB problems often compound with other LCP issues, so fixing server response time is usually the first step rather than the whole job.
TTFB varies dramatically across pages—your homepage may be fast while product pages with database queries are slow. Run a comprehensive scan to identify which routes have server response problems before they impact real users.
Scan Your Site with Unlighthouse