Fix Slow Server Response (TTFB) for Better LCP

Reduce server response time to improve LCP. Learn CDN setup, caching strategies, database optimization, and edge computing solutions.
Harlan Wilton · 8 min read

Server response time is the leading cause of poor LCP. Sites with poor LCP have an average TTFB of 2,270ms—consuming nearly all of the 2.5s budget before a single byte reaches the browser.

LCP (Largest Contentful Paint) · Core Web Vital · 25% of the performance score · Good ≤ 2.5s · Poor > 4.0s

What's the Problem?

Time to First Byte (TTFB) measures the duration between the browser requesting a page and receiving the first byte of the response. Lighthouse flags this as "Reduce initial server response time" when your document takes longer than 100ms to respond. The real-world thresholds are: Good under 200ms, Needs Improvement between 200-600ms, and Poor above 600ms.
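
You can also read TTFB for the current page straight from the Navigation Timing API, where responseStart marks the arrival of the first response byte:

// TTFB of the current page load, in milliseconds
const [nav] = performance.getEntriesByType('navigation')
console.log(`TTFB: ${Math.round(nav.responseStart)}ms`)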

Every millisecond of server delay directly adds to your LCP. If your server takes 2 seconds to respond, you've already consumed 80% of your LCP budget before the browser even begins parsing HTML. The cascade effect is brutal: no HTML means no CSS discovery, no CSS means no render, and no render means no LCP.

The problem compounds across geographic distances. A user in Singapore requesting a page from a Virginia server experiences 250ms+ of network latency each way. Add database queries, server-side rendering, and cold starts, and you're looking at TTFB measurements that dwarf everything else in your performance waterfall.

How to Identify This Issue

Chrome DevTools

  1. Open DevTools (F12) and navigate to the Network tab
  2. Hard reload the page (Ctrl+Shift+R / Cmd+Shift+R)
  3. Click on the first HTML document request
  4. Examine the Timing breakdown—look for "Waiting for server response"
  5. Anything above 200ms indicates a problem; above 600ms is critical

Command Line

curl -w "TTFB: %{time_starttransfer}s\n" -o /dev/null -s https://your-site.com

for i in {1..5}; do curl -w "%{time_starttransfer}\n" -o /dev/null -s https://your-site.com; done

curl -w "DNS: %{time_namelookup}s\nConnect: %{time_connect}s\nTTFB: %{time_starttransfer}s\n" -o /dev/null -s https://your-site.com

Lighthouse Indicators

Lighthouse reports "Reduce initial server response time" under Diagnostics when the root document exceeds the 100ms target. The audit shows the actual response time and calculates potential savings toward FCP and LCP metrics.
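
To re-check just this audit from a script, Lighthouse can also be driven from Node. A minimal sketch, assuming the lighthouse and chrome-launcher npm packages are installed:

// Run only the server-response-time audit against a live URL
import lighthouse from 'lighthouse'
import * as chromeLauncher from 'chrome-launcher'

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] })
const result = await lighthouse('https://your-site.com', {
  port: chrome.port,
  onlyAudits: ['server-response-time'],
})

// e.g. "Root document took 580 ms"
console.log(result.lhr.audits['server-response-time'].displayValue)
await chrome.kill()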

The Fix

Primary: Deploy a CDN with Edge Caching

A CDN eliminates geographic latency by serving cached content from servers close to your users. This single change typically reduces TTFB by 100-500ms for global audiences.

# Netlify _headers file: cache every route at the edge for one hour
/*
  Cache-Control: public, max-age=3600, s-maxage=3600

// Vercel edge caching via response headers
export const config = {
  runtime: 'edge',
}

export default function handler(req) {
  const html = '<!doctype html><html>...</html>' // your rendered page
  return new Response(html, {
    headers: {
      'Content-Type': 'text/html',
      // CDN caches for 1h, then serves stale up to 24h while revalidating
      'Cache-Control': 'public, s-maxage=3600, stale-while-revalidate=86400',
    },
  })
}

For dynamic content, use stale-while-revalidate to serve cached content immediately while refreshing in the background:

Cache-Control: public, max-age=60, stale-while-revalidate=3600

Secondary: Optimize Database Queries

Slow database queries are the hidden TTFB killer. A single unindexed query can add 500ms+ to every page load.

-- Find slow queries in PostgreSQL (requires the pg_stat_statements extension;
-- on PostgreSQL 13+ the columns are mean_exec_time and total_exec_time)
SELECT query, calls, mean_time, total_time
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;

-- Add indexes for common query patterns
CREATE INDEX idx_posts_user_published
ON posts(user_id, published_at)
WHERE status = 'published';

// Use connection pooling - critical for serverless
import { Pool } from 'pg'

const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // or rely on PG* env vars
  max: 20,                       // cap concurrent connections
  idleTimeoutMillis: 30000,      // recycle idle connections after 30s
  connectionTimeoutMillis: 2000, // fail fast rather than queue forever
})
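
With the pool in place, each request borrows a warm connection instead of paying connection setup. A minimal sketch of a query the partial index above would serve (getPublishedPosts is illustrative):

// Borrow a pooled connection, run one parameterized query, auto-release
async function getPublishedPosts(userId) {
  const { rows } = await pool.query(
    `SELECT * FROM posts
     WHERE user_id = $1 AND status = 'published'
     ORDER BY published_at DESC
     LIMIT 20`,
    [userId],
  )
  return rows
}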

Tertiary: Edge Computing for Dynamic Content

When you can't cache HTML, move your compute closer to users with edge functions.

// Cloudflare Workers - global edge deployment
export default {
  async fetch(request, env) {
    // Page data cached in Workers KV, read at the edge nearest the user
    const data = await env.KV.get('page-data', 'json')
    return new Response(renderPage(data), {
      headers: { 'Content-Type': 'text/html' },
    })
  },
}

// Minimal renderer so the example is self-contained
function renderPage(data) {
  return `<!doctype html><title>${data.title}</title><h1>${data.title}</h1>`
}

// Vercel Edge Functions
export const config = { runtime: 'edge' }

export default async function handler(req) {
  const res = await fetch('https://api.example.com/data')
  return new Response(renderPage(await res.json()), { // renderPage as above
    headers: { 'Content-Type': 'text/html' },
  })
}

Why This Works

TTFB reduction creates a direct 1:1 improvement in LCP because the browser cannot begin any rendering work until HTML arrives. Reducing TTFB from 600ms to 200ms gives the browser an extra 400ms to download CSS, discover images, and paint the LCP element. CDNs solve the physics problem (speed of light latency), while caching solves the compute problem (server processing time). Together, they address the two largest contributors to slow server response.

Framework-Specific Solutions

Next.js: Use App Router with automatic streaming. Deploy to Vercel for built-in edge caching. Configure generateStaticParams for static generation. Add ISR with revalidate for dynamic content that changes infrequently.

export const revalidate = 3600 // ISR every hour
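
A slightly fuller sketch for a blog route, assuming a Next.js 13/14 App Router project and a placeholder posts API:

// app/blog/[slug]/page.jsx - prebuild all posts, revalidate hourly
export const revalidate = 3600

export async function generateStaticParams() {
  const posts = await fetch('https://api.example.com/posts').then((r) => r.json())
  return posts.map((post) => ({ slug: post.slug }))
}

export default async function Page({ params }) {
  const res = await fetch(`https://api.example.com/posts/${params.slug}`)
  const post = await res.json()
  return <h1>{post.title}</h1>
}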

Nuxt: Configure routeRules for per-route caching strategies. Use prerenderRoutes for static paths. Deploy to Cloudflare or Vercel for edge rendering. Enable experimental.componentIslands for partial hydration.

export default defineNuxtConfig({
  routeRules: {
    '/blog/**': { swr: 3600 },
    '/': { prerender: true },
  },
})

Verify the Fix

After implementing changes:

  1. Clear all caches (CDN, server, browser)
  2. Run the curl command from multiple geographic locations
  3. Use WebPageTest with test locations matching your user base
  4. Run Lighthouse and confirm the "Reduce initial server response time" audit passes
  5. Monitor Real User Monitoring (RUM) data for TTFB improvements over 24-48 hours (see the RUM snippet below)

Expected improvement: Reducing TTFB from 2000ms to 400ms should improve LCP by approximately 1600ms, often the difference between failing and passing Core Web Vitals.
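
For step 5, the web-vitals library (v3+) exposes an onTTFB hook for field data. A minimal sketch that beacons each measurement to a hypothetical /analytics endpoint:

// Report real-user TTFB from the browser
import { onTTFB } from 'web-vitals'

onTTFB((metric) => {
  // metric.value is this page load's TTFB in milliseconds
  navigator.sendBeacon('/analytics', JSON.stringify({
    name: metric.name,
    value: metric.value,
  }))
})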

Common Mistakes

Over-caching personalized content: Caching user-specific pages at the CDN edge causes users to see each other's data. Use Cache-Control: private for authenticated pages, or implement edge-side personalization with cookies.
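
For example, in an Express handler (a sketch; renderAccountPage stands in for your own view code):

// Authenticated pages: cacheable in the user's browser only, never at the CDN
app.get('/account', (req, res) => {
  res.set('Cache-Control', 'private, max-age=0, must-revalidate')
  res.send(renderAccountPage(req.user))
})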

Ignoring cold starts: Serverless functions can add 500ms+ on cold start. Monitor cold start frequency separately from average TTFB. Use provisioned concurrency or edge functions with faster cold starts.

Missing cache invalidation: Aggressive caching without proper invalidation serves stale content. Implement cache tags or surrogate keys for targeted purging when content updates.
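
As a sketch, a Worker-style handler can tag its responses for targeted purging (Cloudflare reads Cache-Tag; Fastly uses Surrogate-Key, so the header name depends on your CDN):

// Tag the cached response so one post can be purged without flushing the site
export default {
  async fetch(request, env) {
    const post = await env.KV.get('latest-post', 'json')
    return new Response(`<h1>${post.title}</h1>`, {
      headers: {
        'Content-Type': 'text/html',
        'Cache-Control': 'public, s-maxage=86400',
        'Cache-Tag': `post-${post.id}`, // purge via the CDN API on update
      },
    })
  },
}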

Test Your Entire Site

TTFB varies dramatically across pages—your homepage may be fast while product pages with database queries are slow. Run a comprehensive scan to identify which routes have server response problems before they impact real users.

Scan Your Site with Unlighthouse