Fix Long-Running JavaScript for Better INP

Reduce JavaScript execution time to improve INP. Learn code splitting, web workers, and task chunking to keep the main thread responsive.
Harlan Wilton · 8 min read

Any JavaScript task over 50ms blocks user input. The median site has 3.5 seconds of JavaScript execution time. At 50ms per task, that's 70 opportunities for users to experience delayed interactions.

Lighthouse's bootup-time audit measures total script evaluation and parse time. Sites scoring green keep execution under 1.3 seconds. Red flags appear above 3.5 seconds. But even well-scoring sites can have poor INP if that time isn't broken into small chunks.

Related metric: Total Blocking Time (TBT), weighted 30% in the Lighthouse performance score. Good: ≤200ms. Poor: >600ms.

What's the Problem?

Browsers run JavaScript on the main thread. The same thread handles user input, layout calculations, and painting. When JavaScript monopolizes this thread, everything else waits.

A click handler can't fire. A scroll can't register. A keystroke goes unacknowledged. The user sees a frozen interface.

The 50ms threshold matters. Human perception research shows delays under 100ms feel instantaneous. At 50ms task length, the browser has 50ms buffer to respond before users notice lag. Longer tasks eat into that buffer.

How Lighthouse calculates this: The bootup-time audit sums all script evaluation and parsing time for scripts exceeding 50ms total execution. It applies CPU throttling (4x on mobile simulation) to reflect real device performance.

Common Causes

Large bundles downloading and parsing at once. A 500KB JavaScript file doesn't just download slowly. It also blocks the main thread during parsing and compilation. V8 can parse ~1MB/second on mobile, meaning that bundle blocks for 500ms before a single line executes.

Synchronous data processing. Filtering 10,000 items in a single loop. Sorting a large dataset. Transforming nested JSON structures. Each operation keeps the main thread busy.
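
For example, a hypothetical product-list builder that filters, sorts, and transforms in one pass runs as a single uninterruptible task (the names here are placeholders, not code from any real site):

// Hypothetical example: one synchronous pass over a large catalog.
// Until it returns, no click, scroll, or keystroke gets handled.
function buildProductList(products, query) {
  return products
    .filter(p => p.name.toLowerCase().includes(query)) // 10,000+ iterations
    .sort((a, b) => b.rating - a.rating)               // full re-sort
    .map(p => ({ ...p, label: `${p.name} ($${p.price})` })) // new object per item
}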

Heavy initialization. Third-party libraries that run expensive setup code on load. Analytics packages building user profiles. Feature detection libraries testing browser capabilities.

Unoptimized framework hydration. React, Vue, and Svelte all need to "wake up" server-rendered HTML. Poorly structured component trees mean thousands of elements traversed synchronously.

How to Identify

Chrome DevTools Performance Panel

  1. Open DevTools (F12)
  2. Go to Performance tab
  3. Click record, interact with your page, stop recording
  4. Look at the Main thread row
  5. Tasks with red corners exceed 50ms

Long yellow blocks labeled "Evaluate Script" or "Parse Script" indicate JavaScript execution. Click any task to see the call stack and identify the responsible script.

Lighthouse Treemap

Run Lighthouse, then click "View Treemap" in the report. You'll see your JavaScript broken down by script URL and module, with resource size and unused bytes highlighted. The largest blocks are your optimization targets.

Web Vitals Attribution

import { onINP } from 'web-vitals/attribution'

onINP((metric) => {
  const { inputDelay, processingDuration, presentationDelay } = metric.attribution
  // inputDelay > 100ms means main thread was busy when user interacted
  // processingDuration > 100ms means the handler itself ran too long
  console.log({ inputDelay, processingDuration, presentationDelay })
})

The Fix

1. Code Split Your Bundles

Don't load everything upfront. Split code by route and by feature.

// Before: loads entire admin module on every page
import { AdminDashboard, AdminUsers, AdminSettings } from './admin'

// After: loads each component on demand
const AdminDashboard = () => import('./admin/Dashboard')
const AdminUsers = () => import('./admin/Users')
const AdminSettings = () => import('./admin/Settings')

Route-based splitting is table stakes. Feature-based splitting targets the long tail: modals, dropdowns, charts that most users never see.

// Load chart library only when user opens analytics
async function showAnalytics() {
  const { Chart } = await import('chart.js')
  const chart = new Chart(canvas, config)
}

2. Move Heavy Work to Web Workers

Web Workers run on a separate thread. They can't access the DOM, but they're perfect for data processing.

// main.js
const worker = new Worker(new URL('./processor.worker.js', import.meta.url))

function processData(items) {
  return new Promise((resolve) => {
    worker.postMessage({ items })
    worker.onmessage = e => resolve(e.data.result)
  })
}

// processor.worker.js
self.onmessage = (e) => {
  const { items } = e.data
  // This runs on a separate thread - main thread stays responsive
  const result = items.map(expensiveTransform).filter(predicate).sort(compareFn)
  self.postMessage({ result })
}

Libraries like Comlink simplify Worker communication with a function-call API.
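
For comparison, here's a minimal sketch of the same worker wrapped with Comlink (assuming a bundler that supports module workers via the new URL() pattern; items and expensiveTransform are placeholders):

// main.js
import * as Comlink from 'comlink'

const worker = new Worker(new URL('./processor.worker.js', import.meta.url), { type: 'module' })
const processor = Comlink.wrap(worker)

// Proxy methods return promises; the work runs off the main thread
const result = await processor.processItems(items)

// processor.worker.js
import * as Comlink from 'comlink'

Comlink.expose({
  processItems(items) {
    return items.map(expensiveTransform)
  }
})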

3. Yield to the Main Thread

When you can't avoid main thread work, yield periodically to let the browser handle interactions.

// Before: blocks for entire loop duration
function processItems(items) {
  items.forEach(item => expensiveOperation(item))
}

// After: yields every 100 items
async function processItems(items) {
  const chunkSize = 100
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize)
    chunk.forEach(item => expensiveOperation(item))
    await scheduler.yield()
  }
}

Why scheduler.yield() over setTimeout(0): Both yield control, but setTimeout moves your task to the back of the queue. Other scripts can cut in line. scheduler.yield() maintains your position while still letting the browser handle urgent work like user input.

// Polyfill for browsers without scheduler.yield
if (!globalThis.scheduler?.yield) {
  globalThis.scheduler = {
    ...globalThis.scheduler,
    yield: () => new Promise(resolve => setTimeout(resolve, 0))
  }
}

4. Use requestIdleCallback for Non-Critical Work

Analytics, prefetching, and background sync don't need to run immediately.

function initAnalytics() {
  if ('requestIdleCallback' in window) {
    requestIdleCallback(() => {
      loadAnalyticsSDK()
      sendPageView()
    }, { timeout: 5000 }) // Run within 5 seconds even if the browser never goes idle
  }
  else {
    // Fallback: defer the same work by 1 second
    setTimeout(() => {
      loadAnalyticsSDK()
      sendPageView()
    }, 1000)
  }
}

5. Check for Pending Input

The isInputPending() API lets you yield only when users are actually waiting.

async function processLargeDataset(data) {
  for (let i = 0; i < data.length; i++) {
    processItem(data[i])

    // Only yield if user is trying to interact
    if (i % 100 === 0 && navigator.scheduling?.isInputPending()) {
      await scheduler.yield()
    }
  }
}

Framework-Specific Solutions

Next.js

Use dynamic() for components not needed at initial render:

import dynamic from 'next/dynamic'

const HeavyChart = dynamic(() => import('./Chart'), {
  loading: () => <ChartSkeleton />,
  ssr: false // Skip server rendering for client-only components
})

React 18's useTransition marks updates as non-urgent:

const [isPending, startTransition] = useTransition()

function handleFilter(query) {
  startTransition(() => {
    // This state update won't block typing
    setFilteredResults(filterData(query))
  })
}

Nuxt

Lazy-load components with the Lazy prefix:

<template>
  <LazyHeavyChart v-if="showChart" :data="chartData" />
</template>

Use callOnce() to prevent duplicate initialization:

<script setup>
await callOnce(async () => {
  // Runs once, even across hydration
  await initializeExpensiveFeature()
})
</script>

React

useDeferredValue defers expensive re-renders until more urgent updates (like typing) have finished:

import { useDeferredValue, useMemo } from 'react'

function SearchResults({ query }) {
  const deferredQuery = useDeferredValue(query)

  // Filtering re-runs with the deferred value, without blocking urgent updates
  const results = useMemo(
    () => items.filter(item => item.name.includes(deferredQuery)),
    [deferredQuery]
  )

  return <ResultsList items={results} />
}

Verify the Fix

  1. Run Lighthouse before and after. Check bootup-time and TBT metrics.
  2. Record a Performance trace during interaction. Confirm no tasks exceed 50ms.
  3. Test on throttled CPU (4x slowdown) to simulate mobile devices.

Expected improvement: Breaking a 500ms task into 50ms chunks can reduce INP from 500ms to under 100ms. The math is direct: the worst-case input delay drops from the full 500ms to roughly one 50ms chunk, and a task that contributed 450ms of Total Blocking Time now contributes none.

Common Mistakes

Yielding too frequently. Each yield has overhead, and the setTimeout fallback is clamped to roughly 4ms per yield. Yielding after every array item can turn a 100ms task into a 400ms trickle of tiny tasks. Yield between chunks of work, not between individual items.

Using setTimeout for everything. setTimeout(fn, 0) still blocks if the callback runs too long. You need to yield inside the callback, not before it.
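
A sketch of the difference, reusing the hypothetical items and expensiveOperation from the chunking example above:

// Still blocks: setTimeout only delays the work; the callback is one long task
setTimeout(() => {
  items.forEach(item => expensiveOperation(item))
}, 0)

// Better: yield inside the callback so the browser can respond between chunks
setTimeout(async () => {
  for (let i = 0; i < items.length; i += 100) {
    items.slice(i, i + 100).forEach(item => expensiveOperation(item))
    await scheduler.yield() // or the polyfill from earlier
  }
}, 0)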

Optimizing the wrong scripts. Profile first. That tiny utility library isn't your problem. The 800KB charting library is.

Ignoring third-party scripts. Your code might be clean, but that embedded chat widget runs a 200ms initialization. Third-party scripts need the same scrutiny.

Test Your Entire Site

JavaScript execution varies wildly by page. Your homepage might be fast while your checkout flow runs 5 seconds of script. Unlighthouse scans every page and surfaces the worst offenders.
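
A quick-start scan looks roughly like this (swap in your own URL):

npx unlighthouse --site example.com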