Your browser just spent 4 seconds doing work on the main thread. During that time, every user interaction waited in line. The mainthread-work-breakdown audit measures everything the browser does: parsing, compiling, layout, paint, JavaScript execution. When this total exceeds 2 seconds, interactions suffer.
Lighthouse measures total main thread work and scores it on a curve: a passing (green) score requires keeping main thread time under 2,017ms, and the median failing site clocks in around 4,000ms. That's 4 seconds during which the browser can't fully respond to user input.
The audit breaks work into categories:
| Category | What It Measures |
|---|---|
| Script Evaluation | Running JavaScript, event handlers, timers |
| Script Parsing & Compilation | Parsing and compiling JS files |
| Style & Layout | Recalculating styles, computing layout |
| Rendering | Paint, composite, hit testing |
| Parse HTML & CSS | Parsing markup and stylesheets |
| Garbage Collection | Memory cleanup |
Each category contributes to input delay. When you click a button, the browser must finish its current task before responding. Long tasks directly hurt INP.
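To see those long tasks in a running page rather than in a trace, a `PerformanceObserver` sketch like the one below logs every task over 50ms (the `longtask` entry type is Chromium-only, and the attribution data it reports is often coarse):

```js
// Log every main-thread task longer than 50ms, with whatever attribution the browser provides
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)}ms`, entry.attribution)
  }
}).observe({ type: 'longtask', buffered: true })
```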
Color coding in the DevTools Performance flame chart:

- Blue: loading (parsing HTML)
- Yellow: scripting (JavaScript)
- Purple: rendering (style recalculation and layout)
- Green: painting
- Gray: browser-internal/system work

If you see large purple blocks during interactions, DOM operations (style and layout work) are your bottleneck.
Run Lighthouse and expand the "Minimize main-thread work" audit. It shows time spent in each category. Focus on the largest contributors first.
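If you prefer to script the check, here is a minimal sketch using the Lighthouse Node API (it assumes `lighthouse` and `chrome-launcher` are installed, an ESM context with top-level await, and `https://example.com` as a placeholder URL):

```js
import lighthouse from 'lighthouse'
import * as chromeLauncher from 'chrome-launcher'

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] })
const { lhr } = await lighthouse('https://example.com', {
  port: chrome.port,
  onlyAudits: ['mainthread-work-breakdown'],
})

// numericValue is the total main-thread time in milliseconds
console.log(lhr.audits['mainthread-work-breakdown'].numericValue)
await chrome.kill()
```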
Layout thrashing occurs when you interleave reads of layout properties with writes to the DOM. Each read after a write forces the browser to recalculate layout.
```js
// Layout thrashing: 100 forced layouts
elements.forEach((el) => {
  const width = el.offsetWidth // Read - forces layout
  el.style.width = `${width * 2}px` // Write - invalidates layout
})

// Fixed: batch reads, then batch writes
const widths = elements.map((el) => el.offsetWidth) // All reads
elements.forEach((el, i) => {
  el.style.width = `${widths[i] * 2}px` // All writes
})
```
Common properties that trigger layout: offsetWidth, offsetHeight, offsetTop, offsetLeft, scrollTop, scrollHeight, clientWidth, clientHeight, getComputedStyle(), getBoundingClientRect().
Schedule DOM updates to run at the optimal time in the frame:
```js
// Scroll handler causing jank
function onScroll() {
  updateParallax()
  updateSticky()
  updateProgress()
}
```

```js
// Fixed: throttle with rAF
let ticking = false
function onScroll() {
  if (!ticking) {
    requestAnimationFrame(() => {
      updateParallax()
      updateSticky()
      updateProgress()
      ticking = false
    })
    ticking = true
  }
}
```
This ensures updates happen once per frame at most, not dozens of times per scroll event.
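One way to wire this up (same handler name as above) is a passive listener, which tells the browser the handler never calls `preventDefault()`, so scrolling is never blocked waiting on it:

```js
// Passive listener: the browser can keep scrolling without waiting for the handler
window.addEventListener('scroll', onScroll, { passive: true })
```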
Tell the browser what doesn't need recalculation:
```css
.card {
  contain: layout style;
}

.sidebar {
  contain: strict;
}
```
Containment values:

- `contain: layout` - The element's layout is independent of the rest of the page
- `contain: style` - Styles don't leak out
- `contain: paint` - Content won't render outside the element's bounds
- `contain: strict` - All of the above plus `size`

With containment, changing one card doesn't force recalculation of other cards.
Skip rendering for content not in the viewport:
```css
.article-section {
  content-visibility: auto;
  contain-intrinsic-size: auto 500px;
}
```
The browser skips layout and paint for offscreen sections. The contain-intrinsic-size provides an estimated height to prevent scroll jumping.
Real-world impact: A page with 50 article sections saw rendering work drop from 230ms to 30ms with content-visibility: auto.
Complex CSS selectors are expensive to match:
```css
/* Slow: checks many ancestors */
.sidebar > .nav > ul > li > a.active span.icon {}

/* Fast: direct class match */
.nav-icon-active {}
```
Also avoid expensive properties during interactions:
- `box-shadow` with a large blur radius
- `filter: blur()` on large elements
- `transform` on elements without `will-change`

Don't render 10,000 items when only 20 are visible:
```jsx
// @tanstack/react-virtual
import { useRef } from 'react'
import { useVirtualizer } from '@tanstack/react-virtual'

function VirtualList({ items }) {
  const parentRef = useRef(null)
  const virtualizer = useVirtualizer({
    count: items.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 50,
  })

  return (
    <div ref={parentRef} style={{ height: 400, overflow: 'auto' }}>
      <div style={{ height: virtualizer.getTotalSize(), position: 'relative' }}>
        {virtualizer.getVirtualItems().map((virtualRow) => (
          <div
            key={virtualRow.key}
            style={{
              position: 'absolute',
              top: virtualRow.start,
              width: '100%',
              height: virtualRow.size,
            }}
          >
            {items[virtualRow.index].name}
          </div>
        ))}
      </div>
    </div>
  )
}
```
Virtual scrolling reduces DOM nodes from thousands to dozens.
In React, use `React.memo` to prevent re-renders of unchanged components:

```jsx
const ListItem = React.memo(({ item }) => (
  <div className="list-item">{item.name}</div>
))
```
Use `useMemo` for expensive calculations:

```jsx
const sortedItems = useMemo(
  () => items.slice().sort((a, b) => a.name.localeCompare(b.name)),
  [items]
)
```
For long lists, virtualize with `@tanstack/react-virtual` or `react-window`.

In Vue, use computed properties, which are cached:

```vue
<script setup>
import { computed } from 'vue'

// items is a ref holding the list (defined elsewhere)
const sortedItems = computed(() =>
  [...items.value].sort((a, b) => a.name.localeCompare(b.name))
)
</script>
```
Use `v-memo` to skip re-renders when the listed dependencies haven't changed:

```vue
<div v-for="item in items" :key="item.id" v-memo="[item.updated]">
  <ExpensiveComponent :item="item" />
</div>
```
For long lists, use `vue-virtual-scroller` or `@tanstack/vue-virtual`.

After making these changes, re-run Lighthouse and compare the category breakdown.
Target: Keep total main thread work under 2,000ms. Each major category (script evaluation, style/layout) should be under 500ms.
Reading layout in loops. Every offsetWidth read forces a layout recalculation if the DOM was modified.
Animating layout properties. Animating width, height, top, left triggers layout. Use transform and opacity instead.
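For example, animating `transform` with the Web Animations API keeps the work off layout entirely; a minimal sketch, assuming `el` is the element being moved:

```js
// Compositor-friendly: animate transform instead of left/top
el.animate(
  [{ transform: 'translateX(0)' }, { transform: 'translateX(100px)' }],
  { duration: 300, easing: 'ease-out', fill: 'forwards' }
)
```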
Missing contain-intrinsic-size. Without it, content-visibility: auto causes scroll jumps as content renders.
Over-using will-change. Adding will-change: transform to everything consumes memory. Use it sparingly on elements that actually animate.
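One way to keep `will-change` scoped is to add the hint just before the element is likely to animate and remove it when the animation ends; a sketch, assuming `el` transitions its transform on hover:

```js
// Add the hint right before the transition is likely to start...
el.addEventListener('mouseenter', () => { el.style.willChange = 'transform' })
// ...and drop it when the transition finishes so the browser can release the layer
el.addEventListener('transitionend', () => { el.style.willChange = 'auto' })
```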
Main thread work rarely fails in isolation; it usually shows up alongside other script- and rendering-related audits. Different pages also have different main-thread profiles: a product listing page might be style-heavy, while a dashboard might be script-heavy.
Unlighthouse scans your entire site and identifies pages with the highest TBT, which correlates with main thread work and INP.