Core Web Vitals Deep Dive: LCP, INP, and CLS Explained
A technical deep dive into Core Web Vitals — what LCP, INP, and CLS actually measure, the thresholds that matter, and the diagnostic and fix patterns for each.

This Core Web Vitals deep dive starts with a hard truth: most "performance optimization" advice you read online is generic, marketing-grade filler. To actually move LCP, INP, and CLS on a real site, you need to understand exactly what each metric measures, where it commonly leaks, and the specific fixes that move each one.
This guide is the technical deep dive we apply on every web performance engagement. It covers each of the three Core Web Vitals — Largest Contentful Paint, Interaction to Next Paint, and Cumulative Layout Shift — in technical depth. The diagnostic order, the common leaks, and the fix patterns that consistently move metrics on real production sites.
The work is detailed but tractable. Done right, most "poor"-scoring sites can move into the "good" range across all three metrics in 30 to 60 days.
What Core Web Vitals are
Core Web Vitals are Google's three specific metrics for measuring perceived loading speed and responsiveness. Each one captures a different dimension of the user experience.
- LCP (Largest Contentful Paint): when does the main content actually render
- INP (Interaction to Next Paint): how responsive is the page to user input
- CLS (Cumulative Layout Shift): how much does content unexpectedly move during load
Google uses these for ranking signals, and they correlate strongly with real-world conversion behaviour. We covered the conversion side in our page speed and conversion rate impact guide. This guide goes deeper on the technical mechanics.
Largest Contentful Paint (LCP)
LCP measures the time from page navigation to when the largest visible content element finishes rendering.
What counts as the LCP element
The LCP element is the largest content element visible in the viewport. Candidates:
- <img> elements
- <image> elements inside SVG
- <video> poster images
- Background images loaded via url() in CSS
- Block-level text elements (paragraphs, headings, lists)
In Chrome DevTools, the Performance tab shows you exactly which element your browser identified as the LCP. This is the first thing to check — if the LCP element is not what you expected, your fix priorities are wrong.
Thresholds
- Good: under 2.5 seconds
- Needs improvement: 2.5 to 4 seconds
- Poor: over 4 seconds
Mobile thresholds matter more than desktop because most traffic is mobile and mobile networks are typically slower.
The LCP timing breakdown
LCP is composed of four phases. Each one can be a leak.
- Time to first byte (TTFB): server processing time. Aim for under 200 ms; Google flags anything over 800 ms.
- Resource load delay: time between TTFB and when the LCP resource starts downloading. Should be near zero.
- Resource load duration: actual download and parse time for the LCP resource.
- Element render delay: time between resource ready and element painted on screen.
Optimising LCP means identifying which phase is the bottleneck and fixing that one.
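The four phases can be expressed as a small accounting exercise. This sketch uses hypothetical field names and illustrative numbers (the web-vitals library's attribution build exposes similar subparts, but `lcpPhases` here is not a real API):

```javascript
// Sketch: break an LCP time into its four phases from timing values.
// All inputs are milliseconds since navigation start; numbers are illustrative.
function lcpPhases({ ttfb, resourceStart, resourceEnd, lcpTime }) {
  return {
    ttfb,                                      // server processing time
    loadDelay: resourceStart - ttfb,           // discovery gap — should be ~0
    loadDuration: resourceEnd - resourceStart, // download and decode time
    renderDelay: lcpTime - resourceEnd,        // time waiting to paint
  };
}

// Example: LCP at 3200 ms with a late-discovered hero image
const phases = lcpPhases({
  ttfb: 400,
  resourceStart: 1600,
  resourceEnd: 2400,
  lcpTime: 3200,
});
```

In this example the 1,200 ms load delay dwarfs the 800 ms download, which points at resource discovery (preloading, fetchpriority) rather than compression as the fix.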
Common LCP leaks
Unoptimised hero images. A 2 MB JPEG hero on mobile is the single most common LCP issue. Compress to WebP at 200 to 400 KB.
Render-blocking resources. CSS and JavaScript in the <head> block rendering until parsed. Inline critical CSS, defer non-critical scripts.
Slow TTFB. Cheap hosting often has 800+ ms TTFB. No front-end fix saves slow servers.
Web font swap delay. Custom fonts that block text rendering delay LCP for text-based LCP elements. Use font-display: swap.
No fetchpriority="high" on LCP image. Free 200 to 800 ms LCP improvement.
Fix order for LCP
- Identify the LCP element (Chrome DevTools)
- Add fetchpriority="high" if it's an image
- Compress and resize the LCP resource
- Preload critical resources (fonts, hero images)
- Inline critical CSS
- Defer non-critical JavaScript
- Optimise server response time (caching, hosting upgrade)
Interaction to Next Paint (INP)
INP measures the longest interaction delay across the entire page session. It replaced First Input Delay (FID) in March 2024.
What INP measures
When a user taps, clicks, or presses a key, the browser must:
- Process the input event
- Run any JavaScript handlers
- Update the DOM
- Recalculate styles, layout, and paint
- Display the new frame
INP is the time from the input to the next visible frame. The metric tracks the worst interaction in the session, not the average.
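The selection rule can be sketched in a few lines. This is a simplification: Chrome reports the worst interaction, but on long sessions it ignores one outlier per 50 interactions, so heavy sessions aren't penalised for a single hiccup. `approximateInp` is an illustrative name, not a browser API:

```javascript
// Simplified sketch of INP selection. Input: interaction durations in ms.
// Chrome skips one worst outlier per 50 interactions; this mimics that rule.
function approximateInp(durations) {
  if (durations.length === 0) return null;
  const sorted = [...durations].sort((a, b) => b - a); // worst first
  const skip = Math.min(sorted.length - 1, Math.floor(durations.length / 50));
  return sorted[skip];
}
```

For a short session, one 640 ms interaction among otherwise fast ones is the INP — the average is irrelevant.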
Thresholds
- Good: under 200 milliseconds
- Needs improvement: 200 to 500 ms
- Poor: over 500 ms
These thresholds are tighter than they look. 500 ms feels noticeably sluggish to users.
Why INP matters more than people think
LCP measures load. INP measures the rest of the session. A page that loads fast but feels sluggish to interact with bleeds conversions on forms, checkouts, and product configurators.
For interactive sites — anything beyond static content — INP often matters more than LCP for actual user experience.
Common INP leaks
Long-running JavaScript handlers. A click handler that processes 1,000 items synchronously blocks the main thread for 200+ ms.
Heavy React re-renders. A state change that triggers re-render of a deep component tree.
Synchronous third-party scripts. Analytics or chat widgets that run on every interaction.
Inefficient DOM operations. Reading layout values inside loops, causing forced reflows.
Unthrottled scroll or input handlers. A scroll listener that runs expensive logic on every scroll event.
Fix patterns for INP
Break up long tasks. Any task over 50 ms is a "long task" by Google's definition. Split into smaller chunks using setTimeout, requestIdleCallback, or (where supported) scheduler.yield().
Use web workers for CPU-heavy work. Anything that does not need DOM access can run off the main thread.
Debounce and throttle event handlers. Especially scroll, resize, and input handlers.
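A leading-edge throttle is a few lines. The injectable `now` parameter here is a testing convenience, not part of any standard signature:

```javascript
// Sketch: leading-edge throttle — run at most once per waitMs window.
// `now` is injectable for deterministic testing; defaults to Date.now.
function throttle(fn, waitMs, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    if (now() - last >= waitMs) {
      last = now();
      fn(...args);
    }
  };
}
```

Wrapping an expensive scroll handler this way caps its cost per frame budget, regardless of how fast scroll events fire.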
Optimise React renders. Use React.memo, useMemo, useCallback to prevent unnecessary re-renders. Memoise expensive computations.
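The same idea React.memo and useMemo implement can be shown framework-free. A sketch of argument-keyed memoisation (works for primitive keys; object keys would need a different cache strategy):

```javascript
// Sketch: cache an expensive pure computation by its argument, so repeated
// calls during re-renders pay the cost only once.
function memoize(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
}
```

Usage: wrap the expensive derivation once, then call it freely from render paths — subsequent calls with the same input are cache hits.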
Defer non-critical script execution. Use defer attribute, <script type="module">, or runtime checks like requestIdleCallback.
Audit third-party scripts. Each script can add INP delay on every interaction. Audit and defer.
INP diagnostic tools
- Chrome DevTools Performance tab: record interactions, see which tasks blocked
- PageSpeed Insights field data: shows real user INP
- Long Animation Frames API: identifies which scripts caused INP issues
The Chrome DevTools Performance tab is the most powerful tool. Record a session, perform the interactions that feel slow, and the timeline shows exactly where time was spent.
Cumulative Layout Shift (CLS)
CLS measures how much visible content moves around during page load.
What CLS measures
When an element shifts position after it has been rendered, that's a layout shift. CLS sums up the impact of each shift, weighted by:
- How much of the viewport the shifting elements cover
- How far they moved
A score from 0 (perfect) to theoretically unlimited. Real-world bad sites score 0.5 to 1.0.
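The weighting above is a straight multiplication, per the Layout Instability spec: each shift scores impact fraction (share of the viewport the shifting elements touch, before and after) times distance fraction (how far they moved, relative to the viewport's largest dimension):

```javascript
// Per-shift score as defined by the Layout Instability spec:
// impact fraction × distance fraction, both in [0, 1].
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// A block covering half the viewport that moves 25% of the viewport height
const score = layoutShiftScore(0.5, 0.25); // 0.125 — already "poor" on its own
```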
Thresholds
- Good: under 0.1
- Needs improvement: 0.1 to 0.25
- Poor: over 0.25
CLS is measured across the entire page session, not just initial load. Shifts are grouped into session windows (shifts less than 1 second apart, capped at 5 seconds per window), and CLS is the largest window's sum. Late-loading ads or content that shifts can ruin a previously-good CLS score.
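The session-window rule can be sketched as a fold over timestamped shifts. A simplified model (`clsFromShifts` is an illustrative name; real entries come from a `layout-shift` PerformanceObserver in the browser):

```javascript
// Sketch: CLS session windowing. Shifts must be sorted by timestamp (ms).
// A window closes after a 1 s gap or once it spans 5 s; CLS is the max window sum.
function clsFromShifts(shifts) {
  let cls = 0;
  let session = 0;
  let firstTs = 0;
  let prevTs = -Infinity;
  for (const { ts, value } of shifts) {
    if (ts - prevTs > 1000 || ts - firstTs > 5000) {
      session = 0;      // start a new session window
      firstTs = ts;
    }
    session += value;
    prevTs = ts;
    cls = Math.max(cls, session);
  }
  return cls;
}
```

Two small shifts during load and one big late shift produce two windows; the late shift alone can set the final score.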
Why CLS matters
High CLS causes accidental clicks. A user reaches for a button that just shifted up, and they end up clicking the ad that appeared underneath. This is the silent killer of conversions on layout-shifting sites.
Common CLS leaks
Images without dimensions. When an image loads without width and height attributes, the browser reserves no space. When the image finishes loading, content below it jumps down.
Ads or iframes loaded dynamically. Same issue — content shifts when the ad slot fills in.
Web fonts swapping. Custom font loads, replacing fallback font with slightly different size. Surrounding text shifts.
Dynamically injected content. Banners, popups, or notifications inserted into the DOM cause shifts.
Animations that change layout. CSS transitions on width, height, or margin cause continuous shifts.
Fix patterns for CLS
Always specify width and height on images and videos. Or use aspect-ratio CSS.
<img src="hero.webp" width="1200" height="630" alt="...">
The browser reserves space immediately, no shift when the image loads.
Reserve space for ads and iframes. Use CSS to set minimum height on ad slots.
Use font-display: optional for non-critical fonts. Or use size-adjust, ascent-override, and descent-override to match fallback font metrics.
Avoid inserting content above existing content. If you must add a banner, insert at the top with reserved space, not pushed in dynamically.
Use CSS transforms instead of layout properties for animations. transform: translateY() does not cause layout shift; top: 10px does.
CLS diagnostic
Chrome DevTools Performance tab shows layout shifts on the timeline. Record a session, watch for the red "Layout Shift" markers, and click each one to see which element shifted.
Field data vs lab data
Two types of Core Web Vitals data exist. Both matter, but for different reasons.
Lab data (synthetic tests)
Lighthouse, PageSpeed Insights "Lab" results, WebPageTest. These run synthetic tests under controlled conditions.
- Pros: reproducible, fast feedback, useful for debugging
- Cons: do not reflect real user conditions
Field data (real users)
Chrome User Experience Report (CrUX). Aggregated from real Chrome users who opted into data sharing.
- Pros: reflects actual user experience
- Cons: requires sufficient traffic, aggregated over a rolling 28-day window
Google uses field data for rankings. Optimise lab data to fix issues, monitor field data to verify the fixes are working in production.
We covered the lab-vs-field nuance in our page speed and conversion rate impact guide. The relationship matters because lab fixes do not always translate cleanly to field improvements.
The Core Web Vitals diagnostic order
When auditing a site for Core Web Vitals, follow this order.
Step 1 — Pull field data
PageSpeed Insights for the page in question. Note CrUX field data for LCP, INP, and CLS on both mobile and desktop. This tells you the real-user picture.
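Field data can also be pulled programmatically. A hedged sketch against the public CrUX API (assumes you have a CrUX API key and a fetch-capable runtime; the endpoint and metric names follow the public API, but check the official reference before relying on them):

```javascript
// Sketch: build and send a CrUX API query for a page's field data.
// Assumption: CRUX API key provisioned via Google Cloud.
function buildCruxQuery(pageUrl, formFactor = "PHONE") {
  return {
    url: pageUrl,
    formFactor,
    metrics: [
      "largest_contentful_paint",
      "interaction_to_next_paint",
      "cumulative_layout_shift",
    ],
  };
}

async function fetchCruxData(pageUrl, apiKey) {
  const endpoint =
    "https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=" + apiKey;
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildCruxQuery(pageUrl)),
  });
  if (!res.ok) throw new Error("CrUX query failed: " + res.status);
  return res.json(); // p75 values live under record.metrics.*.percentiles
}
```

The p75 values in the response are what Google's "good" thresholds apply to — compare those, not averages, against 2.5 s, 200 ms, and 0.1.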
Step 2 — Identify the worst metric
If all three are good, you are done. If one is poor and two are good, fix the poor one. If multiple are poor, start with the one that affects user experience most for your site type:
- E-commerce: LCP (load speed) and INP (checkout responsiveness)
- Content sites: LCP and CLS
- Web apps: INP
Step 3 — Diagnose the specific element or interaction
Use Chrome DevTools to identify the LCP element, the slow interaction, or the shifting element. The fix depends on which element is affected.
Step 4 — Fix in order of leverage
For LCP, the order is typically: image optimisation > render-blocking resources > server time > preloading.
For INP, the order is: split long tasks > defer non-critical scripts > optimise React renders > web workers for heavy computation.
For CLS, the order is: image dimensions > reserved ad slots > font swap fixes > animation refactoring.
Step 5 — Measure in lab and field
Run Lighthouse after each fix to verify lab improvement. Wait 28 days for CrUX field data to confirm real-world improvement.
A 30-day Core Web Vitals optimization plan
If your site is failing Core Web Vitals, follow this sequence.
Days 1 to 3 — Baseline. Pull PSI field data and lab data for top 5 pages. Identify the worst metric per page.
Days 4 to 10 — LCP work. Image compression, fetchpriority="high", preload critical fonts, defer non-critical scripts.
Days 11 to 17 — CLS work. Add dimensions to all images. Reserve space for ads and dynamic content. Fix font swap issues.
Days 18 to 24 — INP work. Identify the slowest interactions. Split long tasks. Optimise heavy event handlers.
Days 25 to 28 — Verify. Re-run Lighthouse. Check that lab metrics improved.
Days 29 to 30 — Plan field-data monitoring. Set up ongoing CrUX checks to verify field improvement over the following 28-day window.
Expected outcome: most sites move from "poor" to "good" on at least 2 of 3 metrics in this window. The third metric typically improves over the next 30 to 60 days as deeper architectural fixes ship.
A real example — Next.js e-commerce site
We took over a Next.js e-commerce site failing all three Core Web Vitals on mobile. LCP at 4.8 seconds, INP at 680 ms, CLS at 0.31.
Audit revealed: 2.4 MB hero images, no fetchpriority on LCP image, 4 third-party scripts blocking the main thread, images without explicit dimensions, web font swap causing text shift.
After 28 days — image compression (2.4 MB → 280 KB), fetchpriority="high" on LCP image, scripts deferred, dimensions added to every image, size-adjust on web fonts — metrics moved to LCP 2.1s, INP 180 ms, CLS 0.04. Conversion rate lifted 31 percent in the following 60 days. The full story is in our Marseille cosmetics case study.
Common Core Web Vitals mistakes
These are the patterns we see most often.
Optimising Lighthouse Performance Score instead of Core Web Vitals. Performance Score is a weighted average that includes metrics Google does not use for ranking. Focus on LCP, INP, CLS directly.
Adding lazy-loading to the LCP image. The LCP image should load immediately, not lazily. Use loading="eager" (or omit the attribute) on the LCP image.
Trusting lab data only. Lab data can miss real-world issues. Always cross-check with CrUX field data.
Adding fetchpriority="high" to multiple images. It is a "first among equals" hint. If you mark everything as high priority, nothing is high priority.
Ignoring INP because it's new. INP replaced FID in 2024. Sites still optimising for FID are 2 years behind.
Animating layout properties. Causes CLS spikes. Use transforms instead.
Frequently asked questions
What is the most important Core Web Vital?
For most sites, LCP has the biggest user-experience impact and is easiest to move. For interactive sites, INP often matters more. CLS has the smallest direct impact but the highest annoyance factor.
How long until Core Web Vitals fixes show in PageSpeed Insights?
Lab data updates immediately. Field data (CrUX) updates with a rolling 28-day window. A fix shipped today shows in field data starting in about 4 weeks.
Do Core Web Vitals affect SEO rankings?
Yes, as part of the page experience signals. The effect is one ranking factor among many. Sites that go from poor to good Core Web Vitals typically see 2 to 10 percent ranking improvements on competitive queries.
What is the difference between Lighthouse Performance Score and Core Web Vitals?
Lighthouse Performance Score is a weighted average of multiple metrics including some Google does not use for ranking. Core Web Vitals are the three specific metrics (LCP, INP, CLS) used in ranking signals.
Can I optimise Core Web Vitals without a developer?
For simple sites on managed platforms (Shopify, modern WordPress), yes — built-in optimization handles most of the work. For custom sites, deep optimization requires developer involvement.
Why does my Core Web Vitals score differ between PageSpeed Insights and Chrome DevTools?
PSI lab uses simulated throttling on a server-side test. Chrome DevTools runs on your local machine with your local network. CrUX field data shows real users. All three can produce different numbers.
Get a Core Web Vitals audit
We audit Core Web Vitals free of charge. Within 48 hours we deliver a per-metric breakdown of leaks, fixes, and expected impact on lab and field scores.
Book a free 30-minute audit. We screen-share, walk through your top pages and competitor benchmarks, and you leave with a clear action plan.
Or explore our Web Development service for the full system we run on performance-focused client accounts.
Want these strategies applied to your business?
30 minutes of free audit with concrete recommendations tailored to your business.
Read next
The Lighthouse Audit Checklist: 50 Points We Check on Every Site
A comprehensive Lighthouse audit checklist — performance, accessibility, best practices, SEO. The 50-point list we run on every web performance engagement.
Third-Party Script Management: How to Stop Tags From Killing Your Site
A guide to managing third-party scripts — Google Tag Manager, chat widgets, analytics, marketing pixels. Strategies for deferring, replacing, and removing scripts.
Web Fonts Performance: Subsetting, font-display, and Preloading
A technical guide to web fonts performance — formats, subsetting, font-display, preloading, variable fonts, and the patterns that eliminate FOIT and FOUT.