
Caching Strategies for Web Performance: Browser, CDN, Edge, and Beyond

A complete guide to caching strategies — browser cache, CDN cache, edge cache, application cache, and the layered approach that makes modern sites fast.

Wadhah Belhassen · 2026-12-04 · 12 min read

Caching strategies are the silent engine of every fast website. The difference between a slow site and a fast site is rarely the underlying code — it is whether the right things are cached at the right layers with the right invalidation rules.

This guide covers caching end to end. Browser cache, CDN cache, edge cache, application cache, database cache, and the layered approach that makes modern sites consistently fast. By the end you will have a clear framework for caching every type of asset and request on your site.

The work is mostly invisible. Done right, caching is the foundation that lets every other performance optimisation deliver compounding results.

Why caching matters more than people think

Without caching, every page view requires:

  • DNS resolution
  • TCP handshake
  • TLS negotiation
  • Origin server request
  • Database query
  • Template render
  • Asset transfer
  • Browser parse and render

Each step costs latency. With proper caching, most of these steps are skipped for most requests. The result is sites that feel instant.

The performance multiplier is significant. A request that takes 800 ms uncached often takes 30 ms cached. The compound effect across a session is the difference between a great experience and a slow one.

We covered the Core Web Vitals impact in our Core Web Vitals deep dive guide. Caching underlies every CWV improvement.

The caching layers

A modern web stack has multiple caching layers. Each one serves a specific role.

Browser cache

Cached on the user's device. Eliminates network requests for repeat visits.

CDN cache

Cached at edge locations close to users. Serves static assets and (where configured) HTML.

Edge cache

A subset of CDN caching built for application code running at the edge — for example, Cloudflare Workers KV or Vercel's edge cache.

Application cache

In-memory or external caches (Redis, Memcached) sitting between your app and database.

Database cache

Caching inside the database itself: buffer pools that keep hot pages in memory, plus plan caching for prepared statements. (Note that MySQL's query result cache was removed in MySQL 8.0, and Postgres has never had one.)

Each layer reduces load on the layer below. Designed together, they make the request flow look like:

  1. Browser cache hit → serve immediately
  2. Browser cache miss → CDN cache hit → serve from edge
  3. CDN cache miss → edge cache hit → compute at edge, serve
  4. Edge cache miss → application cache hit → serve cached app response
  5. Application cache miss → database cache hit → serve cached query
  6. Database cache miss → execute query, populate caches up the chain

Most requests should resolve at one of the first 2 to 3 layers.
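
The fall-through above can be sketched as a generic lookup: try each layer in order, and on a hit, backfill the faster layers that missed on the way down. This is a simplified sketch — the layer objects and their get/set interface are illustrative, and real layers have different TTLs and failure modes:

```javascript
// Each layer exposes async get/set; layers[0] is the fastest (closest to
// the user), and fetchOrigin is the last resort.
async function layeredGet(key, layers, fetchOrigin) {
  const missed = [];
  for (const layer of layers) {
    const value = await layer.get(key);
    if (value !== undefined) {
      // Backfill the faster layers we fell through before the hit.
      for (const m of missed) await m.set(key, value);
      return value;
    }
    missed.push(layer);
  }
  // Full miss: compute at the origin and populate every layer.
  const value = await fetchOrigin(key);
  for (const m of missed) await m.set(key, value);
  return value;
}
```

The backfill step is what makes the second request cheap: after one CDN hit, the browser layer also holds the value.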

Section 1 — Browser caching

The first cache the user encounters. Free, fast, and underused.

HTTP cache headers

The two main headers:

Cache-Control: tells the browser how long to cache and under what conditions.

Cache-Control: public, max-age=31536000, immutable

This says: anyone can cache, valid for 1 year (31536000 seconds), and the file will never change at this URL.

ETag: a fingerprint of the content. If the file changes, the ETag changes.

ETag: "abc123"

The browser can ask "do you still have abc123?". If yes, the server returns 304 Not Modified without transferring the file.

Cache strategy by asset type

Static assets (JS, CSS, images) with cache-busting filenames:

Cache-Control: public, max-age=31536000, immutable

Cache for a year. The filename includes a hash that changes when content changes (app.abc123.js becomes app.def456.js).

HTML pages:

Cache-Control: public, max-age=0, must-revalidate

Or for ISR-style pages:

Cache-Control: public, s-maxage=3600, stale-while-revalidate=86400

This caches at the CDN for an hour, and allows stale content for 24 hours while revalidating in the background.

API responses:

Cache-Control: private, max-age=60

For user-specific responses that can be cached briefly. private prevents CDN caching but allows browser caching.
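
The per-type rules above can be collected into one routing helper. The path patterns here are illustrative — adjust them to your own URL and filename conventions:

```javascript
// Map a request path to the Cache-Control header for its asset type.
function cacheControlFor(path) {
  // Content-hashed static assets: cache forever, never revalidate.
  if (/\.[0-9a-f]{8}\.(js|css|webp|png|woff2)$/.test(path)) {
    return "public, max-age=31536000, immutable";
  }
  // User-specific API responses: browser-only, briefly.
  if (path.startsWith("/api/")) {
    return "private, max-age=60";
  }
  // HTML: cache at the CDN, serve stale while revalidating.
  return "public, s-maxage=3600, stale-while-revalidate=86400";
}
```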

Cache-busting via filenames

For JS, CSS, and images, include a content hash in the filename:

app.1a2b3c4d.js
hero.5e6f7a8b.webp

When content changes, the hash changes, the URL changes, the browser fetches the new file. The old URL stays cached forever (immutable).

This is the standard pattern in modern bundlers (Webpack, Vite, esbuild).

Service workers for offline-first caching

For PWA-style apps, service workers cache assets and API responses for offline use:

// service-worker.js
self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then(response => {
      return response || fetch(event.request);
    })
  );
});

Useful for apps that need to work offline or that benefit from instant repeat-visit loads.

Section 2 — CDN caching

The second layer. Serves cached content from edge locations close to users.

What a CDN caches by default

Static files (.js, .css, images, fonts) are cached automatically by most CDNs. Cache headers from the origin server tell the CDN how long to cache.

HTML pages are not cached by default. Most CDNs only cache HTML when configured.

HTML caching at the CDN

For static or ISR-style pages, configure the CDN to cache HTML:

Cache-Control: public, s-maxage=3600, stale-while-revalidate=86400

s-maxage is the max-age for shared caches such as CDNs and proxies, distinct from the browser-facing max-age. This caches HTML at the edge for an hour.

stale-while-revalidate allows serving stale content while the CDN regenerates in the background. Users almost never see slow regenerations.

CDN cache invalidation

When content changes, you need to invalidate the cache. Methods:

  • Time-based: cache expires naturally on its TTL
  • Manual purge: invalidate specific URLs via API
  • Tag-based: invalidate by tag (Vercel revalidateTag, Cloudflare cache tags)
  • Path-based: invalidate by URL pattern (revalidatePath("/blog/*"))

For ISR setups, on-demand revalidation tied to CMS webhooks is the cleanest pattern.
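
Tag-based purging is simple to model: every cached entry carries a set of tags, and purging a tag drops every entry that carries it. This is a toy, in-memory model of what revalidateTag and Cloudflare cache tags do at the edge:

```javascript
// key -> { value, tags }
const entries = new Map();

function cacheSet(key, value, tags) {
  entries.set(key, { value, tags: new Set(tags) });
}

// Drop every entry that carries the given tag, regardless of its URL.
function invalidateTag(tag) {
  for (const [key, entry] of entries) {
    if (entry.tags.has(tag)) entries.delete(key);
  }
}
```

The appeal over path-based purging: one CMS event ("post-a changed") can purge every page that rendered that post, without knowing their URLs.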

CDN selection

We covered CDN selection in detail in our CDN selection guide. The short version: Cloudflare, Vercel, and Bunny CDN cover most SME needs. Fastly and AWS CloudFront are enterprise options.

Section 3 — Edge caching for dynamic content

Edge compute lets you run app code at edge locations, with edge-specific caching.

Vercel Edge Cache

Vercel automatically caches Edge Functions, ISR pages, and static assets at the edge. Configuration is in code, not in dashboards:

export const revalidate = 3600;

This caches at the edge for 1 hour. Background regeneration handles updates.

Cloudflare Workers KV

For Cloudflare Workers, KV is the key-value store for edge cache:

// KV here is a Workers KV namespace binding, and fetchFromOrigin() is
// assumed to return the response body as a string (KV stores strings,
// ArrayBuffers, and streams — not Response objects).
const cached = await KV.get(cacheKey);
if (cached) return new Response(cached);

const fresh = await fetchFromOrigin();
await KV.put(cacheKey, fresh, { expirationTtl: 3600 });
return new Response(fresh);

Useful for caching API responses or computed values at the edge.

Edge vs CDN cache

Edge cache runs code; CDN cache serves files. For dynamic content that varies (geolocation-based pricing, A/B testing variants), edge compute with edge cache is the right pattern.

For pure static content, CDN cache is enough.

Section 4 — Application caching

For data and computations that are slower than they should be.

Redis for shared cache

For most apps that need a cache layer, Redis is the standard:

// Cache-aside with ioredis: check Redis first, fall back to the database
// and populate the cache with a 1-hour TTL ("EX" is seconds).
const cached = await redis.get(`user:${id}`);
if (cached) return JSON.parse(cached);

const user = await db.user.findUnique({ where: { id } });
await redis.set(`user:${id}`, JSON.stringify(user), "EX", 3600);
return user;

Cache user data, computed values, expensive query results. TTL of 1 to 24 hours covers most cases.

Next.js unstable_cache

For Next.js apps, unstable_cache provides a built-in caching primitive:

import { unstable_cache } from "next/cache";

const getUserData = (id) =>
  unstable_cache(
    // wrap unstable_cache in a function so `id` is in scope for the key and tags
    async () => db.user.findUnique({ where: { id } }),
    ["user-data", String(id)],
    { revalidate: 3600, tags: [`user-${id}`] }
  )();

Tag-based invalidation works with revalidateTag(`user-${id}`) from API routes or server actions.

Memoization in code

For frequently-computed values, in-memory memoization is the fastest cache:

const cache = new Map();

function expensiveComputation(input) {
  if (cache.has(input)) return cache.get(input);
  const result = doExpensiveWork(input); // stand-in for the expensive computation
  cache.set(input, result);
  return result;
}

Useful for pure functions called repeatedly with the same inputs. Watch memory growth — bounded caches like lru-cache prevent leaks.
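
A bounded LRU is only a few lines on top of Map, which preserves insertion order. This sketch shows the eviction idea; lru-cache does it properly, with TTLs and size accounting:

```javascript
class LruCache {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // The first key in insertion order is the least recently used.
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```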

Section 5 — Database caching

The deepest cache layer: plan caching, buffer caching, and pre-computed results inside the database itself.

Postgres query plan caching

Postgres caches query plans for prepared statements. Within a session, repeated executions of the same prepared statement can reuse the plan, saving parse and planning time.

For maximum benefit:

  • Use parameterised queries instead of string concatenation
  • Use prepared statements where possible
  • Avoid generating dynamic query structures that prevent reuse
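
The difference shows in the shape of the statement the driver sends. With concatenation the query text changes per value; parameterised, the text is constant and values travel separately. The { text, values } shape below matches what drivers like node-postgres accept, though the helper names are illustrative:

```javascript
// Anti-pattern: every id produces distinct query text, so the database
// sees a stream of one-off statements (and it's an injection risk).
function concatenatedQuery(id) {
  return `SELECT * FROM users WHERE id = ${id}`;
}

// Parameterised: one constant statement, values bound separately,
// so prepared-statement plans can be reused.
function parameterisedQuery(id) {
  return { text: "SELECT * FROM users WHERE id = $1", values: [id] };
}
```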

Materialised views for expensive aggregations

For aggregations that are expensive to compute but do not need real-time freshness, materialised views pre-compute and store results:

CREATE MATERIALIZED VIEW daily_sales AS
SELECT date_trunc('day', created_at) AS day, SUM(amount) AS total
FROM orders
GROUP BY day;

REFRESH MATERIALIZED VIEW daily_sales;

Refresh on schedule or on data change events. Queries against the view are fast.

Read replicas with caching

For read-heavy workloads, route reads to replica databases. Combine with application-level Redis cache to reduce replica load.

This is enterprise-tier optimisation. Most SMEs do not need it, but for sites with 100K+ daily users the gains can be significant.

Section 6 — Cache invalidation

The hard problem of caching. Two main approaches.

TTL-based (time to live)

The cache expires automatically after a set time. Simple but produces brief windows of staleness.

Cache-Control: public, max-age=3600

Acceptable for most content. Set TTL to match the staleness you can tolerate.
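
TTL expiry takes only a few lines to model in application code. A sketch — production caches also need bounded size and eviction, as covered above:

```javascript
class TtlCache {
  constructor() {
    this.store = new Map();
  }
  set(key, value, ttlMs) {
    // Record an absolute expiry time alongside the value.
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazily evict on read
      return undefined;
    }
    return entry.value;
  }
}
```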

Event-driven invalidation

When data changes, invalidate the cache immediately. Requires plumbing between data layer and cache layer.

Patterns:

  • Webhook from CMS → API route → revalidatePath or revalidateTag
  • Database trigger → message queue → cache invalidation worker
  • Application code → invalidate on save

Event-driven invalidation gives near-real-time freshness. More complex to maintain.

Hybrid: stale-while-revalidate

Cache-Control: public, s-maxage=60, stale-while-revalidate=3600

Serve cached content for 60 seconds. After 60 seconds, serve stale content while regenerating in the background. Combines fast delivery with bounded staleness.

This is often the best pattern for SME sites — set short s-maxage, generous stale-while-revalidate.

Section 7 — Caching pitfalls

Cache invalidation is famously one of the two hard problems in computer science. Here are the common ways caching breaks.

Caching personalised content publicly

Setting Cache-Control: public on a page with personalised content means one user's cached page can be served to another user.

Use private for any per-user content. Verify before deploying caching changes.

Stale content after data changes

Forgetting to invalidate cache after content updates. Users see old data.

Build cache invalidation into your content workflows from day one. Add it after the fact and you will have stale-content bugs forever.

Cache poisoning

A malicious request triggers caching of a bad response, which then gets served to other users. Especially dangerous for API responses cached at the CDN.

Mitigate by:

  • Validating cache keys carefully
  • Using cache headers for varying contexts (Vary: Accept-Language, Vary: Cookie)
  • Avoiding caching responses that contain user-controllable content unless properly sanitised

Over-aggressive caching during development

A 1-year cache on a file you are actively editing in development means changes do not appear. Use shorter cache times locally or disable cache in dev tools.

Cache stampede

When a cached resource expires and 100 concurrent requests all try to regenerate it simultaneously. Backend gets overwhelmed.

Mitigate with:

  • Stale-while-revalidate (serve stale while one request regenerates)
  • Background revalidation (ISR pattern)
  • Lock-based regeneration (only one regeneration at a time, others wait)
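
Lock-based regeneration fits in a few lines within a single process: keep the in-flight promise per key, and hand the same promise to every concurrent caller. A sketch — across multiple servers you would need a distributed lock instead:

```javascript
const inFlight = new Map();

// Deduplicate concurrent regenerations: the first caller starts the work,
// everyone else awaits the same promise.
async function dedupedRegenerate(key, regenerate) {
  if (inFlight.has(key)) return inFlight.get(key);
  const promise = regenerate().finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}
```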

A layered caching strategy for a typical site

Putting it together for a typical content + e-commerce hybrid:

Marketing pages (homepage, about, contact):

  • SSG with 1-year browser cache on assets
  • HTML on CDN with stale-while-revalidate for 1 day
  • Background revalidation on content publish

Blog posts:

  • ISR with 1-day revalidate
  • Browser cache HTML for 1 hour
  • CDN cache HTML for 1 day
  • On-demand revalidation when blog post updates

Product pages:

  • ISR with 1-hour revalidate
  • Browser cache HTML for 5 minutes
  • CDN cache HTML for 1 hour
  • On-demand revalidation on price/inventory change

Cart and checkout:

  • SSR, no cache
  • Application-level cache for product catalog data (Redis, 5-minute TTL)

API responses:

  • Static catalog data: cache aggressively
  • User-specific data: short cache or no cache
  • Real-time data: no cache

A 30-day caching implementation plan

If your site lacks systematic caching, follow this sequence.

Days 1 to 4 — Audit. Check current cache headers on all asset types. Identify pages without caching.

Days 5 to 10 — Static assets. Set max-age=31536000, immutable on hashed JS/CSS/image filenames. Verify CDN cache hit rate.

Days 11 to 16 — HTML caching. Add ISR or stale-while-revalidate to appropriate pages. Move static-friendly pages from SSR to SSG.

Days 17 to 22 — Application cache. Add Redis or unstable_cache for expensive queries. Cache user data, computed values, frequently-fetched API data.

Days 23 to 27 — Invalidation. Set up webhook-triggered revalidation for CMS-driven content. Test the invalidation paths.

Days 28 to 30 — Measure. Compare TTFB, LCP, and hosting cost. Verify cache hit rates in CDN dashboard.

Most sites see 30 to 60 percent TTFB improvement and 20 to 40 percent hosting cost reduction in this window.

Frequently asked questions

What is the difference between browser cache and CDN cache?

Browser cache lives on the user's device. CDN cache lives on edge servers between origin and user. Browser cache helps repeat visitors. CDN cache helps everyone.

How long should I cache static assets?

For assets with content-hashed filenames (app.abc123.js), 1 year with immutable. For assets without hashed filenames, shorter (1 hour to 1 day) and rely on ETag for revalidation.

Should I cache HTML?

For static or ISR-rendered HTML, yes. Cache at the CDN with s-maxage. Use stale-while-revalidate to serve stale content while regenerating. Personalised HTML should not be cached publicly.

What is stale-while-revalidate?

A cache directive that lets the cache serve stale content while the cache regenerates in the background. Users almost never see slow regenerations.

Do I need Redis if I'm on Vercel?

Not necessarily. Vercel's built-in unstable_cache and KV store cover most caching needs for typical apps. Add Redis for very high-frequency access patterns or shared cache across services.

How do I know if my caching is working?

Check Lighthouse for cache headers. Check CDN dashboard for cache hit rate (aim for 90+ percent on static assets). Check application performance for repeated query latency (cached should be 10x faster than uncached).

Get a caching audit

We audit caching layers free of charge. Within 48 hours we deliver a layer-by-layer breakdown of what is cached, what is not, and the highest-leverage opportunities.

Book a free 30-minute audit. We screen-share, walk through your cache headers and CDN settings, and you leave with a clear action plan.

Or explore our Web Development service for the full system we run on performance-focused client accounts.

Want these strategies applied to your business?

30 minutes of free audit with concrete recommendations tailored to your business.