Engineering · 6 min read · October 5, 2025

Lazy Loading Strategies for Faster Web Apps

Lazy loading defers non-critical resource loading to speed up initial page load. Here are the strategies that actually work and the mistakes that make things worse.

James Ross Jr.

Strategic Systems Architect & Enterprise Software Developer

The Principle Behind Lazy Loading

Lazy loading is based on a simple observation: users do not see or interact with everything on a page at once. A page might contain 40 images, but only 3 are visible in the initial viewport. Loading all 40 images before the page is usable wastes bandwidth, delays rendering, and competes for network resources with critical assets like CSS and JavaScript.

Lazy loading defers the loading of non-visible resources until the user scrolls toward them or until the browser is idle. The result is a faster initial page load, reduced bandwidth usage, and a better experience for users who may never scroll to the bottom of the page anyway.

The concept applies beyond images. You can lazy load JavaScript modules, iframe embeds, entire page sections, and even data fetched from APIs. The strategy differs for each resource type, but the principle is consistent: load what is needed now, defer what is needed later.

The caveat that many performance guides omit: lazy loading is an optimization for below-the-fold content. Lazy loading above-the-fold content — hero images, primary headings, critical UI elements — actively hurts performance because it delays the resources users need to see first. If your Largest Contentful Paint element is lazy loaded, the browser defers fetching it until layout has run and confirmed the element is near the viewport, rather than requesting it immediately via the preload scanner, and that delay lands directly on your LCP.


Image Lazy Loading

Native browser lazy loading via the loading="lazy" attribute is the simplest approach and should be your default for below-the-fold images:

<img
  src="product-photo.webp"
  alt="Wireless headphones in matte black finish"
  width="600"
  height="400"
  loading="lazy"
  decoding="async"
/>

The loading="lazy" attribute tells the browser to defer fetching the image until it is near the viewport. The browser determines the distance threshold — typically around 1250px from the viewport edge on fast connections and 2500px on slow connections. The decoding="async" attribute allows the browser to decode the image off the main thread, preventing decode-related jank.

Always include width and height attributes or use CSS aspect-ratio on lazy-loaded images. Without explicit dimensions, the browser cannot reserve space for the image before it loads, causing layout shifts when the image eventually appears. Each layout shift adds to your CLS score and creates a visually jarring experience.

For hero images and above-the-fold content, explicitly opt out of lazy loading:

<img
  src="hero.webp"
  alt="Dashboard analytics overview"
  width="1200"
  height="600"
  loading="eager"
  fetchpriority="high"
/>

The fetchpriority="high" attribute tells the browser to prioritize this image over other resources, improving LCP.

For background images set via CSS, native lazy loading does not apply. Use IntersectionObserver to add the background image class when the element enters the viewport, or restructure to use <img> elements with object-fit: cover instead of CSS backgrounds.
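A minimal sketch of the IntersectionObserver approach for CSS backgrounds. The `data-lazy-bg` attribute and the `bg-visible` class (whose CSS rule is assumed to set the `background-image`) are naming assumptions, not part of any library; the reveal check is factored into a pure function so the logic is easy to test:

```javascript
// Decide whether an observed entry should have its background revealed.
// Kept as a pure function so the decision logic is trivially testable.
function shouldReveal(entry) {
  return entry.isIntersecting;
}

// Guarded so the sketch also parses outside a browser environment.
if (typeof IntersectionObserver !== 'undefined') {
  const bgObserver = new IntersectionObserver((entries, observer) => {
    for (const entry of entries) {
      if (shouldReveal(entry)) {
        // `bg-visible` is an assumed class whose CSS applies background-image.
        entry.target.classList.add('bg-visible');
        observer.unobserve(entry.target); // load once, then stop watching
      }
    }
  }, { rootMargin: '200px' }); // begin loading shortly before visibility

  document.querySelectorAll('[data-lazy-bg]').forEach((el) =>
    bgObserver.observe(el)
  );
}
```

Unobserving after the first reveal matters: the background only needs to load once, and keeping dead observations alive wastes work on every scroll.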


JavaScript and Component Lazy Loading

Modern bundlers split JavaScript into chunks that can be loaded on demand. This is critical for applications with large codebases — shipping a single 2MB JavaScript bundle on initial load guarantees a slow experience. Code splitting loads only the JavaScript needed for the current view.

In Vue and Nuxt, dynamic imports handle component-level code splitting:

const HeavyChart = defineAsyncComponent(() =>
  import('./components/HeavyChart.vue')
);

This component's code is not included in the main bundle. It downloads only when the component is rendered. Nuxt's file-based routing automatically code-splits by page — each page route is a separate chunk loaded on navigation.

For route-based code splitting in React, React.lazy with Suspense provides the same capability:

import React, { Suspense } from 'react';

const Dashboard = React.lazy(() => import('./pages/Dashboard'));

function App() {
  return (
    <Suspense fallback={<LoadingSkeleton />}>
      <Dashboard />
    </Suspense>
  );
}

The Suspense boundary shows a fallback while the chunk downloads. Use skeleton loaders rather than spinners — skeletons communicate the shape of incoming content and feel faster to users.

Be strategic about split points. Over-splitting creates too many small network requests, and the overhead of each request (DNS, TLS, HTTP headers) can exceed the savings. Split at natural boundaries: route-level chunks, heavy third-party libraries (chart libraries, rich text editors, date pickers), and features behind feature flags or user permissions. A typical application should have 5-15 chunks, not 200.
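One useful pattern at these split points is memoizing the dynamic import, so a heavy chunk is fetched at most once no matter how many call sites trigger it. A small sketch, where `lazyOnce` and the chart-library path are illustrative names, not a real API:

```javascript
// Memoize a dynamic import so the chunk is fetched at most once.
// `loader` is any function returning a promise, e.g. () => import('...').
function lazyOnce(loader) {
  let cached = null;
  return () => (cached ??= loader());
}

// Hypothetical usage: defer a chart library until the user opens a chart.
const loadChartLib = lazyOnce(() => import('./vendor/chart-lib.js'));
// Later, in a click handler: loadChartLib().then((lib) => lib.render(...));
```

Because the promise itself is cached, concurrent callers share one in-flight request rather than racing to download the same chunk twice.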


Data and Infinite Scroll Patterns

Lazy loading applies to data as well as assets. Loading 1000 records from an API on page load when the user sees 20 at a time wastes server resources, increases response time, and may exceed the browser's memory budget on mobile devices.

Pagination is the traditional solution — discrete pages of results with next/previous controls. It is predictable, bookmarkable, and works well for search results and directory listings. The limitation is that navigating between pages requires a full request-response cycle.
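The request-side arithmetic for offset-based pagination is small enough to show directly. A minimal sketch, where the `/api/products` endpoint and its `offset`/`limit` parameter names are assumptions for illustration:

```javascript
// Translate a 1-based page number into offset/limit query parameters.
function pageParams(page, pageSize = 20) {
  if (page < 1) throw new RangeError('page is 1-based');
  return { offset: (page - 1) * pageSize, limit: pageSize };
}

// Hypothetical usage against an assumed /api/products endpoint:
function productsUrl(page) {
  const { offset, limit } = pageParams(page);
  return `/api/products?offset=${offset}&limit=${limit}`;
}
```

Note that offset pagination degrades on very large offsets (the database still scans the skipped rows); cursor-based pagination avoids that, at the cost of losing arbitrary page jumps.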

Infinite scroll loads additional data as the user scrolls toward the bottom of the list. Use IntersectionObserver on a sentinel element near the bottom of the loaded content:

const observer = new IntersectionObserver(
  (entries) => {
    if (entries[0].isIntersecting && !isLoading.value && hasMore.value) {
      loadNextPage();
    }
  },
  { rootMargin: '200px' }
);

observer.observe(sentinelElement);

The rootMargin: '200px' starts loading 200px before the sentinel is visible, giving the request time to complete before the user actually reaches the end. This creates a seamless experience where content appears to be infinite.

Infinite scroll has UX tradeoffs. Users cannot bookmark a position, cannot use the browser's back button to return to where they were, and lose their scroll position on page refresh. For content where position matters (search results, article lists), consider "load more" buttons as a middle ground — they provide the benefit of staying on one page without the disorientation of automatic loading.

Virtualization is the strategy for very long lists — rendering only the visible items in the DOM and recycling DOM nodes as the user scrolls. Libraries like TanStack Virtual handle this efficiently. A list of 10,000 items with virtualization renders perhaps 30 DOM nodes at any time, keeping memory usage constant and scroll performance smooth regardless of list length.
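The windowing arithmetic behind virtualization is worth seeing in isolation. A simplified sketch assuming fixed-height rows (real libraries like TanStack Virtual also handle variable and measured sizes); `overscan` renders a few extra rows on each side so fast scrolling does not reveal blank gaps:

```javascript
// Compute which items of a fixed-height list should be in the DOM.
function visibleRange(scrollTop, viewportHeight, itemHeight, totalItems, overscan = 5) {
  const firstVisible = Math.floor(scrollTop / itemHeight);
  const lastVisible = Math.ceil((scrollTop + viewportHeight) / itemHeight) - 1;
  return {
    start: Math.max(0, firstVisible - overscan),
    end: Math.min(totalItems - 1, lastVisible + overscan),
  };
}
```

For a 600px viewport of 40px rows this yields roughly 25 rendered nodes regardless of whether the list holds one hundred items or ten thousand, which is exactly why memory and scroll performance stay flat.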