Engineering · 7 min read · March 1, 2026

Scaling Nuxt Content to 400+ Articles: Performance Lessons

Performance challenges and solutions from running 400+ markdown articles through Nuxt Content — build times, query optimization, and keeping the site fast at scale.

James Ross Jr.

Strategic Systems Architect & Enterprise Software Developer

When Content Volume Becomes a Performance Problem

Nuxt Content is excellent for small to medium content sites. It parses markdown files, makes them queryable through a MongoDB-like API, and integrates cleanly with Nuxt 3's rendering pipeline. For a blog with fifty articles, it works without any thought about performance.

At four hundred articles and growing, the assumptions that held at smaller scale start to break down. Build times increase. Query performance degrades for certain access patterns. The generated payload grows. Memory usage during builds climbs. None of these is a dealbreaker, but each requires attention to keep the site responsive.

This article documents the specific performance challenges I encountered scaling the portfolio site and the solutions that addressed them.

Build Time Optimization

The first challenge was build time. Nuxt Content processes every markdown file during the build phase — parsing frontmatter, rendering markdown to HTML, generating the search index, and creating the content cache. At 400+ files, this processing added significant time to every build.

The most impactful optimization was moving to incremental builds where possible. In development mode, Nuxt Content already supports hot module replacement for individual content files — editing a markdown file rebuilds only that file's output, not the entire content directory. But production builds still process every file.

For production, the solution was build caching. The CI/CD pipeline caches the Nuxt build output between deployments, and Nuxt's build process skips reprocessing files whose source has not changed since the last build. This reduced typical deployment times from minutes to well under a minute for content-only changes.

The Nuxt Content configuration also affects build performance. Disabling features we do not use — such as full-text search indexing, since the site exposes no client-side search — reduced the per-file processing overhead. Each file is still parsed and rendered, but unnecessary index generation is skipped.
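As a rough sketch, a trimmed configuration might look like the following. Option names vary between Nuxt Content versions, so verify against the module documentation for your version before copying anything:

```typescript
// nuxt.config.ts — illustrative sketch, not the site's exact config
export default defineNuxtConfig({
  modules: ['@nuxt/content'],
  content: {
    // The site renders its own menus, so skip navigation-tree generation
    navigation: false,
    // No client-side search is exposed, so no search index is configured,
    // and syntax highlighting is left unconfigured for articles without code
  },
})
```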

Query Performance

The second challenge was query performance for content listing pages. The blog index page, category pages, and tag pages all query the content API for lists of articles. At 400+ articles, an unoptimized query that fetches all articles with all their metadata and renders a paginated list can be noticeably slow.

The fix was query specificity. Instead of fetching entire article records, listing queries use Nuxt Content's .only() modifier to select only the fields needed for the listing: title, description, date, category, slug, and read time. This reduces the payload per article dramatically — full article bodies, rendered HTML, and unused metadata fields are not transferred.
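The payload effect of field selection is easy to see with plain TypeScript. This is not Nuxt Content's internal implementation — just a sketch of what projecting records down to listing fields does to the serialized size:

```typescript
type Article = Record<string, unknown>

// Keep only the named fields from each record, mirroring the effect of a
// projection like .only(['title', 'description', 'date'])
function project(records: Article[], fields: string[]): Article[] {
  return records.map((record) =>
    Object.fromEntries(
      fields.filter((f) => f in record).map((f) => [f, record[f]]),
    ),
  )
}

const full: Article[] = [
  {
    title: 'Scaling Nuxt Content',
    description: 'Performance lessons',
    date: '2026-03-01',
    body: 'lorem '.repeat(2000), // the rendered body dominates the payload
  },
]

const slim = project(full, ['title', 'description', 'date'])
// The projected record serializes to a fraction of the full record's size
console.log(JSON.stringify(slim).length, JSON.stringify(full).length)
```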

Pagination limits the number of articles processed per page load. Instead of rendering all 400+ articles on a single page and handling pagination client-side, we paginate at the query level. Each page requests only the 12 or 20 articles it will display, sorted by date. The total article count is queried separately for pagination controls.
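Query-level pagination reduces to a skip/limit window plus a page count for the controls. A minimal sketch of the arithmetic (the query chain itself would pass these values to its skip and limit modifiers):

```typescript
// Compute the skip/limit window for a 1-indexed page, plus the total page
// count for pagination controls (total comes from a separate count query)
function pageWindow(page: number, perPage: number, total: number) {
  return {
    skip: (page - 1) * perPage,
    limit: perPage,
    totalPages: Math.ceil(total / perPage),
  }
}

// Page 3 of 412 articles at 12 per page: skip the first 24, fetch 12
console.log(pageWindow(3, 12, 412)) // { skip: 24, limit: 12, totalPages: 35 }
```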

Category and tag filtering uses Nuxt Content's .where() modifier to filter server-side rather than fetching all articles and filtering on the client. This is the difference between transferring 20 articles that match the filter versus transferring 400+ articles and discarding 380 of them in the browser.
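To make the composition concrete, here is a toy in-memory mock of the query shape — not Nuxt Content's implementation, just a sketch of how where, sort, projection, and limit compose server-side before anything is transferred to the browser:

```typescript
type Doc = Record<string, unknown>

// Minimal fluent query over an in-memory array, mimicking the shape of
// queryContent().where().sort().only().skip().limit().find()
class Query {
  constructor(private docs: Doc[]) {}

  where(match: Partial<Doc>): Query {
    return new Query(
      this.docs.filter((d) =>
        Object.entries(match).every(([k, v]) => d[k] === v),
      ),
    )
  }

  sort(field: string, dir: 1 | -1 = -1): Query {
    const sorted = [...this.docs].sort((a, b) => {
      const av = String(a[field] ?? '')
      const bv = String(b[field] ?? '')
      return (av < bv ? -1 : av > bv ? 1 : 0) * dir
    })
    return new Query(sorted)
  }

  only(fields: string[]): Query {
    return new Query(
      this.docs.map((d) => Object.fromEntries(fields.map((f) => [f, d[f]]))),
    )
  }

  skip(n: number): Query { return new Query(this.docs.slice(n)) }
  limit(n: number): Query { return new Query(this.docs.slice(0, n)) }
  find(): Doc[] { return this.docs }
}

const docs: Doc[] = [
  { title: 'A', category: 'engineering', date: '2026-03-01', body: '...' },
  { title: 'B', category: 'design', date: '2026-02-10', body: '...' },
  { title: 'C', category: 'engineering', date: '2026-01-22', body: '...' },
]

const page = new Query(docs)
  .where({ category: 'engineering' }) // filter before transfer
  .sort('date', -1)                   // newest first
  .only(['title', 'date'])            // strip bodies and unused metadata
  .limit(12)
  .find()

console.log(page) // two engineering articles, newest first, bodies stripped
```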

Internal Link Validation

With 400+ articles, internal linking becomes both more valuable and more challenging to maintain. Each article should link to two or three related articles, creating a web of connections that helps both readers and search engines discover related content.

The challenge is keeping these links valid as articles are added, renamed, or reorganized. A broken internal link — a link to a slug that does not exist — is invisible during normal use but hurts SEO and user experience when someone follows it.

We validate internal links during the build process. A custom build step scans all rendered content for internal links (paths starting with /blog/), collects the target slugs, and verifies that each target exists in the content directory. Any broken links are reported as build warnings, preventing them from reaching production.
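The check itself is small. A sketch under the assumption that rendered article HTML is available as strings and valid slugs as a set — the hook into Nuxt's build pipeline is omitted here:

```typescript
// Extract internal /blog/ links from rendered HTML and report any whose
// target slug has no corresponding content file
function findBrokenLinks(html: string, knownSlugs: Set<string>): string[] {
  const broken: string[] = []
  const hrefPattern = /href="\/blog\/([^"#?]+)/g
  for (const match of html.matchAll(hrefPattern)) {
    const slug = match[1].replace(/\/$/, '') // ignore trailing slash
    if (!knownSlugs.has(slug)) broken.push(slug)
  }
  return broken
}

const slugs = new Set(['scaling-nuxt-content', 'ci-caching-notes'])
const html =
  '<a href="/blog/scaling-nuxt-content">ok</a> <a href="/blog/old-post">broken</a>'
console.log(findBrokenLinks(html, slugs)) // ['old-post']
```

In the real build step, each broken slug is surfaced as a build warning alongside the article that contains it, so the author knows which file to fix.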

This validation is essential at scale. With a handful of articles, you can manually verify links. With hundreds of articles and thousands of internal links, automated validation is the only reliable approach.

Payload Size and Client Performance

Each page load transfers the content needed for that page as a JSON payload. For listing pages, the optimized queries keep this payload small. For individual article pages, the payload includes the rendered HTML for the article body, which can be substantial for longer articles.

The key optimization here is Nuxt's built-in code splitting. Each page route generates its own JavaScript chunk, and content payloads are loaded on demand when the user navigates to a specific article. The initial page load does not include the content for all 400+ articles — it includes only the content for the current page plus the navigation shell.

Preloading improves perceived performance for navigation. When a user hovers over an article link, Nuxt begins prefetching the target page's payload. By the time the user clicks, the content is already loaded or nearly loaded, making navigation feel instant.

Image handling required attention. Articles that include images need those images optimized for web delivery — proper sizing, modern formats, and lazy loading for images below the fold. We use Nuxt Image for automatic optimization, which generates responsive srcsets and converts images to WebP where supported.

Lessons for Content-Heavy Sites

The core lesson from scaling Nuxt Content to 400+ articles is that the framework handles it well, but you need to be intentional about queries and build configuration. The defaults are optimized for smaller sites. At scale, every query should request only the data it needs, every build step should be examined for unnecessary work, and every payload should be verified for size.

The portfolio site consistently scores above 90 on Lighthouse performance audits, even with 400+ articles in the content directory. The optimizations described here are not heroic engineering — they are straightforward applications of the principle that performance at scale requires explicit attention rather than default assumptions. The same principle applies to every production deployment, regardless of the framework.