Engineering · 11 min read · March 4, 2026

Monorepo Architecture with Turborepo: When It Works and When to Walk Away

A practical guide to monorepo architecture with Turborepo — setup, caching, task pipelines, and an honest look at when a monorepo helps and when it's more pain than it's worth.

James Ross Jr.

Strategic Systems Architect & Enterprise Software Developer

The Case for Putting Everything in One Repo

The monorepo vs. polyrepo debate has been going on for years, and I'm not going to pretend there's a universally correct answer. What I will say is that in projects where multiple packages share types, utilities, or configuration, a monorepo eliminates an entire category of coordination problems that polyrepos force you to solve with tooling, process, and patience.

Google, Meta, and Microsoft all run massive monorepos. That fact alone doesn't make it the right call for your team. What makes it worth considering is what a monorepo actually gives you: atomic changes across packages, a single source of truth for shared code, and unified CI/CD pipelines that test everything together. When your API types and your frontend types are in the same repo, you don't ship a breaking API change and find out about it three hours later when the frontend team pulls the new SDK version.

The problem has always been tooling. Running npm install across 15 packages and then orchestrating builds in the right dependency order is painful without the right tool. That's where Turborepo comes in.


Setting Up Turborepo From Scratch

Turborepo isn't a package manager and it isn't a bundler. It's a build system that sits on top of your existing workspace setup (npm workspaces, pnpm workspaces, or yarn workspaces) and makes task execution fast and correct.

Here's a minimal setup. I'm using pnpm because its workspace support is the most mature and its disk usage is the most efficient, but npm and yarn both work.

Start with your root package.json:

{
  "name": "my-monorepo",
  "private": true,
  "scripts": {
    "build": "turbo run build",
    "dev": "turbo run dev",
    "lint": "turbo run lint",
    "test": "turbo run test",
    "typecheck": "turbo run typecheck"
  },
  "devDependencies": {
    "turbo": "^2.4.0",
    "typescript": "^5.7.0"
  }
}

Your pnpm-workspace.yaml defines which directories contain packages:

packages:
  - "apps/*"
  - "packages/*"

And turbo.json is where the real configuration lives:

{
  "$schema": "https://turbo.build/schema.json",
  "globalDependencies": ["**/.env.*local"],
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**", ".next/**", ".nuxt/**"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    },
    "lint": {
      "dependsOn": ["^build"]
    },
    "test": {
      "dependsOn": ["^build"]
    },
    "typecheck": {
      "dependsOn": ["^build"]
    }
  }
}

The ^ prefix in dependsOn is the key concept. "dependsOn": ["^build"] means "before building this package, build all the packages it depends on." Turborepo reads your workspace dependency graph from package.json files and figures out the correct execution order automatically. You declare what depends on what; Turbo handles the scheduling.
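
As a concrete sketch of what Turbo reads (package names here are illustrative, not from a real project): if apps/web lists a shared workspace package in its package.json, turbo run build will build that package before building web.

```json
{
  "name": "@my-monorepo/web",
  "dependencies": {
    "@my-monorepo/shared": "workspace:*"
  },
  "scripts": {
    "build": "next build"
  }
}
```

With pnpm, the workspace:* protocol guarantees the dependency resolves to the local package in the repo rather than to a registry version, which is what makes the graph unambiguous.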


Caching and Task Pipelines: The Real Power

The first time I saw Turborepo replay a cached build in 200 milliseconds that normally took 45 seconds, I understood why this tool exists.

Turborepo hashes the inputs to every task — source files, dependencies, environment variables, the configuration itself — and stores the output. If nothing relevant changed, it replays the cached output instead of running the task again. This sounds simple, but the implications are significant. In a monorepo with 10 packages, changing one package means Turbo only rebuilds that package and its dependents. Every untouched package gets a cache hit.
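
You can also narrow what feeds into the hash with the inputs field in turbo.json. A sketch (adjust the globs to your layout) — scoping a test task to source and test files means a README edit won't invalidate the test cache:

```json
{
  "tasks": {
    "test": {
      "dependsOn": ["^build"],
      "inputs": ["src/**", "test/**", "package.json"]
    }
  }
}
```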

Remote caching takes this further. When one developer builds a package, the cache artifact gets uploaded to a shared cache (Vercel's hosted cache or a self-hosted one). When another developer or your CI pipeline runs the same build with the same inputs, it pulls the cached result instead of rebuilding. In practice, this cuts CI times dramatically — I've seen pipelines drop from 12 minutes to under 3 after enabling remote caching.
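
Enabling remote caching against Vercel's hosted cache is two commands from the repo root (in CI you'd instead set the TURBO_TOKEN and TURBO_TEAM environment variables rather than log in interactively):

```
# Authenticate the Turborepo CLI with the cache provider
npx turbo login
# Link this repository to a remote cache scope
npx turbo link
```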

Turbo also runs tasks in parallel by default, respecting the dependency graph. If packages A and B don't depend on each other, their builds run simultaneously. This is something you'd have to carefully orchestrate yourself with scripts in a polyrepo.


Shared Packages and Internal Libraries

This is where monorepos earn their keep. Shared code in a polyrepo means publishing packages to a registry (even a private one), managing version numbers, coordinating releases, and dealing with consumers being on different versions. In a monorepo, shared code is just another workspace package with a direct dependency.

Here's a typical shared TypeScript config package at packages/tsconfig/base.json:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "compilerOptions": {
    "strict": true,
    "target": "ES2022",
    "module": "ESNext",
    "moduleResolution": "bundler",
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "resolveJsonModule": true,
    "isolatedModules": true,
    "declaration": true,
    "declarationMap": true,
    "sourceMap": true
  }
}

Apps extend it:

{
  "extends": "@my-monorepo/tsconfig/base.json",
  "compilerOptions": {
    "outDir": "dist",
    "rootDir": "src"
  },
  "include": ["src"]
}

A shared utilities package is even more useful. Consider two modules, packages/shared/src/result.ts and packages/shared/src/validation.ts, re-exported through the package's index.ts:

// packages/shared/src/result.ts
export type Result<T, E = Error> =
  | { ok: true; value: T }
  | { ok: false; error: E }

export function ok<T>(value: T): Result<T, never> {
  return { ok: true, value }
}

export function err<E>(error: E): Result<never, E> {
  return { ok: false, error }
}

// packages/shared/src/validation.ts
export function assertNonEmpty(
  value: string,
  fieldName: string
): Result<string, string> {
  const trimmed = value.trim()
  if (trimmed.length === 0) {
    return err(`${fieldName} cannot be empty`)
  }
  return ok(trimmed)
}

Any app in the monorepo can depend on @my-monorepo/shared and import these directly. No publishing step. No version mismatch. Change the shared code and everything that depends on it rebuilds and retests in the same pipeline. This is the kind of shared code management I wrote about in my enterprise software best practices post — the coordination overhead of shared libraries is one of the biggest hidden costs in multi-repo setups.


When NOT to Use a Monorepo

I'd be doing you a disservice if I only talked about the upside. Monorepos have real downsides, and pretending otherwise leads to painful migrations back to polyrepos six months later.

Small teams with unrelated projects. If your web app and your mobile app share zero code and are built by different people, putting them in the same repo adds complexity without a clear benefit. A monorepo is a tool for managing shared dependencies. If nothing is shared, it's just a folder.

Git performance at scale. Git was designed for single-project repositories. Once a monorepo reaches tens of thousands of files and a deep history, basic operations like git status and git log slow down. Google built its own VCS. Meta uses a custom Mercurial fork. If you're not building custom VCS tooling, you'll hit a ceiling. For most teams this ceiling is high enough that it's not an issue, but it's real.

CI complexity. Running every test on every PR is wasteful when a monorepo gets large. Turborepo's --filter flag helps — you can run only affected packages — but you still need to configure this correctly. Your CI/CD pipeline gets more complex, not simpler. If your team doesn't have someone who understands build systems well, this complexity can slow you down.
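
The affected-packages pattern looks like this in a CI step (a sketch — origin/main is whatever your base branch is):

```
# Run tests only for packages changed since the base branch,
# plus everything that depends on them
npx turbo run test --filter='...[origin/main]'
```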

Ownership boundaries. In a polyrepo, repository permissions map directly to team ownership. In a monorepo, you need CODEOWNERS files, path-based review rules, and discipline about not reaching into another team's package. This isn't hard to set up, but it requires intention. Good code review practices become even more important when everyone can technically modify everything.

Onboarding friction. New developers clone the entire monorepo even if they only work in one package. Sparse checkouts help but add their own complexity. The initial pnpm install across 20 packages takes longer than installing a single app's dependencies.
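
On the ownership point above: a GitHub CODEOWNERS file maps paths to teams, so path-based review rules are only a few lines (the team names here are invented):

```
# .github/CODEOWNERS — illustrative team names
/apps/web/          @acme/frontend
/apps/api/          @acme/backend
/packages/shared/   @acme/platform
```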


A Realistic Workspace Layout

Here's what a well-structured Turborepo monorepo looks like in practice:

my-monorepo/
├── apps/
│   ├── web/                 # Next.js or Nuxt frontend
│   │   ├── package.json     # depends on @my-monorepo/ui, @my-monorepo/shared
│   │   └── ...
│   ├── api/                 # Hono or Express backend
│   │   ├── package.json     # depends on @my-monorepo/shared, @my-monorepo/db
│   │   └── ...
│   └── docs/                # Documentation site
│       └── ...
├── packages/
│   ├── ui/                  # Shared component library
│   ├── shared/              # Types, utils, validation
│   ├── db/                  # Prisma schema + client
│   ├── tsconfig/            # Shared TS configs
│   └── eslint-config/       # Shared lint rules
├── turbo.json
├── pnpm-workspace.yaml
└── package.json

The packages/db pattern is worth highlighting. By putting your Prisma schema and generated client in a shared package, both your API and any background workers can import the same database client with the same types. Schema changes propagate everywhere automatically.
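
One way to wire this up — a sketch, with versions and the package name as assumptions rather than prescriptions — is a packages/db/package.json that owns the Prisma dependency and runs the generate step as its build:

```json
{
  "name": "@my-monorepo/db",
  "private": true,
  "main": "./src/index.ts",
  "scripts": {
    "build": "prisma generate"
  },
  "dependencies": {
    "@prisma/client": "^6.0.0"
  },
  "devDependencies": {
    "prisma": "^6.0.0"
  }
}
```

Consumers then depend on @my-monorepo/db with the workspace protocol, and the ^build dependency in turbo.json guarantees the client is generated before any app that uses it compiles.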


My Recommendation

If you're building a product with a frontend and a backend that share types, or if you have more than two services that depend on common code, a Turborepo monorepo is worth the setup cost. The caching alone will save you hours of CI time per week, and the ability to make atomic changes across packages removes an entire class of "works on my machine" integration bugs.

If you're a solo developer with one app, or a team building genuinely independent services with no shared code, skip it. The overhead isn't justified. Use a simple repo, ship fast, and revisit the decision when your codebase outgrows that setup.

The tooling has gotten good enough that monorepos are no longer a bet reserved for companies with dedicated infrastructure teams. Turborepo, pnpm workspaces, and a well-structured turbo.json will take you surprisingly far before you need anything more sophisticated.

Start small. One shared package. Two apps. See if the workflow fits. You can always add more packages later — that's the whole point.

