Architecture · 7 min read · August 22, 2025

Database Per Service: Isolating Data in Distributed Systems

Sharing a database between services seems practical until it isn't. Here's how the database-per-service pattern works and when to adopt it.

James Ross Jr.

Strategic Systems Architect & Enterprise Software Developer

Why Shared Databases Break Down

When two services share a database, they share a schema. When they share a schema, changes to one service's data model risk breaking the other. A seemingly harmless column rename in the orders table cascades into the inventory service. An index added to improve billing performance degrades shipping queries. A schema migration requires coordinating deployments across every service that touches the database.

This coupling defeats the core promise of service-oriented architecture: independent deployability. If you cannot deploy one service without coordinating with three others, you do not have independent services. You have a distributed monolith — all the operational complexity of microservices with none of the organizational benefits.

The database-per-service pattern eliminates this coupling by giving each service exclusive ownership of its data store. No other service reads from or writes to that store directly. All cross-service data access happens through the service's API.


What Database Per Service Actually Looks Like

The pattern is simple in principle: each service owns a database (or schema, or set of tables) that only it can access. Other services that need that data request it through the owning service's API.

In practice, this means a few concrete things:

Each service has its own connection credentials. The orders service cannot connect to the inventory database even if someone wanted it to. This is enforced at the infrastructure level, not just by convention.
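In practice, "enforced at the infrastructure level" usually means provisioning a separate database role per service, granted privileges only on its own schema. A minimal sketch of such a provisioning step — the service names, schema names, and PostgreSQL-style statements here are illustrative assumptions, not prescriptions:

```python
# Sketch: generate per-service role grants so each service can only touch
# its own schema. Role/schema names and SQL dialect are illustrative.

SERVICES = {
    "orders_svc": "orders",        # role -> the one schema it owns
    "inventory_svc": "inventory",
}

def grant_statements(role: str, schema: str) -> list[str]:
    """Privileges on the owned schema only; nothing on anyone else's."""
    return [
        f"CREATE ROLE {role} LOGIN;",
        f"GRANT USAGE ON SCHEMA {schema} TO {role};",
        f"GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA {schema} TO {role};",
    ]

def provision_all(services: dict) -> list[str]:
    stmts = []
    for role, schema in services.items():
        stmts.extend(grant_statements(role, schema))
    return stmts
```

Because no statement ever grants `orders_svc` anything on the `inventory` schema, the isolation holds even if application code tries to cross the boundary.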

Each service manages its own migrations. The orders service's schema evolves on its own timeline. It does not wait for the billing service to be ready for a migration. This is what makes independent deployment possible.
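Concretely, each service carries its own migration history, typically a versions table in its own database. The sketch below stands in for that with an in-memory list — the version names and services are hypothetical, and real systems would use a migration tool rather than this hand-rolled runner:

```python
# Sketch: each service applies its own migrations against its own store,
# tracking applied versions independently of every other service.

class MigrationRunner:
    def __init__(self, service: str):
        self.service = service
        self.applied: list[str] = []  # stands in for a schema_migrations table

    def migrate(self, migrations: dict) -> None:
        """Apply pending migrations in version order; skip already-applied ones."""
        for version, step in sorted(migrations.items()):
            if version not in self.applied:
                step()
                self.applied.append(version)

# Each service evolves on its own timeline:
orders = MigrationRunner("orders")
orders.migrate({
    "001_create_orders": lambda: None,
    "002_add_status_column": lambda: None,
})

billing = MigrationRunner("billing")
billing.migrate({"001_create_invoices": lambda: None})  # lags behind; blocks nothing
```

The point is that `orders` reaching version 002 requires no coordination with `billing` at version 001 — which is exactly what a shared schema forbids.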

Cross-service queries go through APIs. If the reporting dashboard needs order data and customer data, it calls the orders API and the customers API. It does not run a SQL join across two databases. This is where the pattern introduces friction — and where complementary patterns like CQRS become important.
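What "client-side joining" looks like in code: the caller makes two API requests and stitches the results together itself. In this sketch the `fetch_*` functions are stubs standing in for HTTP calls to the hypothetical orders and customers APIs:

```python
# Sketch: a reporting client joins order and customer data in application
# code, via two API calls instead of one SQL join.

def fetch_orders():
    # Stub for GET /orders on the orders service
    return [{"order_id": 1, "customer_id": 10, "total": 99.5}]

def fetch_customers(ids):
    # Stub for GET /customers?ids=... on the customers service
    return {10: {"customer_id": 10, "name": "Ada"}}

def orders_with_customers():
    orders = fetch_orders()                                           # call 1
    customers = fetch_customers({o["customer_id"] for o in orders})   # call 2
    # The "join" happens here, in the client, not in the database:
    return [
        {**o, "customer_name": customers.get(o["customer_id"], {}).get("name")}
        for o in orders
    ]
```

Note the defensive `.get(...)` chain: with two independent services, a missing customer record is a partial failure the client must tolerate, not a referential-integrity violation the database prevents.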

Services can use different database technologies. The search service might use Elasticsearch. The user profile service might use PostgreSQL. The session service might use Redis. Each service picks the storage technology that fits its access patterns rather than conforming to a single shared database choice.


The Hard Parts

Database per service solves the coupling problem but introduces new ones. Being honest about these trade-offs is essential before adopting the pattern.

Cross-service queries are harder. A SQL join across two tables in one database takes milliseconds. The equivalent across two services requires two API calls, client-side joining, and careful handling of partial failures. For reporting and analytics, this overhead is often unacceptable, which is why most systems that adopt database per service also adopt a separate read-optimized store for cross-cutting queries.
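One common shape for that read-optimized store is a denormalized projection kept current by consuming events from the owning services. The event names and fields below are assumptions for illustration; the mechanics — build flat rows, fill them in as events arrive — are the general idea:

```python
# Sketch: a denormalized reporting row per order, maintained by consuming
# events from the orders and customers services. Event shapes are assumed.

report_rows: dict = {}  # order_id -> flat row; stands in for a read store

def handle(event: dict) -> None:
    if event["type"] == "OrderPlaced":
        report_rows[event["order_id"]] = {
            "order_id": event["order_id"],
            "customer_id": event["customer_id"],
            "total": event["total"],
            "customer_name": None,  # filled in once customer data arrives
        }
    elif event["type"] == "CustomerRegistered":
        # Enrich any rows waiting on this customer's details
        for row in report_rows.values():
            if row["customer_id"] == event["customer_id"]:
                row["customer_name"] = event["name"]
```

Queries against `report_rows` are then single-store reads, at the cost of eventual consistency: a row can briefly exist with `customer_name` still unset.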

Distributed transactions are gone. When the orders service and the inventory service each have their own database, you cannot wrap both updates in a single transaction. If the order is created but the inventory decrement fails, you have an inconsistency. The saga pattern exists specifically to manage this — replacing ACID transactions with a sequence of local transactions and compensating actions.
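The core of a saga can be sketched as a list of (action, compensation) pairs: run each local transaction in order, and if one fails, run the compensations for everything already completed, in reverse. The service calls here are stubs; a real saga would also persist its progress so it can resume after a crash:

```python
# Sketch of a saga: local transactions with compensating actions,
# replacing the distributed transaction the pattern takes away.

def run_saga(steps) -> bool:
    """steps: list of (action, compensation). On failure, undo in reverse."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            for undo in reversed(done):  # compensate completed steps
                undo()
            return False
    return True

# Stubbed orders/inventory example: the inventory step fails,
# so the already-created order is compensated (cancelled).
log = []
def create_order(): log.append("order_created")
def cancel_order(): log.append("order_cancelled")
def decrement_inventory(): raise RuntimeError("out of stock")

ok = run_saga([
    (create_order, cancel_order),
    (decrement_inventory, lambda: None),
])
```

The end state is not "both updates or neither" as with ACID; it is "both updates, or the effects of the completed ones undone" — a weaker but achievable guarantee.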

Data duplication is inevitable. Services often need reference data from other services. The orders service needs the customer name for order confirmation emails. Rather than calling the customers API on every email send, the orders service typically stores a local copy of the customer name. This duplication must be kept in sync, usually through events. The trade-off is operational complexity for runtime independence.
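The customer-name example looks roughly like this in code: the orders service holds its own copy, refreshed by customer events, and reads it locally at email time. The event types and the greeting function are illustrative assumptions:

```python
# Sketch: the orders service duplicates customer names locally and keeps
# the copy fresh via events, instead of calling the customers API on
# every email send. Event shapes are assumed.

local_customer_names: dict = {}  # the orders service's own copy, in its own store

def on_customer_event(event: dict) -> None:
    if event["type"] in ("CustomerCreated", "CustomerRenamed"):
        local_customer_names[event["customer_id"]] = event["name"]

def confirmation_email_greeting(customer_id: int) -> str:
    # No cross-service call here; read the locally duplicated data,
    # with a fallback in case the event hasn't arrived yet.
    name = local_customer_names.get(customer_id, "customer")
    return f"Hi {name}, your order is confirmed."
```

The fallback value is the operational cost made visible: until the sync event lands, the duplicated copy can be missing or stale, and the service has to behave sensibly anyway.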

Operational overhead increases. More databases means more backups, more monitoring, more capacity planning. This is manageable with good infrastructure automation but non-trivial for small teams. If you are running three services, you can probably handle three databases. If you are running thirty, you need mature platform tooling.


When to Adopt It

The database-per-service pattern makes the most sense when you have genuinely independent teams working on genuinely independent services that deploy on independent schedules. If your organization has these characteristics, shared databases will become a coordination bottleneck and the pattern pays for itself in reduced deployment friction.

It makes less sense when a small team owns all the services. If the same three developers maintain the orders service, the inventory service, and the billing service, and they deploy together anyway, database isolation adds complexity without organizational benefit. A modular monolith with clear module boundaries and separate schemas within a single database gives you the logical separation without the operational overhead.

The honest assessment: database per service is a pattern for organizations that have outgrown shared databases, not a starting point for new projects. Start with a well-structured single database. When schema coupling starts blocking independent teams, extract services along with their data. The pattern works best when adopted incrementally — one service at a time — rather than as a big-bang migration.


If you are evaluating whether your system needs database isolation or help designing the data architecture for a service-oriented system, let's talk.

