Multi-Tenant Strategy for BastionGlass: Isolation vs Shared Resources
How I designed BastionGlass's multi-tenant architecture — the trade-offs between tenant isolation and shared infrastructure, and the hybrid approach we landed on.
James Ross Jr.
Strategic Systems Architect & Enterprise Software Developer
The Multi-Tenancy Spectrum
Multi-tenancy is not a binary choice. It is a spectrum with full isolation on one end — every tenant gets their own database, their own application instance, their own infrastructure — and full sharing on the other, where all tenants share everything and are separated only by application logic.
For BastionGlass, the position on this spectrum had to balance three constraints: cost efficiency for small auto glass shops that cannot absorb high infrastructure fees, data security for businesses handling customer PII and insurance information, and operational simplicity for a small engineering team that cannot manage hundreds of isolated deployments.
The initial architecture landed on a shared database with row-level tenant isolation — a common pattern for SaaS applications in early and mid-stage growth. But the implementation details within that pattern are where the real decisions live.
Row-Level Isolation in Practice
Every table in BastionGlass that contains tenant-specific data includes a tenantId column. This is a UUID foreign key to the tenants table, and it participates in every query that touches tenant data. The pattern is enforced at the ORM layer through Prisma middleware that automatically injects tenant scoping into queries.
When a user authenticates, their JWT includes the tenant ID. A middleware function on every API route extracts this ID and attaches it to the request context. The Prisma client instance for that request is then wrapped with a middleware that adds a tenantId filter to every read query and stamps tenantId on every write operation — the database-level equivalent of appending WHERE tenantId = ? to each statement. The application code never manually specifies the tenant — it is handled transparently.
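The shape of that scoping layer can be sketched as a pure function over query parameters. This is a minimal illustration, not BastionGlass's actual code — the names (QueryParams, scopeTenant) are hypothetical, and a real Prisma middleware would receive these parameters from the client rather than being called directly:

```typescript
// Illustrative sketch of automatic tenant scoping. QueryParams and
// scopeTenant are hypothetical names, not the actual BastionGlass API.
type QueryParams = {
  action: "findMany" | "findFirst" | "update" | "create";
  args: { where?: Record<string, unknown>; data?: Record<string, unknown> };
};

function scopeTenant(params: QueryParams, tenantId: string): QueryParams {
  if (params.action === "create") {
    // Writes: stamp the tenant onto the row being created.
    return {
      ...params,
      args: { ...params.args, data: { ...params.args.data, tenantId } },
    };
  }
  // Reads and updates: force the tenant filter into the where clause.
  return {
    ...params,
    args: { ...params.args, where: { ...params.args.where, tenantId } },
  };
}
```

Because the function merges tenantId in last, feature code cannot override it by supplying its own tenantId in the where clause — the middleware's value always wins.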
This approach has a significant advantage: developers writing feature code cannot accidentally forget to scope by tenant. The isolation is structural, not voluntary. But it also has a risk — if the middleware fails or is bypassed, there is no secondary barrier. To address this, we added PostgreSQL Row-Level Security policies as a defense-in-depth measure.
RLS policies in PostgreSQL operate at the database engine level, below the ORM. Even if a raw SQL query somehow bypasses Prisma's middleware, the database itself will filter results based on a session variable that we set on each connection. This two-layer approach means a failure in either layer still leaves the other one protecting tenant data. Both layers would need to fail simultaneously for a cross-tenant data leak, which significantly reduces the attack surface.
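A representative RLS setup looks like the following. The table and session-variable names (jobs, app.tenant_id) are illustrative, not the actual schema:

```sql
-- Enable row-level security on a tenant-scoped table.
ALTER TABLE jobs ENABLE ROW LEVEL SECURITY;
ALTER TABLE jobs FORCE ROW LEVEL SECURITY;  -- apply even to the table owner

-- Filter every read and write by the per-connection session variable.
CREATE POLICY tenant_isolation ON jobs
  USING (tenant_id = current_setting('app.tenant_id')::uuid)
  WITH CHECK (tenant_id = current_setting('app.tenant_id')::uuid);
```

The application sets app.tenant_id once per borrowed connection; any query that reaches the engine without it simply returns no rows rather than leaking another tenant's data.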
Shared Resources and Tenant-Specific Configuration
Not everything in BastionGlass is tenant-specific. Vehicle databases, glass part catalogs, and insurance provider directories are shared resources that all tenants access. These reference tables have no tenantId column and are readable by all tenants but writable only by system administrators.
The interesting case is tenant-specific configuration layered over shared resources. For example, the glass parts catalog contains industry-standard part numbers and base pricing. But each tenant may have different supplier agreements, different markup percentages, and different preferred brands. BastionGlass handles this with a configuration overlay pattern — a tenant-specific pricing table that references the shared catalog and allows overrides without duplicating the underlying data.
This means adding a new part to the catalog makes it available to all tenants immediately, but each tenant's pricing, preferred suppliers, and stocking preferences remain independent. The quoting engine reads from both layers, merging tenant-specific overrides with shared defaults to produce accurate quotes for each shop.
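The overlay merge itself is simple enough to sketch. The types and function below (CatalogPart, TenantPartConfig, priceFor) are hypothetical names chosen for illustration, assuming a hard price override takes precedence over a tenant markup:

```typescript
// Illustrative sketch of the configuration-overlay merge for pricing.
// Type and function names are hypothetical, not BastionGlass's schema.
interface CatalogPart {
  partNumber: string;
  basePrice: number; // shared, industry-standard base price
}

interface TenantPartConfig {
  partNumber: string;
  priceOverride?: number; // tenant's negotiated fixed price, if any
  markupPct?: number;     // tenant's markup over the shared base price
}

function priceFor(
  part: CatalogPart,
  overrides: Map<string, TenantPartConfig>,
): number {
  const o = overrides.get(part.partNumber);
  // A hard override wins outright; otherwise apply the tenant markup
  // (defaulting to 0%) over the shared catalog price.
  if (o?.priceOverride !== undefined) return o.priceOverride;
  const markup = o?.markupPct ?? 0;
  return part.basePrice * (1 + markup / 100);
}
```

A part with no tenant entry falls straight through to the shared base price, which is what makes new catalog entries immediately usable by every tenant.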
Tenant-level feature flags control which modules are available. Not every auto glass shop needs insurance claim management or multi-technician dispatch. Rather than building separate product tiers with different codebases, we use feature flags that enable or disable modules per tenant. The code is always deployed — the flag controls whether the UI renders the feature and whether the API accepts requests for it.
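A minimal sketch of that gate, with hypothetical names (Module, TenantConfig, guardModule) standing in for the real implementation:

```typescript
// Illustrative per-tenant module gate; names are hypothetical.
type Module = "insuranceClaims" | "dispatch" | "quoting";

interface TenantConfig {
  enabledModules: Set<Module>;
}

function isEnabled(tenant: TenantConfig, module: Module): boolean {
  return tenant.enabledModules.has(module);
}

// API-side check: the code for every module is always deployed, but
// requests against a disabled module are rejected before they run.
function guardModule(tenant: TenantConfig, module: Module): void {
  if (!isEnabled(tenant, module)) {
    throw new Error(`module ${module} is not enabled for this tenant`);
  }
}
```

The UI calls the same isEnabled check to decide whether to render a module's navigation at all, so a disabled feature is invisible to the tenant rather than merely erroring.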
Performance Considerations at Scale
Shared-database multi-tenancy introduces performance concerns that do not exist in isolated deployments. The most obvious is query performance — as the tenant count grows, so does the total data volume in each table, and every query pays the cost of filtering by tenant ID.
We mitigated this primarily through database indexing. Every table with a tenantId column has a composite index that includes tenantId as the leading column. This ensures that tenant-scoped queries use an index scan rather than a table scan, keeping query performance proportional to the individual tenant's data volume rather than the total system volume.
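As a concrete illustration (table and column names are hypothetical), the index for a jobs table sorted by recency would look like:

```sql
-- tenant_id leads the composite index, so a tenant-scoped query
-- scans only that tenant's slice of the index.
CREATE INDEX idx_jobs_tenant_created
  ON jobs (tenant_id, created_at DESC);
```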
Connection pooling is another concern. Each API request needs a database connection configured with the correct RLS session variable. We use a connection pool with per-request session configuration — connections are borrowed from the pool, configured with the tenant context, used for the request, then reset and returned. This avoids the overhead of per-tenant connection pools while maintaining security.
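The borrow/configure/use/reset cycle can be sketched as a wrapper over an abstract pool client. The interface and function names here (PoolClient, withTenant) are illustrative, not a specific driver's API; real code against node-postgres would also bind the tenant ID as a parameter via set_config rather than interpolating it into the statement:

```typescript
// Illustrative per-request tenant scoping over a pooled connection.
// PoolClient abstracts the driver; names are hypothetical.
interface PoolClient {
  query(sql: string): Promise<void>;
  release(): void;
}

async function withTenant<T>(
  acquire: () => Promise<PoolClient>,
  tenantId: string,
  fn: (client: PoolClient) => Promise<T>,
): Promise<T> {
  const client = await acquire();
  try {
    // Scope the borrowed connection; RLS policies read this variable.
    // (A production version would use set_config with a bound parameter.)
    await client.query(`SET app.tenant_id = '${tenantId}'`);
    return await fn(client);
  } finally {
    // Clear the tenant context before the connection re-enters the pool,
    // so the next borrower cannot inherit a stale tenant.
    await client.query("RESET app.tenant_id");
    client.release();
  }
}
```

The reset in the finally block is the load-bearing detail: it runs even when the request handler throws, so a failed request can never leave a tenant-scoped connection in the pool.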
The pattern that keeps me watchful is write contention. Shared tables like the job queue can experience lock contention when many tenants are creating and updating jobs simultaneously. PostgreSQL handles this well at moderate scale, but there is a threshold beyond which we would need to partition the most active tables by tenant or move to a schema-per-tenant model for high-volume shops.
The Hybrid Future
The current architecture works well for shops processing dozens of jobs per day. But the multi-tenant database design will need to evolve as we onboard larger operations — multi-location shops processing hundreds of jobs daily across multiple cities.
The plan is a hybrid model: shared infrastructure for the majority of tenants, with the option to provision dedicated database schemas for tenants that need higher isolation or performance guarantees. The application layer is already designed for this — the tenant configuration record can specify a database connection string, allowing per-tenant routing at the ORM level. We have not needed to exercise this capability yet, but having the escape hatch designed into the system means we can scale the architecture without rewriting it.