Engineering · 7 min read · March 3, 2026

Database Connection Pooling: Why It Matters and How to Configure It

A practical guide to database connection pooling — how pools work, right-sizing for your workload, configuring Prisma and PgBouncer, and fixing the most common pool problems.

James Ross Jr.

Strategic Systems Architect & Enterprise Software Developer

Database connections are expensive. Each PostgreSQL connection consumes roughly 5-10MB of server memory and requires a dedicated process on the database server. When you have 20 connections, that is manageable. When you have 200, you are running into the database server's resource limits. When you have 2,000 — which happens quickly with serverless or container deployments — you are actively degrading database performance.

Connection pooling is how you solve this.

What a Connection Pool Does

A connection pool maintains a set of open database connections that your application shares. Instead of opening a new connection for every request (expensive) and closing it when done, your application borrows a connection from the pool, uses it, and returns it. The pool keeps connections warm and ready.

The key insight: most web application requests touch the database for a few milliseconds out of a request lifecycle that might take 50-200ms. During that idle time, the connection could serve other requests. A pool of 20 connections can serve hundreds of concurrent requests efficiently.
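The borrow/use/return cycle can be sketched with a toy pool (no real database involved; this is an illustration of the mechanics, not a driver you should use):

```javascript
// Toy illustration of the borrow/use/return cycle.
// A fixed set of "connections" is created once; each request borrows one,
// does its work, and returns it so the next request can reuse it.
class ToyPool {
  constructor(size) {
    this.idle = Array.from({ length: size }, (_, i) => ({ id: i }))
    this.waiters = []
  }

  acquire() {
    const conn = this.idle.pop()
    if (conn) return Promise.resolve(conn)
    // Pool exhausted: queue until another request calls release()
    return new Promise((resolve) => this.waiters.push(resolve))
  }

  release(conn) {
    const waiter = this.waiters.shift()
    if (waiter) waiter(conn) // hand the connection straight to a waiter
    else this.idle.push(conn)
  }
}

// Usage: 2 connections serving 3 overlapping "requests"
async function demo() {
  const pool = new ToyPool(2)
  const served = []
  await Promise.all(
    [1, 2, 3].map(async (req) => {
      const conn = await pool.acquire()
      served.push(`request ${req} on connection ${conn.id}`)
      pool.release(conn)
    })
  )
  return served.length
}
```

The third request briefly queues until a connection is returned, which is exactly the wait time you will later want to monitor.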

Connection Pool Architecture Options

There are three places where connection pooling can happen:

Application-level pooling (Prisma, pg connection pool, Sequelize): The pool lives in your application process. Simple to configure, no additional infrastructure.

External process pooling (PgBouncer): A separate proxy process manages connections. Multiple application instances share one pool. Essential for container/serverless environments.

Database-native pooling (Supabase Supavisor, Neon, AWS RDS Proxy): Managed pooling as a service. The easiest operational story but adds latency and may not support all PostgreSQL features.

Application-Level Pooling With Prisma

Prisma includes a built-in connection pool:

const prisma = new PrismaClient({
  datasources: {
    db: {
      url: process.env.DATABASE_URL,
    },
  },
})

Configure the pool size in the connection URL:

DATABASE_URL="postgresql://user:password@host:5432/db?connection_limit=10&pool_timeout=30"

The relevant pool parameters:

  • connection_limit: Maximum number of connections in the pool (default: num_cpus * 2 + 1)
  • pool_timeout: How long to wait for a connection before throwing (seconds)
  • connect_timeout: How long to wait for the connection to be established

For most applications on traditional servers, the default pool size formula is reasonable. But Prisma runs in your Node.js process — each application instance has its own pool. On a server with 4 application instances, a connection_limit=10 means 40 total connections to the database.
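One way to keep that multiplication honest is to derive connection_limit from the database's budget rather than picking it per instance. A hypothetical sizing helper (the reserved headroom of 10 for migrations, admin sessions, and monitoring is an assumption, not a Prisma default):

```javascript
// Hypothetical helper: divide the database's connection budget across
// application instances, holding back headroom for migrations, admin
// sessions, and monitoring tools.
function perInstanceConnectionLimit(dbMaxConnections, appInstances, reserved = 10) {
  const budget = dbMaxConnections - reserved
  const limit = Math.floor(budget / appInstances)
  if (limit < 1) {
    // More instances than connections: app-level pooling cannot work here.
    // Put an external pooler (PgBouncer) in front of the database instead.
    throw new Error("connection budget exhausted -- use an external pooler")
  }
  return limit
}

// 4 instances against the default max_connections=100, 10 held back:
// each instance gets connection_limit=22 (4 * 22 = 88 <= 90)
perInstanceConnectionLimit(100, 4) // 22
```

The throwing branch is the serverless story in miniature: once instance count exceeds the budget, no per-instance limit works, which is the subject of the next section.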

The Serverless Problem

Serverless functions and containers change the math dramatically. A Lambda function or Cloudflare Worker might scale from 0 to 500 instances in seconds. If each instance creates a connection pool, you have 500+ connections hitting the database simultaneously — even for modest traffic.

PostgreSQL's default max_connections is 100. Most managed PostgreSQL services allow 200-1000 depending on the plan. 500 application instances with connection pools of 10 each means 5,000 connection attempts to a database that allows 200.

The result: connection errors, failed requests, and degraded performance across the board.

The solution for serverless is external pooling, or using a serverless-compatible database connection approach:

// For serverless: use PgBouncer or Prisma Accelerate
// Set connection_limit=1 since each invocation is short-lived
const DATABASE_URL = `${process.env.DATABASE_URL}?connection_limit=1`

// Or use Prisma Accelerate (managed pooling service)
const prisma = new PrismaClient({
  datasources: {
    db: {
      url: process.env.ACCELERATE_URL, // Prisma's managed pooler
    },
  },
})

Configuring PgBouncer

PgBouncer is the battle-tested external pooler for PostgreSQL. It sits between your application and PostgreSQL, maintaining a small pool of real database connections that it multiplexes across many application connections.

Three pooling modes:

Session pooling: A database connection is assigned for the duration of the client session. Least efficient — basically 1:1 mapping.

Transaction pooling: A database connection is held only for the duration of a transaction. The best balance of efficiency and compatibility; works with most applications.

Statement pooling: A connection is returned after each statement. The most efficient but incompatible with multi-statement transactions.

Transaction pooling is the right choice for most web applications:

# pgbouncer.ini
[databases]
myapp = host=db.example.com port=5432 dbname=myapp

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt

pool_mode = transaction

# Connection limits
max_client_conn = 1000   # Application connections to PgBouncer
default_pool_size = 25   # Real PostgreSQL connections per database
min_pool_size = 5        # Minimum connections kept open
max_db_connections = 50  # Cap on total connections to PostgreSQL

# Timeout settings
query_timeout = 30
query_wait_timeout = 120
client_idle_timeout = 600
server_idle_timeout = 600

With this configuration, 1000 application connections share 25 real database connections. The efficiency comes from the fact that application connections are only using the database for a small percentage of the time.
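The arithmetic behind that ratio is Little's law: the average number of busy backend connections equals the transaction arrival rate multiplied by how long each transaction holds a connection. A back-of-envelope helper (the numbers in the usage comment are illustrative, not measurements):

```javascript
// Little's law: avg busy connections = arrival rate * time each request
// actually holds a database connection (the transaction, under
// transaction pooling -- not the full request lifecycle).
function busyConnections(requestsPerSecond, dbTimeMsPerRequest) {
  return (requestsPerSecond * dbTimeMsPerRequest) / 1000
}

// 1000 req/s, each holding a connection for ~5ms of its ~100ms lifecycle:
// only ~5 real connections are busy on average, so a pool of 25 has
// comfortable headroom while serving 1000 client connections.
busyConnections(1000, 5) // 5
```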

PgBouncer caveats with transaction pooling:

Prepared statements do not work reliably in transaction pooling mode: a statement might be prepared on connection A but executed on connection B. (PgBouncer 1.21+ can track protocol-level prepared statements via max_prepared_statements, but older versions cannot.) This affects:

  • Prisma (add pgbouncer=true to the connection URL so Prisma avoids features that require a stable session)
  • Anything that relies on session-level state: SET variables, advisory locks, LISTEN/NOTIFY

DATABASE_URL="postgresql://user:pass@pgbouncer:6432/myapp?pgbouncer=true"
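To make the failure mode concrete, here is a toy simulation (not PgBouncer itself, and no real database) of why session-level state breaks under transaction pooling: consecutive transactions can land on different backend connections.

```javascript
// Toy model of transaction pooling: each transaction grabs whichever
// backend connection is free, so session-level state (prepared statements,
// SET variables) created in one transaction may be missing in the next.
class ToyTransactionPooler {
  constructor(size) {
    this.backends = Array.from({ length: size }, () => ({ prepared: new Set() }))
    this.next = 0
  }
  runTransaction(fn) {
    // Round-robin stands in for "whichever backend is free right now"
    const backend = this.backends[this.next++ % this.backends.length]
    return fn(backend)
  }
}

const pooler = new ToyTransactionPooler(2)
// Transaction 1 prepares a statement on backend 0...
pooler.runTransaction((b) => b.prepared.add("stmt_1"))
// ...but transaction 2 executes on backend 1, where it was never prepared.
const found = pooler.runTransaction((b) => b.prepared.has("stmt_1"))
// found === false: the prepared statement has "disappeared"
```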

Right-Sizing Your Pool

Too few connections: requests queue waiting for a connection, increasing latency. Too many connections: the database degrades under the overhead of managing many connections.

The formula I start with:

max_connections = num_cores * 4

For a 4-core PostgreSQL server: 16 connections. This is a starting point, not a ceiling. Use pg_stat_activity to monitor actual connection usage:

-- Show current connections and what they are doing
SELECT
  state,
  wait_event_type,
  wait_event,
  COUNT(*) as count,
  MAX(EXTRACT(EPOCH FROM (NOW() - state_change))) AS max_duration_seconds
FROM pg_stat_activity
WHERE datname = 'your_database'
GROUP BY state, wait_event_type, wait_event
ORDER BY count DESC;

Healthy production output: most connections are idle (waiting in the pool) and a small number are active (executing queries). If you see many connections waiting on locks (wait_event_type = 'Lock') or sitting idle in transaction, you have other problems to investigate.

Monitoring Pool Health

Track these metrics in your observability system:

Pool utilization: What percentage of pool connections are in use? Above 80% consistently indicates you need more connections or better query efficiency.

Wait time: How long do requests wait for a pool connection? Should be near zero. Spikes indicate the pool is undersized for your traffic.

Connection errors: Failed connection attempts indicate the pool is exhausted.
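As a sketch, those three checks can be wired to whatever gauges your pool exposes (node-postgres, for example, exposes totalCount, idleCount, and waitingCount on a Pool). The 80% threshold below comes from this section, not from any library default:

```javascript
// Hypothetical health check over pool gauges. Feed it whatever counts
// your pool exposes (e.g. node-postgres: totalCount, waitingCount).
function poolHealth({ total, active, waiting }) {
  const warnings = []
  const utilization = total > 0 ? active / total : 0
  if (utilization > 0.8) {
    warnings.push(`high utilization (${Math.round(utilization * 100)}%)`)
  }
  if (waiting > 0) {
    warnings.push(`${waiting} requests waiting for a connection`)
  }
  return warnings
}

poolHealth({ total: 20, active: 19, waiting: 3 })
// -> ["high utilization (95%)", "3 requests waiting for a connection"]
```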

// Prisma does not expose pool gauges directly, but query events are a
// useful proxy: queries that slow down under load often mean connections
// are being held longer. Note $on('query') only fires when the query log
// level is emitted as an event:
const prisma = new PrismaClient({
  log: [{ emit: 'event', level: 'query' }],
})

prisma.$on('query', (e) => {
  if (e.duration > 1000) {
    console.warn(`Slow query (${e.duration}ms):`, e.query)
  }
})

Connection pooling is infrastructure you configure once and rarely think about — until it breaks. Configuring it correctly from the start prevents the class of production incidents where the database is fine but the application cannot reach it because connections are exhausted.


Dealing with connection pooling issues or scaling a Node.js application to handle more concurrent database connections? Book a call and let's solve it: calendly.com/jamesrossjr.

