Serverless environments can exhaust database connections quickly. Connection pooling solves this by reusing connections across requests.
The Problem
Each serverless function invocation may create a new database connection. With cold starts and concurrent requests, you can quickly hit PostgreSQL's default 100 connection limit.
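To make the math concrete, here is a rough back-of-the-envelope sketch; the numbers and the helper name are illustrative assumptions, not measurements:

```typescript
// Rough estimate of peak connection demand without pooling.
// Each serverless instance holds its own connection, so demand
// scales linearly with concurrency.
function estimatePeakConnections(
  concurrentInvocations: number,
  connectionsPerInvocation: number // usually 1 without pooling
): number {
  return concurrentInvocations * connectionsPerInvocation;
}

// 150 concurrent invocations, one connection each, already blows
// past PostgreSQL's default max_connections of 100.
const demand = estimatePeakConnections(150, 1); // → 150
```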
Solution 1: Neon Serverless Driver
Neon provides built-in connection pooling with a serverless-compatible driver.
pnpm add @neondatabase/serverless drizzle-orm
// db/index.ts
import { neon } from "@neondatabase/serverless";
import { drizzle } from "drizzle-orm/neon-http";
import * as schema from "./schema";
const sql = neon(process.env.DATABASE_URL!);
export const db = drizzle(sql, { schema });
For WebSocket connections (better for transactions):
// db/index.ts
import { Pool } from "@neondatabase/serverless";
import { drizzle } from "drizzle-orm/neon-serverless";
import * as schema from "./schema";
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
export const db = drizzle(pool, { schema });
Environment Setup
Neon provides two connection strings:
# Direct connection (for migrations)
DATABASE_URL=postgresql://user:pass@ep-xxx.us-east-1.aws.neon.tech/neondb
# Pooled connection (for app queries)
DATABASE_URL_POOLED=postgresql://user:pass@ep-xxx-pooler.us-east-1.aws.neon.tech/neondb
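A small helper can make the migrate-vs-query split explicit. This is a sketch using the two variable names above; `connectionStringFor` is a hypothetical name, not part of any library:

```typescript
// Pick the right Neon connection string for the task at hand.
// Migrations need the direct URL; app queries should use the pooler.
function connectionStringFor(
  task: "migrate" | "query",
  env: Record<string, string | undefined>
): string {
  const url =
    task === "migrate"
      ? env.DATABASE_URL
      : env.DATABASE_URL_POOLED ?? env.DATABASE_URL;
  if (!url) throw new Error(`No connection string configured for ${task}`);
  return url;
}
```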
Solution 2: Supabase with Connection Pooling
Supabase ships a built-in connection pooler (Supavisor, which replaced its earlier PgBouncer-based setup).
// db/index.ts
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";
import * as schema from "./schema";
// Use the pooled connection string from Supabase
const connectionString = process.env.DATABASE_URL!;
// Configure connection limits
const client = postgres(connectionString, {
  max: 1, // Single connection per serverless instance
  idle_timeout: 20, // Seconds before an idle connection is closed
  connect_timeout: 10, // Seconds to wait when establishing a connection
});
export const db = drizzle(client, { schema });
Solution 3: Self-Hosted PgBouncer
For custom PostgreSQL setups, add PgBouncer as a connection pooler.
Docker Compose
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
    volumes:
      - pgdata:/var/lib/postgresql/data
  pgbouncer:
    image: edoburu/pgbouncer
    environment:
      DATABASE_URL: postgresql://myuser:mypassword@postgres:5432/myapp
      POOL_MODE: transaction
      MAX_CLIENT_CONN: 1000
      DEFAULT_POOL_SIZE: 20
      MIN_POOL_SIZE: 5
      AUTH_TYPE: scram-sha-256 # Postgres 16 stores passwords as SCRAM by default
    ports:
      - "6432:5432"
    depends_on:
      - postgres
volumes:
  pgdata:
Connect Through PgBouncer
# Point your app at PgBouncer instead of PostgreSQL directly
DATABASE_URL=postgresql://myuser:mypassword@localhost:6432/myapp
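A quick sanity check in code can catch the common mistake of pointing the app at PostgreSQL directly. The helper below is our own sketch (`usesPgBouncer` is not a library function) and assumes the port mapping from the compose file above:

```typescript
// Verify that a connection string targets PgBouncer (host port 6432
// in the compose file) rather than PostgreSQL's default 5432.
function usesPgBouncer(connectionString: string, pgbouncerPort = 6432): boolean {
  const { port } = new URL(connectionString);
  return Number(port) === pgbouncerPort;
}

usesPgBouncer("postgresql://myuser:mypassword@localhost:6432/myapp"); // → true
```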
Solution 4: Singleton Pattern for Node.js Runtime
For the Node.js runtime (not edge), reuse a single pool.
// db/index.ts
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";
import * as schema from "./schema";
function createPool() {
  return new Pool({
    connectionString: process.env.DATABASE_URL,
    max: 10, // Max connections in pool
    idleTimeoutMillis: 30000,
    connectionTimeoutMillis: 5000,
  });
}
// Global singleton so development hot reloading reuses the pool
const globalForDb = globalThis as unknown as {
  pool: Pool | undefined;
};
// Exported so other modules (e.g. health checks) can inspect the pool
export const pool = globalForDb.pool ?? createPool();
if (process.env.NODE_ENV !== "production") {
  globalForDb.pool = pool;
}
export const db = drizzle(pool, { schema });
// Graceful shutdown
process.on("SIGTERM", () => {
  pool.end().catch(console.error);
});
Solution 5: Separate Pools for Different Workloads
// db/pools.ts
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import * as schema from "./schema";
// Fast pool for read queries
const readPool = new Pool({
  connectionString: process.env.DATABASE_READ_URL ?? process.env.DATABASE_URL,
  max: 15,
  idleTimeoutMillis: 20000,
});
// Write pool with fewer connections
const writePool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 5,
  idleTimeoutMillis: 30000,
});
export const readDb = drizzle(readPool, { schema });
export const writeDb = drizzle(writePool, { schema });
Usage:
// Read operations
const users = await readDb.select().from(schema.users);
// Write operations
await writeDb.insert(schema.users).values({ name: "Alice", email: "alice@example.com" });
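To keep call sites from hard-coding which pool they use, a thin routing helper can select the handle by intent. `dbFor` is a hypothetical helper, sketched here with generic placeholders standing in for the real drizzle instances:

```typescript
type Intent = "read" | "write";

// Route by intent: reads may hit a replica, writes must hit the primary.
function dbFor<T>(intent: Intent, readDb: T, writeDb: T): T {
  return intent === "read" ? readDb : writeDb;
}
```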
Monitoring Connection Usage
// lib/db-health.ts
import { Pool } from "pg";
export function getPoolStats(pool: Pool) {
  return {
    totalCount: pool.totalCount, // All connections in the pool
    idleCount: pool.idleCount, // Connections not currently checked out
    waitingCount: pool.waitingCount, // Requests queued for a connection
  };
}
// API route for health checks
// app/api/health/route.ts
import { NextResponse } from "next/server";
import { pool } from "@/db";
import { getPoolStats } from "@/lib/db-health";
export async function GET() {
  try {
    // Quick connection test
    await pool.query("SELECT 1");
    const stats = getPoolStats(pool);
    return NextResponse.json({
      status: "healthy",
      database: {
        connected: true,
        ...stats,
      },
    });
  } catch (error) {
    return NextResponse.json(
      {
        status: "unhealthy",
        database: {
          connected: false,
          error: error instanceof Error ? error.message : "Unknown error",
        },
      },
      { status: 503 }
    );
  }
}
Best Practices
| Setting | Serverless | Long-Running |
|---|---|---|
| max connections | 1-3 | 10-20 |
| idleTimeoutMillis | 10000 | 30000 |
| Pool mode | Transaction | Session |
| Connection string | Pooled URL | Direct URL |
- Always use pooled connections in serverless environments
- Use direct connections only for migrations and schema changes
- Set max: 1 per serverless function to prevent orphaned connections
- Monitor pool statistics to size your pool correctly
- Use separate read/write pools when using read replicas
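For the pool-sizing point above, one widely cited starting formula comes from the HikariCP "About Pool Sizing" write-up: connections = (cores × 2) + effective spindles. Treat it as a baseline to tune against your monitoring, not a rule:

```typescript
// Starting-point pool size per the HikariCP pool-sizing heuristic.
// An SSD counts as roughly one effective spindle.
function suggestedPoolSize(cpuCores: number, effectiveSpindles: number): number {
  return cpuCores * 2 + effectiveSpindles;
}

suggestedPoolSize(4, 1); // → 9 connections for a 4-core server with one SSD
```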
Need Production Database Architecture?
We design and implement database architectures with proper pooling, replication, and performance optimization. Contact us to discuss your infrastructure.