Understanding the DAO pattern conceptually is valuable, but implementation is where theory meets the messy reality of databases. How do you manage database connections efficiently? How do you build dynamic queries safely? How do you map result sets to domain objects cleanly? How do you handle the myriad failure modes that databases throw at you?
These questions have non-obvious answers. A poorly implemented DAO can leak connections, invite SQL injection attacks, create mapping bugs, or produce cryptic error messages. A well-implemented DAO becomes invisible—it just works, reliably, at scale.
This page provides comprehensive implementation guidance for building production-quality DAOs. You'll learn connection management patterns, safe query building techniques, result mapping strategies, transaction handling approaches, and error management best practices.
Database connections are expensive resources. Opening a connection involves network handshakes, authentication, protocol negotiation, and memory allocation on both client and server. A naive approach—creating a new connection for each operation—cripples performance and exhausts database limits.
Connection Pooling: The Essential Pattern
Connection pooling maintains a cache of database connections that are reused across operations. Instead of creating connections on demand, DAOs borrow connections from the pool, use them, and return them promptly.
| Strategy | Latency | Resource Usage | Scalability | Recommendation |
|---|---|---|---|---|
| New Connection Per Query | High (50-200ms) | Wasteful | Poor | Never in production |
| Single Shared Connection | Low | Minimal | Blocking | Single-threaded only |
| Connection Pool | Low (reuse) | Optimal | Excellent | Production standard |
```typescript
import { Pool, PoolConfig } from 'pg';

// ─────────────────────────────────────────────────────────
// CONNECTION POOL CONFIGURATION
// ─────────────────────────────────────────────────────────

interface DatabaseConfig {
  host: string;
  port: number;
  database: string;
  user: string;
  password: string;

  // Pool-specific configuration
  minConnections: number;      // Minimum idle connections to maintain
  maxConnections: number;      // Maximum connections allowed
  idleTimeoutMs: number;       // Close idle connections after this time
  connectionTimeoutMs: number; // Timeout for acquiring a connection
}

function createConnectionPool(config: DatabaseConfig): Pool {
  const poolConfig: PoolConfig = {
    host: config.host,
    port: config.port,
    database: config.database,
    user: config.user,
    password: config.password,

    // Pool sizing
    min: config.minConnections, // Keep at least this many connections
    max: config.maxConnections, // Never exceed this many

    // Timeout settings
    idleTimeoutMillis: config.idleTimeoutMs,             // Close idle connections
    connectionTimeoutMillis: config.connectionTimeoutMs, // Fail if can't get connection

    // Statement timeout (prevent runaway queries)
    statement_timeout: 30000, // 30 second max query time
  };

  const pool = new Pool(poolConfig);

  // Connection lifecycle hooks for monitoring
  pool.on('connect', (client) => {
    console.log('[Pool] New connection established');
  });

  pool.on('acquire', (client) => {
    console.log('[Pool] Connection acquired from pool');
  });

  pool.on('release', (client) => {
    console.log('[Pool] Connection returned to pool');
  });

  pool.on('error', (err, client) => {
    console.error('[Pool] Unexpected error on idle client', err);
    // The pool will handle removing this client
  });

  return pool;
}

// ─────────────────────────────────────────────────────────
// POOL SIZING GUIDELINES
// ─────────────────────────────────────────────────────────

/**
 * Recommended pool size formula:
 *
 *   connections = (cpu_cores * 2) + spindle_count
 *
 * For SSDs: connections ≈ cpu_cores * 2
 *
 * But also consider:
 * - Number of application instances
 * - Database max_connections setting
 * - Query latency profile
 *
 * Example: 4-core app server with SSD database
 * - Single instance: min=2, max=8
 * - 4 instances: min=1, max=2 (per instance)
 */

const productionConfig: DatabaseConfig = {
  host: process.env.DB_HOST!,
  port: parseInt(process.env.DB_PORT!),
  database: process.env.DB_NAME!,
  user: process.env.DB_USER!,
  password: process.env.DB_PASSWORD!,
  minConnections: 2,         // Keep connections warm
  maxConnections: 10,        // Limit resource usage
  idleTimeoutMs: 30000,      // 30 seconds before closing idle
  connectionTimeoutMs: 3000, // 3 seconds to acquire or fail
};
```

DAO Connection Usage Pattern:
DAOs should never hold connections. They acquire a connection, execute the operation, and release immediately. The Pool.query() method in most libraries handles this automatically.
```typescript
export class PostgresUserDAO implements UserDAO {
  constructor(private readonly pool: Pool) {}

  // ✅ CORRECT: Using pool.query() - automatic acquire/release
  async findById(id: string): Promise<User | null> {
    const result = await this.pool.query(
      'SELECT * FROM users WHERE id = $1',
      [id]
    );
    return result.rows[0] ? this.mapToUser(result.rows[0]) : null;
  }

  // ❌ WRONG: Manual connection management without proper release
  async findByIdBad(id: string): Promise<User | null> {
    const client = await this.pool.connect();
    // If an error occurs below, the connection LEAKS!
    const result = await client.query(
      'SELECT * FROM users WHERE id = $1',
      [id]
    );
    client.release(); // Won't execute on error
    return result.rows[0] ? this.mapToUser(result.rows[0]) : null;
  }

  // ✅ CORRECT: Manual connection with proper error handling
  async findByIdWithManualConnection(id: string): Promise<User | null> {
    const client = await this.pool.connect();
    try {
      const result = await client.query(
        'SELECT * FROM users WHERE id = $1',
        [id]
      );
      return result.rows[0] ? this.mapToUser(result.rows[0]) : null;
    } finally {
      client.release(); // Always release, even on error
    }
  }
}
```

Connection leaks are among the most insidious bugs. The application works initially but degrades over time as connections are exhausted. Always use try/finally or automatic release patterns. Monitor pool metrics (acquires, releases, waiting) in production.
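One way to act on that monitoring advice: node-postgres exposes `totalCount`, `idleCount`, and `waitingCount` counters on the pool, which a periodic check can turn into a simple health signal. The thresholds and status names below are illustrative assumptions, and the sketch works against any pool-like object with those three counters:

```typescript
// Minimal structural view of the counters a connection pool exposes
// (node-postgres provides these as getters on Pool).
interface PoolCounters {
  totalCount: number;   // connections currently open (idle + checked out)
  idleCount: number;    // connections sitting idle in the pool
  waitingCount: number; // callers queued waiting for a connection
}

type PoolHealth = 'ok' | 'busy' | 'saturated';

// Illustrative heuristic: queued waiters mean the pool is saturated;
// zero idle connections means it is busy but still keeping up.
function assessPoolHealth(pool: PoolCounters): PoolHealth {
  if (pool.waitingCount > 0) return 'saturated';
  if (pool.idleCount === 0 && pool.totalCount > 0) return 'busy';
  return 'ok';
}

// Example: sample the real pool on an interval and log when unhealthy.
// setInterval(() => {
//   const health = assessPoolHealth(pool);
//   if (health !== 'ok') {
//     console.warn(`[Pool] ${health}: total=${pool.totalCount} ` +
//       `idle=${pool.idleCount} waiting=${pool.waitingCount}`);
//   }
// }, 10_000);
```

A sustained `saturated` reading usually means either the pool is undersized or a caller is leaking connections.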
SQL injection remains one of the most dangerous and prevalent vulnerabilities. DAOs are the last line of defense—they must construct queries safely, regardless of what input they receive.
The Cardinal Rule: Never Concatenate User Input Into SQL
```typescript
// ═══════════════════════════════════════════════════════════
// SQL INJECTION VULNERABILITY EXAMPLES
// ═══════════════════════════════════════════════════════════

// ❌ CATASTROPHICALLY WRONG: String concatenation
async function findByEmailBAD(email: string) {
  const sql = `SELECT * FROM users WHERE email = '${email}'`;
  // If email = "'; DROP TABLE users; --"
  // SQL becomes: SELECT * FROM users WHERE email = ''; DROP TABLE users; --'
  return pool.query(sql);
}

// ❌ ALSO WRONG: Even with sanitization, this is unsafe
async function findByEmailAlsoBAD(email: string) {
  const sanitized = email.replace(/'/g, "''"); // Incomplete!
  const sql = `SELECT * FROM users WHERE email = '${sanitized}'`;
  return pool.query(sql);
}

// ═══════════════════════════════════════════════════════════
// CORRECT: PARAMETERIZED QUERIES
// ═══════════════════════════════════════════════════════════

// ✅ CORRECT: Parameterized query (PostgreSQL style)
async function findByEmailSafe(email: string) {
  return pool.query(
    'SELECT * FROM users WHERE email = $1', // $1 is a placeholder
    [email]                                 // Values passed separately
  );
}

// ✅ CORRECT: Multiple parameters
async function findByStatusAndTier(status: string, tier: string) {
  return pool.query(
    'SELECT * FROM users WHERE status = $1 AND tier = $2',
    [status, tier]
  );
}

// ✅ CORRECT: MySQL style (? placeholders)
async function findByEmailMySQL(email: string) {
  return connection.query(
    'SELECT * FROM users WHERE email = ?',
    [email]
  );
}
```

Building Dynamic Queries Safely:
Real applications often need dynamic queries—WHERE clauses that vary based on search criteria. Building these safely requires careful construction:
```typescript
interface UserSearchCriteria {
  email?: string;
  status?: UserStatus;
  tier?: SubscriptionTier;
  createdAfter?: Date;
  createdBefore?: Date;
  nameContains?: string;
}

export class PostgresUserDAO implements UserDAO {
  /**
   * Builds a dynamic WHERE clause from search criteria.
   * All conditions use parameterized queries for safety.
   */
  async findByCriteria(criteria: UserSearchCriteria): Promise<User[]> {
    const conditions: string[] = [];
    const values: any[] = [];
    let paramIndex = 1;

    // Each condition is added with a parameterized placeholder
    if (criteria.email) {
      conditions.push(`email ILIKE $${paramIndex++}`);
      values.push(`%${criteria.email}%`);
    }

    if (criteria.status) {
      conditions.push(`status = $${paramIndex++}`);
      values.push(criteria.status);
    }

    if (criteria.tier) {
      conditions.push(`tier = $${paramIndex++}`);
      values.push(criteria.tier);
    }

    if (criteria.createdAfter) {
      conditions.push(`created_at >= $${paramIndex++}`);
      values.push(criteria.createdAfter);
    }

    if (criteria.createdBefore) {
      conditions.push(`created_at <= $${paramIndex++}`);
      values.push(criteria.createdBefore);
    }

    if (criteria.nameContains) {
      conditions.push(`name ILIKE $${paramIndex++}`);
      values.push(`%${criteria.nameContains}%`);
    }

    // Construct the final query
    const whereClause = conditions.length > 0
      ? `WHERE ${conditions.join(' AND ')}`
      : '';

    const sql = `
      SELECT id, email, name, status, tier, created_at, updated_at
      FROM users
      ${whereClause}
      ORDER BY created_at DESC
    `;

    const result = await this.pool.query(sql, values);
    return result.rows.map(row => this.mapToUser(row));
  }
}

// ─────────────────────────────────────────────────────────
// QUERY BUILDER PATTERN FOR COMPLEX QUERIES
// ─────────────────────────────────────────────────────────

class QueryBuilder {
  private _select: string[] = [];
  private _from: string = '';
  private _joins: string[] = [];
  private _where: string[] = [];
  private _orderBy: string[] = [];
  private _limit: number | null = null;
  private _offset: number | null = null;
  private _params: any[] = [];
  private _paramIndex = 1;

  select(...columns: string[]): this {
    this._select.push(...columns);
    return this;
  }

  from(table: string): this {
    this._from = table;
    return this;
  }

  join(join: string): this {
    this._joins.push(join);
    return this;
  }

  where(condition: string, value: any): this {
    this._where.push(condition.replace('?', `$${this._paramIndex++}`));
    this._params.push(value);
    return this;
  }

  whereIf(condition: boolean, clause: string, value: any): this {
    if (condition) {
      this.where(clause, value);
    }
    return this;
  }

  orderBy(column: string, direction: 'ASC' | 'DESC' = 'ASC'): this {
    this._orderBy.push(`${column} ${direction}`);
    return this;
  }

  limit(limit: number): this {
    this._limit = limit;
    return this;
  }

  offset(offset: number): this {
    this._offset = offset;
    return this;
  }

  build(): { sql: string; params: any[] } {
    const parts = [
      `SELECT ${this._select.length ? this._select.join(', ') : '*'}`,
      `FROM ${this._from}`,
      ...this._joins.map(j => `JOIN ${j}`),
      this._where.length ? `WHERE ${this._where.join(' AND ')}` : '',
      this._orderBy.length ? `ORDER BY ${this._orderBy.join(', ')}` : '',
      this._limit !== null ? `LIMIT ${this._limit}` : '',
      this._offset !== null ? `OFFSET ${this._offset}` : '',
    ];
    return { sql: parts.filter(Boolean).join(' '), params: this._params };
  }
}

// Usage
async function searchUsers(criteria: UserSearchCriteria): Promise<User[]> {
  const query = new QueryBuilder()
    .select('id', 'email', 'name', 'status', 'tier')
    .from('users')
    .whereIf(!!criteria.email, 'email ILIKE ?', `%${criteria.email}%`)
    .whereIf(!!criteria.status, 'status = ?', criteria.status)
    .whereIf(!!criteria.tier, 'tier = ?', criteria.tier)
    .orderBy('created_at', 'DESC')
    .limit(100)
    .build();

  const result = await pool.query(query.sql, query.params);
  return result.rows.map(mapToUser);
}
```

Parameterized queries protect VALUES, not identifiers (column/table names). If you must dynamically specify column names (e.g., for ORDER BY), use an allowlist. Never interpolate user input as column names.
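A minimal sketch of that allowlist approach. The sortable columns and the `created_at` default are assumptions for illustration; the point is that user input only ever selects a key in a fixed map, and only known identifiers reach the SQL text:

```typescript
// Allowlisted sort columns: user input selects a key, never raw SQL.
const SORTABLE_COLUMNS: Record<string, string> = {
  created: 'created_at',
  name: 'name',
  email: 'email',
};

// Resolve a user-supplied sort key to a known identifier, with a safe default.
function resolveSortColumn(requested: string | undefined): string {
  return SORTABLE_COLUMNS[requested ?? ''] ?? 'created_at';
}

function resolveSortDirection(requested: string | undefined): 'ASC' | 'DESC' {
  return requested?.toUpperCase() === 'DESC' ? 'DESC' : 'ASC';
}

// Only allowlisted identifiers are ever interpolated into the clause.
function buildOrderByClause(sort?: string, dir?: string): string {
  return `ORDER BY ${resolveSortColumn(sort)} ${resolveSortDirection(dir)}`;
}
```

Anything not in the map, including injection attempts, silently falls back to the default column, so there is no path for user text into the identifier position.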
Database result sets contain raw rows—objects with string keys and loosely typed values. Mapping these to typed domain objects is a core DAO responsibility. This mapping logic should be explicit, centralized, and bulletproof.
```typescript
// ─────────────────────────────────────────────────────────
// BASIC MAPPING: Row to Domain Object
// ─────────────────────────────────────────────────────────

interface UserRow {
  id: string;
  email: string;
  name: string | null;
  status: string;
  tier: string;
  created_at: Date;
  updated_at: Date;
  deleted_at: Date | null;
}

class UserMapper {
  /**
   * Maps a database row to a User domain object.
   * Handles type conversions and nullable fields.
   */
  toDomain(row: UserRow): User {
    return new User({
      id: new UserId(row.id),
      email: new Email(row.email),
      name: row.name ?? 'Unknown',
      status: this.mapStatus(row.status),
      tier: this.mapTier(row.tier),
      createdAt: row.created_at,
      updatedAt: row.updated_at,
      isDeleted: row.deleted_at !== null,
    });
  }

  /**
   * Maps a User domain object to a database row.
   * Used for INSERT and UPDATE operations.
   */
  toRow(user: User): UserRow {
    return {
      id: user.id.value,
      email: user.email.value,
      name: user.name,
      status: user.status,
      tier: user.tier,
      created_at: user.createdAt,
      updated_at: new Date(),
      deleted_at: user.isDeleted ? new Date() : null,
    };
  }

  /**
   * Maps status string from database to enum.
   * Provides defensive handling of unexpected values.
   */
  private mapStatus(status: string): UserStatus {
    const mapping: Record<string, UserStatus> = {
      'ACTIVE': UserStatus.ACTIVE,
      'INACTIVE': UserStatus.INACTIVE,
      'SUSPENDED': UserStatus.SUSPENDED,
      'PENDING_VERIFICATION': UserStatus.PENDING_VERIFICATION,
    };

    const mapped = mapping[status];
    if (mapped === undefined) {
      console.warn(`Unknown user status: ${status}, defaulting to INACTIVE`);
      return UserStatus.INACTIVE;
    }
    return mapped;
  }

  private mapTier(tier: string): SubscriptionTier {
    const mapping: Record<string, SubscriptionTier> = {
      'FREE': SubscriptionTier.FREE,
      'BASIC': SubscriptionTier.BASIC,
      'PREMIUM': SubscriptionTier.PREMIUM,
      'ENTERPRISE': SubscriptionTier.ENTERPRISE,
    };
    return mapping[tier] ?? SubscriptionTier.FREE;
  }
}

// ─────────────────────────────────────────────────────────
// HANDLING COMPLEX MAPPINGS: Joins and Nested Objects
// ─────────────────────────────────────────────────────────

interface OrderWithDetailsRow {
  // Order columns
  order_id: string;
  order_status: string;
  order_total: string; // DECIMAL comes as string
  order_created_at: Date;

  // Customer columns (from JOIN)
  customer_id: string;
  customer_name: string;
  customer_email: string;

  // Line item columns (from JOIN, may have duplicates)
  item_id: string | null;
  item_product_id: string | null;
  item_quantity: number | null;
  item_unit_price: string | null;
}

class OrderMapper {
  /**
   * Maps joined result rows to Order aggregates.
   * Handles one-to-many relationships (order -> line items).
   */
  toDomainList(rows: OrderWithDetailsRow[]): Order[] {
    if (rows.length === 0) return [];

    // Group rows by order_id
    const orderGroups = new Map<string, OrderWithDetailsRow[]>();
    for (const row of rows) {
      const existing = orderGroups.get(row.order_id) || [];
      existing.push(row);
      orderGroups.set(row.order_id, existing);
    }

    // Map each group to an Order
    const orders: Order[] = [];
    for (const [orderId, orderRows] of orderGroups) {
      orders.push(this.mapOrderGroup(orderRows));
    }
    return orders;
  }

  private mapOrderGroup(rows: OrderWithDetailsRow[]): Order {
    const firstRow = rows[0];

    // Map customer (same for all rows in group)
    const customer = new Customer({
      id: new CustomerId(firstRow.customer_id),
      name: firstRow.customer_name,
      email: new Email(firstRow.customer_email),
    });

    // Map line items (different for each row)
    const lineItems = rows
      .filter(row => row.item_id !== null) // Handle orders with no items
      .map(row => new OrderLineItem({
        id: new OrderLineItemId(row.item_id!),
        productId: new ProductId(row.item_product_id!),
        quantity: row.item_quantity!,
        unitPrice: Money.fromDecimalString(row.item_unit_price!),
      }));

    // Construct the order
    return new Order({
      id: new OrderId(firstRow.order_id),
      customer,
      status: this.mapOrderStatus(firstRow.order_status),
      totalAmount: Money.fromDecimalString(firstRow.order_total),
      lineItems,
      createdAt: firstRow.order_created_at,
    });
  }

  private mapOrderStatus(status: string): OrderStatus {
    // ... similar to UserMapper
    return OrderStatus[status as keyof typeof OrderStatus] ?? OrderStatus.PENDING;
  }
}
```

Mapping logic is highly testable—it's pure transformation from one data shape to another. Write comprehensive unit tests covering null handling, edge cases, and unknown enum values. This catches bugs before they reach production.
Database transactions ensure that multiple operations either all succeed or all fail together—the 'A' (Atomicity) in ACID. DAOs participate in transactions but typically don't control them; transaction boundaries are managed by calling services.
Transaction Patterns in DAO Architecture:
```typescript
import { Pool, PoolClient } from 'pg';

// ─────────────────────────────────────────────────────────
// TRANSACTION-AWARE DAO DESIGN
// ─────────────────────────────────────────────────────────

// DAOs can operate with or without an external transaction
interface TransactionContext {
  client: PoolClient;
}

export class PostgresUserDAO implements UserDAO {
  constructor(private readonly pool: Pool) {}

  /**
   * Find user - works independently or within transaction
   */
  async findById(
    id: string,
    txContext?: TransactionContext
  ): Promise<User | null> {
    const client = txContext?.client ?? this.pool;
    const result = await client.query(
      'SELECT * FROM users WHERE id = $1',
      [id]
    );
    return result.rows[0] ? this.mapToUser(result.rows[0]) : null;
  }

  /**
   * Save user - works independently or within transaction
   */
  async save(
    user: User,
    txContext?: TransactionContext
  ): Promise<User> {
    const client = txContext?.client ?? this.pool;
    const result = await client.query(
      `INSERT INTO users (id, email, name, status, tier, created_at, updated_at)
       VALUES ($1, $2, $3, $4, $5, $6, $7)
       ON CONFLICT (id) DO UPDATE SET
         email = EXCLUDED.email,
         name = EXCLUDED.name,
         status = EXCLUDED.status,
         tier = EXCLUDED.tier,
         updated_at = NOW()
       RETURNING *`,
      [user.id, user.email, user.name, user.status, user.tier,
       user.createdAt, new Date()]
    );
    return this.mapToUser(result.rows[0]);
  }
}

// ─────────────────────────────────────────────────────────
// TRANSACTION MANAGER
// ─────────────────────────────────────────────────────────

export class TransactionManager {
  constructor(private readonly pool: Pool) {}

  /**
   * Executes a function within a transaction.
   * Commits on success, rolls back on error.
   */
  async runInTransaction<T>(
    fn: (txContext: TransactionContext) => Promise<T>
  ): Promise<T> {
    const client = await this.pool.connect();
    try {
      await client.query('BEGIN');
      const result = await fn({ client });
      await client.query('COMMIT');
      return result;
    } catch (error) {
      await client.query('ROLLBACK');
      throw error;
    } finally {
      client.release();
    }
  }

  /**
   * Executes a function within a nested savepoint.
   * Allows partial rollback within a larger transaction.
   */
  async runInSavepoint<T>(
    txContext: TransactionContext,
    savepointName: string,
    fn: () => Promise<T>
  ): Promise<T> {
    try {
      await txContext.client.query(`SAVEPOINT ${savepointName}`);
      const result = await fn();
      await txContext.client.query(`RELEASE SAVEPOINT ${savepointName}`);
      return result;
    } catch (error) {
      await txContext.client.query(`ROLLBACK TO SAVEPOINT ${savepointName}`);
      throw error;
    }
  }
}

// ─────────────────────────────────────────────────────────
// SERVICE LAYER: TRANSACTION ORCHESTRATION
// ─────────────────────────────────────────────────────────

export class OrderService {
  constructor(
    private readonly transactionManager: TransactionManager,
    private readonly orderDAO: OrderDAO,
    private readonly inventoryDAO: InventoryDAO,
    private readonly paymentDAO: PaymentDAO
  ) {}

  async createOrder(dto: CreateOrderDTO): Promise<Order> {
    // Transaction spans the entire use case
    return this.transactionManager.runInTransaction(async (txContext) => {
      // All DAO operations use the same transaction

      // 1. Reserve inventory
      for (const item of dto.items) {
        await this.inventoryDAO.decrementStock(
          item.productId,
          item.quantity,
          txContext
        );
      }

      // 2. Create order
      const order = Order.create(dto);
      await this.orderDAO.save(order, txContext);

      // 3. Create payment record
      const payment = Payment.pending(order.id, order.totalAmount);
      await this.paymentDAO.save(payment, txContext);

      // If any step fails, everything rolls back
      return order;
    });
  }
}
```

Keep transactions as short as possible. Long-running transactions hold locks, reducing concurrency and risking deadlocks. Never wait for external resources (HTTP calls, user input) inside a transaction.
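One way to honor that rule is to resolve all external inputs before the transaction opens. The sketch below assumes a transaction runner shaped like the `TransactionManager` above plus a hypothetical exchange-rate lookup; the invoice shape and function names are illustrative, and the point is only the ordering of the slow call relative to BEGIN:

```typescript
// Minimal shape matching TransactionManager.runInTransaction, redeclared
// here so the sketch is self-contained.
interface TxRunner {
  runInTransaction<T>(fn: (txContext: unknown) => Promise<T>): Promise<T>;
}

// Hypothetical slow external call (e.g. an HTTP request to a rates API).
type RateLookup = (currency: string) => Promise<number>;

async function settleInvoice(
  tx: TxRunner,
  fetchRate: RateLookup,
  invoice: { id: string; amount: number; currency: string }
): Promise<number> {
  // 1. External I/O happens BEFORE the transaction - no locks held yet.
  const rate = await fetchRate(invoice.currency);

  // 2. The transaction then contains only fast, local database work.
  return tx.runInTransaction(async (_txContext) => {
    const settledAmount = invoice.amount * rate;
    // ...UPDATE invoices / INSERT ledger rows would go here...
    return settledAmount;
  });
}
```

If the rate lookup fails, no transaction was ever started; if the database work fails, the rollback does not strand an in-flight HTTP call.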
Database operations fail in predictable ways. The DAO layer should catch these database-specific errors and translate them into domain-meaningful exceptions that the business layer can understand.
Common Database Error Categories:
| Error Type | Database Example | Domain Translation | Handling Strategy |
|---|---|---|---|
| Constraint Violation | unique_violation (23505) | DuplicateEntityException | Report to caller for domain handling |
| Foreign Key Violation | foreign_key_violation (23503) | ReferentialIntegrityException | Report to caller; indicates data issue |
| Connection Error | Connection refused | PersistenceUnavailableException | Retry with backoff; alert ops |
| Timeout | Query cancelled | QueryTimeoutException | Log, possibly retry simpler query |
| Deadlock | deadlock_detected (40P01) | ConcurrencyConflictException | Automatic retry |
| Data Truncation | string_data_right_truncation (22001) | DataValidationException | Fix input data |
```typescript
// ─────────────────────────────────────────────────────────
// DOMAIN EXCEPTIONS (Defined in domain/errors)
// ─────────────────────────────────────────────────────────

export class PersistenceException extends Error {
  constructor(message: string, public readonly cause?: Error) {
    super(message);
    this.name = 'PersistenceException';
  }
}

export class EntityNotFoundException extends PersistenceException {
  constructor(
    public readonly entityType: string,
    public readonly id: string
  ) {
    super(`${entityType} with id '${id}' not found`);
    this.name = 'EntityNotFoundException';
  }
}

export class DuplicateEntityException extends PersistenceException {
  constructor(
    public readonly entityType: string,
    public readonly field: string,
    public readonly value: string
  ) {
    super(`${entityType} with ${field} '${value}' already exists`);
    this.name = 'DuplicateEntityException';
  }
}

export class ConcurrencyConflictException extends PersistenceException {
  constructor(
    public readonly entityType: string,
    public readonly id: string
  ) {
    super(`Concurrent modification conflict for ${entityType} '${id}'`);
    this.name = 'ConcurrencyConflictException';
  }
}

export class PersistenceUnavailableException extends PersistenceException {
  constructor(message: string, cause?: Error) {
    super(message, cause);
    this.name = 'PersistenceUnavailableException';
  }
}

// ─────────────────────────────────────────────────────────
// ERROR TRANSLATOR (Infrastructure layer)
// ─────────────────────────────────────────────────────────

interface PostgresError extends Error {
  code?: string;
  constraint?: string;
  detail?: string;
  table?: string;
  column?: string;
}

// Context passed alongside the raw error so translations can name
// the entity and operation involved.
interface ErrorContext {
  entityType: string;
  entityId?: string;
  operation?: string;
}

export class PostgresErrorTranslator {
  /**
   * Translates PostgreSQL errors into domain exceptions.
   */
  translate(error: PostgresError, context: ErrorContext): PersistenceException {
    const code = error.code;

    switch (code) {
      // Unique violation
      case '23505':
        return this.handleUniqueViolation(error, context);

      // Foreign key violation
      case '23503':
        return new PersistenceException(
          `Referenced ${error.table || 'entity'} does not exist`,
          error
        );

      // Not null violation
      case '23502':
        return new PersistenceException(
          `Required field '${error.column}' is missing`,
          error
        );

      // Deadlock detected
      case '40P01':
        return new ConcurrencyConflictException(
          context.entityType,
          context.entityId || 'unknown'
        );

      // Connection errors
      case 'ECONNREFUSED':
      case 'ENOTFOUND':
      case 'ETIMEDOUT':
        return new PersistenceUnavailableException(
          'Database is temporarily unavailable',
          error
        );

      default:
        // Log unknown errors for investigation
        console.error('Unhandled database error:', error);
        return new PersistenceException(
          'An unexpected database error occurred',
          error
        );
    }
  }

  private handleUniqueViolation(
    error: PostgresError,
    context: ErrorContext
  ): DuplicateEntityException {
    // Extract field name from constraint or detail
    const field = this.extractFieldFromConstraint(
      error.constraint,
      error.detail
    );
    return new DuplicateEntityException(
      context.entityType,
      field,
      this.extractValueFromDetail(error.detail) || 'unknown'
    );
  }

  private extractFieldFromConstraint(
    constraint?: string,
    detail?: string
  ): string {
    // Example constraint: "users_email_key"
    if (constraint && constraint.includes('_')) {
      const parts = constraint.split('_');
      // Remove table name and key suffix
      return parts.slice(1, -1).join('_');
    }

    // Example detail: "Key (email)=(test@example.com) already exists."
    if (detail) {
      const match = detail.match(/Key \((\w+)\)/);
      if (match) return match[1];
    }

    return 'unknown field';
  }

  private extractValueFromDetail(detail?: string): string | null {
    if (!detail) return null;
    const match = detail.match(/=\(([^)]+)\)/);
    return match ? match[1] : null;
  }
}

// ─────────────────────────────────────────────────────────
// DAO WITH ERROR TRANSLATION
// ─────────────────────────────────────────────────────────

export class PostgresUserDAO implements UserDAO {
  constructor(
    private readonly pool: Pool,
    private readonly errorTranslator: PostgresErrorTranslator
  ) {}

  async save(user: User): Promise<User> {
    try {
      const result = await this.pool.query(
        `INSERT INTO users (id, email, name, status, tier, created_at)
         VALUES ($1, $2, $3, $4, $5, $6)
         RETURNING *`,
        [user.id, user.email, user.name, user.status, user.tier, user.createdAt]
      );
      return this.mapToUser(result.rows[0]);
    } catch (error) {
      throw this.errorTranslator.translate(error as PostgresError, {
        entityType: 'User',
        entityId: user.id,
        operation: 'save'
      });
    }
  }

  async findByIdOrThrow(id: string): Promise<User> {
    const user = await this.findById(id);
    if (!user) {
      throw new EntityNotFoundException('User', id);
    }
    return user;
  }
}
```

Always preserve the original database error as the 'cause' of your domain exception. This enables effective debugging while presenting clean error messages to callers. The original error should be logged at the DAO layer.
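The handling-strategy column in the error table above calls for retries on deadlocks and connection failures, but no retry logic appears in the DAO itself. A generic wrapper is one place to put it; the attempt count, base delay, and the set of codes treated as transient below are illustrative assumptions:

```typescript
// Decide whether an error is worth retrying. Deadlocks (40P01) and
// connection-level failures are transient; constraint violations are not.
function isTransient(error: unknown): boolean {
  const code = (error as { code?: string }).code;
  return code === '40P01' || code === 'ECONNREFUSED' || code === 'ETIMEDOUT';
}

const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

// Retry an operation with exponential backoff on transient errors only.
async function withRetry<T>(
  operation: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 50
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      // Non-transient errors and exhausted attempts propagate immediately.
      if (!isTransient(error) || attempt === maxAttempts) throw error;
      await sleep(baseDelayMs * 2 ** (attempt - 1)); // 50ms, 100ms, 200ms...
    }
  }
  throw lastError; // Unreachable; satisfies the compiler.
}

// Usage: wrap a DAO call that may hit a deadlock.
// const saved = await withRetry(() => userDAO.save(user));
```

Retrying before translation (on raw driver codes, as here) or after translation (on `ConcurrencyConflictException` and friends) are both workable; the important property is that permanent failures such as unique violations are never retried.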
DAO testing requires a different approach than typical unit testing because DAOs inherently interact with external systems. The testing strategy combines unit tests for mapping logic with integration tests for database operations.
```typescript
// ═══════════════════════════════════════════════════════════
// UNIT TESTS: Mapper Logic (No database needed)
// ═══════════════════════════════════════════════════════════

describe('UserMapper', () => {
  const mapper = new UserMapper();

  describe('toDomain', () => {
    it('should map required fields correctly', () => {
      const row: UserRow = {
        id: 'user-123',
        email: 'test@example.com',
        name: 'Test User',
        status: 'ACTIVE',
        tier: 'PREMIUM',
        created_at: new Date('2024-01-01'),
        updated_at: new Date('2024-01-02'),
        deleted_at: null,
      };

      const user = mapper.toDomain(row);

      expect(user.id.value).toBe('user-123');
      expect(user.email.value).toBe('test@example.com');
      expect(user.name).toBe('Test User');
      expect(user.status).toBe(UserStatus.ACTIVE);
      expect(user.tier).toBe(SubscriptionTier.PREMIUM);
      expect(user.isDeleted).toBe(false);
    });

    it('should handle null name gracefully', () => {
      const row: UserRow = {
        id: 'user-123',
        email: 'test@example.com',
        name: null, // Null name
        status: 'ACTIVE',
        tier: 'FREE',
        created_at: new Date(),
        updated_at: new Date(),
        deleted_at: null,
      };

      const user = mapper.toDomain(row);
      expect(user.name).toBe('Unknown'); // Default value
    });

    it('should handle unknown status gracefully', () => {
      const row: UserRow = {
        id: 'user-123',
        email: 'test@example.com',
        name: 'Test',
        status: 'UNKNOWN_STATUS', // Invalid status
        tier: 'FREE',
        created_at: new Date(),
        updated_at: new Date(),
        deleted_at: null,
      };

      const user = mapper.toDomain(row);
      expect(user.status).toBe(UserStatus.INACTIVE); // Safe default
    });

    it('should mark deleted users correctly', () => {
      const row: UserRow = {
        id: 'user-123',
        email: 'test@example.com',
        name: 'Test',
        status: 'ACTIVE',
        tier: 'FREE',
        created_at: new Date(),
        updated_at: new Date(),
        deleted_at: new Date('2024-01-15'), // Deleted
      };

      const user = mapper.toDomain(row);
      expect(user.isDeleted).toBe(true);
    });
  });
});

// ═══════════════════════════════════════════════════════════
// INTEGRATION TESTS: Actual Database Operations
// ═══════════════════════════════════════════════════════════

describe('PostgresUserDAO (Integration)', () => {
  let pool: Pool;
  let userDAO: PostgresUserDAO;

  beforeAll(async () => {
    // Use a dedicated test database
    pool = new Pool({
      connectionString: process.env.TEST_DATABASE_URL,
    });
    userDAO = new PostgresUserDAO(pool, new PostgresErrorTranslator());
  });

  beforeEach(async () => {
    // Clean state before each test
    await pool.query('TRUNCATE TABLE users CASCADE');
  });

  afterAll(async () => {
    await pool.end();
  });

  describe('save', () => {
    it('should insert a new user', async () => {
      const user = User.create({
        email: new Email('test@example.com'),
        name: 'Test User',
      });

      const saved = await userDAO.save(user);

      expect(saved.id).toBeDefined();
      expect(saved.email.value).toBe('test@example.com');

      // Verify in database
      const dbResult = await pool.query(
        'SELECT * FROM users WHERE id = $1',
        [saved.id.value]
      );
      expect(dbResult.rows).toHaveLength(1);
    });

    it('should update existing user', async () => {
      // Create initial user
      const user = User.create({
        email: new Email('test@example.com'),
        name: 'Original Name',
      });
      await userDAO.save(user);

      // Update name
      user.updateName('Updated Name');
      const updated = await userDAO.save(user);

      expect(updated.name).toBe('Updated Name');

      // Verify only one record exists
      const count = await pool.query('SELECT COUNT(*) FROM users');
      expect(parseInt(count.rows[0].count)).toBe(1);
    });

    it('should throw DuplicateEntityException on duplicate email', async () => {
      const user1 = User.create({
        email: new Email('dup@example.com'),
        name: 'User 1',
      });
      await userDAO.save(user1);

      const user2 = User.create({
        email: new Email('dup@example.com'), // Same email
        name: 'User 2',
      });

      await expect(userDAO.save(user2)).rejects.toThrow(
        DuplicateEntityException
      );
    });
  });

  describe('findById', () => {
    it('should return user when exists', async () => {
      const created = await userDAO.save(User.create({
        email: new Email('find@example.com'),
        name: 'Find Me',
      }));

      const found = await userDAO.findById(created.id.value);

      expect(found).not.toBeNull();
      expect(found!.email.value).toBe('find@example.com');
    });

    it('should return null when not exists', async () => {
      const found = await userDAO.findById('non-existent-id');
      expect(found).toBeNull();
    });
  });
});
```

To consolidate, the essential implementation techniques covered on this page:

- Connection management: pool connections, acquire late, and release immediately via try/finally or automatic patterns like pool.query().
- Safe query building: parameterized queries for all values; allowlists for dynamic identifiers such as ORDER BY columns.
- Result mapping: explicit, centralized mapper classes with defensive handling of nulls and unknown enum values.
- Transactions: DAOs accept an optional transaction context; services control transaction boundaries and keep them short.
- Error handling: translate database-specific errors into domain exceptions, preserving the original error as the cause.
- Testing: fast unit tests for pure mapping logic, integration tests against a real database for DAO operations.
What's Next:
The final page explores When to Use DAO—examining the contexts where DAO is the right choice, where alternatives like Repository or ORM direct access make more sense, and how to evolve your persistence strategy as systems grow.
You now possess comprehensive knowledge of DAO implementation techniques—from connection management through error handling. These skills enable you to build robust, production-quality data access layers.