Theory becomes mastery through practice. We've learned what transactions are, how to bound them, and the difference between implicit and explicit modes. Now we'll apply this knowledge to real-world scenarios you'll encounter in production systems.
Each example on this page represents a pattern you'll use repeatedly throughout your career. From the classic bank transfer to complex multi-step order processing, these examples demonstrate proper transaction design in action.
By the end of this page, you will understand how to implement classic transaction patterns for money transfers, inventory management, order processing, user registration, batch processing, and distributed sagas. You'll see examples in both raw SQL (PostgreSQL) and application code (TypeScript with Prisma), and understand the reasoning behind each design decision.
The money transfer example is the quintessential transaction scenario; it perfectly illustrates why atomicity is essential. When moving money between accounts, we must ensure that the debit from the source account and the credit to the destination account succeed or fail together: no money may be created or destroyed, and no in-between state, where funds have left one account but not yet arrived in the other, may ever become visible.
```sql
-- Money Transfer: PostgreSQL Implementation
-- Key features: row-level locking, proper error handling, audit logging

BEGIN;

-- Step 1: Lock source account to prevent concurrent modifications
-- FOR UPDATE acquires an exclusive row lock
SELECT id, balance FROM accounts WHERE id = 123 FOR UPDATE;

-- Step 2: Verify sufficient funds (application does this check)
-- The query above returns balance; if balance < amount, we would ROLLBACK

-- Step 3: Perform the transfer atomically
UPDATE accounts
SET balance = balance - 1000.00,
    updated_at = CURRENT_TIMESTAMP
WHERE id = 123;

UPDATE accounts
SET balance = balance + 1000.00,
    updated_at = CURRENT_TIMESTAMP
WHERE id = 456;

-- Step 4: Create audit trail
INSERT INTO transfer_log (
    from_account_id, to_account_id, amount, transfer_time, transaction_id
) VALUES (
    123, 456, 1000.00, CURRENT_TIMESTAMP,
    txid_current()  -- PostgreSQL function to get current transaction ID
);

-- Step 5: Commit the transaction
COMMIT;

-- If any step fails, the entire transaction rolls back
-- No money is lost, no audit entry for failed transfer
```

When locking multiple rows, always acquire locks in a consistent order (e.g., by ID). If Transaction A locks account 1 then 2, and Transaction B locks account 2 then 1, they can deadlock. Consistent ordering prevents this.
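One way to apply the consistent-ordering rule in application code is to sort the account ids before locking. Here is a minimal TypeScript/Prisma sketch, reusing the `accounts` table from the SQL above; the `transfer` helper itself is hypothetical, not part of the original example:

```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Hypothetical helper: sort the ids before locking so every transfer
// acquires its row locks in the same order, whichever direction the
// money flows.
async function transfer(fromId: number, toId: number, amount: number): Promise<void> {
  await prisma.$transaction(async (tx) => {
    const [lowId, highId] = fromId < toId ? [fromId, toId] : [toId, fromId];

    // Lock both rows in ascending id order: a 123 -> 456 transfer and a
    // concurrent 456 -> 123 transfer now both lock 123 first, so neither
    // can hold one lock while waiting on the other
    await tx.$queryRaw`
      SELECT id FROM accounts
      WHERE id IN (${lowId}, ${highId})
      ORDER BY id
      FOR UPDATE
    `;

    await tx.$executeRaw`
      UPDATE accounts SET balance = balance - ${amount} WHERE id = ${fromId}
    `;
    await tx.$executeRaw`
      UPDATE accounts SET balance = balance + ${amount} WHERE id = ${toId}
    `;
  });
}
```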
Inventory operations require careful transaction design to prevent overselling and ensure accurate stock counts. The challenge intensifies during high-traffic events like flash sales where many customers compete for limited stock.
```sql
-- Reserve Inventory: Pessimistic Locking Approach
-- Best when: high contention, critical accuracy required

BEGIN;

-- Lock the inventory row
SELECT product_id, quantity_available, quantity_reserved
FROM inventory
WHERE product_id = 'PROD-001'
FOR UPDATE;

-- Check availability (done in application code)
-- IF quantity_available - quantity_reserved < requested_quantity THEN ROLLBACK

-- Reserve the inventory
UPDATE inventory
SET quantity_reserved = quantity_reserved + 5,
    last_modified = CURRENT_TIMESTAMP
WHERE product_id = 'PROD-001'
  AND quantity_available - quantity_reserved >= 5;  -- Double-check in SQL

-- Verify update succeeded (application checks affected rows)
-- If 0 rows affected, someone else took the inventory

-- Create reservation record
INSERT INTO reservations (
    product_id, quantity, customer_id, expires_at, created_at
) VALUES (
    'PROD-001', 5, 'CUST-123',
    CURRENT_TIMESTAMP + INTERVAL '15 minutes',
    CURRENT_TIMESTAMP
);

COMMIT;

-- =====================================================
-- Alternative: Optimistic Locking Approach
-- Best when: lower contention, can retry on conflict
-- =====================================================

BEGIN;

-- Read current version
SELECT product_id, quantity_available, version
FROM inventory
WHERE product_id = 'PROD-001';
-- Returns: quantity=100, version=42

-- Update only if version hasn't changed
UPDATE inventory
SET quantity_available = quantity_available - 5,
    version = version + 1,
    last_modified = CURRENT_TIMESTAMP
WHERE product_id = 'PROD-001'
  AND version = 42;  -- Must match what we read

-- If 0 rows affected: concurrent modification occurred
-- Application should: SELECT again, re-check availability, retry

COMMIT;
```

Pessimistic locking (FOR UPDATE) guarantees success but reduces concurrency. Optimistic locking (version check) allows more concurrency but requires retry logic. Choose based on contention: high contention = pessimistic, low contention = optimistic.
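The retry loop that optimistic locking requires lives in application code. Here is a minimal TypeScript/Prisma sketch, assuming an `inventory` model with `productId`, `quantityAvailable`, and `version` fields mirroring the SQL table above; it is an illustration, not part of the original example:

```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Sketch of the optimistic-locking retry loop (assumed model/fields)
async function reserveOptimistically(
  productId: string,
  quantity: number,
  maxRetries = 3
): Promise<void> {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    // Read the current state, including the version counter
    const row = await prisma.inventory.findUniqueOrThrow({
      where: { productId },
    });

    if (row.quantityAvailable < quantity) {
      throw new Error('Insufficient stock');
    }

    // updateMany returns an affected-row count instead of throwing,
    // which makes the version conflict easy to detect
    const result = await prisma.inventory.updateMany({
      where: { productId, version: row.version },
      data: {
        quantityAvailable: { decrement: quantity },
        version: { increment: 1 },
      },
    });

    if (result.count === 1) {
      return; // reservation succeeded
    }
    // 0 rows updated: someone changed the row first; re-read and retry
  }

  throw new Error(`Could not reserve after ${maxRetries} attempts`);
}
```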
Order processing is a complex, multi-step transaction that touches multiple entities: orders, order items, inventory, payments, and customer records. Getting this right requires careful transaction design.
```typescript
import { PrismaClient, Prisma, OrderStatus } from '@prisma/client';

const prisma = new PrismaClient();

interface OrderItem {
  productId: string;
  quantity: number;
  unitPrice: number;
}

interface CreateOrderRequest {
  customerId: string;
  items: OrderItem[];
  shippingAddressId: string;
  paymentMethodId: string;
}

interface OrderResult {
  orderId: string;
  orderNumber: string;
  total: number;
}

async function createOrder(request: CreateOrderRequest): Promise<OrderResult> {
  // Input validation
  if (!request.items.length) {
    throw new Error('Order must contain at least one item');
  }

  return await prisma.$transaction(async (tx) => {
    // Step 1: Verify customer exists and is active
    const customer = await tx.customer.findUnique({
      where: { id: request.customerId },
      include: { addresses: true },
    });

    if (!customer || !customer.isActive) {
      throw new Error('Customer not found or inactive');
    }

    // Step 2: Verify shipping address belongs to customer
    const shippingAddress = customer.addresses.find(
      a => a.id === request.shippingAddressId
    );

    if (!shippingAddress) {
      throw new Error('Invalid shipping address');
    }

    // Step 3: Validate and lock inventory for all items
    const productIds = request.items.map(i => i.productId);

    // Use raw query for FOR UPDATE across all products
    const products = await tx.$queryRaw<
      { id: string; name: string; price: number; quantity_available: number }[]
    >`
      SELECT id, name, price, quantity_available
      FROM products
      WHERE id = ANY(${productIds})
      FOR UPDATE
    `;

    // Build product lookup
    const productMap = new Map(products.map(p => [p.id, p]));

    // Validate all items
    let orderTotal = 0;
    const validatedItems: Array<OrderItem & { productName: string }> = [];

    for (const item of request.items) {
      const product = productMap.get(item.productId);

      if (!product) {
        throw new Error(`Product ${item.productId} not found`);
      }

      if (product.quantity_available < item.quantity) {
        throw new Error(
          `Insufficient stock for ${product.name}: ` +
          `${product.quantity_available} available, ${item.quantity} requested`
        );
      }

      const lineTotal = item.unitPrice * item.quantity;
      orderTotal += lineTotal;

      validatedItems.push({
        ...item,
        productName: product.name,
      });
    }

    // Step 4: Generate order number
    const orderNumber = await generateOrderNumber(tx);

    // Step 5: Create the order record
    const order = await tx.order.create({
      data: {
        orderNumber,
        customerId: request.customerId,
        shippingAddressId: request.shippingAddressId,
        paymentMethodId: request.paymentMethodId,
        status: OrderStatus.PENDING,
        subtotal: orderTotal,
        tax: orderTotal * 0.08, // Simplified tax calculation
        total: orderTotal * 1.08,
        createdAt: new Date(),
      },
    });

    // Step 6: Create order items and decrement inventory
    for (const item of validatedItems) {
      // Create order item
      await tx.orderItem.create({
        data: {
          orderId: order.id,
          productId: item.productId,
          productName: item.productName,
          quantity: item.quantity,
          unitPrice: item.unitPrice,
          lineTotal: item.unitPrice * item.quantity,
        },
      });

      // Decrement inventory
      await tx.product.update({
        where: { id: item.productId },
        data: { quantityAvailable: { decrement: item.quantity } },
      });

      // Record inventory movement
      await tx.inventoryMovement.create({
        data: {
          productId: item.productId,
          changeType: 'SALE',
          quantity: -item.quantity,
          referenceType: 'ORDER',
          referenceId: order.id,
          createdAt: new Date(),
        },
      });
    }

    // Step 7: Update customer stats
    await tx.customer.update({
      where: { id: request.customerId },
      data: {
        totalOrders: { increment: 1 },
        totalSpent: { increment: order.total },
        lastOrderAt: new Date(),
      },
    });

    // Step 8: Log the order creation
    await tx.auditLog.create({
      data: {
        entityType: 'ORDER',
        entityId: order.id,
        action: 'CREATE',
        details: {
          orderNumber: order.orderNumber,
          itemCount: validatedItems.length,
          total: order.total,
        },
        performedBy: request.customerId,
        createdAt: new Date(),
      },
    });

    return {
      orderId: order.id,
      orderNumber: order.orderNumber,
      total: order.total,
    };
  }, {
    maxWait: 10000,
    timeout: 30000,
    isolationLevel: Prisma.TransactionIsolationLevel.Serializable,
  });
}

// Helper function to generate unique order numbers
async function generateOrderNumber(tx: Prisma.TransactionClient): Promise<string> {
  const today = new Date();
  const prefix = `ORD-${today.getFullYear()}${String(today.getMonth() + 1).padStart(2, '0')}`;

  // Get next sequence value
  const [result] = await tx.$queryRaw<{ nextval: bigint }[]>`
    SELECT nextval('order_number_seq')
  `;

  return `${prefix}-${String(result.nextval).padStart(8, '0')}`;
}
```

This order transaction touches many tables and acquires multiple locks. In high-volume systems, consider breaking it into smaller transactions: 1) Reserve inventory 2) Create order 3) Update stats. Use eventual consistency where possible; the Saga pattern later on this page shows one way to structure that.
User registration seems simple but requires transaction protection to ensure consistent state: the user record, default preferences, initial permissions, and welcome resources must all be created atomically.
```sql
-- User Registration Transaction
-- Creates user with all related records atomically
-- NOTE: RETURNING ... INTO and the user_id variable below require a
-- PL/pgSQL context (a function or DO block); in plain SQL, capture the
-- generated id in application code and substitute it into later statements.

BEGIN;

-- Step 1: Check for existing email
-- NOTE: ROW EXCLUSIVE mode does not conflict with itself, so it will NOT
-- serialize concurrent registrations; a UNIQUE constraint on email is the
-- reliable guard against duplicates.
LOCK TABLE users IN ROW EXCLUSIVE MODE;

-- Verify email not taken
SELECT id FROM users WHERE email = 'new.user@example.com';
-- Application checks: if rows returned, abort with "Email already exists"

-- Step 2: Create user record
INSERT INTO users (
    id, email, password_hash, first_name, last_name,
    status, email_verified, created_at
) VALUES (
    gen_random_uuid(),
    'new.user@example.com',
    '$2b$12$hash...',  -- bcrypt hash
    'John',
    'Doe',
    'ACTIVE',
    FALSE,
    CURRENT_TIMESTAMP
)
RETURNING id INTO user_id;  -- Capture generated ID (PL/pgSQL)

-- Step 3: Create default preferences
INSERT INTO user_preferences (
    user_id, theme, language, timezone, email_notifications, created_at
) VALUES (
    user_id, 'SYSTEM', 'en-US', 'UTC', TRUE, CURRENT_TIMESTAMP
);

-- Step 4: Assign default role
INSERT INTO user_roles (
    user_id, role_id, granted_at, granted_by
)
SELECT user_id, id, CURRENT_TIMESTAMP, 'SYSTEM'
FROM roles
WHERE name = 'MEMBER';

-- Step 5: Create email verification token
INSERT INTO email_verifications (
    user_id, token, expires_at, created_at
) VALUES (
    user_id,
    encode(gen_random_bytes(32), 'hex'),
    CURRENT_TIMESTAMP + INTERVAL '24 hours',
    CURRENT_TIMESTAMP
);

-- Step 6: Record the registration event
INSERT INTO audit_log (
    event_type, entity_type, entity_id,
    ip_address, user_agent, created_at
) VALUES (
    'USER_REGISTERED',
    'USER',
    user_id,
    '192.168.1.1',  -- From application context
    'Mozilla/5.0...',
    CURRENT_TIMESTAMP
);

COMMIT;

-- If any step fails, no partial user is created
```

Notice that sending the verification email happens AFTER the transaction commits. If email sending were inside the transaction and failed, we'd rollback a successful registration. Keep external calls (email, webhooks, third-party APIs) outside transactions.
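To make the commit-then-notify rule concrete, here is a minimal TypeScript/Prisma sketch. The `user` and `emailVerification` model names and the `sendVerificationEmail` client are assumptions for illustration, not part of the schema above:

```typescript
import { randomUUID } from 'crypto';
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Assumed external email client; any provider SDK would take its place
declare function sendVerificationEmail(email: string, token: string): Promise<void>;

async function registerUser(email: string, passwordHash: string): Promise<void> {
  // All database writes commit (or roll back) together
  const token = await prisma.$transaction(async (tx) => {
    const user = await tx.user.create({
      data: { email, passwordHash, status: 'ACTIVE', emailVerified: false },
    });

    const verification = await tx.emailVerification.create({
      data: {
        userId: user.id,
        token: randomUUID(), // placeholder token generation
        expiresAt: new Date(Date.now() + 24 * 60 * 60 * 1000), // 24 hours
      },
    });

    return verification.token;
  });

  // External call happens AFTER commit: if it fails, the registration
  // still stands and the email can simply be retried later
  try {
    await sendVerificationEmail(email, token);
  } catch (err) {
    console.error('Verification email failed; queue it for retry', err);
  }
}
```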
Batch processing large datasets requires different transaction strategies. One giant transaction risks timeout and long lock hold times. Committing every row adds excessive overhead. The solution: process in batches with checkpoint commits.
```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

interface BatchProcessResult {
  processedCount: number;
  errorCount: number;
  lastProcessedId: string | null;
}

// Process large dataset in batches with checkpoint commits
async function processBatchUpdates(
  batchSize: number = 1000,
  lastProcessedId: string | null = null
): Promise<BatchProcessResult> {
  let processedCount = 0;
  let errorCount = 0;
  let currentLastId = lastProcessedId;

  console.log(`Starting batch processing from ID: ${lastProcessedId || 'beginning'}`);

  while (true) {
    // Fetch next batch
    const records = await prisma.legacyRecord.findMany({
      where: currentLastId
        ? { id: { gt: currentLastId }, migrated: false }
        : { migrated: false },
      orderBy: { id: 'asc' },
      take: batchSize,
    });

    if (records.length === 0) {
      console.log('No more records to process');
      break;
    }

    console.log(`Processing batch of ${records.length} records`);

    // Process batch in a single transaction
    const batchResult = await prisma.$transaction(async (tx) => {
      let batchProcessed = 0;
      let batchErrors = 0;
      let batchLastId: string | null = null;

      for (const record of records) {
        try {
          // Transform and migrate the record
          const transformed = transformRecord(record);

          await tx.newRecord.create({
            data: transformed,
          });

          // Mark as migrated
          await tx.legacyRecord.update({
            where: { id: record.id },
            data: { migrated: true, migratedAt: new Date() },
          });

          batchProcessed++;
          batchLastId = record.id;
        } catch (error) {
          // Log error but continue with batch.
          // CAVEAT: if the error came from a failed SQL statement,
          // PostgreSQL aborts the whole transaction and later statements
          // fail; catch-and-continue is only safe for errors thrown
          // before any write (e.g., in transformRecord).
          console.error(`Error processing record ${record.id}:`, error);
          batchErrors++;

          // Mark as error for later review
          await tx.legacyRecord.update({
            where: { id: record.id },
            data: {
              migrationError: (error as Error).message,
              migrationAttempts: { increment: 1 },
            },
          });

          batchLastId = record.id;
        }
      }

      // Record checkpoint
      await tx.migrationCheckpoint.upsert({
        where: { migrationName: 'legacy-to-new' },
        create: {
          migrationName: 'legacy-to-new',
          lastProcessedId: batchLastId,
          processedCount: batchProcessed,
          errorCount: batchErrors,
          updatedAt: new Date(),
        },
        update: {
          lastProcessedId: batchLastId,
          processedCount: { increment: batchProcessed },
          errorCount: { increment: batchErrors },
          updatedAt: new Date(),
        },
      });

      return { batchProcessed, batchErrors, batchLastId };
    });

    processedCount += batchResult.batchProcessed;
    errorCount += batchResult.batchErrors;
    currentLastId = batchResult.batchLastId;

    console.log(
      `Batch complete. Total: ${processedCount} processed, ${errorCount} errors`
    );
  }

  return {
    processedCount,
    errorCount,
    lastProcessedId: currentLastId,
  };
}

function transformRecord(record: any): any {
  // Transform logic here
  return {
    legacyId: record.id,
    name: record.name?.trim() || 'Unknown',
    email: record.email?.toLowerCase(),
    // ... other transformations
  };
}

// Resume from last checkpoint after failure
async function resumeBatchProcessing() {
  const checkpoint = await prisma.migrationCheckpoint.findUnique({
    where: { migrationName: 'legacy-to-new' },
  });

  const result = await processBatchUpdates(
    1000,
    checkpoint?.lastProcessedId || null
  );

  console.log('Migration complete:', result);
}
```

Choose batch size based on transaction timeout limits, lock contention tolerance, and memory constraints. Start with 1000 and tune based on monitoring. Too small and per-commit overhead dominates: migrating one million rows in batches of 1,000 costs 1,000 commits instead of 1,000,000. Too large and you risk long lock hold times and timeouts.
When transactions span multiple services or databases, traditional ACID guarantees become impractical. The Saga pattern provides an alternative: a sequence of local transactions with compensating actions for rollback.
```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

interface OrderItem {
  productId: string;
  quantity: number;
  unitPrice: number;
}

// Request shape for this example; unlike the earlier order example,
// the caller precomputes the total
interface CreateOrderRequest {
  customerId: string;
  items: OrderItem[];
  shippingAddressId: string;
  total: number;
}

// External service clients, assumed to be implemented elsewhere
declare const paymentService: {
  charge(args: { customerId: string; amount: number; orderId: string }): Promise<string>;
  refund(paymentId: string): Promise<void>;
};
declare const inventoryService: {
  reserve(args: { items: OrderItem[]; orderId: string }): Promise<string>;
  releaseReservation(reservationId: string): Promise<void>;
};
declare const shippingService: {
  createShipment(args: { orderId: string; addressId: string; items: OrderItem[] }): Promise<string>;
  cancelShipment(shipmentId: string): Promise<void>;
};

interface SagaStep<T> {
  name: string;
  execute: () => Promise<T>;
  compensate: () => Promise<void>;
}

class Saga {
  private completedSteps: SagaStep<any>[] = [];

  async execute(steps: SagaStep<any>[]): Promise<{ success: boolean; error?: Error }> {
    for (const step of steps) {
      try {
        console.log(`Executing saga step: ${step.name}`);
        await step.execute();
        this.completedSteps.push(step);
      } catch (error) {
        console.error(`Saga step failed: ${step.name}`, error);
        await this.compensate();
        return { success: false, error: error as Error };
      }
    }
    return { success: true };
  }

  private async compensate(): Promise<void> {
    console.log('Starting saga compensation...');

    // Compensate in reverse order (copy first so the log stays intact)
    for (const step of [...this.completedSteps].reverse()) {
      try {
        console.log(`Compensating step: ${step.name}`);
        await step.compensate();
      } catch (error) {
        // Log but continue compensating other steps
        console.error(`Compensation failed for ${step.name}`, error);
        // In production: alert for manual intervention
      }
    }
  }
}

// Order creation saga spanning multiple services
async function createOrderSaga(request: CreateOrderRequest): Promise<string> {
  const saga = new Saga();

  let paymentId = '';
  let reservationId = '';
  let shipmentId = '';
  let orderId = '';

  const steps: SagaStep<any>[] = [
    {
      name: 'Create order record',
      execute: async () => {
        // Local transaction in Order database
        orderId = await prisma.$transaction(async (tx) => {
          const order = await tx.order.create({
            data: {
              customerId: request.customerId,
              status: 'PENDING',
              total: request.total,
            },
          });

          for (const item of request.items) {
            await tx.orderItem.create({
              data: { orderId: order.id, ...item },
            });
          }

          return order.id;
        });
        return orderId;
      },
      compensate: async () => {
        await prisma.order.update({
          where: { id: orderId },
          data: { status: 'CANCELLED' },
        });
      },
    },
    {
      name: 'Charge payment',
      execute: async () => {
        // Call Payment Service (external)
        paymentId = await paymentService.charge({
          customerId: request.customerId,
          amount: request.total,
          orderId: orderId,
        });
        return paymentId;
      },
      compensate: async () => {
        await paymentService.refund(paymentId);
      },
    },
    {
      name: 'Reserve inventory',
      execute: async () => {
        // Call Inventory Service (external)
        reservationId = await inventoryService.reserve({
          items: request.items,
          orderId: orderId,
        });
        return reservationId;
      },
      compensate: async () => {
        await inventoryService.releaseReservation(reservationId);
      },
    },
    {
      name: 'Create shipment',
      execute: async () => {
        // Call Shipping Service (external)
        shipmentId = await shippingService.createShipment({
          orderId: orderId,
          addressId: request.shippingAddressId,
          items: request.items,
        });
        return shipmentId;
      },
      compensate: async () => {
        await shippingService.cancelShipment(shipmentId);
      },
    },
    {
      name: 'Finalize order',
      execute: async () => {
        await prisma.order.update({
          where: { id: orderId },
          data: {
            status: 'CONFIRMED',
            paymentId,
            reservationId,
            shipmentId,
          },
        });
      },
      compensate: async () => {
        // Already handled by the first step's compensation
      },
    },
  ];

  const result = await saga.execute(steps);

  if (!result.success) {
    throw new Error(`Order creation failed: ${result.error?.message}`);
  }

  return orderId;
}
```

Sagas provide eventual consistency, not ACID. Compensating actions may fail, requiring manual intervention. Design compensations to be idempotent (safe to retry). Monitor for 'stuck' sagas that couldn't fully compensate.
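One way to make a compensation like the refund above safe to retry is an idempotency key. A minimal sketch, assuming a hypothetical `refund` signature that accepts such a key (many real payment providers offer an equivalent option):

```typescript
// Hypothetical refund client: the provider deduplicates requests that
// share an idempotency key, so a retried compensation refunds only once
declare const paymentService: {
  refund(paymentId: string, opts: { idempotencyKey: string }): Promise<void>;
};

async function refundIdempotently(paymentId: string): Promise<void> {
  // Derive the key from the payment id: every retry of this compensation
  // sends the same key, and the provider ignores the duplicates
  await paymentService.refund(paymentId, {
    idempotencyKey: `refund-${paymentId}`,
  });
}
```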
Let's consolidate the patterns we've seen into a quick reference guide:
| Pattern | When to Use | Key Technique | Watch Out For |
|---|---|---|---|
| Money Transfer | Moving value between accounts | SELECT FOR UPDATE on accounts | Lock ordering to prevent deadlocks |
| Inventory Reserve | Reserving limited stock | Pessimistic or optimistic locking | Overselling during high traffic |
| Order Processing | Multi-entity creation | Single transaction, all-or-nothing | Long lock hold times |
| User Registration | Creating related records together | Transaction for data, external calls outside | Email failures shouldn't rollback |
| Batch Processing | Large dataset updates | Commit every N records, checkpoint | Resume from checkpoint on failure |
| Distributed Saga | Cross-service operations | Local transactions + compensating actions | Compensation failure handling |
Through these real-world examples, we've seen how transaction principles apply to actual business scenarios.
Module Complete:
This completes Module 1: Transaction Definition. You now have a comprehensive understanding of what transactions are, how to bound them, when to use implicit versus explicit modes, and how those concepts translate into the production patterns covered on this page.
The next module explores ACID Properties in depth: Atomicity, Consistency, Isolation, and Durability, the four guarantees that make transactions reliable.
Congratulations! You now have the foundational knowledge to design and implement reliable database transactions for real-world applications.