We've covered the mechanics of triggers—timing, events, granularity. Now let's see how these capabilities translate into real-world production systems.
Triggers shine in scenarios where logic must run no matter how the data is modified—through the application, an admin tool, or raw SQL. This page presents battle-tested patterns used in production systems across industries, from financial auditing to e-commerce inventory management to healthcare compliance.
By the end of this page, you will master practical trigger patterns for: comprehensive audit logging, temporal/history tables, derived data maintenance, complex business rules, soft deletes, notifications, and cross-table integrity. You'll also understand anti-patterns that cause production incidents.
Audit logging is the most common trigger use case. Regulations like SOX, HIPAA, GDPR, and PCI-DSS require tracking who changed what data, when, and from where. Triggers ensure this tracking happens regardless of how data is modified—even if someone runs raw SQL against the database.
Requirements for Production Audit Systems: capture what changed (old and new values, down to the field level), who changed it (database session, application user, client address), when and in which transaction it happened, and why (application-supplied context). The implementation below addresses each of these.
```sql
-- Production-Grade Audit Logging System
-- PostgreSQL syntax

-- Generic audit table (works for any table)
CREATE TABLE audit_log (
    audit_id BIGSERIAL PRIMARY KEY,
    table_schema VARCHAR(63) NOT NULL,
    table_name VARCHAR(63) NOT NULL,
    operation VARCHAR(10) NOT NULL,   -- INSERT, UPDATE, DELETE
    record_pk JSONB NOT NULL,         -- Primary key values as JSON
    old_values JSONB,                 -- NULL for INSERT
    new_values JSONB,                 -- NULL for DELETE
    changed_fields TEXT[],            -- List of modified columns
    -- Attribution
    session_user_name TEXT DEFAULT SESSION_USER,
    application_name TEXT DEFAULT CURRENT_SETTING('application_name', TRUE),
    client_addr INET DEFAULT INET_CLIENT_ADDR(),
    client_port INTEGER DEFAULT INET_CLIENT_PORT(),
    -- Timing
    action_timestamp TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
    transaction_id BIGINT DEFAULT TXID_CURRENT(),
    -- Context (application-level info passed via session variables)
    app_user_id TEXT DEFAULT CURRENT_SETTING('app.user_id', TRUE),
    app_request_id TEXT DEFAULT CURRENT_SETTING('app.request_id', TRUE),
    app_reason TEXT DEFAULT CURRENT_SETTING('app.change_reason', TRUE)
);

-- Indexes for common queries
CREATE INDEX idx_audit_table ON audit_log(table_name, action_timestamp DESC);
CREATE INDEX idx_audit_user ON audit_log(app_user_id, action_timestamp DESC);
CREATE INDEX idx_audit_pk ON audit_log USING GIN(record_pk);

-- Generic audit trigger function (reusable for any table)
CREATE OR REPLACE FUNCTION audit_trigger_fn()
RETURNS TRIGGER AS $$
DECLARE
    v_old JSONB;
    v_new JSONB;
    v_pk JSONB;
    v_changed TEXT[] := '{}';
    v_key TEXT;
BEGIN
    -- Construct primary key JSON (assumes 'id' column; customize as needed)
    -- For composite keys, this could be extended
    IF TG_OP = 'DELETE' THEN
        v_pk := JSONB_BUILD_OBJECT('id', OLD.id);
    ELSE
        v_pk := JSONB_BUILD_OBJECT('id', NEW.id);
    END IF;

    IF TG_OP = 'INSERT' THEN
        v_new := TO_JSONB(NEW);
    ELSIF TG_OP = 'UPDATE' THEN
        v_old := TO_JSONB(OLD);
        v_new := TO_JSONB(NEW);
        -- Build list of changed fields
        FOR v_key IN SELECT JSONB_OBJECT_KEYS(v_old) LOOP
            IF v_old->v_key IS DISTINCT FROM v_new->v_key THEN
                v_changed := ARRAY_APPEND(v_changed, v_key);
            END IF;
        END LOOP;
        -- Skip if nothing changed (e.g., UPDATE SET col = col)
        IF ARRAY_LENGTH(v_changed, 1) IS NULL THEN
            RETURN NEW;
        END IF;
    ELSIF TG_OP = 'DELETE' THEN
        v_old := TO_JSONB(OLD);
    END IF;

    INSERT INTO audit_log (
        table_schema, table_name, operation,
        record_pk, old_values, new_values, changed_fields
    ) VALUES (
        TG_TABLE_SCHEMA, TG_TABLE_NAME, TG_OP,
        v_pk, v_old, v_new, v_changed
    );

    RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

-- Apply to any table with a single command
CREATE TRIGGER audit_employees
    AFTER INSERT OR UPDATE OR DELETE ON employees
    FOR EACH ROW EXECUTE FUNCTION audit_trigger_fn();

CREATE TRIGGER audit_orders
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE FUNCTION audit_trigger_fn();

-- Query examples:
-- Who changed employee 123?
-- SELECT * FROM audit_log WHERE table_name = 'employees'
--   AND record_pk @> '{"id": 123}' ORDER BY action_timestamp DESC;

-- What did user john.doe change today?
-- SELECT * FROM audit_log WHERE app_user_id = 'john.doe'
--   AND action_timestamp >= CURRENT_DATE;
```

Using JSONB for old_values and new_values allows auditing any table structure without schema changes to the audit table. You can query specific fields with JSONB operators (old_values->>'salary') or search with containment (new_values @> '{"status": "active"}').
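The app_user_id, app_request_id, and app_reason columns only get populated if the application sets the corresponding session variables inside each transaction. A minimal sketch of what that looks like on the application side (the app.* setting names mirror the audit table defaults above; the table and values are illustrative):

```sql
-- Pass application context to the audit trigger via transaction-local
-- settings; SET LOCAL reverts automatically at COMMIT or ROLLBACK.
BEGIN;
SET LOCAL app.user_id = 'john.doe';
SET LOCAL app.request_id = 'req-20240615-001';
SET LOCAL app.change_reason = 'Annual salary review';

-- This change is audited with the context above attached
UPDATE employees SET salary = 95000 WHERE id = 123;
COMMIT;
```

Because SET LOCAL is transaction-scoped, context from one request can never leak into another request's audit rows, even with connection pooling.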
Temporal tables (also called slowly changing dimensions or history tables) maintain a complete history of all changes over time. Unlike audit logs that track changes, temporal tables let you query what the data looked like at any point in time.
The Pattern: every row carries valid_from and valid_to timestamps, and the current version is the row with valid_to = NULL (or a far-future date). An update closes the current row and inserts a new version. This is essential for point-in-time reporting, regulatory compliance, and answering questions like "what price did this product have on June 15?"
```sql
-- Temporal Table with Trigger-Based History
-- PostgreSQL syntax

-- Requires btree_gist for the exclusion constraint below:
-- CREATE EXTENSION IF NOT EXISTS btree_gist;

-- Main table (holds current and historical versions)
CREATE TABLE products (
    product_id SERIAL,
    product_name VARCHAR(255) NOT NULL,
    category_id INTEGER,
    price DECIMAL(10,2) NOT NULL,
    is_active BOOLEAN DEFAULT TRUE,
    -- Temporal columns
    valid_from TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
    valid_to TIMESTAMPTZ,            -- NULL = current version
    -- Tracking
    version INTEGER NOT NULL DEFAULT 1,
    modified_by VARCHAR(128) DEFAULT CURRENT_USER,
    PRIMARY KEY (product_id, valid_from),
    -- Ensure no overlapping validity periods
    EXCLUDE USING gist (
        product_id WITH =,
        tstzrange(valid_from, valid_to, '[)') WITH &&
    )
);

-- View for current data (what most queries should use)
CREATE OR REPLACE VIEW products_current AS
SELECT product_id, product_name, category_id, price,
       is_active, valid_from, version, modified_by
FROM products
WHERE valid_to IS NULL;

-- Trigger for UPDATE: close current, insert new version
CREATE OR REPLACE FUNCTION products_temporal_update()
RETURNS TRIGGER AS $$
BEGIN
    -- Let our own internal close-version UPDATE pass through;
    -- without this guard the UPDATE below would re-fire this
    -- trigger and recurse indefinitely
    IF PG_TRIGGER_DEPTH() > 1 THEN
        RETURN NEW;
    END IF;

    -- Don't allow direct update on historical rows
    IF OLD.valid_to IS NOT NULL THEN
        RAISE EXCEPTION 'Cannot update historical product records';
    END IF;

    -- Close the current version
    UPDATE products
    SET valid_to = CURRENT_TIMESTAMP
    WHERE product_id = OLD.product_id AND valid_to IS NULL;

    -- Insert new version
    INSERT INTO products (
        product_id, product_name, category_id, price, is_active,
        valid_from, valid_to, version, modified_by
    ) VALUES (
        OLD.product_id, NEW.product_name, NEW.category_id, NEW.price,
        NEW.is_active, CURRENT_TIMESTAMP, NULL, OLD.version + 1, CURRENT_USER
    );

    -- Return NULL to cancel the original UPDATE (we did our own)
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_products_temporal_update
    BEFORE UPDATE ON products
    FOR EACH ROW
    WHEN (OLD.valid_to IS NULL)  -- Only trigger for current rows
    EXECUTE FUNCTION products_temporal_update();

-- Trigger for DELETE: soft delete by closing validity
CREATE OR REPLACE FUNCTION products_temporal_delete()
RETURNS TRIGGER AS $$
BEGIN
    IF OLD.valid_to IS NOT NULL THEN
        RAISE EXCEPTION 'Cannot delete already-closed product records';
    END IF;

    -- Close the record instead of deleting
    UPDATE products
    SET valid_to = CURRENT_TIMESTAMP
    WHERE product_id = OLD.product_id AND valid_to IS NULL;

    RETURN NULL;  -- Cancel the DELETE
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_products_temporal_delete
    BEFORE DELETE ON products
    FOR EACH ROW EXECUTE FUNCTION products_temporal_delete();

-- Queries:
-- Current price of product 123:
-- SELECT price FROM products_current WHERE product_id = 123;

-- Price of product 123 on 2024-06-15:
-- SELECT price FROM products
-- WHERE product_id = 123
--   AND valid_from <= '2024-06-15'
--   AND (valid_to > '2024-06-15' OR valid_to IS NULL);

-- Complete price history:
-- SELECT valid_from, valid_to, price
-- FROM products WHERE product_id = 123 ORDER BY valid_from;
```

SQL:2011 defines native temporal table support with PERIOD FOR and AS OF SYSTEM TIME syntax. Some DBMSs (SQL Server, DB2, MariaDB) support this natively. In PostgreSQL, you implement temporal behavior via triggers as shown. Native support is more efficient and integrates with query optimization.
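For comparison, here is roughly what the native support mentioned above looks like in SQL Server (a sketch; table and column names are illustrative, not from this module):

```sql
-- SQL Server system-versioned (temporal) table: the engine maintains
-- the history table automatically, with no triggers required.
CREATE TABLE products (
    product_id INT PRIMARY KEY,
    product_name NVARCHAR(255) NOT NULL,
    price DECIMAL(10,2) NOT NULL,
    valid_from DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    valid_to   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (valid_from, valid_to)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.products_history));

-- Point-in-time query, equivalent to the trigger-based version above:
SELECT price
FROM products FOR SYSTEM_TIME AS OF '2024-06-15'
WHERE product_id = 123;
```

The query planner understands FOR SYSTEM_TIME natively, so point-in-time reads can use the history table's indexes directly instead of scanning a hand-rolled validity range.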
In normalized databases, computing aggregates (totals, counts, averages) requires joins and can be expensive for frequently accessed data. Derived data maintenance uses triggers to keep denormalized summary values in sync with source data. Examples: an order's totals derived from its line items, an account balance derived from its transactions, a product's average rating derived from its reviews. Triggers ensure these derived values update automatically, staying consistent without application-layer coordination.
```sql
-- Derived Data Maintenance: Order Totals
-- PostgreSQL syntax

-- Schema
CREATE TABLE orders (
    order_id SERIAL PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    order_date DATE DEFAULT CURRENT_DATE,
    -- Derived (denormalized) data
    item_count INTEGER DEFAULT 0,
    subtotal DECIMAL(12,2) DEFAULT 0,
    tax_amount DECIMAL(12,2) DEFAULT 0,
    total_amount DECIMAL(12,2) DEFAULT 0,
    last_updated_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE order_items (
    item_id SERIAL PRIMARY KEY,
    order_id INTEGER NOT NULL REFERENCES orders(order_id) ON DELETE CASCADE,
    product_id INTEGER NOT NULL,
    quantity INTEGER NOT NULL CHECK (quantity > 0),
    unit_price DECIMAL(10,2) NOT NULL,
    discount_pct DECIMAL(5,2) DEFAULT 0,
    line_total DECIMAL(12,2) GENERATED ALWAYS AS
        (quantity * unit_price * (1 - discount_pct/100)) STORED
);

-- Trigger function to maintain order totals
CREATE OR REPLACE FUNCTION sync_order_totals()
RETURNS TRIGGER AS $$
DECLARE
    v_order_id INTEGER;
    v_item_count INTEGER;
    v_subtotal DECIMAL(12,2);
    v_tax_rate DECIMAL(5,4) := 0.0825;  -- 8.25% tax
BEGIN
    -- Determine which order to update
    v_order_id := COALESCE(NEW.order_id, OLD.order_id);

    -- Recalculate from source data (ensures accuracy)
    SELECT COALESCE(COUNT(*), 0), COALESCE(SUM(line_total), 0)
    INTO v_item_count, v_subtotal
    FROM order_items
    WHERE order_id = v_order_id;

    -- Update the order's cached totals
    UPDATE orders SET
        item_count = v_item_count,
        subtotal = v_subtotal,
        tax_amount = ROUND(v_subtotal * v_tax_rate, 2),
        total_amount = ROUND(v_subtotal * (1 + v_tax_rate), 2),
        last_updated_at = CURRENT_TIMESTAMP
    WHERE order_id = v_order_id;

    RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_sync_order_totals
    AFTER INSERT OR UPDATE OR DELETE ON order_items
    FOR EACH ROW EXECUTE FUNCTION sync_order_totals();

-- Statement-level alternative (more efficient for bulk changes).
-- Relies on transition tables new_items/old_items, which must be
-- declared in the trigger's REFERENCING clause.
CREATE OR REPLACE FUNCTION sync_order_totals_batch()
RETURNS TRIGGER AS $$
BEGIN
    UPDATE orders o SET
        item_count = agg.cnt,
        subtotal = agg.sub,
        tax_amount = ROUND(agg.sub * 0.0825, 2),
        total_amount = ROUND(agg.sub * 1.0825, 2),
        last_updated_at = CURRENT_TIMESTAMP
    FROM (
        SELECT order_id, COUNT(*) AS cnt, COALESCE(SUM(line_total), 0) AS sub
        FROM order_items
        WHERE order_id IN (
            SELECT order_id FROM new_items
            UNION
            SELECT order_id FROM old_items
        )
        GROUP BY order_id
    ) agg
    WHERE o.order_id = agg.order_id;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```

For derived totals, always recalculate from source rather than doing total = total + new_value. Incremental updates can drift due to edge cases (concurrent updates, rollbacks, bugs). Recalculation is slightly more expensive but guarantees accuracy.
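The batch function references transition tables new_items and old_items, which must be declared when the trigger is created. PostgreSQL allows transition tables only on single-event statement-level triggers, so each event gets its own trigger; a sketch for the UPDATE case:

```sql
-- Statement-level trigger with transition tables (UPDATE case).
-- INSERT and DELETE need their own triggers, declaring only
-- NEW TABLE or OLD TABLE respectively.
CREATE TRIGGER trg_sync_order_totals_batch_upd
    AFTER UPDATE ON order_items
    REFERENCING OLD TABLE AS old_items NEW TABLE AS new_items
    FOR EACH STATEMENT
    EXECUTE FUNCTION sync_order_totals_batch();
```

Since the function body reads both transition tables, the INSERT and DELETE variants would need slightly adjusted function bodies (or a single function that branches on TG_OP and only touches the table that exists for that event).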
Some business rules are too complex for declarative constraints: valid state-machine transitions, aggregate limits that span multiple rows, or conditions that reference other tables. Triggers can enforce these rules at the database level, preventing invalid data regardless of how it's entered.
```sql
-- Business Rule: State Machine Enforcement
-- PostgreSQL syntax

-- Valid ticket state transitions:
--   open -> in_progress -> resolved -> closed
--   open -> cancelled
--   in_progress -> on_hold -> in_progress (bidirectional)
--   resolved -> reopened -> in_progress

CREATE TABLE ticket_states (
    from_state VARCHAR(20),
    to_state VARCHAR(20),
    requires_comment BOOLEAN DEFAULT FALSE,
    PRIMARY KEY (from_state, to_state)
);

INSERT INTO ticket_states (from_state, to_state, requires_comment) VALUES
    ('open', 'in_progress', FALSE),
    ('open', 'cancelled', TRUE),
    ('in_progress', 'on_hold', TRUE),
    ('in_progress', 'resolved', FALSE),
    ('on_hold', 'in_progress', FALSE),
    ('resolved', 'closed', FALSE),
    ('resolved', 'reopened', TRUE),
    ('reopened', 'in_progress', FALSE);

CREATE OR REPLACE FUNCTION enforce_ticket_state_machine()
RETURNS TRIGGER AS $$
DECLARE
    v_transition_valid BOOLEAN;
    v_requires_comment BOOLEAN;
BEGIN
    -- Check if transition is allowed
    SELECT TRUE, requires_comment
    INTO v_transition_valid, v_requires_comment
    FROM ticket_states
    WHERE from_state = OLD.status AND to_state = NEW.status;

    IF NOT FOUND THEN
        RAISE EXCEPTION 'Invalid status transition from "%" to "%"',
            OLD.status, NEW.status
            USING HINT = (
                SELECT STRING_AGG(to_state, ', ')
                FROM ticket_states WHERE from_state = OLD.status
            );
    END IF;

    -- Check if a comment is required for this transition
    IF v_requires_comment AND
       (NEW.status_comment IS NULL OR TRIM(NEW.status_comment) = '') THEN
        RAISE EXCEPTION 'Transition from "%" to "%" requires a comment',
            OLD.status, NEW.status;
    END IF;

    -- Set transition metadata
    NEW.status_changed_at := CURRENT_TIMESTAMP;
    NEW.status_changed_by := CURRENT_USER;

    -- Record in status history
    INSERT INTO ticket_status_history (
        ticket_id, from_status, to_status, changed_by, changed_at, comment
    ) VALUES (
        NEW.ticket_id, OLD.status, NEW.status,
        CURRENT_USER, CURRENT_TIMESTAMP, NEW.status_comment
    );

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_ticket_state_machine
    BEFORE UPDATE OF status ON tickets
    FOR EACH ROW
    WHEN (OLD.status IS DISTINCT FROM NEW.status)
    EXECUTE FUNCTION enforce_ticket_state_machine();
```
```sql
-- Business Rule: Aggregate Constraint
-- PostgreSQL syntax

-- Rule: Total salaries in a department cannot exceed its budget

CREATE OR REPLACE FUNCTION enforce_department_budget()
RETURNS TRIGGER AS $$
DECLARE
    v_current_total DECIMAL(12,2);
    v_budget DECIMAL(12,2);
    v_dept_name VARCHAR(100);
BEGIN
    -- Get department budget
    SELECT budget_amount, dept_name
    INTO v_budget, v_dept_name
    FROM departments
    WHERE dept_id = NEW.dept_id;

    -- Calculate total including this change: everyone else's salary
    -- plus this row's new salary (works for both INSERT and UPDATE)
    SELECT COALESCE(SUM(salary), 0) + NEW.salary
    INTO v_current_total
    FROM employees
    WHERE dept_id = NEW.dept_id
      AND employee_id IS DISTINCT FROM NEW.employee_id;

    IF v_current_total > v_budget THEN
        RAISE EXCEPTION 'Salary would exceed department "%" budget. Budget: $%, Current Total: $%, Requested: $%',
            v_dept_name, v_budget, v_current_total - NEW.salary, NEW.salary
            USING HINT = 'Request a budget increase or reduce the salary';
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_enforce_budget
    BEFORE INSERT OR UPDATE OF salary, dept_id ON employees
    FOR EACH ROW EXECUTE FUNCTION enforce_department_budget();
```

Store business rules (valid transitions, limits, thresholds) in configuration tables rather than hardcoding them in triggers. This allows rules to be modified without redeploying triggers, and makes the rules visible to applications and reports.
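Because the state machine's transitions live in data, changing the workflow is an INSERT, not a code deployment. A hypothetical example of relaxing the rules:

```sql
-- Allow closed tickets to be reopened, with a mandatory comment.
-- (Hypothetical new rule; not part of the transitions seeded above.)
INSERT INTO ticket_states (from_state, to_state, requires_comment)
VALUES ('closed', 'reopened', TRUE);

-- Takes effect on the next UPDATE; no trigger redeployment needed.
```

The same applies to the budget rule: raising a department's budget_amount in the departments table immediately changes what the trigger permits.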
Soft deletes mark records as deleted without actually removing them. This preserves data for recovery from accidental deletion, regulatory retention requirements, and historical reporting. Triggers can transparently convert DELETEs into UPDATEs that set a "deleted" flag.
```sql
-- Soft Delete Pattern
-- PostgreSQL syntax

-- Schema includes soft delete columns
CREATE TABLE customers (
    customer_id SERIAL PRIMARY KEY,
    customer_name VARCHAR(100) NOT NULL,
    email VARCHAR(255),
    created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
    -- Soft delete columns
    is_deleted BOOLEAN DEFAULT FALSE,
    deleted_at TIMESTAMPTZ,
    deleted_by VARCHAR(128),
    deletion_reason TEXT
);

-- Index for efficient exclusion of deleted records
CREATE INDEX idx_customers_active ON customers(customer_id)
    WHERE NOT is_deleted;

-- View that hides deleted records (standard access)
CREATE OR REPLACE VIEW customers_active AS
SELECT customer_id, customer_name, email, created_at
FROM customers
WHERE NOT is_deleted;

-- Trigger to intercept DELETE and convert to soft delete
CREATE OR REPLACE FUNCTION soft_delete_customer()
RETURNS TRIGGER AS $$
BEGIN
    -- Convert DELETE to UPDATE
    UPDATE customers SET
        is_deleted = TRUE,
        deleted_at = CURRENT_TIMESTAMP,
        deleted_by = CURRENT_USER,
        deletion_reason = CURRENT_SETTING('app.delete_reason', TRUE)
    WHERE customer_id = OLD.customer_id;

    -- Return NULL to cancel the actual DELETE
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_soft_delete_customer
    BEFORE DELETE ON customers
    FOR EACH ROW EXECUTE FUNCTION soft_delete_customer();

-- Procedure to permanently delete (bypasses trigger)
CREATE OR REPLACE PROCEDURE purge_deleted_customers(
    p_older_than INTERVAL DEFAULT INTERVAL '90 days'
)
LANGUAGE plpgsql AS $$
BEGIN
    -- Temporarily disable the soft delete trigger
    ALTER TABLE customers DISABLE TRIGGER trg_soft_delete_customer;

    -- Perform actual delete
    DELETE FROM customers
    WHERE is_deleted = TRUE
      AND deleted_at < CURRENT_TIMESTAMP - p_older_than;

    -- Re-enable the trigger
    ALTER TABLE customers ENABLE TRIGGER trg_soft_delete_customer;
END;
$$;

-- Restore function
CREATE OR REPLACE FUNCTION restore_customer(p_customer_id INTEGER)
RETURNS VOID AS $$
BEGIN
    UPDATE customers SET
        is_deleted = FALSE,
        deleted_at = NULL,
        deleted_by = NULL,
        deletion_reason = NULL
    WHERE customer_id = p_customer_id AND is_deleted = TRUE;

    IF NOT FOUND THEN
        RAISE EXCEPTION 'Customer % not found or not deleted', p_customer_id;
    END IF;
END;
$$ LANGUAGE plpgsql;
```

Soft deletes can conflict with unique constraints. If email must be unique and a user soft-deletes then tries to re-register, the old record blocks them. Solutions include: (1) a unique partial index excluding deleted rows, (2) appending a timestamp to soft-deleted values, (3) moving deleted rows to a separate archive table.
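Solution (1) from the note above, a unique partial index, is a one-liner against the customers schema:

```sql
-- Uniqueness enforced only among live rows; a soft-deleted customer's
-- email no longer blocks re-registration.
CREATE UNIQUE INDEX uq_customers_email_active
    ON customers(email)
    WHERE NOT is_deleted;
```

The trade-off: two soft-deleted rows may share an email, so restore_customer can fail with a uniqueness violation if a new live row has since claimed the address.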
Triggers can publish events for external systems to consume—enabling event-driven architectures where the database is the source of truth.
Two common patterns follow: lightweight publish/subscribe with PostgreSQL's LISTEN/NOTIFY, and the transactional outbox for reliable delivery.
```sql
-- Notification Pattern 1: PostgreSQL LISTEN/NOTIFY
-- PostgreSQL syntax

CREATE OR REPLACE FUNCTION notify_order_changes()
RETURNS TRIGGER AS $$
DECLARE
    v_payload JSONB;
BEGIN
    v_payload := JSONB_BUILD_OBJECT(
        'operation', TG_OP,
        'timestamp', CURRENT_TIMESTAMP,
        'order_id', COALESCE(NEW.order_id, OLD.order_id),
        'customer_id', COALESCE(NEW.customer_id, OLD.customer_id)
    );

    IF TG_OP = 'UPDATE' THEN
        v_payload := v_payload || JSONB_BUILD_OBJECT(
            'old_status', OLD.status,
            'new_status', NEW.status
        );
    END IF;

    -- Broadcast to 'order_events' channel
    PERFORM pg_notify('order_events', v_payload::TEXT);

    RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_notify_order_changes
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE FUNCTION notify_order_changes();

-- Application listens with:
-- LISTEN order_events;

-----------------------------------------------

-- Notification Pattern 2: Transactional Outbox
-- (Reliable event delivery - survives crashes)

CREATE TABLE event_outbox (
    event_id UUID PRIMARY KEY DEFAULT GEN_RANDOM_UUID(),
    event_type VARCHAR(100) NOT NULL,
    aggregate_type VARCHAR(100) NOT NULL,
    aggregate_id TEXT NOT NULL,
    payload JSONB NOT NULL,
    created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
    published_at TIMESTAMPTZ,   -- NULL = not yet published
    -- For ordering and deduplication
    sequence_number BIGSERIAL
);

CREATE INDEX idx_outbox_unpublished ON event_outbox(created_at)
    WHERE published_at IS NULL;

CREATE OR REPLACE FUNCTION publish_order_events()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO event_outbox (event_type, aggregate_type, aggregate_id, payload)
    VALUES (
        'order.' || LOWER(TG_OP),
        'order',
        COALESCE(NEW.order_id, OLD.order_id)::TEXT,
        CASE WHEN TG_OP = 'DELETE' THEN TO_JSONB(OLD) ELSE TO_JSONB(NEW) END
    );
    RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_outbox_orders
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE FUNCTION publish_order_events();

-- External worker polls:
-- SELECT * FROM event_outbox WHERE published_at IS NULL
--   ORDER BY sequence_number LIMIT 100;

-- After successful publish:
-- UPDATE event_outbox SET published_at = CURRENT_TIMESTAMP
--   WHERE event_id = <id>;
```

The outbox pattern ensures reliable event delivery by writing events to a table in the same transaction as the data change. An external process reads and forwards events, marking them published. This provides at-least-once delivery semantics and survives application crashes.
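When several worker processes poll the outbox concurrently, FOR UPDATE SKIP LOCKED lets each claim a disjoint batch without blocking on the others. A sketch of one polling cycle (the actual publish to the message broker is elided):

```sql
BEGIN;

-- Claim a batch; rows already locked by other workers are
-- skipped rather than waited on
SELECT event_id, event_type, payload
FROM event_outbox
WHERE published_at IS NULL
ORDER BY sequence_number
LIMIT 100
FOR UPDATE SKIP LOCKED;

-- ... publish the claimed events to the broker here ...

-- Mark the claimed batch as delivered
UPDATE event_outbox
SET published_at = CURRENT_TIMESTAMP
WHERE event_id IN ('...');  -- ids of the events just published

COMMIT;
```

Holding the row locks until COMMIT means a worker crash mid-cycle simply releases the batch for another worker to retry, preserving the at-least-once guarantee.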
Triggers are powerful but easy to misuse. Two anti-patterns cause most production incidents: unbounded recursion, where a trigger's own writes re-fire the same trigger (or fire another trigger that writes back), and performance that was never tested at bulk-operation scale.
```sql
-- Preventing Trigger Recursion
-- PostgreSQL syntax

-- Method 1: Disable trigger temporarily (be careful with concurrency;
-- ALTER TABLE takes a heavy lock and the disablement affects all sessions)
CREATE OR REPLACE FUNCTION cascading_update_safe()
RETURNS TRIGGER AS $$
BEGIN
    -- Disable this trigger during our UPDATE
    ALTER TABLE related_table DISABLE TRIGGER trg_cascade;

    UPDATE related_table SET ... WHERE ...;

    -- Re-enable
    ALTER TABLE related_table ENABLE TRIGGER trg_cascade;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Method 2: Session variable flag (safer)
CREATE OR REPLACE FUNCTION cascading_update_flagged()
RETURNS TRIGGER AS $$
BEGIN
    -- Check if we're already in a cascade
    IF CURRENT_SETTING('app.in_cascade', TRUE) = 'true' THEN
        RETURN NEW;  -- Skip to prevent recursion
    END IF;

    -- Set flag before cascading
    PERFORM SET_CONFIG('app.in_cascade', 'true', TRUE);

    UPDATE related_table SET ... WHERE ...;

    -- Clear flag
    PERFORM SET_CONFIG('app.in_cascade', 'false', TRUE);

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Method 3: pg_trigger_depth() (most reliable)
CREATE OR REPLACE FUNCTION cascading_update_depth()
RETURNS TRIGGER AS $$
BEGIN
    -- Limit recursion depth
    IF PG_TRIGGER_DEPTH() > 3 THEN
        RAISE WARNING 'Trigger recursion depth exceeded';
        RETURN NEW;
    END IF;

    UPDATE related_table SET ... WHERE ...;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

Triggers tested with single-row operations may fail catastrophically with bulk operations. Always test with realistic data volumes: if a bulk UPDATE affects 100,000 rows, a row-level trigger runs 100,000 times. Profile before production deployment.
We've covered the most important real-world trigger applications: comprehensive audit logging, temporal history tables, derived data maintenance, business-rule enforcement, soft deletes, event publishing, and the anti-patterns that undermine them.

Module Complete:

You've now studied database triggers comprehensively, from timing, events, and granularity through production-grade patterns.
Triggers are a powerful tool that, used judiciously, can ensure data consistency regardless of access path. Used carelessly, they create maintainability nightmares. Apply the patterns in this module wisely.
You now have world-class knowledge of database triggers—from conceptual foundations to production patterns. You can design, implement, and maintain trigger-based solutions for audit logging, temporal tables, derived data, business rules, soft deletes, and event publishing.