Entity integrity ensures each row is identifiable. Referential integrity ensures relationships are valid. Domain constraints ensure values are syntactically correct. Key constraints ensure uniqueness. But data correctness demands more than these structural guarantees—it demands semantic correctness.
Semantic constraints (also called business rules, application constraints, or general constraints) capture the complex logical requirements that give data its meaning. An employee's end date must be after their start date. A manager must belong to the same department as their employees. An order total must equal the sum of its line items. A bank account balance must never go negative.
These rules reflect the real-world logic of the domain being modeled. Violating them doesn't break the database structure—it breaks the meaning of the data, rendering it unreliable for business decisions.
This page provides an exhaustive exploration of semantic constraints, examining their nature, implementation mechanisms (from CHECK constraints to triggers to application code), and the critical design decisions involved in placing business logic at the right layer of your system.
By the end of this page, you will understand: (1) The nature and classification of semantic constraints, (2) Database-level enforcement mechanisms (CHECK, triggers, stored procedures), (3) Multi-row and multi-table constraints, (4) Temporal and state transition constraints, (5) Derived data and aggregate constraints, and (6) The architecture of constraint enforcement layers.
Semantic constraints vary widely in complexity and scope. Understanding their classification helps in choosing appropriate enforcement mechanisms.
| Type | Scope | Examples | Typical Enforcement |
|---|---|---|---|
| Single-Row | One row, one table | end_date > start_date; age >= 18 | CHECK constraint |
| Multi-Row, Same Table | Multiple rows, one table | Only one 'primary' address per customer | Partial UNIQUE index; trigger |
| Multi-Table | Multiple tables | Order total = sum of line items | Trigger; stored procedure |
| Aggregate | Computed from many rows | Account balance = sum of transactions | Trigger; materialized view |
| Temporal | Time-based conditions | Contract dates cannot overlap for same customer | Exclusion constraint; trigger |
| State Transition | Valid state changes | Order status: pending→confirmed→shipped (not shipped→pending) | Trigger; application layer |
| Cardinality | Relationship counts | Department must have at least one employee | Trigger; deferred constraint |
The complexity gradient: as we move from single-row constraints to multi-table and aggregate constraints, enforcement complexity increases significantly.
SQL's CHECK constraints are limited to row-level validation and cannot reference other tables or aggregate functions. (The SQL standard defines CREATE ASSERTION for cross-table rules, but no major database implements it.) For complex semantic constraints, we must therefore fall back on procedural mechanisms (triggers, stored procedures) or application code. This is a fundamental practical limitation of SQL's declarative constraint model.
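To make the limitation concrete, here is a sketch of what the database will not accept. The `orders_bad` table and the `status = 'active'` rule are hypothetical; the error text shown is what PostgreSQL reports for a subquery inside a CHECK:

```sql
-- A CHECK constraint can only see the current row. Trying to consult
-- another table is rejected outright:
CREATE TABLE orders_bad (
    order_id    SERIAL PRIMARY KEY,
    customer_id INT NOT NULL,
    CONSTRAINT chk_active_customer CHECK (
        customer_id IN (SELECT customer_id FROM customers WHERE status = 'active')
    )
);
-- ERROR: cannot use subquery in check constraint
```

The same rule must instead be enforced with a trigger or in the application layer, as the sections below demonstrate.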
The simplest semantic constraints involve relationships between attributes within a single row. These can be expressed using CHECK constraints and are fully validated by the database engine.
```sql
-- Date ordering constraints
CREATE TABLE employment (
    employment_id SERIAL PRIMARY KEY,
    employee_id INT NOT NULL REFERENCES employees(employee_id),
    position VARCHAR(100) NOT NULL,
    start_date DATE NOT NULL,
    end_date DATE,
    -- End date must be after start date (if provided)
    CONSTRAINT chk_date_order CHECK (
        end_date IS NULL OR end_date > start_date
    )
);

-- Conditional field requirements
CREATE TABLE products (
    product_id SERIAL PRIMARY KEY,
    product_name VARCHAR(200) NOT NULL,
    product_type VARCHAR(20) NOT NULL,
    -- Physical dimensions (required for physical products)
    weight_kg DECIMAL(10,3),
    length_cm DECIMAL(10,2),
    width_cm DECIMAL(10,2),
    height_cm DECIMAL(10,2),
    -- Digital product fields (required for digital products)
    file_size_mb DECIMAL(10,2),
    download_url VARCHAR(500),
    CONSTRAINT chk_physical_dimensions CHECK (
        product_type != 'physical' OR (
            weight_kg IS NOT NULL AND
            length_cm IS NOT NULL AND
            width_cm IS NOT NULL AND
            height_cm IS NOT NULL
        )
    ),
    CONSTRAINT chk_digital_fields CHECK (
        product_type != 'digital' OR (
            file_size_mb IS NOT NULL AND
            download_url IS NOT NULL
        )
    )
);

-- Percentage allocations must sum correctly
CREATE TABLE project_allocations (
    allocation_id SERIAL PRIMARY KEY,
    employee_id INT NOT NULL,
    project_a_percent INT NOT NULL DEFAULT 0,
    project_b_percent INT NOT NULL DEFAULT 0,
    project_c_percent INT NOT NULL DEFAULT 0,
    unallocated_percent INT NOT NULL DEFAULT 100,
    -- Each percentage must be valid
    CONSTRAINT chk_valid_percentages CHECK (
        project_a_percent >= 0 AND project_a_percent <= 100 AND
        project_b_percent >= 0 AND project_b_percent <= 100 AND
        project_c_percent >= 0 AND project_c_percent <= 100 AND
        unallocated_percent >= 0 AND unallocated_percent <= 100
    ),
    -- Total must equal 100%
    CONSTRAINT chk_total_allocation CHECK (
        project_a_percent + project_b_percent +
        project_c_percent + unallocated_percent = 100
    )
);

-- Business logic: discounted price must be less than regular price
CREATE TABLE pricing (
    pricing_id SERIAL PRIMARY KEY,
    product_id INT NOT NULL,
    regular_price DECIMAL(10,2) NOT NULL,
    sale_price DECIMAL(10,2),
    min_quantity INT DEFAULT 1,
    max_quantity INT,
    CONSTRAINT chk_sale_less_than_regular CHECK (
        sale_price IS NULL OR sale_price < regular_price
    ),
    CONSTRAINT chk_quantity_range CHECK (
        max_quantity IS NULL OR max_quantity >= min_quantity
    )
);
```

Complex boolean expressions in CHECK constraints become maintenance nightmares. Consider breaking complex logic into multiple named constraints, each testing one condition. The constraint names appear in error messages, helping users understand what went wrong.
Some constraints require examining multiple rows in the same table. These cannot be expressed with CHECK constraints and require alternative mechanisms.
```sql
-- SCENARIO 1: Only one 'primary' record per entity
-- Solution: Partial UNIQUE index

CREATE TABLE customer_addresses (
    address_id SERIAL PRIMARY KEY,
    customer_id INT NOT NULL REFERENCES customers(customer_id),
    address_type VARCHAR(20) NOT NULL,  -- 'billing', 'shipping', 'other'
    is_primary BOOLEAN NOT NULL DEFAULT FALSE,
    street_address VARCHAR(255) NOT NULL,
    city VARCHAR(100) NOT NULL,
    postal_code VARCHAR(20)
);

-- Only one primary address per customer (PostgreSQL)
CREATE UNIQUE INDEX uq_customer_primary_address
ON customer_addresses (customer_id)
WHERE is_primary = TRUE;

-- Attempting to create a second primary address fails
INSERT INTO customer_addresses (customer_id, address_type, is_primary, street_address, city)
VALUES (1, 'shipping', TRUE, '123 Main St', 'Springfield');

INSERT INTO customer_addresses (customer_id, address_type, is_primary, street_address, city)
VALUES (1, 'billing', TRUE, '456 Oak Ave', 'Springfield');
-- ERROR: duplicate key value violates unique constraint "uq_customer_primary_address"

-- SCENARIO 2: Non-overlapping date ranges
-- Solution: Exclusion constraint (PostgreSQL)

CREATE EXTENSION IF NOT EXISTS btree_gist;  -- Required for exclusion constraints

CREATE TABLE room_bookings (
    booking_id SERIAL PRIMARY KEY,
    room_id INT NOT NULL REFERENCES rooms(room_id),
    guest_name VARCHAR(100) NOT NULL,
    check_in DATE NOT NULL,
    check_out DATE NOT NULL,
    -- Basic validity
    CONSTRAINT chk_checkout_after_checkin CHECK (check_out > check_in),
    -- Exclusion: bookings for the same room cannot overlap
    CONSTRAINT excl_no_overlapping_bookings EXCLUDE USING gist (
        room_id WITH =,
        daterange(check_in, check_out, '[)') WITH &&
    )
);

-- First booking succeeds
INSERT INTO room_bookings (room_id, guest_name, check_in, check_out)
VALUES (101, 'Alice', '2024-03-01', '2024-03-05');

-- Overlapping booking fails
INSERT INTO room_bookings (room_id, guest_name, check_in, check_out)
VALUES (101, 'Bob', '2024-03-03', '2024-03-07');
-- ERROR: conflicting key value violates exclusion constraint

-- Adjacent booking succeeds (check-in equals previous check-out)
INSERT INTO room_bookings (room_id, guest_name, check_in, check_out)
VALUES (101, 'Carol', '2024-03-05', '2024-03-08');

-- SCENARIO 3: Sequential ordering with no gaps
-- Solution: Trigger (works on all databases)

CREATE TABLE invoice_lines (
    invoice_id INT NOT NULL,
    line_number INT NOT NULL,
    description VARCHAR(200) NOT NULL,
    amount DECIMAL(10,2) NOT NULL,
    PRIMARY KEY (invoice_id, line_number)
);

-- Trigger to enforce sequential line numbers
CREATE OR REPLACE FUNCTION check_line_number_sequence()
RETURNS TRIGGER AS $$
DECLARE
    max_line INT;
BEGIN
    -- Get current max line number for this invoice
    SELECT COALESCE(MAX(line_number), 0) INTO max_line
    FROM invoice_lines
    WHERE invoice_id = NEW.invoice_id;

    -- New line must be exactly max + 1
    IF NEW.line_number != max_line + 1 THEN
        RAISE EXCEPTION 'Line number must be %, got %', max_line + 1, NEW.line_number;
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_invoice_line_sequence
BEFORE INSERT ON invoice_lines
FOR EACH ROW
EXECUTE FUNCTION check_line_number_sequence();
```

The EXCLUDE constraint with a GiST index is a powerful PostgreSQL feature for preventing overlaps (temporal data, spatial data). Other databases require triggers to achieve similar functionality. Always verify feature availability when designing for portability.
Many business rules span multiple tables. An order total must match its line items. A manager must belong to the same department as their team. Stock levels must stay synchronized with orders. These constraints require triggers or application-layer enforcement.
```sql
-- SCENARIO 1: Order total must equal sum of line items

-- Tables
CREATE TABLE orders (
    order_id SERIAL PRIMARY KEY,
    customer_id INT NOT NULL,
    order_date DATE NOT NULL DEFAULT CURRENT_DATE,
    total_amount DECIMAL(12,2) NOT NULL DEFAULT 0,
    status VARCHAR(20) NOT NULL DEFAULT 'pending'
);

CREATE TABLE order_items (
    item_id SERIAL PRIMARY KEY,
    order_id INT NOT NULL REFERENCES orders(order_id),
    product_id INT NOT NULL,
    quantity INT NOT NULL CHECK (quantity > 0),
    unit_price DECIMAL(10,2) NOT NULL CHECK (unit_price >= 0),
    line_total DECIMAL(12,2) GENERATED ALWAYS AS (quantity * unit_price) STORED
);

-- Trigger to maintain order total
CREATE OR REPLACE FUNCTION update_order_total()
RETURNS TRIGGER AS $$
BEGIN
    -- Update the order total based on current items
    IF TG_OP = 'DELETE' THEN
        UPDATE orders
        SET total_amount = (
            SELECT COALESCE(SUM(line_total), 0)
            FROM order_items
            WHERE order_id = OLD.order_id
        )
        WHERE order_id = OLD.order_id;
        RETURN OLD;
    ELSE
        UPDATE orders
        SET total_amount = (
            SELECT COALESCE(SUM(line_total), 0)
            FROM order_items
            WHERE order_id = NEW.order_id
        )
        WHERE order_id = NEW.order_id;
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_order_item_change
AFTER INSERT OR UPDATE OR DELETE ON order_items
FOR EACH ROW
EXECUTE FUNCTION update_order_total();

-- SCENARIO 2: Manager must be in same department as employees
CREATE TABLE departments (
    dept_id SERIAL PRIMARY KEY,
    dept_name VARCHAR(100) NOT NULL
);

CREATE TABLE employees (
    emp_id SERIAL PRIMARY KEY,
    emp_name VARCHAR(100) NOT NULL,
    dept_id INT REFERENCES departments(dept_id),
    manager_id INT REFERENCES employees(emp_id)
);

-- Trigger to enforce manager-employee department match
CREATE OR REPLACE FUNCTION check_manager_department()
RETURNS TRIGGER AS $$
DECLARE
    manager_dept INT;
BEGIN
    -- Skip if no manager assigned
    IF NEW.manager_id IS NULL THEN
        RETURN NEW;
    END IF;

    -- Get manager's department
    SELECT dept_id INTO manager_dept
    FROM employees
    WHERE emp_id = NEW.manager_id;

    -- Check department match
    IF manager_dept IS DISTINCT FROM NEW.dept_id THEN
        RAISE EXCEPTION 'Employee department (%) must match manager department (%)',
            NEW.dept_id, manager_dept;
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_check_manager_dept
BEFORE INSERT OR UPDATE ON employees
FOR EACH ROW
EXECUTE FUNCTION check_manager_department();

-- SCENARIO 3: Inventory levels must stay non-negative
CREATE TABLE inventory (
    product_id INT PRIMARY KEY,
    quantity_on_hand INT NOT NULL DEFAULT 0 CHECK (quantity_on_hand >= 0)
);

CREATE TABLE order_placements (
    placement_id SERIAL PRIMARY KEY,
    product_id INT NOT NULL,
    quantity INT NOT NULL CHECK (quantity > 0),
    placed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE OR REPLACE FUNCTION check_and_reduce_inventory()
RETURNS TRIGGER AS $$
DECLARE
    current_stock INT;
BEGIN
    -- Lock the inventory row to prevent race conditions
    SELECT quantity_on_hand INTO current_stock
    FROM inventory
    WHERE product_id = NEW.product_id
    FOR UPDATE;

    IF current_stock IS NULL THEN
        RAISE EXCEPTION 'Product % not found in inventory', NEW.product_id;
    END IF;

    IF current_stock < NEW.quantity THEN
        RAISE EXCEPTION 'Insufficient inventory: available %, requested %',
            current_stock, NEW.quantity;
    END IF;

    -- Reduce inventory
    UPDATE inventory
    SET quantity_on_hand = quantity_on_hand - NEW.quantity
    WHERE product_id = NEW.product_id;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_order_placement
BEFORE INSERT ON order_placements
FOR EACH ROW
EXECUTE FUNCTION check_and_reduce_inventory();
```

Multi-table constraint triggers must handle concurrent access correctly. Use SELECT ... FOR UPDATE to lock related rows before checking and modifying them. Without proper locking, race conditions can violate constraints despite the trigger appearing to work correctly in isolation.
Many entities move through defined states, and not all transitions are valid. An order can go from 'pending' to 'confirmed' to 'shipped', but shipping a cancelled order should be prevented. State machines formalize these rules.
```sql
-- State machine enforcement via trigger
CREATE TYPE order_status AS ENUM (
    'pending', 'confirmed', 'processing', 'shipped',
    'delivered', 'returned', 'cancelled', 'refund_processed'
);

CREATE TABLE orders (
    order_id SERIAL PRIMARY KEY,
    customer_id INT NOT NULL,
    status order_status NOT NULL DEFAULT 'pending',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Define valid transitions
CREATE TABLE valid_status_transitions (
    from_status order_status NOT NULL,
    to_status order_status NOT NULL,
    PRIMARY KEY (from_status, to_status)
);

INSERT INTO valid_status_transitions (from_status, to_status) VALUES
    ('pending', 'confirmed'),
    ('pending', 'cancelled'),
    ('confirmed', 'processing'),
    ('confirmed', 'cancelled'),
    ('processing', 'shipped'),
    ('processing', 'cancelled'),
    ('shipped', 'delivered'),
    ('shipped', 'returned'),
    ('delivered', 'returned'),
    ('returned', 'refund_processed');

-- Trigger to enforce valid transitions
CREATE OR REPLACE FUNCTION check_status_transition()
RETURNS TRIGGER AS $$
BEGIN
    -- Allow if status not changing
    IF NEW.status = OLD.status THEN
        RETURN NEW;
    END IF;

    -- Check if transition is valid
    IF NOT EXISTS (
        SELECT 1 FROM valid_status_transitions
        WHERE from_status = OLD.status
          AND to_status = NEW.status
    ) THEN
        RAISE EXCEPTION 'Invalid status transition: % -> %', OLD.status, NEW.status;
    END IF;

    -- Update timestamp
    NEW.updated_at = CURRENT_TIMESTAMP;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_order_status_transition
BEFORE UPDATE ON orders
FOR EACH ROW
EXECUTE FUNCTION check_status_transition();

-- Usage examples
UPDATE orders SET status = 'confirmed'  WHERE order_id = 1;  -- OK
UPDATE orders SET status = 'shipped'    WHERE order_id = 1;  -- ERROR: must go through processing
UPDATE orders SET status = 'processing' WHERE order_id = 1;  -- OK
UPDATE orders SET status = 'shipped'    WHERE order_id = 1;  -- OK
UPDATE orders SET status = 'pending'    WHERE order_id = 1;  -- ERROR: can't go backwards

-- State transition audit logging
CREATE TABLE order_status_history (
    history_id SERIAL PRIMARY KEY,
    order_id INT NOT NULL REFERENCES orders(order_id),
    old_status order_status,
    new_status order_status NOT NULL,
    changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    changed_by INT  -- User who made the change
);

CREATE OR REPLACE FUNCTION log_status_change()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO order_status_history (order_id, old_status, new_status)
    VALUES (NEW.order_id, OLD.status, NEW.status);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_log_status_change
AFTER UPDATE ON orders
FOR EACH ROW
WHEN (OLD.status IS DISTINCT FROM NEW.status)
EXECUTE FUNCTION log_status_change();
```

Using a lookup table for valid transitions makes the state machine data-driven and auditable. Adding a new valid transition requires only an INSERT, not a trigger modification. This separation of rules from enforcement logic improves maintainability significantly.
Some constraints involve aggregated data: account balances, running totals, counts, or averages. These require careful design to balance consistency guarantees with performance.
```sql
-- SCENARIO 1: Account balance must equal sum of transactions
-- and must never go negative

CREATE TABLE accounts (
    account_id SERIAL PRIMARY KEY,
    account_name VARCHAR(100) NOT NULL,
    balance DECIMAL(12,2) NOT NULL DEFAULT 0,
    CONSTRAINT chk_non_negative_balance CHECK (balance >= 0)
);

CREATE TABLE transactions (
    txn_id SERIAL PRIMARY KEY,
    account_id INT NOT NULL REFERENCES accounts(account_id),
    txn_type VARCHAR(10) NOT NULL CHECK (txn_type IN ('credit', 'debit')),
    amount DECIMAL(12,2) NOT NULL CHECK (amount > 0),
    description VARCHAR(200),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Trigger to maintain balance and check constraints
CREATE OR REPLACE FUNCTION apply_transaction()
RETURNS TRIGGER AS $$
DECLARE
    delta DECIMAL(12,2);
    new_balance DECIMAL(12,2);
BEGIN
    -- Calculate the change
    IF NEW.txn_type = 'credit' THEN
        delta := NEW.amount;
    ELSE
        delta := -NEW.amount;
    END IF;

    -- Update balance atomically and get new value
    UPDATE accounts
    SET balance = balance + delta
    WHERE account_id = NEW.account_id
    RETURNING balance INTO new_balance;

    -- Check non-negative (redundant with CHECK but provides better error)
    IF new_balance < 0 THEN
        RAISE EXCEPTION 'Insufficient funds: balance would be %', new_balance;
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_apply_transaction
AFTER INSERT ON transactions
FOR EACH ROW
EXECUTE FUNCTION apply_transaction();

-- Prevent transaction deletion or modification (audit trail)
CREATE OR REPLACE FUNCTION prevent_txn_modification()
RETURNS TRIGGER AS $$
BEGIN
    RAISE EXCEPTION 'Transactions cannot be modified or deleted. Create a reversal transaction instead.';
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_prevent_txn_modification
BEFORE UPDATE OR DELETE ON transactions
FOR EACH ROW
EXECUTE FUNCTION prevent_txn_modification();

-- SCENARIO 2: Enforce minimum/maximum relationship counts
-- Department must have at least one employee (cardinality constraint)

-- Note: This is tricky because of chicken-and-egg problems
-- Solution: Allow temporary violation, check on commit (deferred constraint)
-- Or: Use application layer to manage in transaction

-- Alternative: Soft constraint with validation procedure
CREATE OR REPLACE FUNCTION validate_department_has_employees()
RETURNS TABLE(dept_id INT, dept_name VARCHAR, employee_count BIGINT) AS $$
BEGIN
    RETURN QUERY
    SELECT d.dept_id, d.dept_name, COUNT(e.emp_id) AS employee_count
    FROM departments d
    LEFT JOIN employees e ON d.dept_id = e.dept_id
    GROUP BY d.dept_id, d.dept_name
    HAVING COUNT(e.emp_id) = 0;
END;
$$ LANGUAGE plpgsql;

-- Run periodically or on demand
SELECT * FROM validate_department_has_employees();

-- SCENARIO 3: Calculated field must match computation
-- Product's average_rating must equal AVG of reviews

CREATE TABLE products (
    product_id SERIAL PRIMARY KEY,
    product_name VARCHAR(200) NOT NULL,
    average_rating DECIMAL(3,2),
    review_count INT DEFAULT 0
);

CREATE TABLE product_reviews (
    review_id SERIAL PRIMARY KEY,
    product_id INT NOT NULL REFERENCES products(product_id),
    rating INT NOT NULL CHECK (rating >= 1 AND rating <= 5),
    review_text TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE OR REPLACE FUNCTION update_product_rating()
RETURNS TRIGGER AS $$
DECLARE
    target_product_id INT;
BEGIN
    target_product_id := COALESCE(NEW.product_id, OLD.product_id);

    UPDATE products
    SET average_rating = (
            SELECT ROUND(AVG(rating)::numeric, 2)
            FROM product_reviews
            WHERE product_id = target_product_id
        ),
        review_count = (
            SELECT COUNT(*)
            FROM product_reviews
            WHERE product_id = target_product_id
        )
    WHERE product_id = target_product_id;

    RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_update_product_rating
AFTER INSERT OR UPDATE OR DELETE ON product_reviews
FOR EACH ROW
EXECUTE FUNCTION update_product_rating();
```

Triggers that recalculate aggregates on every row change can be expensive for high-volume tables. Consider: (1) caching counts/sums and updating incrementally rather than recalculating, (2) batching updates for high-frequency changes, (3) using materialized views refreshed periodically for non-critical aggregates.
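The deferred-check alternative mentioned for the cardinality constraint in Scenario 2 can be sketched with a PostgreSQL constraint trigger. This is a sketch under assumptions: it reuses the `departments` and `employees` tables from earlier, and `CREATE CONSTRAINT TRIGGER` with `DEFERRABLE INITIALLY DEFERRED` is PostgreSQL-specific:

```sql
-- Deferred enforcement: the check runs at COMMIT time, so a new department
-- and its first employee can be inserted within the same transaction
-- without a chicken-and-egg failure.
CREATE OR REPLACE FUNCTION check_department_not_empty()
RETURNS TRIGGER AS $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM employees WHERE dept_id = NEW.dept_id
    ) THEN
        RAISE EXCEPTION 'Department % must have at least one employee', NEW.dept_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER trg_department_not_empty
AFTER INSERT ON departments
DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW
EXECUTE FUNCTION check_department_not_empty();

-- Both statements succeed together because the check fires only at COMMIT:
BEGIN;
INSERT INTO departments (dept_name) VALUES ('Research');
INSERT INTO employees (emp_name, dept_id)
VALUES ('Dana', currval('departments_dept_id_seq'));
COMMIT;
```

Unlike the periodic validation procedure above, this keeps the rule inside the database while still permitting the temporary mid-transaction violation that cardinality constraints require.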
Where should semantic constraints be enforced? This is a critical architectural decision with implications for data integrity, system complexity, and performance.
| Layer | Mechanisms | Strengths | Weaknesses |
|---|---|---|---|
| Database (Declarative) | CHECK, UNIQUE, FK, EXCLUDE | Always enforced; cannot be bypassed | Limited expressiveness; SQL-only logic |
| Database (Procedural) | Triggers, stored procedures | Enforced for all data access; complex logic | Harder to test; vendor-specific; can impact performance |
| Application (Service) | Business logic code, ORM validations | Full language power; testable; portable | Can be bypassed by direct DB access; duplicated across apps |
| Application (API) | Request validation, schema validation | User-friendly errors; input sanitization | Only covers one entry point; easily bypassed internally |
| UI | Form validation, client-side checks | Immediate feedback; UX improvement | Trivially bypassed; never trust alone |
The defense-in-depth principle:
Robust systems implement constraints at multiple layers: the UI for immediate feedback, the API for input validation, the service layer for business rules, and the database as the final, unbypassable check.
The database layer is the last line of defense. Even if all other layers fail, data integrity is preserved. Application and UI layers provide better user experience and catch errors earlier, but they can be bypassed. Database constraints cannot.
```typescript
// Example: Layered constraint enforcement

// ===== UI LAYER =====
// React form validation (immediate feedback, not trusted)
const validateOrderForm = (order: OrderInput): ValidationError[] => {
  const errors: ValidationError[] = [];

  if (order.quantity <= 0) {
    errors.push({ field: 'quantity', message: 'Quantity must be positive' });
  }
  if (order.shippingDate && order.shippingDate < order.orderDate) {
    errors.push({ field: 'shippingDate', message: 'Shipping date must be after order date' });
  }

  return errors;
};

// ===== API LAYER =====
// Express middleware (input sanitization, not trusted for business logic)
const validateOrderRequest = (req, res, next) => {
  const schema = Joi.object({
    customerId: Joi.number().integer().positive().required(),
    items: Joi.array().min(1).items(
      Joi.object({
        productId: Joi.number().integer().positive().required(),
        quantity: Joi.number().integer().min(1).required()
      })
    ).required()
  });

  const { error } = schema.validate(req.body);
  if (error) return res.status(400).json({ error: error.details });
  next();
};

// ===== APPLICATION LAYER =====
// Service layer (business rules, primary logic location)
class OrderService {
  async createOrder(input: CreateOrderInput): Promise<Order> {
    // Business rule: Customer must be active
    const customer = await this.customerRepo.findById(input.customerId);
    if (customer.status !== 'active') {
      throw new BusinessRuleError('Cannot create order for inactive customer');
    }

    // Business rule: Check inventory for all items
    for (const item of input.items) {
      const available = await this.inventoryService.getAvailable(item.productId);
      if (available < item.quantity) {
        throw new BusinessRuleError(`Insufficient inventory for product ${item.productId}`);
      }
    }

    // Application logic: Calculate totals, apply pricing rules
    const calculatedOrder = await this.calculateOrder(input);

    // Persist (database constraints are final check)
    return await this.orderRepo.create(calculatedOrder);
  }
}

// ===== DATABASE LAYER =====
// Constraints defined in schema (absolute enforcement)
/*
  - PRIMARY KEY on order_id (entity integrity)
  - FOREIGN KEY customer_id → customers (referential integrity)
  - FOREIGN KEY product_id → products (referential integrity)
  - CHECK quantity > 0 (domain constraint)
  - CHECK total_amount >= 0 (domain constraint)
  - CHECK shipping_date IS NULL OR shipping_date >= order_date (semantic)
  - Trigger: update order total when items change (multi-table)
  - Trigger: prevent negative inventory (multi-table)
*/
```

When application and database constraints disagree, the database wins. If the database accepts data, it's in the database—regardless of what the application intended. This is why critical constraints belong in the database: they are the source of truth, not the application.
Semantic constraints give meaning to data, ensuring it represents valid real-world states. Let's consolidate the essential knowledge:
Module conclusion:
With semantic constraints, we complete the five pillars of relational integrity: entity integrity, referential integrity, domain constraints, key constraints, and semantic constraints.
Together, these constraints transform a database from a passive data container into an active guardian of data quality—rejecting invalid data before it can corrupt your systems, your decisions, and your business.
Congratulations! You have mastered the complete spectrum of integrity constraints in the relational model. From the fundamental entity and referential integrity rules through domain and key constraints to complex semantic business rules, you now understand how to design databases that actively prevent data corruption and maintain meaningful, reliable data.