The SQL standard provides a solid foundation of data types, but real-world applications often demand more. How do you store a JSON configuration document? A geographic coordinate? A network IP address? A universally unique identifier?
Major database vendors have extended SQL with specialized types that address modern application needs. These extensions offer significant advantages—native JSON querying, efficient array storage, specialized indexing—but come with portability trade-offs.
This page surveys the most important vendor-specific types, their use cases, and the considerations for choosing between native types and standard SQL alternatives.
By the end of this page, you will understand JSON and JSONB types for document storage, array types for ordered collections, UUID for unique identifiers, geometric and geospatial types, network address types, and binary data types—along with guidance on when to use each.
Native JSON support allows storing semi-structured data within relational tables—bridging the gap between rigid relational schemas and flexible document models.
JSON Type Variants:
| Database | Type | Storage | Indexing | Key Features |
|---|---|---|---|---|
| PostgreSQL | JSON | Text (re-parsed) | Limited | Preserves whitespace, key order |
| PostgreSQL | JSONB | Binary (pre-parsed) | GIN indexes | Faster queries, no duplicate keys |
| MySQL | JSON | Binary | Functional indexes | Partial update support |
| SQL Server | NVARCHAR + functions | Text | Computed columns | JSON functions, not a native type |
| Oracle | JSON (21c+) | Binary | Full text search | Native type since 21c |
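The PostgreSQL operators used throughout this page don't carry over to MySQL, which uses JSON path expressions instead. As a point of contrast, here is a minimal MySQL sketch (the table and column names are hypothetical; assumes MySQL 5.7+ for the `->>` operator and generated columns):

```sql
-- Minimal MySQL JSON sketch (hypothetical table; MySQL 5.7+)
CREATE TABLE products_mysql (
    product_id INT AUTO_INCREMENT PRIMARY KEY,
    attributes JSON
);

INSERT INTO products_mysql (attributes)
VALUES ('{"brand": "TechCorp", "specs": {"ram_gb": 16}}');

-- ->> extracts and unquotes; JSON paths are rooted at $
SELECT
    attributes->>'$.brand' AS brand,
    attributes->'$.specs.ram_gb' AS ram
FROM products_mysql;

-- "Functional index" via a stored generated column
ALTER TABLE products_mysql
    ADD COLUMN brand VARCHAR(100)
        GENERATED ALWAYS AS (attributes->>'$.brand') STORED,
    ADD INDEX idx_brand (brand);
```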
When to Use JSON:

- Truly variable attributes that differ from row to row (product specs, user preferences)
- User-defined or third-party fields whose shape you don't control
- API payloads or event data stored for auditing
- Rapid prototyping before the schema stabilizes
```sql
-- PostgreSQL JSON and JSONB examples

-- JSON vs JSONB
CREATE TABLE json_demo (
    id SERIAL PRIMARY KEY,
    data_json JSON,    -- Stores as text, preserves formatting
    data_jsonb JSONB   -- Stores as binary, optimized for queries
);

INSERT INTO json_demo (data_json, data_jsonb) VALUES (
    '{"name": "Alice", "tags": ["admin", "verified"], "settings": {"theme": "dark"}}',
    '{"name": "Alice", "tags": ["admin", "verified"], "settings": {"theme": "dark"}}'
);

-- JSONB removes duplicate keys and doesn't preserve key order
-- JSON preserves the exact input

-- JSON/JSONB column in a real table
CREATE TABLE products_flexible (
    product_id SERIAL PRIMARY KEY,
    product_name VARCHAR(200) NOT NULL,
    base_price DECIMAL(10, 2) NOT NULL,
    attributes JSONB DEFAULT '{}'  -- Flexible attributes
);

INSERT INTO products_flexible (product_name, base_price, attributes) VALUES
('Laptop Pro', 1299.99, '{
    "brand": "TechCorp",
    "specs": {"cpu": "Intel i7", "ram_gb": 16, "storage_gb": 512},
    "colors": ["silver", "black"],
    "wireless": true
}'),
('Running Shoes', 129.99, '{
    "brand": "SportMax",
    "sizes": [7, 8, 9, 10, 11],
    "material": "mesh",
    "waterproof": false
}');

-- Accessing JSON values
SELECT
    product_name,
    attributes->>'brand' AS brand,                    -- Get as text
    attributes->'specs'->>'cpu' AS cpu,               -- Nested access
    attributes->'specs'->'ram_gb' AS ram,             -- Get as JSON
    (attributes->'specs'->>'ram_gb')::int AS ram_int  -- Cast to int
FROM products_flexible;

-- Filtering on JSON fields
SELECT * FROM products_flexible
WHERE attributes->>'brand' = 'TechCorp';

SELECT * FROM products_flexible
WHERE (attributes->'specs'->>'ram_gb')::int >= 16;

-- Check for key existence
SELECT * FROM products_flexible
WHERE attributes ? 'waterproof';  -- Has key 'waterproof'

SELECT * FROM products_flexible
WHERE attributes @> '{"wireless": true}';  -- Contains this JSON

-- Array operations in JSONB
SELECT * FROM products_flexible
WHERE attributes->'colors' ? 'silver';  -- Array contains 'silver'

-- GIN index for JSONB (enables fast @>, ?, ?& operators)
CREATE INDEX idx_product_attrs ON products_flexible USING GIN (attributes);

-- Expression index on a specific JSON path
CREATE INDEX idx_brand ON products_flexible ((attributes->>'brand'));

-- Update JSON values
UPDATE products_flexible
SET attributes = jsonb_set(attributes, '{specs, ram_gb}', '32')
WHERE product_id = 1;

-- Add a new key
UPDATE products_flexible
SET attributes = attributes || '{"on_sale": true}'
WHERE product_id = 1;

-- Remove a key
UPDATE products_flexible
SET attributes = attributes - 'on_sale'
WHERE product_id = 1;
```

JSON provides flexibility but sacrifices schema enforcement, type safety, and some query optimization. Use JSON for truly variable attributes, not to avoid designing a proper schema. If you find yourself querying the same JSON paths repeatedly, consider promoting those to regular columns.
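If a JSON path does graduate to a first-class query target, it can be promoted without an application rewrite. A minimal sketch, assuming PostgreSQL 12+ for stored generated columns (the column name `brand` is illustrative):

```sql
-- Promote a hot JSON path to a real, indexable column (PostgreSQL 12+)
ALTER TABLE products_flexible
    ADD COLUMN brand TEXT
        GENERATED ALWAYS AS (attributes->>'brand') STORED;

CREATE INDEX idx_brand_generated ON products_flexible (brand);

-- Queries can now target the plain column
SELECT product_name, brand FROM products_flexible WHERE brand = 'TechCorp';
```

The generated column stays in sync automatically; writers keep updating `attributes` while readers get a typed, indexed column.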
Array types allow storing ordered collections of values in a single column—denormalizing one-to-many relationships for specific use cases. PostgreSQL has the most mature array support; MySQL has no native array type, though JSON arrays can fill a similar role.
When Arrays Shine:

- Tags, labels, and categories that are read together with the row
- Small, ordered collections with no per-element metadata
- Read-heavy workloads where avoiding a join matters

When Arrays Are Wrong:

- Elements need their own attributes (created_at, added_by)
- You need referential integrity to another table
- Elements are frequently updated or queried individually
```sql
-- PostgreSQL array examples

-- Array column declaration
CREATE TABLE articles (
    article_id SERIAL PRIMARY KEY,
    title VARCHAR(200) NOT NULL,
    content TEXT NOT NULL,
    tags TEXT[] NOT NULL DEFAULT '{}',  -- Array of text
    ratings INTEGER[] DEFAULT NULL      -- Array of integers
);

-- Array literals for insertion
INSERT INTO articles (title, content, tags, ratings) VALUES
('PostgreSQL Tips', 'Content here...',
 ARRAY['database', 'postgresql', 'sql'],
 ARRAY[5, 4, 5, 3]),
('Web Development', 'Content here...',
 '{"javascript", "web", "frontend"}',  -- Alternative syntax
 ARRAY[4, 4, 5]);

-- Array access (1-indexed!)
SELECT
    title,
    tags[1] AS first_tag,               -- First element
    tags[2:3] AS slice,                 -- Elements 2 through 3
    array_length(tags, 1) AS tag_count  -- Length
FROM articles;

-- Array operators
SELECT * FROM articles WHERE 'postgresql' = ANY(tags);     -- Contains element
SELECT * FROM articles WHERE tags @> ARRAY['database'];    -- Contains all
SELECT * FROM articles WHERE tags && ARRAY['web', 'api'];  -- Overlaps (any match)
SELECT * FROM articles
WHERE tags <@ ARRAY['database', 'sql', 'postgresql', 'tips'];  -- Contained by

-- Array functions
SELECT
    title,
    array_to_string(tags, ', ') AS tags_string,  -- Join with delimiter
    unnest(tags) AS individual_tags              -- Expand to rows
FROM articles;

-- Unnest to work with individual elements
SELECT a.title, t.tag
FROM articles a, unnest(a.tags) AS t(tag)
WHERE t.tag LIKE 'post%';

-- Array aggregation (collect values into an array)
SELECT
    category,
    array_agg(product_name ORDER BY product_name) AS products
FROM products
GROUP BY category;

-- Array modification
UPDATE articles
SET tags = array_append(tags, 'featured')
WHERE article_id = 1;

UPDATE articles
SET tags = array_remove(tags, 'database')
WHERE article_id = 1;

UPDATE articles
SET tags = array_cat(tags, ARRAY['new1', 'new2'])  -- Concatenate
WHERE article_id = 1;

-- GIN index for array containment queries
CREATE INDEX idx_article_tags ON articles USING GIN (tags);
-- Now the @>, &&, and <@ operators can use the index

-- Multi-dimensional arrays (less common)
CREATE TABLE matrix_data (
    id SERIAL PRIMARY KEY,
    matrix INTEGER[][]  -- 2D array
);

INSERT INTO matrix_data (matrix)
VALUES (ARRAY[[1, 2, 3], [4, 5, 6]]);  -- 2x3 matrix

SELECT matrix[1][2] FROM matrix_data;  -- Element at row 1, column 2
```

For a tagging system, an array column (`tags TEXT[]`) is simpler and faster for read-heavy workloads. A junction table (`article_tags`) is better when tags have metadata (created_at, added_by), when you need tag statistics across articles, or when tags are managed entities themselves.
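For comparison, a minimal sketch of that junction-table design (the table names `tags` and `article_tags` are illustrative):

```sql
-- Junction-table alternative: tags become managed entities with metadata
CREATE TABLE tags (
    tag_id SERIAL PRIMARY KEY,
    tag_name VARCHAR(50) NOT NULL UNIQUE
);

CREATE TABLE article_tags (
    article_id INTEGER NOT NULL REFERENCES articles(article_id),
    tag_id INTEGER NOT NULL REFERENCES tags(tag_id),
    added_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),  -- Per-tag metadata
    PRIMARY KEY (article_id, tag_id)
);

-- Cross-article tag statistics fall out of a simple join
SELECT t.tag_name, COUNT(*) AS article_count
FROM article_tags jt
JOIN tags t ON t.tag_id = jt.tag_id
GROUP BY t.tag_name
ORDER BY article_count DESC;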
UUID (Universally Unique Identifier) is a 128-bit identifier designed to be unique across space and time without coordination. UUIDs are increasingly popular for distributed systems, APIs, and modern applications.
UUID Format: 550e8400-e29b-41d4-a716-446655440000 (32 hexadecimal digits in five groups; the first digit of the third group encodes the version, here 4)
UUID Versions:

- v1: timestamp + MAC address; sortable but leaks hardware identity
- v4: random; the most common choice
- v5: name-based (SHA-1); deterministic for the same namespace and name
- v7: timestamp-prefixed random; sortable, an emerging default for databases
| Aspect | UUID | AUTO_INCREMENT INT |
|---|---|---|
| Uniqueness scope | Global (across databases) | Local (single table) |
| Storage | 16 bytes | 4-8 bytes |
| Index size | Larger | Smaller |
| Generation | Application or DB, no coordination | Database sequence |
| Predictability | Unpredictable (v4) | Sequential, predictable |
| URL exposure | Safe (no information leak) | Reveals count/order |
| Distributed systems | Excellent | Problematic |
| B-tree locality | Poor (random) | Excellent (sequential) |
```sql
-- PostgreSQL UUID examples

-- Enable the UUID extension (required for the uuid_generate_* functions)
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- Or use gen_random_uuid(): built in since PostgreSQL 13, from pgcrypto before that

-- UUID column
CREATE TABLE users_uuid (
    user_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    username VARCHAR(50) NOT NULL,
    email VARCHAR(255) NOT NULL UNIQUE
);

-- Insertion with an auto-generated UUID
INSERT INTO users_uuid (username, email)
VALUES ('alice', 'alice@example.com');

-- Insertion with an explicit UUID
INSERT INTO users_uuid (user_id, username, email)
VALUES ('550e8400-e29b-41d4-a716-446655440000', 'bob', 'bob@example.com');

-- UUID generation functions
SELECT
    gen_random_uuid() AS random_uuid,  -- v4, recommended
    uuid_generate_v4() AS v4_uuid,     -- v4, from extension
    uuid_generate_v1() AS v1_uuid,     -- v1 (timestamp + MAC)
    uuid_generate_v5(uuid_ns_url(), 'example.com') AS v5_uuid;  -- Deterministic

-- UUID in foreign keys
CREATE TABLE orders_uuid (
    order_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    customer_id UUID NOT NULL REFERENCES users_uuid(user_id),
    order_date DATE NOT NULL DEFAULT CURRENT_DATE
);

-- Querying by UUID
SELECT * FROM users_uuid
WHERE user_id = '550e8400-e29b-41d4-a716-446655440000';

-- UUID comparison (lexicographic on bytes)
SELECT * FROM users_uuid ORDER BY user_id;  -- Not temporal order for v4!

-- MySQL UUID
-- MySQL has UUID functions but stores as CHAR(36) or BINARY(16)
-- CREATE TABLE mysql_uuid (
--     id BINARY(16) PRIMARY KEY DEFAULT (UUID_TO_BIN(UUID(), 1)),
--     ...
-- );
-- UUID_TO_BIN with swap_flag=1 improves index locality

-- SQL Server
-- Uses the UNIQUEIDENTIFIER type (16 bytes)
-- NEWID() generates random, NEWSEQUENTIALID() generates sequential

-- Performance consideration: UUID v4 causes random index insertions
-- Solutions:
-- 1. Use UUID v7 (timestamp-prefixed, sortable)
-- 2. Use ULID (Universally Unique Lexicographically Sortable Identifier)
-- 3. Accept the overhead for the benefits (usually acceptable)
```

UUID v7 (standardized in RFC 9562) combines a timestamp prefix with random bits, providing global uniqueness like v4 but with lexicographic sortability like sequential integers. This reduces B-tree fragmentation while maintaining distributed generation capabilities. Consider adopting v7 for new systems.
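As a hedged sketch of that advice, assuming PostgreSQL 18+, which ships a native uuidv7() function (earlier versions need an extension such as pg_uuidv7, or application-side generation):

```sql
-- v7 primary key sketch (assumes PostgreSQL 18+ for uuidv7())
CREATE TABLE events (
    event_id UUID PRIMARY KEY DEFAULT uuidv7(),
    payload JSONB
);

-- v7 values embed a timestamp prefix, so byte order tracks creation order
SELECT event_id FROM events ORDER BY event_id;
```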
PostgreSQL provides specialized types for network addresses that offer validation, efficient storage, and specialized operators. These are invaluable for applications dealing with IP addresses, CIDR ranges, and MAC addresses.
PostgreSQL Network Types:
| Type | Description | Example | Storage |
|---|---|---|---|
| INET | IP address (v4 or v6) | 192.168.1.1/24 | 7-19 bytes |
| CIDR | Network address (strict) | 192.168.1.0/24 | 7-19 bytes |
| MACADDR | MAC address | 08:00:2b:01:02:03 | 6 bytes |
| MACADDR8 | EUI-64 MAC | 08:00:2b:ff:fe:01:02:03 | 8 bytes |
```sql
-- PostgreSQL network type examples

CREATE TABLE network_logs (
    log_id SERIAL PRIMARY KEY,
    client_ip INET NOT NULL,
    server_ip INET NOT NULL,
    client_mac MACADDR,
    logged_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE TABLE ip_allowlist (
    allowlist_id SERIAL PRIMARY KEY,
    network CIDR NOT NULL,  -- Network range
    description VARCHAR(200)
);

-- Inserting network data
INSERT INTO network_logs (client_ip, server_ip, client_mac)
VALUES
    ('192.168.1.100', '10.0.0.5', '08:00:2b:01:02:03'),
    ('2001:db8::1', '2001:db8::2', NULL),  -- IPv6
    ('192.168.1.150/32', '10.0.0.5', '08:00:2b:01:02:04');

INSERT INTO ip_allowlist (network, description)
VALUES
    ('192.168.1.0/24', 'Office network'),
    ('10.0.0.0/8', 'Internal networks'),
    ('2001:db8::/32', 'IPv6 documentation range');

-- Containment operators
SELECT * FROM network_logs
WHERE client_ip << '192.168.0.0/16';  -- Contained in network

SELECT * FROM network_logs
WHERE client_ip <<= '192.168.1.0/24';  -- Contained in or equal

-- Check if an IP is in the allowlist
SELECT l.*, a.description
FROM network_logs l
JOIN ip_allowlist a ON l.client_ip << a.network;

-- Network functions
SELECT
    client_ip,
    host(client_ip) AS ip_only,          -- Strip netmask
    masklen(client_ip) AS mask_length,   -- Get mask length
    network(client_ip) AS network_addr,  -- Network address
    broadcast(client_ip) AS broadcast_addr
FROM network_logs;

-- IP address arithmetic
SELECT
    '192.168.1.0/24'::CIDR + 5 AS fifth_ip,                     -- Add offset
    '192.168.1.100'::INET - '192.168.1.1'::INET AS difference,  -- Difference
    set_masklen('192.168.1.100'::INET, 24) AS with_mask;

-- Sorting IPs (proper numeric order, not lexicographic)
SELECT client_ip FROM network_logs ORDER BY client_ip;
-- Returns: 10.0.0.5, 192.168.1.100, 192.168.1.150 (correct order)
-- VARCHAR would give: 10..., 192.168.1.1..., 192.168.1.5... (wrong!)

-- GiST index for network containment
CREATE INDEX idx_allowlist_network ON ip_allowlist USING GIST (network inet_ops);

-- MAC address operations
SELECT
    client_mac,
    trunc(client_mac) AS manufacturer_id  -- First 3 bytes (OUI)
FROM network_logs
WHERE client_mac IS NOT NULL;

-- Validation is built into the type
INSERT INTO network_logs (client_ip, server_ip)
VALUES ('999.999.999.999', '10.0.0.1');  -- ERROR: invalid input syntax
```

Storing IP addresses as VARCHAR seems convenient but loses all benefits: '10.0.0.9' sorts after '10.0.0.10' lexicographically, containment queries require complex string parsing, and invalid IPs aren't rejected. Use native network types for validation, proper sorting, and efficient operators.
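One pattern these operators enable is longest-prefix matching, familiar from routing tables: when several allowlist networks contain an address, pick the most specific one. A minimal sketch against the tables defined above:

```sql
-- Most specific allowlist entry containing a given address
SELECT a.network, a.description
FROM ip_allowlist a
WHERE '192.168.1.100'::INET << a.network
ORDER BY masklen(a.network) DESC  -- Longer mask = more specific
LIMIT 1;
```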
Databases increasingly support geometric and geospatial data for location-based applications. PostgreSQL has built-in geometric types; for full GIS capabilities, PostGIS extension is the industry standard.
PostgreSQL Built-in Geometric Types:
| Type | Description | Example |
|---|---|---|
| POINT | (x, y) coordinate | (3.5, 4.2) |
| LINE | Infinite line: Ax + By + C = 0 | {1, -1, 0} |
| LSEG | Line segment | [(0,0), (1,1)] |
| BOX | Rectangle | ((0,0), (1,1)) |
| PATH | Open/closed path | ((0,0), (1,1), (2,0)) |
| POLYGON | Closed polygon | ((0,0), (1,1), (1,0)) |
| CIRCLE | Circle: center + radius | <(0,0), 5> |
PostGIS for Real Geospatial:

- GEOGRAPHY type computes on a spheroid: distances come back in meters, not degrees
- A large library of ST_* functions for distance, containment, buffering, and projection
- SRID support for coordinate reference systems (4326 = WGS84, used by GPS)
- Spatial GiST indexes make radius and bounding-box queries fast
```sql
-- PostgreSQL built-in geometric types

CREATE TABLE locations (
    location_id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    position POINT NOT NULL  -- (longitude, latitude)
);

CREATE TABLE zones (
    zone_id SERIAL PRIMARY KEY,
    zone_name VARCHAR(100) NOT NULL,
    boundary POLYGON NOT NULL
);

-- Insert geometric data
INSERT INTO locations (name, position) VALUES
    ('Times Square', POINT(-73.9857, 40.7580)),
    ('Central Park', POINT(-73.9654, 40.7829)),
    ('Empire State', POINT(-73.9857, 40.7484));

INSERT INTO zones (zone_name, boundary) VALUES
    ('Service Area',
     POLYGON '((-74.0, 40.7), (-74.0, 40.8), (-73.9, 40.8), (-73.9, 40.7), (-74.0, 40.7))');

-- Basic geometric operations
SELECT
    name,
    position,
    position[0] AS longitude,  -- X coordinate
    position[1] AS latitude    -- Y coordinate
FROM locations;

-- Distance calculation (Euclidean, not Earth distance!)
SELECT
    l1.name AS from_loc,
    l2.name AS to_loc,
    l1.position <-> l2.position AS distance  -- <-> is the distance operator
FROM locations l1, locations l2
WHERE l1.location_id < l2.location_id;

-- Containment: point in polygon
SELECT *
FROM locations l, zones z
WHERE l.position <@ z.boundary;  -- Point contained in polygon

-- GiST indexes for geometric operations
CREATE INDEX idx_locations_pos ON locations USING GIST (position);
CREATE INDEX idx_zones_boundary ON zones USING GIST (boundary);

---------------------------------------------
-- PostGIS for real geospatial applications
---------------------------------------------

-- Enable the PostGIS extension
-- CREATE EXTENSION postgis;

-- GEOGRAPHY type (spherical, uses lat/lon)
CREATE TABLE stores (
    store_id SERIAL PRIMARY KEY,
    store_name VARCHAR(100) NOT NULL,
    location GEOGRAPHY(POINT, 4326)  -- SRID 4326 = WGS84 (GPS)
);

-- Insert as geography
INSERT INTO stores (store_name, location) VALUES
    ('NYC Store', ST_MakePoint(-73.9857, 40.7580)::geography),
    ('LA Store', ST_MakePoint(-118.2437, 34.0522)::geography);

-- Distance in meters (real Earth distance!)
SELECT
    a.store_name,
    b.store_name,
    ST_Distance(a.location, b.location) AS distance_meters,
    ST_Distance(a.location, b.location) / 1000 AS distance_km
FROM stores a, stores b
WHERE a.store_id < b.store_id;

-- Find stores within 10 km of a point
SELECT store_name
FROM stores
WHERE ST_DWithin(
    location,
    ST_MakePoint(-73.9857, 40.7580)::geography,
    10000  -- 10 km in meters
);

-- Spatial index for geography
CREATE INDEX idx_stores_location ON stores USING GIST (location);
```

PostgreSQL's built-in geometric types use Euclidean (flat) geometry—fine for floor plans, CAD, or game maps. For real-world latitude/longitude with proper Earth curvature, distances in meters, and map projections, install PostGIS. It's the gold standard for open-source GIS.
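Even the built-in geometric types support efficient nearest-neighbor search: the `<->` distance operator can drive an index-assisted ORDER BY. A minimal sketch using the `locations` table and GiST index from above:

```sql
-- Three locations nearest to a query point (K-nearest-neighbor)
-- The GiST index on position lets PostgreSQL avoid a full table sort
SELECT name
FROM locations
ORDER BY position <-> POINT(-73.98, 40.75)
LIMIT 3;
```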
Binary data types store raw bytes—files, images, encrypted data, serialized objects. Every major database has binary large object (BLOB) support, though naming varies.
Binary Types by Database:
| Database | Type | Maximum Size | Notes |
|---|---|---|---|
| PostgreSQL | BYTEA | 1 GB | Hex or escape encoding |
| MySQL | BLOB | 64 KB | Also TINYBLOB, MEDIUMBLOB, LONGBLOB |
| MySQL | BINARY(n) / VARBINARY(n) | 255 / 65,535 bytes | Fixed/variable length |
| SQL Server | VARBINARY(MAX) | 2 GB | Replaces deprecated IMAGE |
| Oracle | BLOB | 4 GB | Large object |
When to Store Binary in Database:

- Small files (thumbnails, icons) that benefit from transactional consistency
- Data that must be backed up, replicated, and secured with the rows it belongs to
- Encrypted blobs and content hashes

When to Use File Storage Instead:

- Large files (videos, high-resolution images, archives)
- Content served directly to users, where a CDN can cache it
- Files that would bloat backups and replication traffic
```sql
-- PostgreSQL BYTEA examples

CREATE TABLE attachments (
    attachment_id SERIAL PRIMARY KEY,
    file_name VARCHAR(255) NOT NULL,
    mime_type VARCHAR(100) NOT NULL,
    file_size INTEGER NOT NULL,
    file_data BYTEA NOT NULL,
    uploaded_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- Insert binary data (hex format)
INSERT INTO attachments (file_name, mime_type, file_size, file_data)
VALUES ('icon.png', 'image/png', 100, '\x89504e470d0a1a0a...');

-- Insert from application code (parameterized)
-- cursor.execute("INSERT INTO attachments ... VALUES (%s, %s, %s, %s)",
--                [filename, mimetype, len(data), psycopg2.Binary(data)])

-- Retrieve binary data
SELECT file_name, mime_type, file_size, file_data
FROM attachments
WHERE attachment_id = 1;

-- Get metadata without loading the binary payload
SELECT
    file_name, mime_type, file_size, uploaded_at,
    length(file_data) AS actual_size  -- Verify size
FROM attachments;

-- Hash of binary content (for deduplication)
CREATE TABLE unique_files (
    file_hash BYTEA PRIMARY KEY,  -- SHA-256 hash
    file_data BYTEA NOT NULL,
    reference_count INTEGER DEFAULT 1
);

INSERT INTO unique_files (file_hash, file_data)
VALUES (sha256('file content'), 'file content');

-- Encryption pattern: store encrypted data, manage the key separately
CREATE TABLE encrypted_data (
    data_id SERIAL PRIMARY KEY,
    encrypted_value BYTEA NOT NULL,
    iv BYTEA NOT NULL,           -- Initialization vector
    key_id VARCHAR(50) NOT NULL  -- Reference to key management
);

-- PostgreSQL pgcrypto extension for encryption
-- CREATE EXTENSION pgcrypto;
-- INSERT INTO encrypted_data (encrypted_value, iv, key_id)
-- VALUES (
--     pgp_sym_encrypt('sensitive data', 'encryption_key'),
--     gen_random_bytes(16),
--     'key_2024_01'
-- );

-- MySQL BLOB types
-- TINYBLOB: 255 bytes
-- BLOB: 64 KB
-- MEDIUMBLOB: 16 MB
-- LONGBLOB: 4 GB

-- SQL Server
-- VARBINARY(n): up to 8000 bytes
-- VARBINARY(MAX): up to 2 GB
-- With FILESTREAM: store in the filesystem with DB transaction support

-- File reference pattern (recommended for large files)
CREATE TABLE documents (
    document_id SERIAL PRIMARY KEY,
    file_name VARCHAR(255) NOT NULL,
    storage_path VARCHAR(500) NOT NULL,  -- S3 key or filesystem path
    storage_bucket VARCHAR(100),         -- S3 bucket
    file_hash BYTEA,                     -- For integrity verification
    mime_type VARCHAR(100) NOT NULL,
    file_size BIGINT NOT NULL,
    uploaded_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
-- Store metadata in the DB, the file itself in object storage
```

Storing large binary files in the database dramatically increases database size, backup time, and replication traffic. For files over a few hundred KB, consider storing only metadata in the database with the actual file in object storage (S3, GCS, Azure Blob). Store a reference path and hash for integrity.
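The deduplication table above pairs naturally with an upsert: insert the content if its hash is new, otherwise bump the reference count. A minimal sketch (assumes PostgreSQL 9.5+ for ON CONFLICT):

```sql
-- Content-addressed deduplication via upsert
INSERT INTO unique_files (file_hash, file_data)
VALUES (sha256('file content'), 'file content')
ON CONFLICT (file_hash)
DO UPDATE SET reference_count = unique_files.reference_count + 1;
```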
Database vendors provide additional specialized types for specific domains. Here's a survey of notable types across major databases:
PostgreSQL Range Types represent ranges of values—date ranges, number ranges—with built-in operators for overlap, containment, and adjacency.
```sql
-- Range type examples (PostgreSQL)
CREATE TABLE reservations (
    reservation_id SERIAL PRIMARY KEY,
    room_id INTEGER NOT NULL,
    guest_name VARCHAR(100),
    stay_period DATERANGE NOT NULL  -- [check_in, check_out)
);

INSERT INTO reservations (room_id, guest_name, stay_period)
VALUES
    (101, 'Alice', '[2024-03-15, 2024-03-20)'),
    (101, 'Bob', '[2024-03-22, 2024-03-25)');

-- Range operators
SELECT * FROM reservations
WHERE stay_period && '[2024-03-18, 2024-03-23)';  -- Overlaps

-- Prevent overlapping reservations for the same room
-- (requires the btree_gist extension for the integer = comparison)
ALTER TABLE reservations ADD CONSTRAINT no_overlap
    EXCLUDE USING GIST (room_id WITH =, stay_period WITH &&);

-- Other range types: INT4RANGE, INT8RANGE, NUMRANGE, TSRANGE, TSTZRANGE
```

Vendor-specific types offer powerful capabilities but require careful consideration of portability, maintenance, and appropriate use cases. Here are the guiding principles:
| Use Case | Recommended Type | Alternative |
|---|---|---|
| Semi-structured data | JSONB (PostgreSQL) | TEXT with JSON functions |
| Tags/labels | TEXT[] array or JSONB | Junction table |
| Global unique ID | UUID (v4 or v7) | BIGINT with careful management |
| IP addresses | INET/CIDR | VARCHAR (loses operators) |
| Location data | PostGIS GEOGRAPHY | Lat/lon as DOUBLE PRECISION |
| Small files | BYTEA/BLOB | Object storage + reference |
| Status codes | ENUM type | VARCHAR with CHECK constraint |
| Date ranges | DATERANGE (PostgreSQL) | Two DATE columns |
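The status-code row deserves a quick illustration, since neither option appears elsewhere on this page. A minimal sketch in PostgreSQL syntax (the type and table names are illustrative):

```sql
-- Option 1: native ENUM type (PostgreSQL)
CREATE TYPE order_status AS ENUM ('pending', 'paid', 'shipped', 'delivered');

CREATE TABLE orders_enum_demo (
    order_id SERIAL PRIMARY KEY,
    status order_status NOT NULL DEFAULT 'pending'
);

-- Option 2: portable alternative with a CHECK constraint
CREATE TABLE orders_check_demo (
    order_id SERIAL PRIMARY KEY,
    status VARCHAR(20) NOT NULL DEFAULT 'pending'
        CHECK (status IN ('pending', 'paid', 'shipped', 'delivered'))
);
```

ENUM gives compact storage and type safety but makes adding a value a DDL operation; the CHECK version ports to any SQL database.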
You have now completed comprehensive coverage of SQL data types—from fundamental numeric and character types through temporal handling, boolean logic, and vendor-specific extensions. This knowledge forms the foundation for effective schema design. Next module: Constraints in DDL—ensuring data integrity through declarative rules.