While single-row DML operations are conceptually simple, real-world databases often need to process millions of rows. Loading data warehouses, migrating legacy systems, synchronizing distributed databases, generating reports from transactional data—all require bulk operations that complete in minutes, not hours.
The difference between a skilled database developer and a novice often shows in how they handle large-scale data movement. Naive approaches—executing millions of individual INSERT statements—can take days for operations that optimized bulk methods complete in minutes. Understanding bulk operation techniques is essential for any database professional working with substantial data volumes.
By the end of this page, you will master multi-row INSERT techniques, understand COPY and BCP utilities for maximum throughput, implement batched DML for large updates and deletes, configure minimal logging for bulk operations, and apply strategies for handling billions of rows efficiently.
Every individual DML statement incurs overhead: network round-trip, query parsing, execution plan generation, transaction logging, and lock management. At small scale, this overhead is negligible. At large scale, it dominates execution time.
The Performance Gap:
| Approach | Typical Time | Network Round-trips | Parse Operations | Commits |
|---|---|---|---|---|
| 1M individual INSERTs | ~60 minutes | 1,000,000 | 1,000,000 | 1,000,000 |
| 10K batches × 100 rows | ~5 minutes | 10,000 | 10,000 | 10,000 or 1 |
| 10 batches × 100K rows | ~2 minutes | 10 | 10 | 1 |
| COPY/BCP utility | ~30 seconds | 1 (stream) | 0 | 1 |
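The gap in the table can be reproduced with a back-of-the-envelope model. The sketch below is illustrative only: every timing constant is an assumption, not a measurement, but it shows why fixed per-statement costs dominate at small batch sizes.

```python
import math

# All timing constants are assumptions for illustration, not measurements.
ROUND_TRIP_MS = 0.5   # network round-trip per statement
PARSE_MS = 0.2        # parse + plan per statement
ROW_WORK_MS = 0.01    # actual insert work per row

def total_seconds(rows: int, rows_per_statement: int) -> float:
    """Model total load time: fixed cost per statement plus work per row."""
    statements = math.ceil(rows / rows_per_statement)
    overhead = statements * (ROUND_TRIP_MS + PARSE_MS)
    work = rows * ROW_WORK_MS
    return (overhead + work) / 1000

for batch in (1, 100, 10_000):
    print(f"{batch:>6} rows/statement: {total_seconds(1_000_000, batch):9.1f} s")
```

With these assumed constants, 1M single-row statements cost minutes of pure overhead, while batching amortizes it until per-row work dominates.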
Sources of Overhead:
- Network round-trips between application and server
- Query parsing and execution plan generation for every statement
- Transaction log writes for each commit
- Lock acquisition and release per statement
Bulk operations typically run 100-1000 times faster than equivalent row-by-row operations. If your data load is taking hours, bulk optimization is almost always the solution.
The simplest bulk optimization is combining multiple rows into a single INSERT statement. This reduces network round-trips and parse operations proportionally to the batch size.
Multi-Row VALUES Syntax:
```sql
-- Single statement inserting multiple rows
INSERT INTO products (product_id, product_name, unit_price, category_id)
VALUES
    (1001, 'Wireless Mouse', 29.99, 1),
    (1002, 'Mechanical Keyboard', 149.99, 1),
    (1003, 'USB-C Hub', 49.99, 1),
    (1004, 'Monitor Stand', 79.99, 2),
    (1005, 'Desk Lamp', 34.99, 2),
    -- ... potentially hundreds more rows
    (1100, 'Cable Organizer', 12.99, 2);
-- One parse, one execution plan, one commit
-- 100 rows inserted with the overhead of 1 statement

-- Practical batch size: 100-1000 rows per INSERT
-- Limited by:
-- 1. Maximum statement size (varies by database, often 1-64 MB)
-- 2. Maximum parameters (e.g., SQL Server: 2100)
-- 3. Memory for parsing large statements
```

INSERT ... SELECT for Bulk Data Movement:
When source data is already in the database, INSERT ... SELECT avoids all network overhead:
```sql
-- Copy data between tables (no application round-trip)
INSERT INTO orders_archive (order_id, customer_id, order_date, total_amount, archived_at)
SELECT order_id, customer_id, order_date, total_amount, CURRENT_TIMESTAMP
FROM orders
WHERE order_date < DATE '2023-01-01';
-- Moves millions of rows in seconds

-- Create summary table from detailed data
INSERT INTO monthly_sales (year_month, category, total_revenue, order_count)
SELECT TO_CHAR(order_date, 'YYYY-MM'),
       p.category,
       SUM(oi.quantity * oi.unit_price),
       COUNT(DISTINCT o.order_id)
FROM orders o
JOIN order_items oi ON o.order_id = oi.order_id
JOIN products p ON oi.product_id = p.product_id
GROUP BY TO_CHAR(order_date, 'YYYY-MM'), p.category;
-- Aggregates and inserts in one operation

-- Clone table structure and data
CREATE TABLE products_backup AS
SELECT * FROM products;
-- Or: INSERT INTO products_backup SELECT * FROM products;
```

Application-Level Batching:

```python
# Application-level batching (Python example)
import psycopg2

def bulk_insert_products(products, batch_size=1000):
    """Insert products in batches for optimal performance."""
    conn = psycopg2.connect(database="mydb")
    cursor = conn.cursor()

    # Process in batches
    for i in range(0, len(products), batch_size):
        batch = products[i:i + batch_size]

        # Build multi-row INSERT (4 params per row)
        values_list = []
        params = []
        for product in batch:
            values_list.append("(%s, %s, %s, %s)")
            params.extend([
                product['id'], product['name'],
                product['price'], product['category_id']
            ])

        sql = f"""
            INSERT INTO products (product_id, product_name, unit_price, category_id)
            VALUES {', '.join(values_list)}
        """
        cursor.execute(sql, params)

    conn.commit()
    cursor.close()
    conn.close()

# Usage: Insert 100,000 products in ~100 batches
products = [{"id": i, "name": f"Product {i}", "price": 9.99, "category_id": 1}
            for i in range(100000)]
bulk_insert_products(products)  # Completes in seconds, not minutes
```

Batch sizes of 100-1000 rows typically offer the best balance. Smaller batches don't amortize overhead; larger batches may hit statement limits, consume excess memory, or hold locks too long. Profile your specific workload to find the sweet spot.
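The hand-built VALUES list can be factored into a reusable helper. A minimal sketch follows; the table and column names are placeholders, and `%s` is the psycopg2 parameter style:

```python
from typing import Any, Sequence

def build_multirow_insert(table: str, columns: Sequence[str],
                          rows: Sequence[Sequence[Any]]) -> tuple:
    """Build (sql, flat_params) for one multi-row INSERT statement."""
    if not rows:
        raise ValueError("no rows to insert")
    placeholder = "(" + ", ".join(["%s"] * len(columns)) + ")"
    sql = (f"INSERT INTO {table} ({', '.join(columns)}) "
           f"VALUES {', '.join([placeholder] * len(rows))}")
    params = [value for row in rows for value in row]
    return sql, params

sql, params = build_multirow_insert(
    "products",
    ["product_id", "product_name", "unit_price"],
    [(1, "Wireless Mouse", 29.99), (2, "Mechanical Keyboard", 149.99)],
)
# sql    → INSERT INTO products (product_id, product_name, unit_price)
#          VALUES (%s, %s, %s), (%s, %s, %s)
# params → [1, 'Wireless Mouse', 29.99, 2, 'Mechanical Keyboard', 149.99]
```

psycopg2 also ships a ready-made version of this idea, `psycopg2.extras.execute_values`, which pages an argument list through multi-row VALUES statements automatically.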
For maximum throughput, databases provide specialized bulk loading utilities that bypass normal SQL processing. These utilities stream data directly into tables with minimal logging.
PostgreSQL COPY:
```sql
-- PostgreSQL COPY: Fastest way to load/unload data

-- COPY from file (server-side file path)
COPY products (product_id, product_name, unit_price, category_id)
FROM '/var/lib/postgresql/data/products.csv'
WITH (FORMAT csv, HEADER true, DELIMITER ',');

-- COPY from STDIN (stream data from application)
COPY products (product_id, product_name, unit_price, category_id)
FROM STDIN WITH (FORMAT csv);
-- Then send CSV data followed by \. to terminate

-- COPY to file (export)
COPY (SELECT * FROM products WHERE category_id = 1)
TO '/tmp/electronics.csv'
WITH (FORMAT csv, HEADER true);

-- Client-side \copy (psql command, uses client file paths)
\copy products FROM 'C:/data/products.csv' WITH (FORMAT csv, HEADER true)

-- COPY with error handling (PostgreSQL 17+)
COPY products FROM '/data/products.csv'
WITH (FORMAT csv, HEADER true, ON_ERROR 'ignore');
-- Skips malformed rows instead of failing the entire operation
```

SQL Server BULK INSERT and BCP:
```sql
-- SQL Server: BULK INSERT command
BULK INSERT products
FROM 'C:\Data\products.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2,          -- Skip header row
    TABLOCK,               -- Table-level lock for speed
    BATCHSIZE = 100000,    -- Commit every 100K rows
    MAXERRORS = 100        -- Allow up to 100 errors
);

-- Format file for complex mappings
BULK INSERT products
FROM 'C:\Data\products.dat'
WITH (
    FORMATFILE = 'C:\Data\products.fmt',
    ERRORFILE = 'C:\Data\errors.log'
);

-- BCP utility (command line) - bidirectional
-- Export data
-- bcp MyDatabase.dbo.products out C:\Data\products.dat -c -T

-- Import data
-- bcp MyDatabase.dbo.products in C:\Data\products.dat -c -T -b 10000

-- bcp with format file for production loads
-- bcp MyDatabase.dbo.products in data.dat -f products.fmt -T -b 50000 -a 65535
```

MySQL LOAD DATA:
```sql
-- MySQL: LOAD DATA INFILE (fastest bulk insert)
LOAD DATA INFILE '/var/lib/mysql-files/products.csv'
INTO TABLE products
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES  -- Skip header
(product_id, product_name, unit_price, category_id);

-- LOAD DATA LOCAL (client-side file)
LOAD DATA LOCAL INFILE 'C:/Data/products.csv'
INTO TABLE products
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES;

-- With column transformations
LOAD DATA INFILE '/data/products.csv'
INTO TABLE products
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(product_id, product_name, @price, category_id)
SET unit_price = @price * 1.1,  -- Apply 10% markup
    created_at = NOW();

-- Handling duplicates
LOAD DATA INFILE '/data/products.csv'
REPLACE  -- Update existing rows
INTO TABLE products
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
```

| Database | Utility | Typical Throughput | Key Options |
|---|---|---|---|
| PostgreSQL | COPY | 100K-500K rows/sec | FORMAT, HEADER, DELIMITER, ON_ERROR |
| SQL Server | BULK INSERT / BCP | 50K-200K rows/sec | TABLOCK, BATCHSIZE, MAXERRORS |
| MySQL | LOAD DATA INFILE | 100K-300K rows/sec | REPLACE, IGNORE, SET transformations |
| Oracle | SQL*Loader | 50K-150K rows/sec | DIRECT, PARALLEL, SKIP, ERRORS |
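From the application side, PostgreSQL's COPY FROM STDIN can be fed directly from program memory instead of a server-side file. Below is a sketch using psycopg2's `copy_expert`; the database name and table are assumptions, and only the CSV-serialization helper runs without a live connection:

```python
import csv
import io

def rows_to_csv_buffer(rows):
    """Serialize rows to an in-memory CSV file object for COPY FROM STDIN."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(rows)
    buf.seek(0)
    return buf

# Usage (requires a live PostgreSQL connection):
# import psycopg2
# conn = psycopg2.connect(database="mydb")
# with conn, conn.cursor() as cur:
#     buf = rows_to_csv_buffer([(1, "Mouse", 29.99), (2, "Keyboard", 149.99)])
#     cur.copy_expert(
#         "COPY products (product_id, product_name, unit_price) "
#         "FROM STDIN WITH (FORMAT csv)", buf)
```

This streams the rows over one connection with no per-row statements, which is how drivers typically reach COPY-class throughput without a file on the server.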
Large UPDATE and DELETE operations present unique challenges: they can lock tables for extended periods, generate enormous transaction logs, and potentially cause replication lag. Batching these operations solves these problems.
Why Batch Large Modifications:
```sql
-- Batched DELETE: Process in chunks with commits between

-- SQL Server: Batched delete with row count
DECLARE @BatchSize INT = 10000;
DECLARE @RowsAffected INT = 1;

WHILE @RowsAffected > 0
BEGIN
    DELETE TOP (@BatchSize) FROM order_items
    WHERE order_id IN (
        SELECT order_id FROM orders
        WHERE order_date < '2020-01-01'
    );

    SET @RowsAffected = @@ROWCOUNT;

    -- Optional: Brief pause to reduce lock contention
    WAITFOR DELAY '00:00:01';
END

-- PostgreSQL: Using ctid for efficient chunking
DO $$
DECLARE
    batch_size INTEGER := 10000;
    deleted_count INTEGER;
BEGIN
    LOOP
        DELETE FROM order_items
        WHERE ctid IN (
            SELECT ctid FROM order_items
            WHERE order_id IN (
                SELECT order_id FROM orders
                WHERE order_date < '2020-01-01'
            )
            LIMIT batch_size
        );

        GET DIAGNOSTICS deleted_count = ROW_COUNT;

        IF deleted_count = 0 THEN
            EXIT;
        END IF;

        COMMIT;  -- Release locks between batches
    END LOOP;
END $$;

-- MySQL: LIMIT with DELETE
DELIMITER //
CREATE PROCEDURE BatchDeleteOldItems()
BEGIN
    DECLARE done INT DEFAULT 0;

    REPEAT
        DELETE FROM order_items
        WHERE order_id IN (
            SELECT order_id FROM orders
            WHERE order_date < '2020-01-01'
        )
        LIMIT 10000;

        SET done = ROW_COUNT() = 0;

        -- Commit after each batch
        COMMIT;
    UNTIL done END REPEAT;
END //
DELIMITER ;
```
```sql
-- Batched UPDATE: Same principle, process in chunks

-- SQL Server: Batched update with progress tracking
DECLARE @BatchSize INT = 50000;
DECLARE @TotalUpdated INT = 0;
DECLARE @RowsAffected INT = 1;

WHILE @RowsAffected > 0
BEGIN
    UPDATE TOP (@BatchSize) products
    SET unit_price = unit_price * 1.10,
        last_price_update = GETDATE()
    WHERE category_id = 5
      AND last_price_update < DATEADD(YEAR, -1, GETDATE());

    SET @RowsAffected = @@ROWCOUNT;
    SET @TotalUpdated = @TotalUpdated + @RowsAffected;

    -- Log progress
    PRINT 'Updated ' + CAST(@TotalUpdated AS VARCHAR) + ' rows so far...';

    -- Checkpoint: reduce log pressure
    CHECKPOINT;
END

PRINT 'Total updated: ' + CAST(@TotalUpdated AS VARCHAR);

-- PostgreSQL: Batched update with RETURNING for tracking
DO $$
DECLARE
    batch_size INTEGER := 50000;
    total_updated INTEGER := 0;
    updated_count INTEGER;
BEGIN
    LOOP
        WITH updated AS (
            UPDATE products
            SET unit_price = unit_price * 1.10,
                last_price_update = NOW()
            WHERE product_id IN (
                SELECT product_id FROM products
                WHERE category_id = 5
                  AND last_price_update < NOW() - INTERVAL '1 year'
                LIMIT batch_size
                FOR UPDATE SKIP LOCKED  -- Avoid lock contention
            )
            RETURNING 1
        )
        SELECT COUNT(*) INTO updated_count FROM updated;

        total_updated := total_updated + updated_count;
        RAISE NOTICE 'Updated % rows so far', total_updated;

        IF updated_count < batch_size THEN
            EXIT;
        END IF;

        COMMIT;
    END LOOP;

    RAISE NOTICE 'Total updated: %', total_updated;
END $$;
```

PostgreSQL's FOR UPDATE SKIP LOCKED allows batched operations to proceed even when some rows are locked by other transactions. The batch processes unlocked rows and picks up the rest in subsequent iterations.
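Across all three dialects the loop has the same shape: run one bounded chunk, commit, and stop when a chunk affects zero rows. That driver can also live in the application. A sketch with a simulated workload (no database required; `fake_delete_batch` stands in for executing one bounded DELETE and returning its row count):

```python
import time
from typing import Callable

def run_in_batches(run_batch: Callable[[], int], pause_seconds: float = 0.0) -> int:
    """Call run_batch until it reports zero affected rows; return the total."""
    total = 0
    while True:
        affected = run_batch()
        total += affected
        if affected == 0:
            return total
        if pause_seconds:
            time.sleep(pause_seconds)  # give other transactions a window

# Simulated workload: 25,000 doomed rows, deleted 10,000 at a time
remaining = [25_000]

def fake_delete_batch(batch_size: int = 10_000) -> int:
    deleted = min(batch_size, remaining[0])
    remaining[0] -= deleted
    return deleted

print(run_in_batches(fake_delete_batch))  # → 25000
```

In real use `run_batch` would execute the chunked DELETE or UPDATE and commit; keeping the loop in the application makes progress logging and pause tuning straightforward.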
For extreme performance, databases offer modes that reduce transaction logging during bulk operations. This sacrifices some recoverability for dramatic speed improvements.
SQL Server: Minimal Logging:
```sql
-- SQL Server Minimal Logging Requirements:
-- 1. Database in SIMPLE or BULK_LOGGED recovery model
-- 2. Table has no clustered index (heap) OR using TABLOCK hint
-- 3. Specific operations (BULK INSERT, SELECT INTO, INSERT...SELECT)

-- Check current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = DB_NAME();

-- Temporarily switch to BULK_LOGGED for large import
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

-- Bulk insert with minimal logging
INSERT INTO new_products WITH (TABLOCK)
SELECT * FROM staging_products;
-- TABLOCK hint enables minimal logging

-- Or use BULK INSERT into a heap (no clustered index)
BULK INSERT products_heap
FROM 'C:\data\products.csv'
WITH (TABLOCK, BATCHSIZE = 100000);

-- Restore full recovery after bulk operation
ALTER DATABASE MyDatabase SET RECOVERY FULL;

-- Take log backup to establish new recovery chain
BACKUP LOG MyDatabase TO DISK = 'C:\backups\MyDatabase_log.bak';

-- Verify minimal logging occurred
SELECT operation, COUNT(*) AS operation_count
FROM fn_dblog(NULL, NULL)
WHERE operation IN ('LOP_INSERT_ROWS', 'LOP_FORMAT_PAGE')
GROUP BY operation;
-- LOP_FORMAT_PAGE indicates minimal logging (page allocations only)
```

PostgreSQL: Unlogged Tables:
```sql
-- PostgreSQL: UNLOGGED tables skip WAL for inserts
-- WARNING: Data is NOT crash-safe! For staging/temp data only.

-- Create unlogged staging table
CREATE UNLOGGED TABLE staging_products (
    product_id INTEGER,
    product_name VARCHAR(200),
    unit_price NUMERIC(10, 2),
    category_id INTEGER
);

-- COPY into unlogged table (very fast)
COPY staging_products FROM '/data/products.csv' WITH (FORMAT csv);
-- No WAL written = much faster

-- Transform and move to logged table
INSERT INTO products
SELECT * FROM staging_products
WHERE unit_price > 0;

-- Clean up staging
DROP TABLE staging_products;

-- Alternative: Toggle logging on an existing table (PostgreSQL 9.5+)
-- ALTER TABLE products SET UNLOGGED;
-- ... bulk load ...
-- ALTER TABLE products SET LOGGED;  -- Rewrites the table into the WAL

-- Check if table is unlogged
SELECT relname, relpersistence
FROM pg_class
WHERE relname = 'staging_products';
-- 'u' = unlogged, 'p' = permanent (logged), 't' = temporary
```

MySQL: Disabling Logging Temporarily:
```sql
-- MySQL: Performance optimizations for bulk inserts

-- 1. Disable key checking temporarily
SET FOREIGN_KEY_CHECKS = 0;
SET UNIQUE_CHECKS = 0;

-- 2. Disable autocommit
SET autocommit = 0;

-- Perform bulk inserts
LOAD DATA INFILE '/data/products.csv'
INTO TABLE products
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';

-- 3. Re-enable and commit
SET FOREIGN_KEY_CHECKS = 1;
SET UNIQUE_CHECKS = 1;
COMMIT;
SET autocommit = 1;

-- 4. For InnoDB: Adjust buffer pool and log settings
-- (in my.cnf, before bulk operation)
-- innodb_buffer_pool_size = 4G
-- innodb_log_file_size = 256M
-- innodb_flush_log_at_trx_commit = 2  -- Less durable, faster

-- 5. Disable binary logging for session (careful!)
SET sql_log_bin = 0;
-- WARNING: Breaks replication! Only for initial loads
LOAD DATA INFILE '/data/massive_data.csv' INTO TABLE target;
SET sql_log_bin = 1;
```

Minimal logging sacrifices point-in-time recovery. After a minimally logged operation, you can only restore to the backup before or after the operation, not to any point during it. Always take a full backup immediately after large bulk operations in production.
Indexes and constraints add overhead during bulk operations. For large loads, temporarily removing them can provide dramatic speedups.
Index Considerations:
```sql
-- Pattern: Drop indexes → Load data → Recreate indexes

-- 1. Script out existing indexes
-- (pg_indexes.indexdef already contains the full CREATE INDEX statement)
SELECT indexdef || ';' AS create_script
FROM pg_indexes
WHERE tablename = 'products'
  AND indexname != 'products_pkey';

-- 2. Drop non-primary indexes
DROP INDEX idx_products_category;
DROP INDEX idx_products_name;
DROP INDEX idx_products_supplier;

-- 3. Perform bulk load (much faster without indexes)
COPY products FROM '/data/massive_products.csv' WITH (FORMAT csv);

-- 4. Recreate indexes (can parallelize)
CREATE INDEX CONCURRENTLY idx_products_category ON products(category_id);
CREATE INDEX CONCURRENTLY idx_products_name ON products(product_name);
CREATE INDEX CONCURRENTLY idx_products_supplier ON products(supplier_id);

-- CONCURRENTLY: Allows reads/writes during index creation (PostgreSQL)
-- Without CONCURRENTLY: Faster creation but table is locked
```

Constraint Management:
```sql
-- Temporarily disable constraints for bulk load

-- SQL Server: Disable and re-enable constraints
ALTER TABLE order_items NOCHECK CONSTRAINT FK_OrderItems_Products;
ALTER TABLE order_items NOCHECK CONSTRAINT FK_OrderItems_Orders;

-- Bulk load
BULK INSERT order_items FROM 'C:\data\order_items.csv' WITH (TABLOCK);

-- Re-enable and validate
ALTER TABLE order_items WITH CHECK CHECK CONSTRAINT FK_OrderItems_Products;
ALTER TABLE order_items WITH CHECK CHECK CONSTRAINT FK_OrderItems_Orders;
-- WITH CHECK: Validates existing data against the constraint

-- PostgreSQL: Defer constraint checking
-- Constraints must be defined as DEFERRABLE
ALTER TABLE order_items
    ADD CONSTRAINT fk_order_items_orders
    FOREIGN KEY (order_id) REFERENCES orders(order_id)
    DEFERRABLE INITIALLY DEFERRED;

-- Constraints are checked at COMMIT, not per-statement
BEGIN;
    -- Insert child rows before parent rows (normally invalid)
    INSERT INTO order_items VALUES (1, 9999, 101, 5, 9.99);
    -- Parent order_id 9999 doesn't exist yet...
    INSERT INTO orders VALUES (9999, 42, '2024-01-15');
    -- Now parent exists, constraint satisfied at COMMIT
COMMIT;

-- MySQL: Temporarily disable foreign key checks
SET FOREIGN_KEY_CHECKS = 0;
LOAD DATA INFILE '/data/order_items.csv' INTO TABLE order_items;
SET FOREIGN_KEY_CHECKS = 1;

-- Validate data integrity after re-enabling
SELECT oi.* FROM order_items oi
LEFT JOIN products p ON oi.product_id = p.product_id
WHERE p.product_id IS NULL;
-- Should return 0 rows if data is valid
```

Disabling constraints bypasses data integrity checks. After bulk loading, always validate that the data satisfies the constraints before re-enabling them: invalid rows hiding behind a re-enabled constraint cause application failures later.
Modern databases can parallelize bulk operations across multiple CPU cores or split work across table partitions. These techniques scale with hardware, enabling even better throughput.
Parallel Bulk Operations:
```sql
-- PostgreSQL: COPY itself is single-threaded;
-- parallelize by splitting the data and running multiple sessions
COPY (SELECT * FROM products)
TO PROGRAM 'split -l 1000000 - /tmp/products_chunk_'
WITH (FORMAT csv);
-- Then load each chunk file from its own session

-- For parallel INSERT, use pg_partman or custom partitioning
-- Each partition can be loaded independently in parallel

-- SQL Server: Parallel INSERT with hints
INSERT INTO products WITH (TABLOCK)
SELECT * FROM staging_products
OPTION (MAXDOP 8);  -- Use up to 8 parallel threads

-- Oracle: Parallel DML
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ PARALLEL(products, 8) */ INTO products
SELECT /*+ PARALLEL(staging, 8) */ * FROM staging_products;

-- Parallel direct path insert
INSERT /*+ APPEND PARALLEL(products, 8) */ INTO products
SELECT * FROM staging_products;

ALTER SESSION DISABLE PARALLEL DML;
```

Partition-Aware Bulk Loading:
```sql
-- Load directly into specific partitions
-- Avoids partition routing overhead

-- PostgreSQL: Partitioned table
CREATE TABLE orders (
    order_id BIGINT,
    order_date DATE,
    customer_id INTEGER,
    total_amount NUMERIC(12, 2)
) PARTITION BY RANGE (order_date);

CREATE TABLE orders_2024_q1 PARTITION OF orders
    FOR VALUES FROM ('2024-01-01') TO ('2024-04-01');
CREATE TABLE orders_2024_q2 PARTITION OF orders
    FOR VALUES FROM ('2024-04-01') TO ('2024-07-01');

-- Load directly into partition (bypasses routing)
COPY orders_2024_q1 FROM '/data/orders_2024_q1.csv' WITH (FORMAT csv);
COPY orders_2024_q2 FROM '/data/orders_2024_q2.csv' WITH (FORMAT csv);

-- Parallel loading: Load each partition concurrently
-- Run in separate connections/sessions:
-- Session 1: COPY orders_2024_q1 FROM '/data/q1.csv'...
-- Session 2: COPY orders_2024_q2 FROM '/data/q2.csv'...

-- SQL Server: Partition switching for instant loads
-- Load into staging table with same structure
BULK INSERT staging_orders_q1 FROM 'C:\data\q1.csv' WITH (TABLOCK);

-- Create matching indexes and constraints on staging
-- Then switch partition (instant, metadata-only)
ALTER TABLE staging_orders_q1 SWITCH TO orders PARTITION 1;

-- MySQL: LOAD DATA with partitioning
LOAD DATA INFILE '/data/orders_2024_q1.csv'
INTO TABLE orders
PARTITION (p_2024_q1)  -- Specify target partition
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
```

SQL Server's partition switching is a metadata-only operation that completes instantly regardless of data size. Load billions of rows into a staging table offline, then switch the partition into the production table in milliseconds with no downtime.
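The per-partition loads above can be orchestrated from the application with a thread pool, one session per partition. A sketch with Python's concurrent.futures; `load_partition` is a stand-in (real code would run COPY on its own connection), and the partition names and file paths are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def load_partition(partition: str, path: str) -> int:
    # A real loader would open its own connection and run, e.g.:
    #   COPY {partition} FROM '{path}' WITH (FORMAT csv)
    # Here we just simulate and report a row count.
    return 1_000_000

jobs = {
    "orders_2024_q1": "/data/orders_2024_q1.csv",
    "orders_2024_q2": "/data/orders_2024_q2.csv",
}

with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
    futures = [pool.submit(load_partition, p, f) for p, f in jobs.items()]
    total = sum(f.result() for f in futures)
print(total)  # → 2000000
```

Because each COPY targets a different partition, the sessions don't contend for the same table locks, so throughput scales roughly with the number of partitions until I/O saturates.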
Long-running bulk operations need monitoring to track progress, identify bottlenecks, and ensure successful completion.
```sql
-- PostgreSQL: Monitor running COPY
SELECT pid,
       query_start,
       NOW() - query_start AS duration,
       state,
       query
FROM pg_stat_activity
WHERE query ILIKE '%COPY%'
  AND state = 'active';

-- Check table size growth during load
SELECT relname,
       pg_size_pretty(pg_relation_size(relid)) AS size,
       n_live_tup AS row_estimate
FROM pg_stat_user_tables
WHERE relname = 'products';

-- SQL Server: Monitor bulk operation progress
SELECT r.session_id,
       r.command,
       r.percent_complete,
       r.estimated_completion_time / 1000 / 60 AS est_minutes_remaining,
       t.text AS sql_statement
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.command IN ('BULK INSERT', 'INSERT');

-- Check transaction log size and usage
SELECT name,
       ROUND(CAST(size AS FLOAT) * 8 / 1024, 2) AS log_size_mb,
       ROUND(CAST(FILEPROPERTY(name, 'SpaceUsed') AS FLOAT) * 8 / 1024, 2) AS log_used_mb
FROM sys.database_files
WHERE type_desc = 'LOG';

-- Why can't the log be truncated?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();

-- MySQL: Monitor LOAD DATA
SHOW PROCESSLIST;

-- Check InnoDB buffer pool hit rate
SHOW STATUS LIKE 'Innodb_buffer_pool_read%';

-- Monitor table row count during load
SELECT TABLE_ROWS, DATA_LENGTH, INDEX_LENGTH
FROM information_schema.TABLES
WHERE TABLE_NAME = 'products';
```

Bulk operations turn otherwise impractical data-processing jobs into routine ones. Operations that would take days complete in minutes with the proper techniques.
Module Complete:
You've now mastered the complete Data Manipulation Language—INSERT, UPDATE, DELETE, MERGE, and bulk operations. You can add, modify, and remove data efficiently at any scale, from single rows to billions of records.
Congratulations! You now have comprehensive knowledge of SQL Data Manipulation Language. From single-row operations to billion-row bulk loads, you can write DML that is correct, efficient, and production-ready. Next, explore SQL Execution Flow to understand how the database processes these statements internally.