The database backup ecosystem spans a wide spectrum—from lightweight command-line utilities bundled with databases to sophisticated enterprise platforms managing petabytes across global infrastructure. Selecting the right tools is crucial: underpowered tools create operational burden and reliability risks, while over-engineered solutions add unnecessary complexity and cost.
Effective tool selection requires understanding your requirements across multiple dimensions: database platforms, consistency needs, performance expectations, retention policies, regulatory compliance, and operational capabilities. This page provides a comprehensive survey of backup tools, from native database utilities to enterprise platforms, enabling you to make informed selections for your environment.
By the end of this page, you will understand the categories of backup tools available, master the capabilities of native database backup utilities, evaluate enterprise backup platforms, recognize emerging cloud-native solutions, and select appropriate tools for different scenarios.
Backup tools can be categorized along several dimensions. Understanding these categories helps frame tool evaluation and selection.
By Scope
By Backup Type
By Deployment Model
| Category | Best For | Considerations |
|---|---|---|
| Native Database Tools | Single-platform shops, maximum performance, direct control | Requires scripting for automation, limited cross-platform |
| Enterprise Platforms | Multi-database, compliance requirements, centralized management | Cost, complexity, vendor dependency |
| Cloud-Native | Cloud databases, managed services, minimal operations | Vendor lock-in, limited customization |
| General Purpose (rsync, tar) | Cold backup, simple requirements, Linux expertise | Requires careful coordination with database |
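To illustrate the "careful coordination" called out in the general-purpose row, here is a minimal cold-backup sketch. It assumes a PostgreSQL 14 data directory at /var/lib/postgresql/14/main and systemd service names; adjust both for your platform. The key point is that the database must be fully stopped before the files are copied, or the copy will be inconsistent.

```bash
#!/bin/bash
# Cold backup with general-purpose tools (rsync + tar).
# Assumption: PostgreSQL 14 on a systemd host; paths are illustrative.
set -euo pipefail

DATA_DIR=/var/lib/postgresql/14/main
DEST=/backup/cold/$(date +%Y%m%d)

sudo systemctl stop postgresql            # database must be down for a consistent file copy
mkdir -p "$DEST"
rsync -a --delete "$DATA_DIR/" "$DEST/"   # file-level copy of the entire data directory
sudo systemctl start postgresql           # downtime ends once the copy finishes

# Package and compress the copy for offline storage
tar -czf "$DEST.tar.gz" -C "$(dirname "$DEST")" "$(basename "$DEST")"
```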
PostgreSQL provides a robust set of native backup tools, each suited for different use cases.
pg_dump - Logical Backup
The standard tool for logical backups, producing SQL or custom archive formats:
Capabilities:
```bash
#!/bin/bash
# pg_dump - PostgreSQL Logical Backup Tool

# Basic SQL dump
pg_dump mydb > /backup/mydb.sql

# Custom format (compressed, parallel restore)
pg_dump -Fc mydb > /backup/mydb.dump

# Directory format with parallel dump (4 jobs)
pg_dump -Fd -j 4 mydb -f /backup/mydb_dir/

# Tar format
pg_dump -Ft mydb > /backup/mydb.tar

# Selective backup - specific schema
pg_dump -n sales_schema mydb > /backup/sales_schema.sql

# Selective backup - specific tables
pg_dump -t 'orders*' -t customers mydb > /backup/order_tables.sql

# Exclude large tables
pg_dump --exclude-table=audit_log --exclude-table=session_data mydb > /backup/mydb_small.sql

# Schema only (structure, no data)
pg_dump -s mydb > /backup/mydb_schema.sql

# Data only (use with existing schema)
pg_dump -a mydb > /backup/mydb_data.sql

# With compression (gzip via pipe)
pg_dump mydb | gzip > /backup/mydb.sql.gz

# Consistent dump with serializable isolation
pg_dump --serializable-deferrable mydb > /backup/mydb_consistent.sql

# Include BLOB data
pg_dump -b mydb > /backup/mydb_with_blobs.sql

# Generate INSERT statements instead of COPY
pg_dump --inserts mydb > /backup/mydb_inserts.sql

# All databases (pg_dumpall)
pg_dumpall > /backup/all_databases.sql

# Roles and tablespaces only
pg_dumpall --globals-only > /backup/globals.sql
```

pg_basebackup - Physical Backup
The standard tool for physical (file-level) backup:
Capabilities:
```bash
#!/bin/bash
# pg_basebackup - PostgreSQL Physical Backup Tool

# Basic backup with streamed WAL
pg_basebackup -D /backup/base -X stream -P

# Tar format with compression
pg_basebackup -D /backup -Ft -z -X stream

# With specific host/port/user
pg_basebackup -h db.example.com -p 5432 -U replication_user -D /backup

# Fast checkpoint (minimize wait time)
pg_basebackup -D /backup --checkpoint=fast -X stream

# With progress reporting
pg_basebackup -D /backup -X stream -P -v

# Maximum rate limiting (100MB/s) to limit I/O impact
pg_basebackup -D /backup -X stream -r 100M

# Create replication slot for backup (prevents WAL removal)
pg_basebackup -D /backup -X stream -S backup_slot -C

# With manifest for verification (PostgreSQL 13+)
pg_basebackup -D /backup -X stream --manifest-checksums=SHA256

# Incremental backup (PostgreSQL 17+): take a full backup first,
# then pass its manifest to --incremental
pg_basebackup -D /backup/full -X stream
pg_basebackup -D /backup/incr -X stream --incremental=/backup/full/backup_manifest

# Verify backup integrity (PostgreSQL 13+)
pg_verifybackup /backup/base

# Tar to stdout for piping to storage (WAL streaming cannot be combined with stdout)
pg_basebackup -D - -Ft -X fetch | aws s3 cp - s3://bucket/backup.tar

# Create standby directly
pg_basebackup -D /var/lib/postgresql/standby -X stream -R
```

pgBackRest - Enterprise-Grade PostgreSQL Backup
pgBackRest is a feature-rich backup tool built specifically for PostgreSQL, offering capabilities beyond the native tools:
Key Features:
```bash
#!/bin/bash
# pgBackRest - Enterprise PostgreSQL Backup

# Configuration (/etc/pgbackrest/pgbackrest.conf)
cat > /etc/pgbackrest/pgbackrest.conf <<EOF
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2
repo1-retention-diff=4
compress-type=zstd
compress-level=3

[main]
pg1-path=/var/lib/postgresql/14/main
pg1-port=5432

# Optional: S3 storage
# repo1-type=s3
# repo1-s3-bucket=my-backup-bucket
# repo1-s3-endpoint=s3.amazonaws.com
# repo1-s3-region=us-east-1
# repo1-s3-key=ACCESS_KEY
# repo1-s3-key-secret=SECRET_KEY
EOF

# Create stanza (initialize backup configuration)
pgbackrest --stanza=main stanza-create

# Full backup
pgbackrest --stanza=main --type=full backup

# Differential backup (changes since last full)
pgbackrest --stanza=main --type=diff backup

# Incremental backup (changes since last backup of any type)
pgbackrest --stanza=main --type=incr backup

# Check backup integrity
pgbackrest --stanza=main check

# Verify backup (read and validate all files)
pgbackrest --stanza=main --set=20240115-120000F verify

# List backups
pgbackrest --stanza=main info

# Restore to original location
sudo systemctl stop postgresql
pgbackrest --stanza=main --delta restore
sudo systemctl start postgresql

# Restore to specific time (PITR)
pgbackrest --stanza=main --delta \
  --type=time "--target=2024-01-15 10:30:00" \
  --target-action=promote restore

# Expire old backups (based on retention policy)
pgbackrest --stanza=main expire
```

The MySQL ecosystem offers several backup tools, from simple logical dumps to sophisticated physical backup solutions.
mysqldump - Logical Backup
The classic MySQL backup tool for logical exports:
```bash
#!/bin/bash
# mysqldump - MySQL Logical Backup Tool

# Basic database dump
mysqldump -u root -p mydb > /backup/mydb.sql

# All databases
mysqldump -u root -p --all-databases > /backup/all_dbs.sql

# Consistent backup with single transaction (InnoDB)
mysqldump -u root -p --single-transaction mydb > /backup/mydb_consistent.sql

# With routines and triggers
mysqldump -u root -p --routines --triggers mydb > /backup/mydb_full.sql

# Include events
mysqldump -u root -p --events mydb > /backup/mydb_with_events.sql

# Specific tables
mysqldump -u root -p mydb orders customers > /backup/mydb_tables.sql

# Where clause filter
mysqldump -u root -p mydb orders --where="order_date > '2024-01-01'" > /backup/recent_orders.sql

# Schema only
mysqldump -u root -p --no-data mydb > /backup/mydb_schema.sql

# Data only
mysqldump -u root -p --no-create-info mydb > /backup/mydb_data.sql

# With compression
mysqldump -u root -p mydb | gzip > /backup/mydb.sql.gz

# Quick dump (skip sorting, less memory)
mysqldump -u root -p --quick mydb > /backup/mydb_quick.sql

# Extended insert for faster restore
mysqldump -u root -p --extended-insert mydb > /backup/mydb_ext.sql

# Lock all tables for MyISAM consistency
mysqldump -u root -p --lock-all-tables mydb > /backup/mydb_locked.sql

# Master data for replication setup
mysqldump -u root -p --single-transaction --master-data=2 mydb > /backup/mydb_repl.sql

# GTID compatible
mysqldump -u root -p --single-transaction --set-gtid-purged=AUTO mydb > /backup/mydb_gtid.sql
```

Percona XtraBackup - Physical Backup
The industry-standard open-source tool for MySQL physical backup:
Key Features:
```bash
#!/bin/bash
# Percona XtraBackup - MySQL Physical Backup Tool

# Full backup
xtrabackup --backup --user=root --password=secret \
  --target-dir=/backup/full

# With compression (qpress)
xtrabackup --backup --compress --compress-threads=4 \
  --user=root --password=secret --target-dir=/backup/compressed

# Parallel backup threads
xtrabackup --backup --parallel=4 --user=root --password=secret \
  --target-dir=/backup/parallel

# Prepare backup (required before restore!)
xtrabackup --prepare --target-dir=/backup/full

# Incremental backup (based on full)
xtrabackup --backup --user=root --password=secret \
  --incremental-basedir=/backup/full \
  --target-dir=/backup/incr1

# Second incremental (based on first incremental)
xtrabackup --backup --user=root --password=secret \
  --incremental-basedir=/backup/incr1 \
  --target-dir=/backup/incr2

# Prepare incremental chain
xtrabackup --prepare --apply-log-only --target-dir=/backup/full
xtrabackup --prepare --apply-log-only --target-dir=/backup/full \
  --incremental-dir=/backup/incr1
xtrabackup --prepare --target-dir=/backup/full \
  --incremental-dir=/backup/incr2

# Streaming backup to another server
xtrabackup --backup --stream=xbstream --user=root --password=secret | \
  ssh remote-server "xbstream -x -C /backup/remote"

# Streaming with compression
xtrabackup --backup --stream=xbstream --compress --user=root --password=secret | \
  ssh remote-server "xbstream -x -C /backup/remote"

# Restore procedure
sudo systemctl stop mysql
rm -rf /var/lib/mysql/*
xtrabackup --copy-back --target-dir=/backup/full
chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql

# Validate backup
xtrabackup --validate-backup --target-dir=/backup/full
```

MySQL Enterprise Backup
Oracle's commercial backup solution for MySQL (requires MySQL Enterprise subscription):
Key Features:
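Its client is the mysqlbackup utility. The sketch below shows typical invocations; the backup directory, image path, and credentials are placeholders, and exact options vary by MySQL Enterprise Backup version.

```bash
# MySQL Enterprise Backup (mysqlbackup) - illustrative sketch; paths are placeholders

# Full backup to a directory, with the redo log applied in the same step
mysqlbackup --user=root --password --backup-dir=/backup/meb/full backup-and-apply-log

# Single-file backup image (convenient for copying to object storage or tape)
mysqlbackup --user=root --password --backup-dir=/backup/meb/tmp \
  --backup-image=/backup/meb/full.mbi backup-to-image

# Restore: run with the server stopped and the data directory empty
mysqlbackup --defaults-file=/etc/my.cnf --backup-dir=/backup/meb/full copy-back
```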
MariaDB includes Mariabackup, a fork of XtraBackup optimized for MariaDB. Usage is nearly identical to XtraBackup. For MariaDB 10.2+, use Mariabackup instead of XtraBackup for best compatibility with MariaDB-specific features like encrypted tables and compression.
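A minimal Mariabackup sketch mirroring the XtraBackup workflow above (paths, credentials, and service names are placeholders):

```bash
# Mariabackup - MariaDB physical backup (workflow parallels XtraBackup)

# Full backup
mariabackup --backup --target-dir=/backup/full --user=root --password=secret

# Prepare (apply redo log) before the backup can be restored
mariabackup --prepare --target-dir=/backup/full

# Restore into an empty data directory with the server stopped
sudo systemctl stop mariadb
mariabackup --copy-back --target-dir=/backup/full
chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mariadb
```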
Enterprise databases include sophisticated native backup tools designed for mission-critical deployments.
Oracle RMAN (Recovery Manager)
Oracle's comprehensive backup and recovery tool:
Key Features:
```sql
-- Oracle RMAN - Recovery Manager Examples

-- Connect to RMAN
-- $ rman target /
-- $ rman target sys/password@orcl catalog rcat_user/password@rcat

-- Configure RMAN settings
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
CONFIGURE COMPRESSION ALGORITHM 'HIGH' AS OF RELEASE 'DEFAULT';
CONFIGURE ENCRYPTION FOR DATABASE ON;
CONFIGURE ENCRYPTION ALGORITHM 'AES256';

-- Full database backup with compression
BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG TAG 'DAILY_FULL';

-- Incremental backup strategy (Level 0 = base, Level 1 = changes)
-- Level 0 - full baseline
BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'WEEKLY_LEVEL0';

-- Level 1 - differential (changes since last level 0 or 1)
BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'DAILY_LEVEL1';

-- Level 1 - cumulative (changes since last level 0 only)
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE TAG 'DAILY_CUMUL';

-- Backup to specific location
BACKUP AS COMPRESSED BACKUPSET FORMAT '/backup/oracle/%d_%T_%s_%p.bkp' DATABASE PLUS ARCHIVELOG;

-- Backup tablespace
BACKUP TABLESPACE users, data TAG 'TS_BACKUP';

-- Backup datafile
BACKUP DATAFILE 4 TAG 'DATAFILE_BACKUP';

-- Backup archived logs
BACKUP ARCHIVELOG ALL DELETE INPUT;

-- Validate backup (check for corruption)
VALIDATE DATABASE;
VALIDATE BACKUPSET TAG 'DAILY_FULL';

-- List backups
LIST BACKUP SUMMARY;
LIST BACKUP OF DATABASE;

-- Restore and recover
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;

-- Point-in-time recovery
RUN {
  SET UNTIL TIME "TO_DATE('2024-01-15 10:30:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
ALTER DATABASE OPEN RESETLOGS;

-- Cross-check and delete obsolete
CROSSCHECK BACKUP;
DELETE OBSOLETE;
DELETE EXPIRED BACKUP;
```

SQL Server BACKUP Command
SQL Server's native backup capabilities are powerful and well-integrated:
Key Features:
```sql
-- SQL Server BACKUP Examples

-- Full database backup with compression
BACKUP DATABASE Production
TO DISK = 'D:\Backup\Production_Full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10,
     NAME = 'Production Full Backup',
     DESCRIPTION = 'Daily full backup';

-- Differential backup (changes since last full)
BACKUP DATABASE Production
TO DISK = 'D:\Backup\Production_Diff.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM, STATS = 10;

-- Transaction log backup
BACKUP LOG Production
TO DISK = 'D:\Backup\Production_Log.trn'
WITH COMPRESSION, CHECKSUM, STATS = 10;

-- Striped backup to multiple files (parallel)
BACKUP DATABASE Production
TO DISK = 'D:\Backup\Prod_stripe1.bak',
   DISK = 'E:\Backup\Prod_stripe2.bak',
   DISK = 'F:\Backup\Prod_stripe3.bak',
   DISK = 'G:\Backup\Prod_stripe4.bak'
WITH COMPRESSION, CHECKSUM,
     MAXTRANSFERSIZE = 4194304,
     BUFFERCOUNT = 50;

-- Encrypted backup
BACKUP DATABASE Production
TO DISK = 'D:\Backup\Production_Encrypted.bak'
WITH COMPRESSION,
     ENCRYPTION (
       ALGORITHM = AES_256,
       SERVER CERTIFICATE = BackupCertificate
     );

-- Mirror backup to two destinations
BACKUP DATABASE Production
TO DISK = 'D:\Backup\Production.bak'
MIRROR TO DISK = '\\backupserver\share\Production.bak'
WITH FORMAT, COMPRESSION;

-- Copy-only backup (doesn't affect backup chain)
BACKUP DATABASE Production
TO DISK = 'D:\Backup\Production_CopyOnly.bak'
WITH COPY_ONLY, COMPRESSION;

-- Verify backup integrity
RESTORE VERIFYONLY
FROM DISK = 'D:\Backup\Production_Full.bak'
WITH CHECKSUM;

-- View backup history
SELECT
    s.database_name,
    s.backup_start_date,
    s.backup_finish_date,
    DATEDIFF(MINUTE, s.backup_start_date, s.backup_finish_date) AS Duration_Minutes,
    s.backup_size / 1048576 AS Size_MB,
    s.compressed_backup_size / 1048576 AS Compressed_MB,
    m.physical_device_name
FROM msdb.dbo.backupset s
JOIN msdb.dbo.backupmediafamily m ON s.media_set_id = m.media_set_id
WHERE s.database_name = 'Production'
ORDER BY s.backup_start_date DESC;

-- Restore with point-in-time recovery
RESTORE DATABASE Production
FROM DISK = 'D:\Backup\Production_Full.bak'
WITH NORECOVERY;

RESTORE LOG Production
FROM DISK = 'D:\Backup\Production_Log1.trn'
WITH NORECOVERY;

RESTORE LOG Production
FROM DISK = 'D:\Backup\Production_Log2.trn'
WITH STOPAT = '2024-01-15T10:30:00', RECOVERY;
```

Enterprise backup platforms provide centralized management, multi-platform support, and advanced features beyond native tools.
Common Enterprise Backup Platforms
| Platform | Strengths | Database Support | Key Differentiator |
|---|---|---|---|
| Veeam Backup & Replication | VM integration, ransomware protection, instant recovery | SQL Server, Oracle, PostgreSQL, MySQL | Fastest recovery times, excellent UI/UX |
| Commvault Complete | Comprehensive coverage, deduplication, cloud mobility | All major databases + SAP, ERP | Single platform for all data protection |
| Veritas NetBackup | Scale, enterprise track record, AIX/HP-UX support | All major databases + mainframe | Largest install base, proven at scale |
| Cohesity DataProtect | Modern architecture, instant access, API-first | SQL Server, Oracle, PostgreSQL, MongoDB | Web-scale architecture, analytics |
| Dell EMC PowerProtect | Hardware integration, APEX cloud, deduplication | All major databases | Integration with Dell storage ecosystem |
| Rubrik | SLA automation, multi-cloud, immutability | SQL Server, Oracle, PostgreSQL, MySQL | Policy-based, zero-trust architecture |
Evaluation Criteria for Enterprise Platforms
When evaluating enterprise backup platforms for database protection, consider:
1. Database Agent Quality
2. Consistency Guarantees
3. Recovery Capabilities
4. Operational Integration
5. Scale and Cost
Enterprise platforms offer both agent-based backup (software installed on the database server) and agentless backup (via APIs or storage snapshots). Agent-based approaches typically provide better granularity and database awareness but require more management; agentless approaches are simpler to operate but may limit recovery granularity. Choose based on your recovery requirements and operational preferences.
Cloud databases typically include built-in backup capabilities, with additional services for enhanced protection.
AWS Database Backup
RDS Automated Backups:
AWS Backup (centralized):
Azure Database Backup
Azure SQL Backup:
Google Cloud Backup
Cloud SQL Backup:
```bash
#!/bin/bash
# Cloud-Native Backup Management Examples

# =====================================
# AWS RDS
# =====================================

# Create manual snapshot
aws rds create-db-snapshot \
  --db-instance-identifier mydb \
  --db-snapshot-identifier mydb-manual-$(date +%Y%m%d)

# Copy snapshot to another region (DR)
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier arn:aws:rds:us-east-1:123456789:snapshot:mydb-manual \
  --target-db-snapshot-identifier mydb-dr-copy \
  --source-region us-east-1 \
  --region us-west-2

# Restore to point in time
aws rds restore-db-instance-to-point-in-time \
  --source-db-instance-identifier mydb \
  --target-db-instance-identifier mydb-recovered \
  --restore-time "2024-01-15T10:30:00Z" \
  --db-instance-class db.r5.large

# List snapshots
aws rds describe-db-snapshots \
  --db-instance-identifier mydb \
  --query 'DBSnapshots[*].[DBSnapshotIdentifier,SnapshotCreateTime,Status]' \
  --output table

# AWS Backup - Create backup plan
aws backup create-backup-plan --backup-plan '{
  "BackupPlanName": "DailyDBBackup",
  "Rules": [{
    "RuleName": "DailyRule",
    "TargetBackupVaultName": "Default",
    "ScheduleExpression": "cron(0 5 * * ? *)",
    "StartWindowMinutes": 60,
    "CompletionWindowMinutes": 180,
    "Lifecycle": { "DeleteAfterDays": 30 }
  }]
}'

# =====================================
# Azure SQL
# =====================================

# Create database copy (snapshot)
az sql db copy \
  --resource-group myResourceGroup \
  --server myserver \
  --name mydatabase \
  --dest-server myserver \
  --dest-name mydatabase-copy

# Point-in-time restore
az sql db restore \
  --resource-group myResourceGroup \
  --server myserver \
  --name mydatabase \
  --dest-name mydatabase-restored \
  --time "2024-01-15T10:30:00Z"

# Configure long-term retention
az sql db ltr-policy set \
  --resource-group myResourceGroup \
  --server myserver \
  --database mydatabase \
  --weekly-retention P4W \
  --monthly-retention P12M \
  --yearly-retention P5Y \
  --week-of-year 1

# =====================================
# Google Cloud SQL
# =====================================

# Create on-demand backup
gcloud sql backups create \
  --instance=myinstance \
  --description="Manual backup $(date +%Y-%m-%d)"

# Restore from backup
gcloud sql backups restore BACKUP_ID \
  --restore-instance=myinstance

# Clone instance to point in time
gcloud sql instances clone myinstance myinstance-clone \
  --point-in-time="2024-01-15T10:30:00Z"

# List backups
gcloud sql backups list --instance=myinstance
```

Cloud-native backups are convenient but may have limitations: (1) limited retention periods for automated backups, (2) cross-region copies incur additional costs, (3) restores may create new instances rather than recovering in place, and (4) recovery options are less granular than with native tools. Evaluate whether cloud backups alone meet your RPO/RTO requirements.
Selecting the right backup tools requires balancing multiple factors. Use this framework to guide your decisions:
| Scenario | Recommended Tools | Rationale |
|---|---|---|
| Single PostgreSQL database, small team | pg_dump + pg_basebackup, scripts | Native tools are free, reliable, sufficient |
| PostgreSQL with serious RPO requirements | pgBackRest | PITR, encryption, verification, incremental |
| MySQL/MariaDB production | XtraBackup + mysqldump | Physical for DR, logical for portability |
| Oracle enterprise | RMAN + Data Guard | Best-in-class for Oracle, tightly integrated |
| SQL Server enterprise | Native BACKUP + enterprise platform | Strong native tools, supplement with management |
| Multi-database environment | Enterprise platform (Veeam, Commvault) | Unified management, cross-platform |
| Cloud-native (RDS, Cloud SQL) | Cloud-native + cross-region copy | Native integration, minimal operations |
| Hybrid cloud | Enterprise platform with cloud support | Consistent policy across environments |
You now have a comprehensive understanding of the backup tool landscape—from native database utilities to enterprise platforms and cloud-native solutions. This knowledge enables you to select appropriate tools for your environment and requirements. Next, we'll explore backup storage strategies, covering storage media, retention, and architectural considerations.