A backup stored in the same location as your production data shares its vulnerabilities. When fire consumes the data center, flood damages the server room, or ransomware encrypts every connected system, backups stored onsite become casualties of the same disaster they were meant to protect against.
Offsite storage is the geographic dimension of data protection.
By maintaining backup copies in physically separate locations, organizations create resilience against site-level disasters—the catastrophic events that can destroy an entire facility. This isn't theoretical risk management; it's learned wisdom from organizations that discovered their 'comprehensive backup strategy' failed because every copy was in the building that burned down.
Modern offsite storage encompasses physical tape vaulting, cloud storage integration, multi-region replication, and air-gapped architectures that protect against both natural disasters and sophisticated cyber attacks.
By the end of this page, you will understand how to design offsite storage architectures that protect against site-level disasters while maintaining acceptable recovery times. You will learn cloud integration patterns, air-gap strategies, and the geographic considerations that govern enterprise data protection.
Onsite-only backup strategies create a single point of failure at the facility level. Understanding the threats that offsite storage mitigates helps design appropriate protection levels.
Site-level threats:
| Threat Category | Examples | Impact Without Offsite | Offsite Mitigation |
|---|---|---|---|
| Natural Disasters | Fire, flood, earthquake, hurricane, tornado | Complete data loss if facility destroyed | Geographic separation ensures survival |
| Infrastructure Failure | Power grid failure, cooling system fire, structural collapse | Extended outage, potential data loss | Alternative site enables recovery |
| Ransomware/Malware | Network-propagating encryption, wiper malware | All connected backups encrypted/deleted | Air-gapped copies remain clean |
| Insider Threat | Malicious destruction, sabotage | All accessible backups compromised | Separation of access and control |
| Regional Events | Extended power outage, civil unrest, pandemic lockdown | Inability to access or operate facility | Remote site maintains operations |
| Regulatory Action | Facility seizure, legal holds affecting all onsite systems | All copies potentially inaccessible | Jurisdiction-separated copies available |
Modern ransomware specifically targets backup systems. Attackers spend weeks inside networks identifying and compromising backup infrastructure before triggering encryption. Any backup accessible from the network—including replicated copies—is at risk. True protection requires backups that are physically disconnected or logically inaccessible to compromised systems.
The 3-2-1 rule:
The industry-standard 3-2-1 backup rule provides a baseline for offsite requirements: keep at least 3 copies of your data, on 2 different types of media, with 1 copy stored offsite.
Modern extensions enhance this: the 3-2-1-1-0 variant adds 1 air-gapped or immutable copy and requires 0 errors in backup verification:
┌─────────────────────────────────────────────────────────────────┐
│ 3-2-1-1-0 BACKUP STRATEGY │
├─────────────────────────────────────────────────────────────────┤
│ │
│ PRODUCTION ONSITE BACKUP OFFSITE BACKUP │
│ ┌─────────┐ ┌─────────────┐ ┌─────────────────┐ │
│ │ Primary │ ──▶ │ Disk Array │ ──▶ │ Cloud/Remote DC │ │
│ │ DB │ │ (Copy 1) │ │ (Copy 2) │ │
│ └─────────┘ └─────────────┘ └─────────────────┘ │
│ │ │
│ │ AIR-GAPPED COPY │
│ │ ┌─────────────┐ │
│ └────────▶ │ Tape Vault │ (Physically Disconnected) │
│ │ (Copy 3) │ │
│ └─────────────┘ │
│ │
│ + Regular verification (0 errors) │
└─────────────────────────────────────────────────────────────────┘
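As a rough, tool-agnostic illustration, a short script can audit a backup catalog against this checklist; the catalog structure and field names below are hypothetical, not from any particular backup product.

#!/usr/bin/env python3
# Hypothetical sketch: audit a backup catalog against the 3-2-1-1-0 rule.
# Catalog entries and field names are illustrative assumptions.

def audit_3_2_1_1_0(copies: list) -> dict:
    """Each copy: {'media': 'disk'|'tape'|'cloud', 'offsite': bool,
                   'air_gapped': bool, 'verify_errors': int}"""
    return {
        'three_copies':   len(copies) >= 3,
        'two_media_types': len({c['media'] for c in copies}) >= 2,
        'one_offsite':    any(c['offsite'] for c in copies),
        'one_air_gapped': any(c['air_gapped'] for c in copies),
        'zero_errors':    all(c['verify_errors'] == 0 for c in copies),
    }

catalog = [
    {'media': 'disk',  'offsite': False, 'air_gapped': False, 'verify_errors': 0},
    {'media': 'cloud', 'offsite': True,  'air_gapped': False, 'verify_errors': 0},
    {'media': 'tape',  'offsite': True,  'air_gapped': True,  'verify_errors': 0},
]
print(audit_3_2_1_1_0(catalog))   # every check should report True for this layout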
Cloud storage has transformed offsite backup from a logistical challenge to a configuration decision. Major cloud providers offer durable, geographically distributed storage with built-in redundancy and global accessibility.
Cloud storage advantages include built-in durability and redundancy, geographic distribution without physical logistics, pay-as-you-go capacity, and native immutability and lifecycle features.
Major provider options:
| Provider | Standard Storage | Archive Storage | Deep Archive | Key Features |
|---|---|---|---|---|
| AWS | S3 Standard | S3 Glacier | Glacier Deep Archive | Cross-region replication, Object Lock, Lifecycle policies |
| Azure | Blob Storage (Hot) | Cool/Archive | Archive | Immutable storage, Geo-redundancy, Blob versioning |
| GCP | Cloud Storage Standard | Nearline/Coldline | Archive | Multi-regional, Object versioning, Retention policies |
| Backblaze | B2 Cloud Storage | — | — | S3-compatible API, Simple pricing, Immutability |
| Wasabi | Wasabi Hot Storage | — | — | No egress fees, S3-compatible, Immutability |
#!/usr/bin/env python3
"""
Cloud Offsite Backup Configuration Examples
Demonstrates backup upload, cross-region replication, and immutability
"""

import boto3
from botocore.exceptions import ClientError
from datetime import datetime, timedelta
import hashlib
import os


class CloudBackupManager:
    """
    Manages offsite backup operations to cloud storage
    """

    def __init__(self, primary_bucket: str, dr_bucket: str,
                 primary_region: str = 'us-east-1',
                 dr_region: str = 'us-west-2'):
        self.primary_bucket = primary_bucket
        self.dr_bucket = dr_bucket
        # Separate clients for different regions
        self.s3_primary = boto3.client('s3', region_name=primary_region)
        self.s3_dr = boto3.client('s3', region_name=dr_region)

    def upload_backup(self, file_path: str, database_name: str,
                      backup_type: str, retention_days: int = 30) -> dict:
        """
        Upload backup to cloud with metadata and retention
        """
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        object_key = f"backups/{database_name}/{backup_type}/{timestamp}.backup"

        # Calculate checksum for integrity verification
        with open(file_path, 'rb') as f:
            file_hash = hashlib.sha256(f.read()).hexdigest()

        # Metadata for tracking
        metadata = {
            'database': database_name,
            'backup-type': backup_type,
            'created-at': datetime.now().isoformat(),
            'source-host': os.uname().nodename,
            'sha256-checksum': file_hash,
            'retention-days': str(retention_days)
        }

        # Upload with server-side encryption
        self.s3_primary.upload_file(
            file_path,
            self.primary_bucket,
            object_key,
            ExtraArgs={
                'Metadata': metadata,
                'ServerSideEncryption': 'aws:kms',
                'SSEKMSKeyId': 'alias/backup-key',
                'StorageClass': 'STANDARD_IA'  # Infrequent Access for backups
            }
        )

        return {
            'bucket': self.primary_bucket,
            'key': object_key,
            'checksum': file_hash,
            'size': os.path.getsize(file_path)
        }

    def configure_cross_region_replication(self):
        """
        Configure cross-region replication for disaster recovery
        Note: Bucket versioning must be enabled
        """
        replication_config = {
            'Role': 'arn:aws:iam::ACCOUNT:role/BackupReplicationRole',
            'Rules': [
                {
                    'ID': 'BackupDRReplication',
                    'Status': 'Enabled',
                    'Priority': 1,
                    'Filter': {
                        'Prefix': 'backups/'
                    },
                    'Destination': {
                        'Bucket': f'arn:aws:s3:::{self.dr_bucket}',
                        'StorageClass': 'GLACIER',  # Archive in DR region
                        'EncryptionConfiguration': {
                            'ReplicaKmsKeyID': 'arn:aws:kms:us-west-2:ACCOUNT:key/dr-key'
                        },
                        'ReplicationTime': {
                            'Status': 'Enabled',
                            'Time': {'Minutes': 15}  # 15-minute SLA
                        },
                        'Metrics': {
                            'Status': 'Enabled',
                            'EventThreshold': {'Minutes': 15}
                        }
                    },
                    'DeleteMarkerReplication': {'Status': 'Disabled'}
                }
            ]
        }

        self.s3_primary.put_bucket_replication(
            Bucket=self.primary_bucket,
            ReplicationConfiguration=replication_config
        )

    def enable_object_lock(self, retention_days: int):
        """
        Enable Object Lock for immutable backup storage
        This prevents deletion even by administrators for the lock period
        """
        # Note: Object Lock must be enabled at bucket creation
        # This sets the default retention for new objects
        self.s3_primary.put_object_lock_configuration(
            Bucket=self.primary_bucket,
            ObjectLockConfiguration={
                'ObjectLockEnabled': 'Enabled',
                'Rule': {
                    'DefaultRetention': {
                        'Mode': 'GOVERNANCE',  # or 'COMPLIANCE' for stricter
                        'Days': retention_days
                    }
                }
            }
        )

    def verify_replication_status(self, object_key: str) -> dict:
        """
        Verify backup has been replicated to DR region
        """
        try:
            # Check if object exists in DR bucket
            dr_response = self.s3_dr.head_object(
                Bucket=self.dr_bucket,
                Key=object_key
            )

            # Verify checksums match
            primary_response = self.s3_primary.head_object(
                Bucket=self.primary_bucket,
                Key=object_key
            )

            return {
                'replicated': True,
                'primary_etag': primary_response['ETag'],
                'dr_etag': dr_response['ETag'],
                'checksums_match': primary_response['ETag'] == dr_response['ETag'],
                'dr_storage_class': dr_response.get('StorageClass', 'STANDARD'),
                'dr_last_modified': dr_response['LastModified']
            }

        except ClientError as err:
            # head_object surfaces a missing replica as a 404/NoSuchKey error
            if err.response['Error']['Code'] in ('404', 'NoSuchKey'):
                return {
                    'replicated': False,
                    'error': 'Object not found in DR bucket'
                }
            raise


# Lifecycle policy for automated tiering
LIFECYCLE_POLICY = {
    'Rules': [
        {
            'ID': 'BackupLifecycle',
            'Status': 'Enabled',
            'Filter': {'Prefix': 'backups/'},
            'Transitions': [
                {'Days': 30, 'StorageClass': 'STANDARD_IA'},
                {'Days': 90, 'StorageClass': 'GLACIER'},
                {'Days': 365, 'StorageClass': 'DEEP_ARCHIVE'}
            ],
            'Expiration': {'Days': 2555},  # ~7 years
            'NoncurrentVersionTransitions': [
                {'NoncurrentDays': 30, 'StorageClass': 'GLACIER'}
            ],
            'NoncurrentVersionExpiration': {'NoncurrentDays': 90}
        }
    ]
}

Cloud Object Lock (AWS S3 Object Lock, Azure Immutable Blob Storage) creates backups that cannot be deleted or modified for a specified period—even by administrators with full access. This provides ransomware protection for cloud-based offsite copies. Use COMPLIANCE mode for the strongest protection.
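To make per-object immutability concrete, the sketch below uploads a single backup with a COMPLIANCE-mode lock using boto3; the bucket and key names are placeholders, and the bucket must have been created with Object Lock (and therefore versioning) enabled.

import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client('s3')

# Per-object COMPLIANCE lock: the object version cannot be deleted or
# overwritten by anyone, including the root account, until the
# retain-until date passes. Bucket and key below are placeholders.
with open('/backups/production_orders_full.backup', 'rb') as f:
    s3.put_object(
        Bucket='company-backup-immutable',                 # hypothetical bucket
        Key='backups/production_orders/full/20240115.backup',
        Body=f,
        ServerSideEncryption='aws:kms',
        ChecksumAlgorithm='SHA256',                        # integrity check required for locked puts
        ObjectLockMode='COMPLIANCE',
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30)
    )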
Despite the rise of cloud storage, physical tape vaulting remains a cornerstone of enterprise data protection. Tape offers unique advantages for offsite storage, particularly for air-gapped protection and long-term archival.
Why tape still matters: tape is offline by default (an inherent air gap), offers a low cost per terabyte for long-term retention, has a long media shelf life, and modern LTO generations pack high capacity into cartridges that are easy to transport and vault.
Tape vault operations:
Professional tape vaulting involves structured processes for media management: scheduled rotation and pickup, chain-of-custody tracking for every movement, periodic media verification, and documented retrieval procedures with defined SLAs.
Vault facility requirements:
| Requirement | Specification | Purpose |
|---|---|---|
| Climate Control | 62-68°F, 35-45% humidity | Media longevity |
| Fire Suppression | Gas-based (FM-200, Novec) | Protect media without water damage |
| Physical Security | 24/7 guards, biometric access | Prevent unauthorized access |
| Geographic Separation | Minimum 50 miles from primary | Survive regional disasters |
| Seismic Considerations | Earthquake-resistant construction | Platform stability |
| Insurance | Media and data liability coverage | Financial protection |
-- Tape Vault Media Tracking Schema
-- Comprehensive tracking for physical offsite media management

CREATE TABLE tape_media (
    barcode VARCHAR(20) PRIMARY KEY,
    media_type VARCHAR(20) NOT NULL,          -- LTO-7, LTO-8, LTO-9
    native_capacity_gb INT NOT NULL,
    purchase_date DATE NOT NULL,
    first_use_date DATE,
    write_pass_count INT DEFAULT 0,
    status VARCHAR(20) DEFAULT 'available',   -- available, in_use, in_transit, at_vault, retired
    current_location VARCHAR(50),
    last_verified_date DATE,
    error_count INT DEFAULT 0,
    notes TEXT
);

CREATE TABLE tape_contents (
    content_id UUID PRIMARY KEY,
    barcode VARCHAR(20) REFERENCES tape_media(barcode),
    database_name VARCHAR(100) NOT NULL,
    backup_type VARCHAR(20) NOT NULL,
    backup_date TIMESTAMP NOT NULL,
    expiration_date DATE,
    size_bytes BIGINT NOT NULL,
    encrypted BOOLEAN DEFAULT true,
    encryption_key_id VARCHAR(100),
    file_count INT,
    verification_status VARCHAR(20),
    verified_at TIMESTAMP
);

CREATE TABLE tape_movements (
    movement_id UUID PRIMARY KEY,
    barcode VARCHAR(20) REFERENCES tape_media(barcode),
    movement_type VARCHAR(20) NOT NULL,       -- ship_to_vault, return_from_vault, internal_transfer
    from_location VARCHAR(50),
    to_location VARCHAR(50),
    courier_name VARCHAR(100),
    tracking_number VARCHAR(50),
    requested_by VARCHAR(100),
    requested_at TIMESTAMP DEFAULT NOW(),
    shipped_at TIMESTAMP,
    received_at TIMESTAMP,
    received_by VARCHAR(100),
    chain_of_custody_verified BOOLEAN,
    notes TEXT
);

CREATE TABLE vault_locations (
    location_id VARCHAR(50) PRIMARY KEY,
    vault_name VARCHAR(100) NOT NULL,
    address TEXT,
    contact_phone VARCHAR(20),
    sla_retrieval_hours INT,
    monthly_cost_per_slot DECIMAL(10,2),
    total_slots INT,
    used_slots INT,
    contract_end_date DATE
);

-- View: Media Currently at Vault
CREATE VIEW vault_inventory AS
SELECT
    v.vault_name,
    v.location_id,
    m.barcode,
    m.media_type,
    c.database_name,
    c.backup_date,
    c.expiration_date,
    c.size_bytes,
    m.last_verified_date,
    CASE
        WHEN c.expiration_date < CURRENT_DATE THEN 'expired'
        WHEN c.expiration_date < CURRENT_DATE + INTERVAL '30 days' THEN 'expiring_soon'
        ELSE 'active'
    END AS retention_status
FROM tape_media m
JOIN vault_locations v ON m.current_location = v.location_id
LEFT JOIN tape_contents c ON m.barcode = c.barcode
WHERE m.status = 'at_vault';

-- Procedure: Request Tape Retrieval
CREATE OR REPLACE FUNCTION request_tape_retrieval(
    p_barcode VARCHAR(20),
    p_requested_by VARCHAR(100),
    p_urgency VARCHAR(20)   -- standard, urgent, emergency
) RETURNS UUID AS $$
DECLARE
    v_movement_id UUID;
    v_current_location VARCHAR(50);
BEGIN
    -- Get current location
    SELECT current_location INTO v_current_location
    FROM tape_media
    WHERE barcode = p_barcode;

    -- Create movement record
    v_movement_id := gen_random_uuid();

    INSERT INTO tape_movements (
        movement_id, barcode, movement_type,
        from_location, to_location,
        requested_by, notes
    ) VALUES (
        v_movement_id, p_barcode, 'return_from_vault',
        v_current_location, 'primary_data_center',
        p_requested_by, 'Urgency: ' || p_urgency
    );

    -- Update media status
    UPDATE tape_media
    SET status = 'in_transit'
    WHERE barcode = p_barcode;

    -- Log for alerting (vault team would be notified)
    -- In practice, this would trigger email/page to vault provider

    RETURN v_movement_id;
END;
$$ LANGUAGE plpgsql;

Common tape rotation schemes include Tower of Hanoi (efficient media utilization, complex management) and Grandfather-Father-Son, or GFS (simple, higher media count). The choice depends on retention requirements, media costs, and operational complexity tolerance.
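As a small illustration of the Grandfather-Father-Son idea (not tied to any vendor tooling), the helper below classifies a backup date into a rotation slot; the slot rules are simplified assumptions.

from datetime import date

def gfs_slot(d: date) -> str:
    """Classify a backup date into a Grandfather-Father-Son rotation slot.
    Illustrative policy: monthlies are kept longest, weeklies next, dailies shortest."""
    if d.day == 1:          # first of the month -> grandfather (monthly tape)
        return 'grandfather'
    if d.weekday() == 6:    # Sunday -> father (weekly tape)
        return 'father'
    return 'son'            # all other days -> daily tape

# Example: classify one week of backups
for day in range(13, 20):
    d = date(2024, 1, day)
    print(d, gfs_slot(d))   # 2024-01-14 (a Sunday) lands in the 'father' slot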
An air-gapped backup is one that is physically or logically disconnected from production systems and networks. Air-gapping provides the ultimate protection against network-propagating threats like ransomware, advanced persistent threats (APTs), and insider attacks with elevated privileges.
Air-gap implementation approaches: removable media taken offline after writing (tape is the classic example), backup networks connected only during scheduled backup windows, hardware data diodes that enforce one-way transfer, and logical air gaps built on immutable storage with entirely separate credentials and control planes.
Data diode architecture:
Data diodes are hardware devices that enforce one-way data flow. Unlike firewalls (which can be misconfigured or compromised), a physical data diode makes bidirectional communication physically impossible.
┌──────────────────────────────────────────────────────────────────┐
│ DATA DIODE ARCHITECTURE │
├──────────────────────────────────────────────────────────────────┤
│ │
│ PRODUCTION SIDE DIODE AIR-GAPPED SIDE │
│ ┌────────────┐ ┌──────────┐ ┌────────────────┐ │
│ │ Backup │ │ │ │ Receiving │ │
│ │ Server │ ───▶ │ TX-Only │ ───▶ │ Agent │ │
│ └────────────┘ │ (No RX) │ └────────────────┘ │
│ └──────────┘ │ │
│ │ │
│ ▼ │
│ ┌────────────────┐ │
│ Commands CANNOT │ Immutable │ │
│ flow backwards ✗ │ Storage │ │
│ ◀──────────────────────────── └────────────────┘ │
│ │
└──────────────────────────────────────────────────────────────────┘
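Commercial data diodes are hardware, but the one-way principle can be sketched in software: the sender below pushes checksummed, sequence-numbered chunks over UDP and never reads from the socket, so nothing downstream can signal back over this channel. Host, port, and chunk size are illustrative values, and a real deployment would add forward error correction since there are no retransmissions.

import hashlib
import socket
import struct

def push_one_way(file_path: str, host: str = '10.0.0.50', port: int = 9999,
                 chunk_size: int = 1400) -> None:
    """Send a backup file as sequenced, checksummed UDP datagrams.
    There is no receive path: the socket is only ever used with sendto()."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 0
    with open(file_path, 'rb') as f:
        while chunk := f.read(chunk_size):
            digest = hashlib.sha256(chunk).digest()
            # Datagram layout: 8-byte sequence number, 32-byte SHA-256, payload
            sock.sendto(struct.pack('>Q', seq) + digest + chunk, (host, port))
            seq += 1
    sock.close()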
Ransomware-resistant backup design:
To be truly ransomware-resistant, air-gapped backups must be unreachable with any credential stored on production systems, immutable for a defined retention window, verified regularly from the isolated side, and restorable to clean infrastructure independent of the compromised environment.
A backup target accessible via network credentials (even 'restricted' ones) is NOT air-gapped. Sophisticated ransomware harvests credentials over weeks before striking. If your backup admin's credentials can delete backups, those backups are vulnerable. True air-gap requires architectural separation, not just access controls.
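One practical consequence: treat immutability as something to test, not assume. A minimal sketch, assuming an S3 bucket with Object Lock and versioning enabled (bucket, key, and version ID are placeholders), tries to delete a locked canary object with the credentials the backup server actually holds and alerts if the deletion succeeds.

import boto3
from botocore.exceptions import ClientError

def deletion_is_blocked(bucket: str, key: str, version_id: str) -> bool:
    """Return True only if the platform refuses to delete the locked object version."""
    s3 = boto3.client('s3')
    try:
        s3.delete_object(Bucket=bucket, Key=key, VersionId=version_id)
        return False            # deletion succeeded: this copy is NOT immutable
    except ClientError as err:
        # A locked version should be rejected; the exact code can vary by platform
        return err.response['Error']['Code'] == 'AccessDenied'

# Example with placeholder names: raise an alert if the canary is deletable
if not deletion_is_blocked('company-backup-immutable',
                           'canary/immutability-test.bin',
                           'EXAMPLE_VERSION_ID'):
    raise RuntimeError('Backup immutability check failed: canary object was deletable')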
The physical distance and location of offsite storage affects recovery capability, compliance, and disaster resilience. Geographic decisions involve tradeoffs between protection scope and recovery speed.
Distance considerations:
| Distance | Protection Scope | Recovery Considerations | Use Cases |
|---|---|---|---|
| <10 miles | Same metropolitan area | Minutes to hours for physical retrieval; shared infrastructure risks | Fast recovery priority; limited disaster scope assumed |
| 10-50 miles | Regional separation | Hours for retrieval; may share power grid, weather patterns | Balanced approach for most organizations |
| 50-250 miles | Multi-region separation | Same-day to next-day retrieval; separate infrastructure | Protection from major regional events |
| 250+ miles | Geographic separation | 1-2 day retrieval for physical media; network recovery faster | Protection from large-scale disasters (hurricanes, earthquakes) |
| Cross-border | International | Customs/regulatory complexity; data sovereignty issues | Specific compliance requirements; extreme disaster protection |
Data sovereignty and compliance:
Where you store data matters legally, not just physically: regulations such as the GDPR constrain where personal data may be stored and processed, some industries require data to remain within national borders, and legal discovery and government-access rules differ by jurisdiction, which is why backup placement policies often restrict allowed regions per data class.
Multi-region cloud deployment:
Cloud providers enable geographic distribution without physical logistics:
# Multi-region backup strategy configuration
offsite_regions:
primary:
provider: aws
region: us-east-1
storage_class: STANDARD_IA
purpose: "Primary offsite, fast recovery"
secondary:
provider: aws
region: us-west-2
storage_class: GLACIER
purpose: "Cross-coast DR, ransomware protection"
replication: enabled
replication_sla_minutes: 15
tertiary:
provider: azure # Different provider for additional resilience
region: westeurope
storage_class: cool
purpose: "Cross-provider, EU compliance, extreme DR"
sync_frequency: daily
geographic_constraints:
eu_data:
allowed_regions: ["eu-west-1", "eu-central-1", "westeurope", "northeurope"]
cross_border_allowed: false
financial_data:
allowed_regions: ["us-east-1", "us-west-2"] # US only per compliance
encryption_required: true
Choose offsite locations that don't share failure modes with your primary site. If your primary is in Florida, an offsite in Miami shares hurricane risk. Consider locations in different seismic zones, power grids, and weather patterns. For cloud, avoid regions in the same AWS/Azure 'geography' that might share underlying infrastructure.
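Where candidate sites are known by coordinates, the distance guidance can be made testable with a small helper; the locations and threshold below are illustrative.

from math import asin, cos, radians, sin, sqrt

def miles_between(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance in statute miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))   # Earth radius ~3959 miles

# Illustrative check: primary site in Miami, candidate vault in Atlanta
primary = (25.76, -80.19)
candidate = (33.75, -84.39)
distance = miles_between(*primary, *candidate)
print(f"{distance:.0f} miles")        # roughly 600 miles
if distance < 250:
    print("Warning: candidate site may share regional failure modes with the primary")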
Organizations often conflate replication with backup, but they serve different purposes. Understanding when each applies is critical for effective offsite protection.
Key differences:
| Aspect | Replication | Backup | Implication |
|---|---|---|---|
| Purpose | High availability, fast failover | Data recovery, historical access | Different problems, different solutions |
| Data State | Current state only | Point-in-time snapshots | Replication propagates corruption; backup preserves clean state |
| RPO | Near-zero (seconds to minutes) | Minutes to hours | Replication for HA; backup for recovery |
| RTO | Sub-minute failover possible | Minutes to hours for restore | Replication for business continuity; backup for disaster recovery |
| Protection From | Hardware failure, maintenance | Data corruption, ransomware, human error | Replication fails against logical errors |
| Retention | Current + minimal history | Days to years | Backup provides historical recovery |
Replication is not backup. When a DBA accidentally drops a table, replication dutifully propagates that deletion to all replicas within seconds. When ransomware encrypts the database, replication encrypts all replicas. Replication protects against hardware failure; backup protects against data loss.
When to use each:
Use Replication for: high availability and fast failover, protection against hardware failure, planned maintenance with minimal downtime, and near-zero RPO business continuity.
Use Backup for: recovery from data corruption, ransomware, and human error; point-in-time and historical recovery; and long-term retention for compliance.
Best practice: Use both.
┌─────────────────────────────────────────────────────────────────┐
│ COMPREHENSIVE PROTECTION │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ Sync Replication ┌──────────┐ │
│ │ Primary │ ◄─────────────────────────▶│ Standby │ │
│ │ DB │ < 1 sec │ DB │ │
│ └────┬─────┘ └──────────┘ │
│ │ │
│ │ Hourly │
│ │ Backup │
│ ▼ │
│ ┌──────────────┐ │
│ │ Local Disk │ ─────────────▶ Cloud Storage │
│ │ Backup │ Async Copy (Offsite) │
│ └──────────────┘ │ │
│ │ Weekly │
│ ▼ │
│ ┌──────────────┐ │
│ │ Tape │ (Air-Gapped) │
│ │ Vault │ │
│ └──────────────┘ │
│ │
│ Replication: Handles hardware failure (RTO: seconds) │
│ Backup: Handles logical errors, ransomware (RTO: minutes/hrs) │
│ Air-Gap: Last resort recovery (RTO: hours/days) │
└─────────────────────────────────────────────────────────────────┘
Offsite storage is worthless if you cannot recover from it efficiently. Recovery planning must account for the access times, bandwidth limitations, and operational procedures associated with offsite data.
Recovery time components for offsite: retrieval initiation (requesting data from archive tiers or recalling media from the vault), transfer (network download or physical transport), and the local restore and verification that follow.
Retrieval time analysis:
| Storage Type | Retrieval Initiation | Transfer Time (10 TB) | Key Considerations |
|---|---|---|---|
| Cloud Standard | Instant | ~9 hrs @ 2.5 Gbps | Egress costs apply; concurrent streams possible |
| Cloud IA | Instant | ~9 hrs @ 2.5 Gbps | Higher retrieve cost; otherwise same as standard |
| S3 Glacier Flexible | 3-5 hours | +9 hrs transfer | Choose retrieval speed (tradeoff with cost) |
| S3 Glacier Deep Archive | 12-48 hours | +9 hrs transfer | Slowest cloud option; plan for delays |
| Tape Vault (courier) | 4-24 hours | Time to load & read | Physical transport dominates; verify media works |
| Tape Vault (disaster) | 24-48 hours | Variable | May need to ship to alternate facility |
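The transfer column in the table is simple bandwidth arithmetic, worth rerunning for your own data sizes and link speeds; the helper below assumes the link is dedicated to the restore, which real transfers rarely achieve.

def transfer_hours(data_tb: float, link_gbps: float, efficiency: float = 1.0) -> float:
    """Hours to move data_tb terabytes over a link_gbps link.
    efficiency < 1.0 models protocol overhead and shared bandwidth."""
    bits = data_tb * 1e12 * 8                       # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

print(f"{transfer_hours(10, 2.5):.1f} h")           # ~8.9 h: the table's 10 TB figure
print(f"{transfer_hours(10, 0.25):.1f} h")          # ~89 h on a 250 Mbps link
print(f"{transfer_hours(10, 2.5, 0.7):.1f} h")      # ~12.7 h at 70% effective throughput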
Offsite recovery procedures:
#!/bin/bash
# Offsite Recovery Procedure: Cloud Storage (AWS S3 Glacier)
#
# This script initiates recovery from Glacier offsite storage
set -euo pipefail
BACKUP_BUCKET="company-backup-dr-west"
RECOVERY_POINT="20240115"   # matches the YYYYMMDD_HHMMSS backup key naming
DATABASE="production_orders"
RETRIEVAL_TIER="Expedited" # Expedited (1-5 min), Standard (3-5 hr), Bulk (5-12 hr)
# Step 1: Find the backup object
echo "Finding backup for $DATABASE on $RECOVERY_POINT..."
BACKUP_KEY=$(aws s3api list-objects-v2 \
--bucket "$BACKUP_BUCKET" \
--prefix "backups/$DATABASE/full/" \
--query "Contents[?contains(Key, '$RECOVERY_POINT')].[Key]" \
--output text | head -1)
if [ -z "$BACKUP_KEY" ]; then
echo "ERROR: No backup found for $RECOVERY_POINT"
exit 1
fi
echo "Found backup: $BACKUP_KEY"
# Step 2: Initiate Glacier restore
echo "Initiating Glacier restore (Tier: $RETRIEVAL_TIER)..."
aws s3api restore-object \
--bucket "$BACKUP_BUCKET" \
--key "$BACKUP_KEY" \
--restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "'$RETRIEVAL_TIER'"}}'
echo "Restore initiated. Waiting for object to become available..."
# Step 3: Poll for restore completion
while true; do
RESTORE_STATUS=$(aws s3api head-object \
--bucket "$BACKUP_BUCKET" \
--key "$BACKUP_KEY" \
--query "Restore" --output text 2>/dev/null || echo "pending")
if [[ "$RESTORE_STATUS" == *'ongoing-request="false"'* ]]; then
echo "Restore complete! Object is available."
break
fi
echo "Still restoring... waiting 60 seconds"
sleep 60
done
# Step 4: Download to local storage
echo "Downloading backup to local recovery storage..."
aws s3 cp "s3://$BACKUP_BUCKET/$BACKUP_KEY" "/recovery/$DATABASE/"
# Step 5: Verify checksum
echo "Verifying backup integrity..."
EXPECTED_HASH=$(aws s3api head-object \
--bucket "$BACKUP_BUCKET" \
--key "$BACKUP_KEY" \
--query "Metadata.sha256checksum" --output text)
ACTUAL_HASH=$(sha256sum "/recovery/$DATABASE/$(basename $BACKUP_KEY)" | cut -d' ' -f1)
if [ "$EXPECTED_HASH" == "$ACTUAL_HASH" ]; then
echo "✓ Checksum verified. Ready for restore."
else
echo "✗ CHECKSUM MISMATCH! Backup may be corrupted."
exit 1
fi
echo "Recovery preparation complete. Proceed with database restore."
For truly critical databases, consider keeping the most recent backup in a faster-access tier (Standard or IA) even if archiving older copies to Glacier. The cost difference for a single recent backup is minimal compared to hours of delay during a crisis.
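One way to act on this: after each upload, copy the newest backup to a fixed key in a fast-access class so lifecycle transitions on older objects never slow down the most likely restore. A minimal sketch with boto3 (bucket, prefix, and database names are placeholders; objects over 5 GB would need a multipart copy instead).

import boto3

s3 = boto3.client('s3')

def refresh_latest_copy(bucket: str, newest_key: str, database: str) -> None:
    """Copy the most recent backup object to a predictable fast-tier key.
    Older objects can then age into Glacier without affecting crisis recovery."""
    s3.copy_object(
        Bucket=bucket,
        Key=f"latest/{database}.backup",            # fixed, predictable key
        CopySource={'Bucket': bucket, 'Key': newest_key},
        StorageClass='STANDARD_IA',                 # fast retrieval, lower cost than STANDARD
        MetadataDirective='COPY'
    )

# Example with placeholder names
refresh_latest_copy('company-backup-dr-west',
                    'backups/production_orders/full/20240115_020000.backup',
                    'production_orders')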
Offsite storage transforms backup from local protection to disaster resilience. By maintaining copies in geographically separate, appropriately secured locations, organizations survive events that would otherwise cause complete data loss.
What's next:
With offsite storage established, we move to encryption—the practice of protecting backup data from unauthorized access whether at rest, in transit, or in the hands of third parties. Encryption ensures that even if backup media is lost or stolen, the data remains protected.
You now understand how to design offsite storage architectures that protect against site-level disasters while maintaining acceptable recovery times. Next, we'll explore encryption strategies that protect backup data at every stage of its lifecycle.