If you could only add one technology to your stack beyond your primary database, a strong argument could be made for Redis. Created by Salvatore Sanfilippo in 2009, Redis (Remote Dictionary Server) has become the de facto standard for caching, session management, real-time analytics, messaging, and dozens of other use cases that require sub-millisecond latency and high throughput.
What sets Redis apart from other key-value stores is its rich set of native data structures. While Memcached treats values as opaque blobs, Redis understands strings, lists, sets, sorted sets, hashes, bitmaps, HyperLogLogs, and streams. This structural awareness enables sophisticated operations—atomic list pushes, set intersections, sorted set range queries—that would require complex application logic with a simpler store.
Redis is fast. Extraordinarily fast. Single-threaded by design (for command processing), it can handle hundreds of thousands of operations per second on commodity hardware. And with Redis Cluster, it scales horizontally to handle millions of operations per second across terabytes of data.
By the end of this page, you will know Redis's data structures and when to use each, understand its persistence options (RDB vs AOF), and grasp the threading model, memory management, and architectural decisions that make Redis so performant.
Redis's performance stems from deliberate architectural choices that prioritize simplicity and speed. Understanding these choices is essential for using Redis effectively.
In-memory data store:
Redis keeps all data in RAM. This is its most fundamental characteristic:
- Every read and write hits memory, not disk, so typical operations complete in microseconds
- Dataset size is bounded by available RAM; disk persistence is optional and off the hot path
- Restart behavior depends entirely on the persistence configuration covered later on this page
Single-threaded command processing:
Redis processes commands using a single thread. This sounds limiting but is actually a key performance optimization:
- No locks and no contention—commands never race with each other
- No context-switching overhead between worker threads
- Every command is atomic by default, simplifying both the server and client code
But isn't single-threaded slow?
Not when operations are fast. A single Redis thread can execute 500,000+ simple operations per second because:
- All data lives in RAM, so there is no disk I/O on the command path
- Most operations are O(1) or O(log N) on highly optimized data structures
- I/O multiplexing (epoll/kqueue) lets one thread service thousands of connections
Redis 6.0+ threading:
Modern Redis uses multiple threads for I/O (network reads/writes) while maintaining single-threaded command processing. This improves throughput for I/O-bound workloads without sacrificing the simplicity of single-threaded execution.
┌─────────────────────────────────────────────────────────────┐
│                        Redis Server                         │
├──────────────────────────────┬──────────────────────────────┤
│  I/O Threads (multiple)      │  Main Thread (single)        │
│  ──────────────────────      │  ────────────────────        │
│  • Read client requests      │  • Parse commands            │
│  • Write responses           │  • Execute commands          │
│  • Network multiplexing      │  • Update data structures    │
│                              │  • Trigger persistence       │
└──────────────────────────────┴──────────────────────────────┘
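The serialization idea behind this split can be modeled with a toy Python program (an illustration of the concept, not Redis internals): many "client" threads submit commands, but a single worker thread applies them, so shared state needs no locks.

```python
import queue
import threading

# Toy model: client threads enqueue commands; ONE worker thread executes
# them against the shared dict. Because only one thread ever touches
# `data`, the INCR below needs no locking yet loses no updates.
data = {}
commands = queue.Queue()

def command_thread():
    while True:
        op, key = commands.get()
        if op == "INCR":
            data[key] = data.get(key, 0) + 1
        elif op == "STOP":
            break

worker = threading.Thread(target=command_thread)
worker.start()

# Four "clients" each send 1000 INCRs concurrently
clients = [
    threading.Thread(
        target=lambda: [commands.put(("INCR", "hits")) for _ in range(1000)]
    )
    for _ in range(4)
]
for c in clients:
    c.start()
for c in clients:
    c.join()
commands.put(("STOP", None))
worker.join()

print(data["hits"])  # 4000 — no lost updates despite no locks on `data`
```

The same reasoning explains why Redis commands are atomic by default: no other command can interleave with one that is executing.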
Because Redis is single-threaded, slow commands block everything. Running KEYS * on a large dataset blocks all clients for seconds. Use SCAN for iteration. Avoid O(N) commands on large collections in production.
Redis strings are the most basic and most used data type. Despite the name, they're not just text—they're binary-safe sequences of bytes that can hold any data up to 512MB.
What strings can store:
- Text: usernames, serialized JSON, HTML fragments, cached API responses
- Numbers: counters and balances (Redis parses them for INCR/INCRBY)
- Binary data: images, compressed payloads, protocol buffers—values are binary-safe
Core string operations:
| Command | Description | Complexity | Example |
|---|---|---|---|
| SET key value | Set string value | O(1) | SET user:1:name "Alice" |
| GET key | Get string value | O(1) | GET user:1:name → "Alice" |
| SETNX key value | Set if not exists | O(1) | SETNX lock:resource "owner" |
| SETEX key ttl value | Set with expiration | O(1) | SETEX session:abc 3600 "data" |
| INCR/DECR key | Atomic increment/decrement | O(1) | INCR page:views:home → 42 |
| INCRBY key amount | Increment by specific amount | O(1) | INCRBY user:1:balance 100 |
| APPEND key value | Append to existing value | O(1) | APPEND log:entry " more text" |
| STRLEN key | Get string length | O(1) | STRLEN user:1:name → 5 |
| MGET/MSET | Multiple get/set | O(N) | MGET key1 key2 key3 |
Pattern: Atomic counters
# Page view counter - atomically increment
INCR page:views:home
INCR page:views:home
GET page:views:home → "2"
# Rate limiting - increment and check in one operation
MULTI
INCR ratelimit:api:user:123
EXPIRE ratelimit:api:user:123 60
EXEC
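The INCR + EXPIRE rate-limiting pattern can be sketched in plain Python with a dict standing in for Redis (`allow_request` and the key format are illustrative names, and a single-process dict is only a model of the real, shared store):

```python
import time

# Toy fixed-window rate limiter mirroring INCR + EXPIRE.
# `store` maps key -> (count, window_expiry).
store = {}

def allow_request(user_id, limit=5, window=60, now=None):
    now = time.time() if now is None else now
    key = f"ratelimit:api:user:{user_id}"
    count, expiry = store.get(key, (0, now + window))
    if now >= expiry:                    # window elapsed: the EXPIRE fired
        count, expiry = 0, now + window
    count += 1                           # INCR
    store[key] = (count, expiry)
    return count <= limit

# Five requests pass; the sixth in the same window is rejected
results = [allow_request(123, now=1000.0) for _ in range(6)]
print(results)  # [True, True, True, True, True, False]
```

In Redis the MULTI/EXEC block makes the increment and the TTL assignment take effect together, which is what the tuple update models here.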
Pattern: Distributed locking
# Acquire lock with protection against deadlocks
SET lock:order-processing "server-1" NX EX 30
# NX = only if not exists
# EX 30 = expires in 30 seconds
# If SET returns OK, lock acquired
# If SET returns nil, lock held by another
# Release lock (only if we own it - use Lua for atomicity)
EVAL "if redis.call('get',KEYS[1]) == ARGV[1] then return redis.call('del',KEYS[1]) else return 0 end" 1 lock:order-processing server-1
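The ownership check inside that Lua script can be sketched in Python, with a dict standing in for Redis (`acquire`/`release` are illustrative names; an in-process dict gives none of the distribution, which is exactly why real deployments need the atomic Lua script):

```python
# Toy in-process model of SET NX EX + check-and-delete.
locks = {}

def acquire(name, owner):
    # SET name owner NX — succeed only if the key is absent
    if name not in locks:
        locks[name] = owner
        return True
    return False

def release(name, owner):
    # Delete only if we still own the lock; otherwise another
    # holder (who acquired it after our TTL expired) keeps it
    if locks.get(name) == owner:
        del locks[name]
        return True
    return False

assert acquire("lock:order-processing", "server-1")
assert not acquire("lock:order-processing", "server-2")  # already held
assert not release("lock:order-processing", "server-2")  # not the owner
assert release("lock:order-processing", "server-1")
```

The unconditional DEL that this guards against: if server-1's lock expires and server-2 acquires it, a plain DEL from server-1 would release server-2's lock.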
Pattern: Caching with TTL
# Cache API response for 5 minutes
SETEX api:response:products:category:electronics 300 '{"products": [...]}'
# Check cache, fall back to database if miss
GET api:response:products:category:electronics
Redis optimizes string storage based on content. Small integers (0-9999) use a shared object pool. Strings under 44 bytes use an embedded encoding (stored inline with the key object). Larger strings use raw allocation. This automatic optimization keeps memory usage efficient.
Redis's collection types—Lists, Sets, and Sorted Sets—transform it from a simple cache into a powerful data processing platform. Each serves distinct use cases with specialized operations.
Lists: Ordered sequences
Redis lists are implemented as quicklists—linked lists of compact, packed nodes. They support efficient insertion and removal at both ends, making them ideal for queues, stacks, and recent-item lists.
| Command | Description | Complexity |
|---|---|---|
| LPUSH/RPUSH key element | Add to left/right end | O(1) |
| LPOP/RPOP key | Remove from left/right end | O(1) |
| LRANGE key start stop | Get range of elements | O(S+N) |
| LLEN key | Get list length | O(1) |
| LINDEX key index | Get element at index | O(N) |
| BLPOP/BRPOP key timeout | Blocking pop (for queues) | O(1) |
List patterns:
# Recent notifications (capped list)
LPUSH notifications:user:123 '{"type":"like","from":456}'
LTRIM notifications:user:123 0 99 # Keep only last 100
LRANGE notifications:user:123 0 9 # Get 10 most recent
# Job queue (producer-consumer)
RPUSH queue:emails '{"to":"a@b.com","subject":"Hello"}'
BLPOP queue:emails 30 # Worker blocks until job available
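The capped-list pattern maps cleanly onto Python's `collections.deque`, whose `maxlen` behaves like LPUSH followed by LTRIM 0 99 (a sketch of the access pattern, not a Redis client):

```python
from collections import deque

# deque(maxlen=100) ≈ LPUSH + LTRIM 0 99: newest first, capped at 100.
notifications = deque(maxlen=100)

for i in range(150):
    notifications.appendleft({"type": "like", "from": i})  # LPUSH

print(len(notifications))        # 100 — older entries were trimmed
print(notifications[0]["from"])  # 149 — most recent at the left end

recent_ten = list(notifications)[0:10]  # LRANGE 0 9
```

Unlike the deque, Redis gives you this behavior across processes and machines, plus BLPOP's blocking semantics for the worker side.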
Sets: Unique unordered collections
Redis sets are unordered collections of unique strings. They enable membership tests, deduplication, and powerful set operations (union, intersection, difference).
| Command | Description | Complexity |
|---|---|---|
| SADD key member | Add member to set | O(1) |
| SREM key member | Remove member | O(1) |
| SISMEMBER key member | Check membership | O(1) |
| SMEMBERS key | Get all members | O(N) |
| SCARD key | Get set cardinality | O(1) |
| SINTER keys... | Set intersection | O(N*M) |
| SUNION keys... | Set union | O(N) |
| SDIFF keys... | Set difference | O(N) |
Set patterns:
# Track unique visitors
SADD visitors:2024-01-15 "user:123"
SADD visitors:2024-01-15 "user:456"
SCARD visitors:2024-01-15 → 2 (unique count)
# Common friends
SINTER friends:user:123 friends:user:456 → common friends
# Tagging system
SADD product:123:tags "electronics" "sale" "featured"
SMEMBERS product:123:tags → all tags
SINTER tag:electronics tag:sale → products in both categories
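Python's built-in `set` mirrors these commands almost one-to-one, which makes the semantics easy to verify locally (illustrative data; a local set of course lacks Redis's cross-process sharing):

```python
# SADD / SCARD: duplicates are ignored, cardinality counts uniques
visitors = set()
visitors.add("user:123")   # SADD visitors:2024-01-15 "user:123"
visitors.add("user:456")
visitors.add("user:123")   # duplicate, ignored
print(len(visitors))       # SCARD → 2

friends_123 = {"user:2", "user:4", "user:7"}
friends_456 = {"user:4", "user:7", "user:9"}
print(sorted(friends_123 & friends_456))  # SINTER → ['user:4', 'user:7']
print(sorted(friends_123 | friends_456))  # SUNION → all four users
print(sorted(friends_123 - friends_456))  # SDIFF  → ['user:2']
```

The payoff of doing this server-side is that SINTER over two million-member sets never ships the sets to the client—only the result.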
Sorted Sets: Ordered unique collections with scores
Sorted sets combine set uniqueness with ordering by score. Each member has an associated floating-point score that determines its position. This enables leaderboards, time-series data, and priority queues.
| Command | Description | Complexity |
|---|---|---|
| ZADD key score member | Add with score | O(log N) |
| ZREM key member | Remove member | O(log N) |
| ZSCORE key member | Get member's score | O(1) |
| ZRANK key member | Get rank (0-indexed) | O(log N) |
| ZRANGE key start stop | Get range by rank | O(log N + M) |
| ZRANGEBYSCORE key min max | Get range by score | O(log N + M) |
| ZINCRBY key increment member | Increment score | O(log N) |
| ZCARD key | Get cardinality | O(1) |
Sorted set patterns:
# Real-time leaderboard
ZADD leaderboard:game:chess 2500 "player:magnus"
ZADD leaderboard:game:chess 2400 "player:hikaru"
ZINCRBY leaderboard:game:chess 10 "player:hikaru" # Win increases rating
ZREVRANK leaderboard:game:chess "player:hikaru" # Get ranking (0 = top)
ZREVRANGE leaderboard:game:chess 0 9 WITHSCORES # Top 10 with scores
# Rate limiting with sliding window (timestamps as scores)
ZADD ratelimit:user:123 <timestamp> <request_id>
ZREMRANGEBYSCORE ratelimit:user:123 0 <timestamp-60>  # Remove entries older than 60s
ZCARD ratelimit:user:123                              # Count requests in window
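The sliding-window logic can be sketched in Python with a sorted list of timestamps playing the role of the sorted set's scores (`allow` and the constants are illustrative; Redis does this atomically and server-side):

```python
import bisect

window_seconds = 60
requests = []  # sorted timestamps for one user — stands in for the sorted set

def allow(now, limit=3):
    # ZREMRANGEBYSCORE 0 (now - 60): drop entries outside the window
    cutoff = now - window_seconds
    del requests[:bisect.bisect_right(requests, cutoff)]
    if len(requests) >= limit:    # ZCARD check against the limit
        return False
    bisect.insort(requests, now)  # ZADD now
    return True

assert allow(1000) and allow(1010) and allow(1020)
assert not allow(1030)  # 4th request inside the 60s window
assert allow(1070)      # the 1000 and 1010 entries have aged out
```

Unlike the fixed-window INCR pattern, this never allows a burst of 2× the limit straddling a window boundary.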
Sorted sets use two data structures internally: a skip list for O(log N) ordered operations and a hash table for O(1) member-score lookups. This dual structure gives sorted sets their powerful combination of fast range queries and fast point lookups.
Beyond lists and sets, Redis offers hashes for structured objects and specialized types for specific use cases.
Hashes: Field-value maps within a key
Redis hashes are perfect for representing objects with multiple fields. Instead of serializing an entire object as a JSON string, you store individual fields that can be read and updated independently.
| Command | Description | Complexity |
|---|---|---|
| HSET key field value | Set field value | O(1) |
| HGET key field | Get field value | O(1) |
| HMSET key field value... | Set multiple fields (deprecated since Redis 4.0; HSET accepts multiple pairs) | O(N) |
| HMGET key field... | Get multiple fields | O(N) |
| HGETALL key | Get all fields and values | O(N) |
| HINCRBY key field amount | Increment field value | O(1) |
| HDEL key field | Delete field | O(1) |
| HEXISTS key field | Check field exists | O(1) |
Hash patterns:
# User profile with independent fields
HSET user:123 name "Alice" email "alice@example.com" balance 1000
HGET user:123 name → "Alice"
HINCRBY user:123 balance 50 # Add $50 without reading whole object
HGETALL user:123 → {name: "Alice", email: "...", balance: "1050"}
# Shopping cart
HSET cart:user:123 prod:456 2 prod:789 1 # productId -> quantity
HINCRBY cart:user:123 prod:456 1 # Add one more of product 456
HGETALL cart:user:123 # Get entire cart
Hash vs String (JSON):
- Hash: read or update individual fields without deserializing; atomic HINCRBY on numeric fields; no nesting
- JSON string: one GET/SET round trip for the whole object; supports nested structures; any update rewrites the entire value
- Rule of thumb: flat objects with independently updated fields fit hashes; deeply nested or read-whole objects fit JSON strings
Advanced data types:
Bitmaps (Bit arrays):
Redis strings can be manipulated as bit arrays. Extremely memory-efficient for tracking boolean states across millions of entities.
# User login tracking (1 bit per user)
SETBIT logins:2024-01-15 123 1 # User 123 logged in
SETBIT logins:2024-01-15 456 1 # User 456 logged in
GETBIT logins:2024-01-15 123 → 1 (did log in)
BITCOUNT logins:2024-01-15 → 2 (total logins)
# Users active on both days
BITOP AND active:both logins:2024-01-15 logins:2024-01-16
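A Python arbitrary-precision int can model these bitmap commands exactly—bit i set means entity i is active (an illustration of the semantics; Redis stores the bits inside a string value on the server):

```python
# Two "daily login" bitmaps, one int each
logins_jan15 = 0
logins_jan16 = 0

def setbit(bitmap, offset):              # SETBIT key offset 1
    return bitmap | (1 << offset)

logins_jan15 = setbit(logins_jan15, 123)
logins_jan15 = setbit(logins_jan15, 456)
logins_jan16 = setbit(logins_jan16, 123)

print((logins_jan15 >> 123) & 1)         # GETBIT → 1 (did log in)
print(bin(logins_jan15).count("1"))      # BITCOUNT → 2 (total logins)

both = logins_jan15 & logins_jan16       # BITOP AND
print((both >> 123) & 1)                 # 1 — user 123 active both days
print((both >> 456) & 1)                 # 0 — user 456 only on the 15th
```

At one bit per user, tracking daily activity for 10 million users costs about 1.25MB per day—far less than a set of user IDs.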
HyperLogLog (Cardinality estimation):
Estimate unique counts with ~0.81% error using only 12KB regardless of cardinality.
# Count unique visitors (millions) with constant memory
PFADD visitors:2024-01 "user:123" "user:456" "user:123" # Duplicates ignored
PFCOUNT visitors:2024-01 → 2 (approximate unique count)
# Merge months for quarterly uniques
PFMERGE visitors:Q1-2024 visitors:2024-01 visitors:2024-02 visitors:2024-03
Streams (Append-only log):
Redis Streams provide a persistent, consumer-group-aware message log—Redis's answer to Kafka.
# Append events to stream
XADD events:orders * action create orderId 12345 amount 99.99
XADD events:orders * action payment orderId 12345 status success
# Read events (consumer group)
XREADGROUP GROUP processors worker1 STREAMS events:orders >
Redis uses memory-efficient encodings for small collections: listpacks (ziplists before Redis 7) for small lists, hashes, and sorted sets, and intsets for small integer-only sets. These compact encodings are transparent—Redis automatically upgrades to the standard structures when a collection exceeds the configured size thresholds.
Redis is an in-memory database, but it offers two persistence mechanisms to survive restarts: RDB (Redis Database Backup) snapshots and AOF (Append-Only File) logs. Understanding when to use each—or both—is crucial for production deployments.
RDB: Point-in-Time Snapshots
RDB persistence saves the entire dataset as a compact binary file at specified intervals. Think of it as taking a photograph of your data at a moment in time.
How RDB works:
1. Redis forks a child process (on BGSAVE, or automatically per the save rules)
2. The child writes the entire dataset to a temporary file
3. When complete, the temporary file atomically replaces the previous dump.rdb
4. The parent keeps serving requests throughout—snapshots never block clients
Fork behavior (critical for production):
The fork() system call creates a child process with a copy of the parent's memory. On Linux, this uses copy-on-write: memory pages are shared until modified. This means:
- The snapshot is a consistent point-in-time view—writes after the fork don't affect it
- Memory usage can grow during the snapshot, in the worst case toward 2x, as written pages are duplicated
- The fork call itself briefly pauses Redis, with latency roughly proportional to the process's page-table size
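The snapshot semantics of fork can be demonstrated directly (POSIX-only sketch; Redis's BGSAVE child works the same way at a much larger scale): the child sees memory as of the fork, no matter what the parent writes afterward.

```python
import os

# Parent holds some state; child created by fork() sees a snapshot of it.
data = {"counter": 100}
read_fd, write_fd = os.pipe()

pid = os.fork()
if pid == 0:
    # Child — like the BGSAVE child, it observes the point-in-time value
    # regardless of what the parent does after the fork.
    os.close(read_fd)
    os.write(write_fd, str(data["counter"]).encode())
    os._exit(0)
else:
    # Parent — keeps "serving writes" while the child persists its view
    os.close(write_fd)
    data["counter"] += 1
    child_view = os.read(read_fd, 64).decode()
    os.waitpid(pid, 0)
    print(child_view, data["counter"])  # 100 101 — snapshot vs live value
```

The kernel's copy-on-write makes this cheap: pages are only duplicated when one side writes to them, which is why heavy write traffic during a snapshot inflates memory usage.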
| Setting | Description | Example |
|---|---|---|
| save <seconds> <changes> | Snapshot if at least <changes> writes occur within <seconds> seconds | save 900 1 (every 15min if 1+ change) |
| stop-writes-on-bgsave-error | Block writes if snapshot fails | yes (recommended for data safety) |
| rdbcompression | Compress strings in RDB file | yes (CPU vs disk trade-off) |
| rdbchecksum | Add checksum for integrity | yes (recommended) |
| dbfilename | RDB filename | dump.rdb |
If Redis crashes between snapshots, you lose all writes since the last snapshot. With 'save 300 10' (every 5 minutes if 10+ changes), you could lose up to 5 minutes of data. For ephemeral caching, this is often acceptable. For sessions or important state, consider AOF.
AOF: Write-Ahead Logging
AOF persistence logs every write operation to a file. On restart, Redis replays the log to reconstruct the dataset. This provides much better durability than RDB at the cost of larger files and slower restarts.
How AOF works:
1. Every write command is appended to an in-memory AOF buffer
2. The buffer is written to the file and fsynced according to the appendfsync policy
3. On restart, Redis replays the file's commands in order to rebuild the dataset
Sync policies (the durability/performance trade-off):
| Policy | Behavior | Durability | Performance |
|---|---|---|---|
| always | Sync after every write | Maximum—lose at most 1 command | Slow (~1000 ops/sec) |
| everysec | Sync every second | Good—lose at most 1 second | Good (default, recommended) |
| no | Let OS decide when to sync | Poor—OS buffer can be lost | Maximum throughput |
AOF Rewriting:
The AOF file grows continuously as operations are appended. Rewriting creates a minimal AOF that represents the current dataset:
# Original AOF (verbose)
SET key 1
INCR key
INCR key
INCR key
DEL key
SET key 100
# After rewrite (minimal)
SET key 100
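The compaction idea is simple enough to sketch: replay the logged commands into their final state, then emit one SET per surviving key (an illustration of the principle with a hypothetical `rewrite_aof` helper; the real rewrite works from the live dataset, not by replaying the old file):

```python
def rewrite_aof(commands):
    # Replay the log into final state...
    state = {}
    for cmd in commands:
        op, key, *args = cmd.split()
        if op == "SET":
            state[key] = args[0]
        elif op == "INCR":
            state[key] = str(int(state.get(key, "0")) + 1)
        elif op == "DEL":
            state.pop(key, None)
    # ...then emit the minimal command list that reproduces it
    return [f"SET {k} {v}" for k, v in state.items()]

log = ["SET key 1", "INCR key", "INCR key", "INCR key",
       "DEL key", "SET key 100"]
print(rewrite_aof(log))  # ['SET key 100']
```

Six logged operations collapse to one, which is why a rewritten AOF restarts so much faster than the raw log.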
Rewriting happens in a background process (similar to RDB snapshots) and atomically replaces the old AOF.
Configuration:
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100 # Rewrite when AOF doubles since last rewrite
auto-aof-rewrite-min-size 64mb # Don't rewrite if AOF is tiny
Hybrid persistence (Redis 4.0+):
Redis can combine RDB and AOF: the AOF file starts with an RDB snapshot, followed by incremental AOF entries. This gives fast restarts (RDB) with excellent durability (AOF).
aof-use-rdb-preamble yes
Choosing a persistence strategy:
| Use Case | Recommended Strategy | Rationale |
|---|---|---|
| Pure cache (can rebuild) | No persistence or RDB | Data is reconstructable; RDB for faster warm restart |
| Session store | AOF (everysec) | Losing sessions is bad UX; 1-sec loss acceptable |
| Job queue | AOF (everysec) | Lost jobs might mean lost work; durability matters |
| Leaderboard/counters | RDB or AOF everysec | Depends on accuracy requirements |
| Primary data store | AOF (always) + replica | Maximum durability if Redis is source of truth |
| Disaster recovery | RDB + ship to S3 | Point-in-time backups for recovery |
For most production deployments: enable AOF with everysec fsync, use RDB for backups (manual or scheduled), and maintain at least one replica. This provides good durability, easy backups, and high availability. Only disable persistence for true ephemeral caching.
Since Redis stores everything in memory, understanding memory behavior is essential for production operations. Running out of memory is catastrophic: Redis will stop accepting writes or be killed by the OS.
Memory usage components:
- Dataset: keys and values, plus per-key overhead for metadata (expiry, encoding, pointers)
- Client buffers: output buffers for slow consumers and large responses
- Replication buffers and backlog for connected replicas
- Copy-on-write overhead during RDB snapshots and AOF rewrites
- Allocator fragmentation (see the note at the end of this section)
Monitoring memory:
INFO memory
# used_memory: 1234567 # Total allocated by Redis
# used_memory_human: 1.18M
# used_memory_rss: 2345678 # Memory from OS perspective
# used_memory_peak: 3456789 # Historical maximum
# maxmemory: 1073741824 # Configured limit
# maxmemory_policy: allkeys-lru # Eviction policy
Eviction policies:
When maxmemory is reached, Redis must decide what to do. The eviction policy controls this:
| Policy | Behavior | Use Case |
|---|---|---|
| noeviction | Return error on writes when full | Strict capacity enforcement |
| allkeys-lru | Evict least recently used keys | General-purpose cache (recommended) |
| allkeys-lfu | Evict least frequently used keys | Workloads with stable hotspots |
| allkeys-random | Evict random keys | When all keys equally important |
| volatile-lru | Evict LRU keys with TTL set | Mix of cache and persistent data |
| volatile-lfu | Evict LFU keys with TTL set | Cache with TTL + persistent data |
| volatile-random | Evict random keys with TTL | Random eviction limited to TTL keys |
| volatile-ttl | Evict keys closest to expiring | TTL-based priority |
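The allkeys-lru policy above can be sketched with Python's `OrderedDict`, modeling maxmemory as a key count (a conceptual sketch: real Redis approximates LRU by sampling a few keys per eviction rather than maintaining an exact ordering):

```python
from collections import OrderedDict

class LruCache:
    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = OrderedDict()  # insertion order = recency order

    def set(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)            # mark as most recently used
        while len(self.data) > self.max_keys:
            self.data.popitem(last=False)     # evict least recently used

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)        # a read also refreshes recency
            return self.data[key]
        return None

cache = LruCache(max_keys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")          # touch "a", so "b" is now the LRU key
cache.set("c", 3)       # over capacity: evicts "b", not "a"
print(sorted(cache.data))  # ['a', 'c']
```

Redis's sampled approximation (tunable via maxmemory-samples) trades a little eviction accuracy for O(1) memory overhead per key.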
LRU vs LFU:
- LRU evicts what was accessed longest ago—simple, but a single scan over cold keys can flush the hot set
- LFU evicts what is accessed least often (with a frequency counter that decays over time), so stable hotspots survive occasional scans
- Both are approximated by sampling (tuned via maxmemory-samples) rather than tracked exactly
Memory optimization techniques:
- Group small objects into hashes—compact encodings make them far cheaper than one key per field
- Keep key names short; at millions of keys, key bytes add up
- Set TTLs on anything ephemeral so expiration handles cleanup
- Prefer bitmaps and HyperLogLogs over sets when boolean-per-ID or approximate counts suffice
- Enable activedefrag on workloads that fragment heavily
Watch mem_fragmentation_ratio in INFO memory. Values above 1.5 indicate significant fragmentation, which happens when Redis allocates and frees many different-sized values. Fragmentation wastes memory: used_memory_rss (what the OS sees) runs much higher than used_memory (what Redis is using). Mitigate it with activedefrag, with MEMORY PURGE (which releases allocator caches), or by restarting.
We've explored Redis's architecture, data structures, and persistence—the foundation for using Redis effectively in production. Let's consolidate the key concepts:
- Single-threaded command execution over in-memory data delivers atomicity and sub-millisecond latency
- Rich data structures—strings, lists, sets, sorted sets, hashes, bitmaps, HyperLogLogs, streams—replace application-side logic with server-side operations
- RDB provides compact point-in-time snapshots; AOF provides per-second (or per-command) durability; hybrid persistence combines both
- Memory is the budget: set maxmemory, choose an eviction policy, and monitor fragmentation
What's next:
With Redis's architecture and features understood, we'll explore Memcached—the simpler, specialized caching system that influenced Redis's design. Understanding Memcached helps you choose between them and appreciate Redis's additional capabilities.
You now understand Redis's architecture, data structures, persistence mechanisms, and memory management. This knowledge enables you to use Redis effectively for caching, real-time features, and data processing—and to operate it reliably in production.