Beneath every database system lies a control panel of configuration parameters—hundreds of knobs that determine how memory is allocated, how queries are parallelized, how data is written to disk, and how connections are managed. Most DBAs never touch these settings, accepting defaults that were designed for generic workloads on modest hardware.
This is a missed opportunity. Configuration tuning can unlock 2x, 5x, or even 10x performance improvements without changing a single line of application code or adding expensive hardware. PostgreSQL, for example, ships with conservative defaults (such as shared_buffers = 128MB) sized for machines far smaller than a modern server with 128GB of RAM.
This page teaches you to understand and optimize the configuration parameters that matter most, transforming your database from a generic installation into a finely-tuned engine matched to your specific workload.
By the end of this page, you will understand the key configuration categories affecting performance, how to calculate optimal values for your hardware and workload, database-specific tuning recommendations, and safe methodologies for implementing changes in production environments.
Database configuration parameters fall into several functional categories. Understanding these categories helps organize your tuning efforts and recognize dependencies between related settings.
| Category | Impact Area | Key Parameters |
|---|---|---|
| Memory Allocation | Buffer cache, sorting, hashing, caching | Buffer pool size, shared buffers, work_mem, sort buffer |
| Connection Management | Concurrent users, resource isolation | Max connections, connection pooling, thread pools |
| Write-Ahead Logging | Durability, crash recovery, replication | WAL size, checkpoint frequency, sync settings |
| Query Execution | Parallelism, optimization, timeouts | Parallel workers, cost parameters, statement timeout |
| I/O Configuration | Disk access patterns, async I/O | Read-ahead, I/O schedulers, tablespace settings |
| Logging & Auditing | Troubleshooting, compliance | Slow query log, error log verbosity, audit trails |
| Replication | High availability, read scaling | Sync/async commit mode, replication slots, failover settings |
The 80/20 Rule of Configuration:
While databases may have hundreds of configurable parameters, roughly 20% of them account for 80% of the performance impact. Focus your tuning efforts on memory allocation, connection management, write-ahead logging, and query execution (the categories covered in detail below). Most other parameters can remain at defaults unless you're addressing specific issues.
Many configuration changes require a server restart to take effect. Some affect all sessions immediately. Others can destabilize a system if set incorrectly. Always test changes in non-production environments, implement one change at a time, and have rollback plans ready.
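Before changing anything, check how each parameter takes effect. In PostgreSQL, the pg_settings view exposes this in its context column, and ALTER SYSTEM writes changes to postgresql.auto.conf so they are easy to roll back. A minimal sketch:

```sql
-- Which parameters need a restart vs. a reload?
SELECT name, setting, context
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'max_connections')
ORDER BY name;
-- context = 'postmaster' -> requires restart (e.g., shared_buffers)
-- context = 'sighup'     -> takes effect on config reload
-- context = 'user'       -> can be set per session (e.g., work_mem)

-- Apply one change at a time, with an easy rollback path
ALTER SYSTEM SET work_mem = '64MB';  -- written to postgresql.auto.conf
SELECT pg_reload_conf();             -- reload without restarting

-- Roll back if needed
ALTER SYSTEM RESET work_mem;
SELECT pg_reload_conf();
```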
Memory configuration is the highest-impact tuning area. Databases use memory for caching data, processing queries, sorting results, and building hash tables. Proper memory allocation minimizes disk I/O—the slowest operation in any database system.
The Memory Hierarchy:
```
L1/L2/L3 CPU Cache:       Nanoseconds  (automatic, not configurable)
        ↓
RAM (Buffer Pool/Cache):  Microseconds (CONFIGURABLE - major impact)
        ↓
SSD Storage:              Tens to hundreds of microseconds (100-1000x slower than RAM)
        ↓
HDD Storage:              Milliseconds (10-100x slower than SSD)
```
Every byte kept in the buffer cache instead of read from disk represents a massive performance win.
```sql
-- POSTGRESQL MEMORY CONFIGURATION

-- View current memory settings
SHOW shared_buffers;
SHOW work_mem;
SHOW maintenance_work_mem;
SHOW effective_cache_size;

-- ═══════════════════════════════════════════════════════════════
-- SHARED_BUFFERS: Main buffer cache for table and index data
-- ═══════════════════════════════════════════════════════════════
-- Recommendation: 25% of total RAM (but no more than 8-16GB)
-- Beyond 8-16GB, PostgreSQL relies on OS cache
-- Example for 64GB server:
-- shared_buffers = '16GB'

-- Why not more? PostgreSQL double-buffers with OS page cache
-- Large shared_buffers can increase checkpoint time

-- ═══════════════════════════════════════════════════════════════
-- WORK_MEM: Memory per operation (sorts, hashes, bitmap operations)
-- ═══════════════════════════════════════════════════════════════
-- Caution: Setting is PER OPERATION, not per query or session
-- A complex query might use work_mem 10+ times
-- Formula: (Total RAM - shared_buffers) / (max_connections * 5)

-- For 64GB RAM, 16GB shared_buffers, 100 connections:
-- (64GB - 16GB) / (100 * 5) = ~96MB per operation
-- work_mem = '64MB'  -- Conservative starting point

-- Monitor for spills to disk (indicates work_mem too small):
SELECT query, temp_blks_read, temp_blks_written
FROM pg_stat_statements
WHERE temp_blks_written > 0
ORDER BY temp_blks_written DESC;

-- ═══════════════════════════════════════════════════════════════
-- MAINTENANCE_WORK_MEM: Memory for maintenance operations
-- ═══════════════════════════════════════════════════════════════
-- Used by: VACUUM, CREATE INDEX, ALTER TABLE ADD FOREIGN KEY
-- Can be much larger than work_mem (only one operation at a time)
-- Recommendation: 1-2GB for most systems
-- maintenance_work_mem = '2GB'

-- ═══════════════════════════════════════════════════════════════
-- EFFECTIVE_CACHE_SIZE: Planner hint for disk cache available
-- ═══════════════════════════════════════════════════════════════
-- Not an allocation, just tells planner what to expect
-- Recommendation: 50-75% of total RAM
-- effective_cache_size = '48GB'  -- For 64GB server

-- ═══════════════════════════════════════════════════════════════
-- Sample postgresql.conf for 64GB RAM server
-- ═══════════════════════════════════════════════════════════════
/*
# Memory Configuration
shared_buffers = '16GB'
work_mem = '64MB'
maintenance_work_mem = '2GB'
effective_cache_size = '48GB'

# Huge pages (Linux only, reduces memory overhead)
huge_pages = try

# Hash table sizing (PostgreSQL 13+)
hash_mem_multiplier = 2.0
*/
```

Monitor your buffer cache hit ratio—the percentage of page requests served from memory versus disk. A healthy OLTP system should have 99%+ hit ratio. If consistently below 95%, increase buffer pool size or investigate unexpected data access patterns.
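One way to check that ratio is from the cumulative counters in pg_stat_database. A minimal sketch (counters cover all databases since the last statistics reset):

```sql
-- Buffer cache hit ratio across all databases since last stats reset
SELECT round(100.0 * sum(blks_hit) /
             nullif(sum(blks_hit) + sum(blks_read), 0), 2)
         AS cache_hit_ratio_pct
FROM pg_stat_database;
```

Note that this counts only shared_buffers hits; pages served from the OS page cache still appear as blks_read, so the true "memory vs. disk" ratio is somewhat higher than this number suggests.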
Each database connection consumes memory and system resources. Poorly configured connection limits lead to either resource exhaustion (too many connections) or artificial bottlenecks (too few connections). Connection pooling is essential for high-concurrency applications.
```sql
-- POSTGRESQL CONNECTION CONFIGURATION

-- View current settings
SHOW max_connections;
SELECT count(*) AS current_connections FROM pg_stat_activity;

-- ═══════════════════════════════════════════════════════════════
-- MAX_CONNECTIONS: Hard limit on connections
-- ═══════════════════════════════════════════════════════════════
-- Formula: CPU cores × 4 (for typical OLTP)
-- Or calculated with memory:
-- (Available RAM - shared_buffers) / (work_mem × queries_per_connection)
-- Default 100 is often too low for modern applications

-- For 16-core server with 64GB RAM:
-- max_connections = 200

-- IMPORTANT: Each connection reserves ~10MB minimum
-- 1000 connections = 10GB just for connection overhead

-- ═══════════════════════════════════════════════════════════════
-- SUPERUSER_RESERVED_CONNECTIONS: Reserved for admin access
-- ═══════════════════════════════════════════════════════════════
-- When max_connections is reached, superusers can still connect
-- superuser_reserved_connections = 3 (default, usually fine)

-- ═══════════════════════════════════════════════════════════════
-- CONNECTION POOLING: PgBouncer (external pooler)
-- ═══════════════════════════════════════════════════════════════
-- PgBouncer sits between application and PostgreSQL
-- Maintains small pool of actual connections
-- Application connects to PgBouncer, which multiplexes

/* pgbouncer.ini example:
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
listen_addr = *
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

# Pool mode: session, transaction, or statement
pool_mode = transaction    # Best for most OLTP workloads

max_client_conn = 1000     # Application connections to PgBouncer
default_pool_size = 25     # Connections per database/user pair
min_pool_size = 5          # Minimum idle connections
reserve_pool_size = 5      # Extra connections for burst
*/

-- Monitor connection usage
SELECT datname AS database,
       usename AS user,
       state,
       count(*) AS connections
FROM pg_stat_activity
GROUP BY datname, usename, state
ORDER BY connections DESC;
```

Modern web applications should ALWAYS use connection pooling. Establishing a new database connection for each request is wasteful (50-200ms overhead) and quickly exhausts connection limits. Use application-level pooling (HikariCP, c3p0) or database-level pooling (PgBouncer, ProxySQL).
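When connection limits are hit, long-idle sessions are a common culprit. A hedged sketch for finding them (and, if your policy allows, terminating one) via pg_stat_activity and pg_terminate_backend; the 10-minute threshold is purely illustrative:

```sql
-- Find sessions idle longer than 10 minutes (threshold is an assumption)
SELECT pid, usename, application_name, state,
       now() - state_change AS idle_for
FROM pg_stat_activity
WHERE state = 'idle'
  AND now() - state_change > interval '10 minutes'
ORDER BY idle_for DESC;

-- Terminate a specific runaway session by pid (use with care)
-- SELECT pg_terminate_backend(12345);
```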
Write-Ahead Logging (WAL) ensures durability—data isn't lost if the server crashes. But WAL configuration involves trade-offs between durability, performance, and recovery time. Write-heavy workloads benefit enormously from WAL tuning.
```sql
-- POSTGRESQL WAL CONFIGURATION

-- View current settings
SHOW wal_level;
SHOW wal_buffers;
SHOW checkpoint_completion_target;
SHOW max_wal_size;

-- ═══════════════════════════════════════════════════════════════
-- SYNCHRONOUS_COMMIT: Durability vs performance trade-off
-- ═══════════════════════════════════════════════════════════════
/*
Values:
- on (default): Wait for WAL flush to disk
- off: Don't wait for WAL flush (risk: lose last ~600ms on crash)
- local: Fsync locally, don't wait for replicas
- remote_write: Wait for replica to receive (not flush)
- remote_apply: Wait for replica to apply
*/

-- For less critical data, gain performance:
SET synchronous_commit = off;  -- Session level
-- ALTER DATABASE mydb SET synchronous_commit = off;  -- Database level

-- ═══════════════════════════════════════════════════════════════
-- WAL_BUFFERS: Shared memory for WAL before disk write
-- ═══════════════════════════════════════════════════════════════
-- Default: 1/32 of shared_buffers, max 16MB
-- Usually auto-calculation is fine
-- wal_buffers = -1  # Auto-tune (default in PG 9.1+)

-- For very high write volume:
-- wal_buffers = '64MB'

-- ═══════════════════════════════════════════════════════════════
-- CHECKPOINT CONFIGURATION
-- ═══════════════════════════════════════════════════════════════
-- Checkpoints flush all dirty buffers to disk
-- More frequent = faster recovery, more I/O interruption
-- Less frequent = slower recovery, smoother I/O

-- Size-based trigger (primary control)
-- max_wal_size = '4GB'  # Trigger checkpoint when WAL reaches this
-- min_wal_size = '2GB'  # Keep at least this much WAL

-- Time-based trigger (secondary)
-- checkpoint_timeout = '15min'  # Maximum time between checkpoints

-- Spread I/O over checkpoint interval
-- checkpoint_completion_target = 0.9  # Use 90% of interval for checkpoint

-- ═══════════════════════════════════════════════════════════════
-- FULL_PAGE_WRITES: Protection against partial page writes
-- ═══════════════════════════════════════════════════════════════
-- Default ON (recommended)
-- Writes a full page image the first time a page is modified
-- after each checkpoint
-- Can be disabled if filesystem guarantees atomic 8KB writes
-- full_page_writes = on

-- ═══════════════════════════════════════════════════════════════
-- Sample configuration for write-heavy workload
-- ═══════════════════════════════════════════════════════════════
/*
# WAL Configuration
wal_level = replica
wal_buffers = 64MB
wal_writer_delay = 200ms
commit_delay = 100   # Group commits (busy systems)
commit_siblings = 5

# Checkpoints
max_wal_size = 8GB
min_wal_size = 4GB
checkpoint_timeout = 30min
checkpoint_completion_target = 0.9

# Consider for less critical data
# synchronous_commit = off
*/
```

Reducing durability settings (synchronous_commit=off, innodb_flush_log_at_trx_commit=2, delayed durability) can dramatically improve write performance but risks losing recently committed transactions in a crash. Only use these for data that can tolerate potential loss (session data, caches) or when replicas provide redundancy.
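To see whether checkpoints are being forced by WAL volume rather than the timer, compare timed versus requested checkpoint counts. A minimal sketch; on PostgreSQL 16 and earlier these counters live in pg_stat_bgwriter, while PostgreSQL 17 moved them to pg_stat_checkpointer:

```sql
-- A high share of requested checkpoints suggests max_wal_size is too small
-- (PostgreSQL 16 and earlier; see pg_stat_checkpointer on 17+)
SELECT checkpoints_timed,
       checkpoints_req,
       round(100.0 * checkpoints_req /
             nullif(checkpoints_timed + checkpoints_req, 0), 1)
         AS pct_requested
FROM pg_stat_bgwriter;
```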
Query execution parameters control parallelism, optimizer behavior, and resource limits. Proper tuning ensures queries use available hardware efficiently and prevents runaway queries from impacting the system.
```sql
-- POSTGRESQL QUERY EXECUTION CONFIGURATION

-- ═══════════════════════════════════════════════════════════════
-- PARALLELISM SETTINGS
-- ═══════════════════════════════════════════════════════════════

-- Maximum workers per gather (parallel query)
-- max_parallel_workers_per_gather = 4
-- Recommendation: CPU cores / 2 for OLTP;
--                 CPU cores - 2 for analytical

-- Total parallel workers across all queries
-- max_parallel_workers = 8
-- Should not exceed max_worker_processes

-- Total background workers
-- max_worker_processes = 8
-- Includes all worker types (maintenance, replication, parallel)

-- Thresholds for parallel plans
-- min_parallel_table_scan_size = '8MB'
-- min_parallel_index_scan_size = '512kB'
-- Lower values = more parallelism

-- Parallel cost estimation
-- parallel_setup_cost = 1000
-- parallel_tuple_cost = 0.1
-- Lower values = optimizer prefers parallel plans

-- ═══════════════════════════════════════════════════════════════
-- OPTIMIZER COST PARAMETERS
-- ═══════════════════════════════════════════════════════════════

-- Random page cost relative to sequential (4.0 default)
-- For SSD: random_page_cost = 1.1 to 1.5
-- For spinning disk: random_page_cost = 4.0

-- CPU cost parameters (rarely need adjustment)
-- cpu_tuple_cost = 0.01
-- cpu_index_tuple_cost = 0.005
-- cpu_operator_cost = 0.0025

-- ═══════════════════════════════════════════════════════════════
-- TIMEOUTS AND LIMITS
-- ═══════════════════════════════════════════════════════════════

-- Statement timeout (kill long queries)
SET statement_timeout = '30s';  -- Session level
-- ALTER DATABASE mydb SET statement_timeout = '60s';

-- Lock acquisition timeout
SET lock_timeout = '5s';

-- Idle-in-transaction timeout (PostgreSQL 9.6+)
SET idle_in_transaction_session_timeout = '5min';

-- ═══════════════════════════════════════════════════════════════
-- PLANNER CONFIGURATION
-- ═══════════════════════════════════════════════════════════════

-- Join collapse limit (tables to consider for join reordering)
-- join_collapse_limit = 8 (default)
-- Higher = better plans for complex joins, longer planning time

-- FROM clause subquery collapse
-- from_collapse_limit = 8

-- Enable/disable specific plan types (debugging only)
-- enable_seqscan = on
-- enable_indexscan = on
-- enable_hashjoin = on
-- enable_mergejoin = on

-- ═══════════════════════════════════════════════════════════════
-- SAMPLE OPTIMIZED CONFIGURATION
-- ═══════════════════════════════════════════════════════════════
/*
# Parallelism (16-core server)
max_worker_processes = 16
max_parallel_workers_per_gather = 4
max_parallel_workers = 8
max_parallel_maintenance_workers = 2

# Cost adjustments for SSD
random_page_cost = 1.1
effective_io_concurrency = 200

# Timeouts
statement_timeout = '60s'
lock_timeout = '5s'
idle_in_transaction_session_timeout = '10min'
*/
```

For OLTP systems with many concurrent users, limit parallelism to prevent any single query from monopolizing CPU. For analytical systems with few concurrent queries, higher parallelism speeds up complex scans and aggregations. There's no one-size-fits-all answer—profile your workload.
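To see how cost parameters steer the planner, you can toggle them per session and compare the resulting plans. A sketch against a hypothetical orders table (the table name and its index are assumptions for illustration):

```sql
-- With the HDD-era default, the planner may prefer a sequential scan
SET random_page_cost = 4.0;
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- With an SSD-appropriate value, an index scan becomes cheaper
SET random_page_cost = 1.1;
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Session-level SET changes nothing on disk; RESET (or reconnect) to undo
RESET random_page_cost;
```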
Different workload types have fundamentally different configuration requirements. Rather than tuning individual parameters, start with a profile matching your workload, then refine based on observed behavior.
OLTP (Online Transaction Processing): Many short transactions, high concurrency, reads and writes, low latency requirements.
```ini
# PostgreSQL OLTP Profile (64GB RAM, 16 cores)
shared_buffers = 16GB
work_mem = 32MB                  # Conservative - many connections
maintenance_work_mem = 2GB
effective_cache_size = 48GB

max_connections = 200
max_parallel_workers_per_gather = 2   # Low parallelism

random_page_cost = 1.1           # SSD assumed
effective_io_concurrency = 200

checkpoint_completion_target = 0.9
max_wal_size = 4GB

statement_timeout = 30s
lock_timeout = 5s
```

PGTune, MySQLTuner, and similar tools generate configuration recommendations based on hardware specs and workload type. Use them as starting points, not final answers. The best configurations emerge from iterative tuning based on production monitoring.
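After applying a profile, confirm what the running server actually sees; the pending_restart column in pg_settings (PostgreSQL 9.5+) flags values that will not take effect until a restart. A minimal check:

```sql
-- Confirm the profile's key settings as the server sees them
SELECT name, setting, unit, source, pending_restart
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'max_connections',
               'random_page_cost', 'max_wal_size', 'statement_timeout')
ORDER BY name;
```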
Configuration tuning unlocks database performance potential that defaults leave untapped. Let's consolidate the key principles:
- Memory allocation is the highest-impact area: size the buffer cache first, then per-operation memory.
- A small subset of parameters drives most of the gains; leave the rest at defaults until a specific problem points at them.
- Match the configuration profile to the workload (OLTP vs. analytical) rather than tuning parameters in isolation.
- Change one parameter at a time, test outside production, and keep a rollback path ready.
- Let monitoring drive iteration: cache hit ratios, temp-file spills, checkpoint frequency, and connection usage tell you what to adjust next.
What's Next:
Configuration tuning sets the stage, but memory tuning deserves deeper examination. Buffer pool behavior, sort and hash memory, caching strategies, and memory pressure all require specialized understanding. The next page dives into memory tuning in depth.
You now understand database configuration tuning principles across major database platforms. These settings form the foundation upon which query and index tuning build.