While real-time scheduling provides powerful control over CPU allocation, most applications don't need—and shouldn't use—real-time priorities. For the vast majority of workloads, Linux provides a simpler, safer mechanism: nice values.
The concept of 'niceness' dates back to Unix in the 1970s. A process with a high nice value is 'nice' to other processes—it yields CPU time. A process with a low (negative) nice value is 'not nice'—it demands more CPU time. This simple model has proven remarkably durable, surviving decades of scheduler evolution.
In the CFS era, nice values take on precise mathematical meaning. They're not vague hints—they're directly converted to scheduling weights that determine exact CPU proportions. Understanding this conversion is essential for effective system tuning.
By the end of this page, you will understand: (1) The nice value range and its historical origins, (2) How nice values map to CFS weights, (3) The practical CPU share impact of nice value differences, (4) User and administrator interfaces for managing nice values, and (5) Best practices and common patterns for nice value usage.
Nice values provide a user-accessible priority mechanism that operates within the CFS scheduling class. Unlike real-time priorities, nice values don't provide absolute preemption—they influence proportional CPU allocation.
The Nice Value Range
Nice values range from -20 (highest priority) to +19 (lowest priority), with 0 as the default.
This inverted scale is a Unix tradition. Higher numerical values mean lower priority because a 'nicer' process yields to others.
Historical Context
The nice command and concept originated in early Unix (Version 4, 1973). The original implementation was simple: nice value was subtracted from a process's priority, making higher nice values run less often. While modern schedulers like CFS are far more sophisticated, the user-facing interface remains unchanged.
Key Properties:

Range: -20 (highest priority) to +19 (lowest priority), with 0 as the default.
Scope: per-thread on Linux, inherited across fork() and preserved across exec().
Permissions: any process may raise its own nice value, but lowering it requires privilege (CAP_SYS_NICE).
Effect: proportional CPU share within the normal scheduling class, never absolute preemption.
```bash
#!/bin/bash
# Nice Value Basics on Linux

# ========================================
# View current nice values
# ========================================

# View nice value of the current process
echo "Current shell nice value: $(nice)"

# View nice values of all processes
ps -eo pid,ni,comm --sort=-ni | head -20

# More detailed view including scheduling class
ps -eo pid,class,ni,rtprio,comm | head -20

# ========================================
# Using the 'nice' command
# ========================================

# Run a command with nice value +10 (lower priority)
nice -n 10 ./cpu_intensive_job

# Run with nice value +19 (lowest priority)
nice -n 19 tar -czf backup.tar.gz /home

# Run with nice value -5 (higher priority, requires root)
sudo nice -n -5 important_service

# Maximum high priority (requires root)
sudo nice -n -20 critical_process

# ========================================
# Using 'renice' to change running processes
# ========================================

# Increase nice value of a running process (anyone can do this)
renice -n 5 -p 1234

# Decrease nice value (make more important, requires root)
sudo renice -n -5 -p 1234

# Change nice value for all processes of a user
sudo renice -n 10 -u user_running_heavy_compilation

# Change nice value for a process group
sudo renice -n 15 -g 5678

# ========================================
# View nice value in /proc
# ========================================

# For a specific PID (nice is field 19 of /proc/[pid]/stat)
awk '{print $19}' /proc/1234/stat

# Or via ps for a single PID
ps -o pid,ni,comm -p 1234
```

| Nice Value | Priority Level | Typical Use Case |
|---|---|---|
| -20 to -11 | Very High | Critical system services, time-sensitive daemons |
| -10 to -1 | High | Important user applications, database servers |
| 0 | Normal (Default) | Interactive applications, typical processes |
| +1 to +9 | Below Normal | Background tasks that should yield to interactive work |
| +10 to +19 | Low/Background | Batch jobs, backups, non-urgent compilation |
Anyone can make themselves 'nicer' (increase their nice value), but only privileged users can make themselves 'less nice' (decrease it). This prevents unprivileged users from starving others. Once you raise your nice value, you cannot lower it again without privileges (or a permissive RLIMIT_NICE), so be careful when testing!
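A minimal demonstration of this asymmetry, assuming an unprivileged Python shell:

```python
import os

# os.nice(inc) wraps the nice() syscall: it adds 'inc' to the
# current nice value and returns the new value.
print(f"Starting nice value: {os.nice(0)}")   # nice(0) just reads it

print(f"After raising: {os.nice(5)}")         # anyone may become 'nicer'

try:
    os.nice(-5)                               # lowering requires CAP_SYS_NICE
except PermissionError as e:
    print(f"Cannot lower nice again: {e}")    # raised for unprivileged users
```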
In CFS, nice values aren't used directly—they're converted to weights that determine virtual runtime accumulation. This mapping is carefully designed to provide consistent, predictable CPU share differences between nice levels.
The 10% Rule
CFS's weight table is designed so that each nice level represents approximately a 10% CPU share difference relative to adjacent levels. If two tasks whose nice values differ by 1 are competing, the lower-nice task receives about 55% of the CPU and the other about 45%, roughly a 10-point advantage.
The Weight Table
The kernel defines a lookup table mapping nice values to weights:
```c
/*
 * Nice-to-weight table from kernel/sched/core.c
 *
 * The design principle:
 * - Adjacent nice levels have ~1.25 weight ratio
 * - This gives ~10% CPU difference between adjacent levels
 * - Nice 0 has weight 1024 (convenient reference)
 * - Range spans a factor of ~5917x (88761 / 15)
 */
const int sched_prio_to_weight[40] = {
 /* nice -20 */  88761,  71755,  56483,  46273,  36291,
 /* nice -15 */  29154,  23254,  18705,  14949,  11916,
 /* nice -10 */   9548,   7620,   6100,   4904,   3906,
 /* nice  -5 */   3121,   2501,   1991,   1586,   1277,
 /* nice   0 */   1024,    820,    655,    526,    423,
 /* nice  +5 */    335,    272,    215,    172,    137,
 /* nice +10 */    110,     87,     70,     56,     45,
 /* nice +15 */     36,     29,     23,     18,     15,
};

/*
 * Weight ratios between adjacent nice levels:
 *
 * nice(-20) / nice(-19) = 88761 / 71755 = 1.237
 * nice(-10) / nice(-9)  =  9548 /  7620 = 1.253
 * nice(0)   / nice(1)   =  1024 /   820 = 1.249
 * nice(10)  / nice(11)  =   110 /    87 = 1.264
 *
 * All ratios approximately 1.25 (within ~5%)
 */

/*
 * CPU share calculation example:
 *
 * Three tasks with nice -5, 0, +5:
 *   weight(-5) = 3121
 *   weight(0)  = 1024
 *   weight(+5) =  335
 *   total      = 4480
 *
 * CPU_share(-5) = 3121 / 4480 = 69.7%
 * CPU_share(0)  = 1024 / 4480 = 22.9%
 * CPU_share(+5) =  335 / 4480 =  7.5%
 */
```

Why a 1.25 Ratio?
The ~1.25 ratio between adjacent nice levels was chosen after significant empirical testing. Key considerations:
Granularity: With 40 nice levels (-20 to +19), each level needs to be meaningful but not too aggressive
Range: The full nice range spans ~5917:1 in weight (88761/15). This is enough to effectively deprioritize background tasks while supporting truly critical foreground work
Usability: 10% is a perceptible and manageable difference. Administrators can reason about 'this task needs about 30% more CPU' → 'nice it by -3'
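Because the ratio between adjacent levels is nearly constant, nice differences compound geometrically. A quick arithmetic check against the kernel's endpoint weights:

```python
# Adjacent nice levels differ by ~1.25x in weight, so a nice
# difference of n corresponds to roughly a 1.25**n weight ratio.
print(round(1.25 ** 5, 2))    # ~3.05x -> matches the 3:1 share ratio at diff 5
print(round(1.25 ** 10, 2))   # ~9.31x -> matches the ratio at diff 10
print(round(88761 / 15))      # 5917x  -> the actual full-range span
print(round(1.25 ** 39))      # ~6019x -> the idealized full-range span
```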
Impact on Virtual Runtime
Recall that vruntime accumulates as:
delta_vruntime = delta_exec × (NICE_0_LOAD / weight)
For a nice +10 process (weight 110) competing with a nice 0 process (weight 1024), the nice +10 task accumulates vruntime 1024/110 ≈ 9.3 times faster, so CFS schedules it roughly 9.3 times less often.
Only relative weights matter. A system with two nice-0 tasks allocates 50% each, but so does a system with two nice-15 tasks. The absolute weight values are arbitrary—chosen for implementation convenience (avoiding floating point). What matters is the ratio between competing tasks.
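Both facts are easy to check numerically. A minimal sketch, using the formula and kernel weights above:

```python
NICE_0_LOAD = 1024

def vruntime_delta(delta_exec_ns: int, weight: int) -> float:
    """Virtual runtime charged for delta_exec nanoseconds of real runtime."""
    return delta_exec_ns * NICE_0_LOAD / weight

# 10 ms of real runtime: nice 0 (weight 1024) vs nice +10 (weight 110)
print(vruntime_delta(10_000_000, 1024))  # 10,000,000 (charged at face value)
print(vruntime_delta(10_000_000, 110))   # ~93,090,909 (charged ~9.3x faster)

# Only ratios matter: two equal weights always split the CPU 50/50,
# whether both tasks are at nice 0 (weight 1024) or nice +15 (weight 36)
for w in (1024, 36):
    print(w / (w + w))  # 0.5 in both cases
```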
Understanding the exact CPU share impact of nice values enables precise capacity planning and workload management. Let's work through the mathematics.
The CPU Share Formula
For a set of competing tasks, each task i with weight wᵢ receives CPU share:
CPU_share(i) = wᵢ / Σ(wⱼ)
Where the sum is over all runnable tasks. This is the fundamental fairness guarantee of CFS.
Worked Examples
"""CPU Share Calculator for Nice Values Demonstrates how nice values translate to actual CPU allocation.""" # Full nice-to-weight table from Linux kernelNICE_TO_WEIGHT = { -20: 88761, -19: 71755, -18: 56483, -17: 46273, -16: 36291, -15: 29154, -14: 23254, -13: 18705, -12: 14949, -11: 11916, -10: 9548, -9: 7620, -8: 6100, -7: 4904, -6: 3906, -5: 3121, -4: 2501, -3: 1991, -2: 1586, -1: 1277, 0: 1024, 1: 820, 2: 655, 3: 526, 4: 423, 5: 335, 6: 272, 7: 215, 8: 172, 9: 137, 10: 110, 11: 87, 12: 70, 13: 56, 14: 45, 15: 36, 16: 29, 17: 23, 18: 18, 19: 15,} def calculate_cpu_shares(tasks: dict) -> dict: """ Calculate CPU shares for competing tasks. Args: tasks: dict mapping task name to nice value Returns: dict mapping task name to (weight, cpu_share_percent) """ weights = {name: NICE_TO_WEIGHT[nice] for name, nice in tasks.items()} total_weight = sum(weights.values()) shares = {} for name, weight in weights.items(): share = (weight / total_weight) * 100 shares[name] = (weight, share) return shares def print_scenario(name: str, tasks: dict): """Pretty-print a scenario's CPU allocation.""" print(f"\n{'='*60}") print(f"Scenario: {name}") print(f"{'='*60}") shares = calculate_cpu_shares(tasks) print(f"{'Task':<20} {'Nice':>6} {'Weight':>8} {'CPU Share':>12}") print("-" * 50) for task_name, (weight, share) in sorted(shares.items(), key=lambda x: -x[1][1]): nice = tasks[task_name] print(f"{task_name:<20} {nice:>6} {weight:>8} {share:>11.1f}%") # Example Scenarios # Scenario 1: Web server with background backupprint_scenario("Web Server with Background Backup", { "nginx (web)": 0, "postgres (db)": -5, "rsync (backup)": 15,}) # Scenario 2: Desktop with compilationprint_scenario("Desktop with Background Compilation", { "firefox": 0, "music_player": 0, "make -j8": 10,}) # Scenario 3: Multiple priority levelsprint_scenario("Complex Server Workload", { "critical_service": -10, "normal_api": 0, "batch_processor": 5, "log_analyzer": 15,}) # Scenario 4: Two tasks, varying nice differenceprint("\n" + "="*60)print("Effect of Nice Difference Between Two Tasks")print("="*60)print(f"{'Task A Nice':>12} {'Task B Nice':>12} {'A Share':>10} {'B Share':>10} {'Ratio A:B':>10}")print("-" * 56) for diff in [1, 2, 5, 10, 15, 20, 39]: if diff <= 19: shares = calculate_cpu_shares({"A": 0, "B": diff}) a_share = shares["A"][1] b_share = shares["B"][1] ratio = a_share / b_share if b_share > 0 else float('inf') print(f"{0:>12} {diff:>12} {a_share:>9.1f}% {b_share:>9.1f}% {ratio:>10.1f}x") # Show how nice 0 vs nice 19 comparesshares = calculate_cpu_shares({"A": 0, "B": 19})print(f"\nNice 0 vs Nice 19: {shares['A'][1]/shares['B'][1]:.1f}x CPU difference") # Show extreme: nice -20 vs nice 19shares = calculate_cpu_shares({"A": -20, "B": 19})print(f"Nice -20 vs Nice 19: {shares['A'][1]/shares['B'][1]:.0f}x CPU difference")| Nice Difference | Higher Priority Share | Lower Priority Share | Ratio |
|---|---|---|---|
| 1 | 55.5% | 44.5% | 1.25:1 |
| 2 | 61.0% | 39.0% | 1.56:1 |
| 5 | 75.3% | 24.7% | 3.05:1 |
| 10 | 90.3% | 9.7% | 9.3:1 |
| 15 | 96.6% | 3.4% | 28:1 |
| 19 | 98.6% | 1.4% | 68:1 |
| 39 (max) | 99.98% | 0.02% | 5917:1 |
For background tasks: nice +15 to +19 gives the foreground roughly 28-68× more CPU when competing (see the table above). For slightly-below-normal work: nice +5 to +10. For priority services: nice -5 to -10 (requires root). Don't use -20 routinely; reserve it for genuinely critical processes.
Linux provides several system calls and library functions for managing nice values programmatically. Understanding these interfaces enables building priority-aware applications.
System Calls
```c
/*
 * Nice Value System Call Examples
 *
 * Demonstrates the primary interfaces for nice value management.
 */

#define _GNU_SOURCE        /* must precede all includes */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <errno.h>
#include <sched.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/syscall.h>

/*
 * Method 1: nice() - Simple relative adjustment
 *
 * Adds 'inc' to the current nice value.
 * Returns the new nice value on success, -1 on error.
 *
 * WARNING: nice(-1) can legitimately return -1, which looks like an
 * error. Always clear and check errno to distinguish the two cases.
 */
void demo_nice_syscall(void) {
    printf("\n=== nice() System Call ===\n");

    errno = 0;                    /* clear errno before the call */
    int result = nice(0);         /* get current nice without changing it */
    if (result == -1 && errno != 0) {
        perror("nice(0) failed");
    } else {
        printf("Current nice value: %d\n", result);
    }

    /* Increase nice value by 5 (become lower priority) */
    errno = 0;
    result = nice(5);
    if (result == -1 && errno != 0) {
        perror("nice(5) failed");
    } else {
        printf("After nice(5): nice value = %d\n", result);
    }

    /* Trying to decrease nice (requires privileges) */
    errno = 0;
    result = nice(-5);
    if (result == -1 && errno != 0) {
        perror("nice(-5) failed (expected without root)");
    } else {
        printf("After nice(-5): nice value = %d\n", result);
    }
}

/*
 * Method 2: setpriority() / getpriority() - Flexible priority control
 *
 * Can set priority for:
 * - PRIO_PROCESS: a specific process
 * - PRIO_PGRP:    a process group
 * - PRIO_USER:    all processes of a user
 *
 * setpriority() returns 0 on success.
 * getpriority() returns the priority (-20 to 19) on success.
 */
void demo_setpriority(void) {
    printf("\n=== setpriority()/getpriority() ===\n");

    /* Get current process's priority */
    errno = 0;
    int prio = getpriority(PRIO_PROCESS, 0);  /* 0 = current process */
    if (prio == -1 && errno != 0) {
        perror("getpriority failed");
    } else {
        printf("Current process priority: %d\n", prio);
    }

    /* Set priority for the current process */
    if (setpriority(PRIO_PROCESS, 0, 10) == -1) {
        perror("setpriority to 10 failed");
    } else {
        printf("Set priority to 10 succeeded\n");
    }

    /* Set priority for a different process (if permitted) */
    pid_t target_pid = 1234;  /* would be a real PID in real code */
    if (setpriority(PRIO_PROCESS, target_pid, 15) == -1) {
        perror("setpriority for other process failed");
    }

    /* Set priority for all processes of a user */
    uid_t target_uid = 1000;  /* typically a normal user */
    if (setpriority(PRIO_USER, target_uid, 5) == -1) {
        perror("setpriority for user failed (requires root)");
    }
}

/*
 * Method 3: sched_setattr() - Modern scheduling control
 *
 * Allows setting the scheduling policy (FIFO, RR, BATCH, ...) along
 * with the nice value in a single call. glibc provides no wrapper,
 * so we call it via syscall() with a hand-rolled struct. The struct
 * must include all VER0 fields so the kernel accepts its size.
 */
struct sched_attr {
    uint32_t size;
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;
    /* fields below are used by SCHED_FIFO/RR and SCHED_DEADLINE */
    uint32_t sched_priority;
    uint64_t sched_runtime;
    uint64_t sched_deadline;
    uint64_t sched_period;
};

void demo_sched_setattr(void) {
    printf("\n=== sched_setattr() with nice ===\n");

    struct sched_attr attr = {
        .size = sizeof(attr),
        .sched_policy = SCHED_OTHER,  /* SCHED_NORMAL in kernel terms */
        .sched_nice = 5,
    };

    if (syscall(__NR_sched_setattr, 0, &attr, 0) == -1) {
        perror("sched_setattr failed");
    } else {
        printf("Set SCHED_OTHER with nice=5 via sched_setattr\n");
    }
}

int main(void) {
    demo_nice_syscall();
    demo_setpriority();
    demo_sched_setattr();
    return 0;
}

/*
 * Compile and run:
 *
 *   gcc -o nice_demo nice_syscalls.c
 *   ./nice_demo          # as a regular user
 *   sudo ./nice_demo     # as root (can set negative nice)
 */
```

nice(inc): adds inc to the current nice value; simple but limited (it can only adjust, not query). Returns the new nice value.

setpriority()/getpriority(): set and query the nice value of a process, a process group, or all of a user's processes.

sched_setattr(): sets the scheduling policy and nice value together; the most general, modern interface.

POSIX specifies nice as process-wide, but Linux implements it per-thread.
The setpriority() call with PRIO_PROCESS affects only the calling thread, not all threads in the process! For consistent per-process priority, set nice before creating threads, or iterate over all threads in /proc/[pid]/task/.
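A sketch of the workaround (the helper name is ours, not a standard API): on Linux, setpriority() accepts a thread ID wherever it takes a PID, so walking /proc/[pid]/task reaches every thread:

```python
import os

def renice_all_threads(pid: int, nice_value: int) -> None:
    """Apply a nice value to every thread of a process (Linux-specific).

    On Linux, os.setpriority(os.PRIO_PROCESS, tid, ...) targets a
    single thread when given a thread ID, so we walk /proc/[pid]/task.
    """
    for tid in os.listdir(f"/proc/{pid}/task"):
        os.setpriority(os.PRIO_PROCESS, int(tid), nice_value)

# Example: renice every thread of the current process
renice_all_threads(os.getpid(), 5)
print(os.getpriority(os.PRIO_PROCESS, 0))  # 5
```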
Effective use of nice values requires understanding common patterns and their trade-offs. Here are battle-tested approaches for various scenarios.
Pattern 1: Background Batch Processing
Run CPU-intensive batch jobs without impacting interactive work:
```bash
# Compilation in background
nice -n 15 make -j$(nproc)

# Video encoding
nice -n 19 ffmpeg -i input.mp4 -c:v libx265 output.mp4

# Scientific computation
nice -n 10 python simulate.py
```
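The same pattern works from inside a program. A minimal Python sketch (the make invocation is just an example) lowers the child's priority before exec, mirroring what the nice command does:

```python
import os
import subprocess

# preexec_fn runs in the child between fork() and exec(), so the
# raised nice value applies only to the batch job, not to us.
subprocess.run(
    ["make", f"-j{os.cpu_count()}"],
    preexec_fn=lambda: os.nice(15),
)
```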
Pattern 2: Protecting Important Services
Ensure important services get CPU even under load:
```bash
# Database server
sudo nice -n -5 mysqld

# Low-latency web server
sudo nice -n -10 nginx
```
```bash
#!/bin/bash
# Nice Value Practical Patterns

# ========================================
# Pattern 3: Process Group Management
# ========================================

# Start multiple related processes at low priority
nice -n 15 bash -c '
    process_a &
    process_b &
    process_c &
    wait
'

# Or use cgroups for more control (modern approach)
# systemd-run --scope --nice=15 make -j8

# ========================================
# Pattern 4: Systemd Service Configuration
# ========================================

# In /etc/systemd/system/myservice.service:
# [Service]
# Nice=-5
# IOSchedulingClass=best-effort
# IOSchedulingPriority=4

# Reload and apply:
# sudo systemctl daemon-reload
# sudo systemctl restart myservice

# ========================================
# Pattern 5: Automatic Nice for Specific Programs
# ========================================

# Use shell aliases for always-nice commands
alias transcode='nice -n 18 handbrake'
alias backup='nice -n 19 ionice -c 3 rsync'

# Or a wrapper script in ~/bin/
cat > ~/bin/batch-run << 'EOF'
#!/bin/bash
# Run any command at low priority
exec nice -n 15 ionice -c 2 -n 7 "$@"
EOF
chmod +x ~/bin/batch-run

# Usage: batch-run make -j8

# ========================================
# Pattern 6: Desktop Priority
# ========================================

# Some desktop environments adjust niceness based on window focus

# For audio daemons, ensure they're not de-prioritized:
sudo renice -n -5 -p $(pgrep pulseaudio)
sudo renice -n -5 -p $(pgrep pipewire)

# ========================================
# Pattern 7: Build Server Configuration
# ========================================

# For a build server that also serves other purposes,
# in /etc/profile.d/build-nice.sh:
if [[ "$USER" == "builder" ]]; then
    renice -n 10 -p $$ >/dev/null 2>&1
fi

# Alternatively, PAM configuration in /etc/security/limits.d/builders.conf:
# @builders - nice 10

# ========================================
# Pattern 8: Monitoring Nice Value Impact
# ========================================

# Watch scheduling behavior with perf
sudo perf sched record -a sleep 10
sudo perf sched latency

# Or use scheduler debug info
cat /proc/schedstat

# View per-process scheduling stats
cat /proc/$(pgrep -f myprocess)/sched
```

Nice values only affect CPU scheduling. For truly low-impact background jobs, combine them with ionice: nice -n 19 ionice -c 3 command sets both the lowest CPU priority and the idle-only I/O class. This ensures the job only gets resources when nothing else wants them.
While nice values remain useful, modern Linux resource control increasingly uses cgroups (control groups) for more sophisticated CPU allocation. Understanding the relationship between nice values and cgroups is essential for contemporary system administration.
cgroups vs Nice Values
| Feature | Nice Values | cgroups v2 CPU |
|---|---|---|
| Granularity | Per-process | Per-group |
| Hierarchy | Flat | Tree structure |
| Isolation | None (affects fairness) | Full (can guarantee minimums) |
| CPU caps | No | Yes (cpu.max) |
| Guaranteed CPU | Proportional share only | Can guarantee minimums |
| Container integration | Limited | Native |
How CFS Uses Both
CFS implements group scheduling through cgroups. Each cgroup has its own cpu.weight, which functions like a nice value for a whole group of processes. CFS then schedules hierarchically: it first divides CPU time among sibling groups according to their weights, then divides each group's share among its tasks according to their nice-derived weights.
```bash
#!/bin/bash
# cgroups v2 CPU Control Examples

# ========================================
# Systemd slice approach (recommended)
# ========================================

# Create a low-priority slice for background work
sudo mkdir -p /etc/systemd/system/background.slice.d/
cat << 'EOF' | sudo tee /etc/systemd/system/background.slice.d/cpu.conf
[Slice]
# Weight relative to default (100 = normal, lower = less CPU)
CPUWeight=10

# Optional: hard limit at 50% of one CPU
CPUQuota=50%
EOF

# Run a command in the background slice
sudo systemd-run --slice=background.slice make -j8

# ========================================
# Direct cgroup v2 usage
# ========================================

# Create a cgroup
sudo mkdir /sys/fs/cgroup/mybackground

# Set CPU weight (1-10000, default 100)
echo 10 | sudo tee /sys/fs/cgroup/mybackground/cpu.weight

# Move the current shell into the cgroup
echo $$ | sudo tee /sys/fs/cgroup/mybackground/cgroup.procs

# Now this shell and its children are in the low-priority cgroup
make -j8   # Runs at weight 10 relative to the default 100

# ========================================
# Nice within cgroups
# ========================================

# Nice values still work WITHIN a cgroup!
#
# If the cgroup gets a 10% share and contains:
#   - Task A at nice 0  (weight 1024)
#   - Task B at nice 10 (weight 110)
#
# The cgroup gets its 10% share, then A and B compete within
# that share based on their nice values.
#
# Net effect:
#   - A gets ~10% × (1024 / (1024 + 110)) ≈ 9.0% of total CPU
#   - B gets ~10% × (110  / (1024 + 110)) ≈ 1.0% of total CPU

# ========================================
# Systemd service cgroup configuration
# ========================================

# In /etc/systemd/system/myservice.service:
# [Service]
# CPUWeight=50     # Relative to other services
# CPUQuota=200%    # At most 2 CPU cores
# Nice=10          # Additional nice within the cgroup
#
# MemoryMax=2G     # Memory limit (also useful)
# IOWeight=50      # I/O priority weight

# View current cgroup CPU weights
cat /sys/fs/cgroup/*/cpu.weight

# Monitor cgroup CPU usage
systemd-cgtop
```
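The comment arithmetic above generalizes to any group weights. A minimal Python sketch (group names and weights are illustrative, and only two nice levels are included) computes effective shares the way CFS's two-level decision works out at steady state:

```python
NICE_TO_WEIGHT = {0: 1024, 10: 110}  # subset of the kernel table

def effective_shares(groups: dict) -> dict:
    """groups: {group: (cpu_weight, {task: nice})} -> {task: share}."""
    total_group_weight = sum(w for w, _ in groups.values())
    shares = {}
    for _, (gweight, tasks) in groups.items():
        # First level: divide CPU among sibling groups by cpu.weight
        group_share = gweight / total_group_weight
        # Second level: divide the group's share by nice-derived weights
        total_task_weight = sum(NICE_TO_WEIGHT[n] for n in tasks.values())
        for task, nice in tasks.items():
            shares[task] = group_share * NICE_TO_WEIGHT[nice] / total_task_weight
    return shares

# Background cgroup (weight 10) competing with a default cgroup (weight 100)
shares = effective_shares({
    "background": (10, {"A (nice 0)": 0, "B (nice 10)": 10}),
    "default": (100, {"C (nice 0)": 0}),
})
for task, share in shares.items():
    print(f"{task}: {share:.1%}")
# A ≈ 8.2%, B ≈ 0.9%, C ≈ 90.9%
```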
When to Use Each

Use nice values when: you are adjusting a single process or a one-off job, you don't control the system's cgroup layout, or a proportional hint is sufficient and no hard limits are needed.
Use cgroups when: you are managing services, containers, or users as units, you need hard caps (cpu.max) or guaranteed minimum shares, or you want hierarchical control that applies to a whole process tree.
Combining Both
The most sophisticated systems use cgroups for coarse allocation (between services, containers, users) and nice values for fine-grained control within those allocations. This provides both isolation at the service boundary and flexibility within each service.
Nice values have evolved from a simple user courtesy mechanism to one layer in a sophisticated resource control stack. Understanding both the traditional nice interface and modern cgroup-based controls enables effective resource management across the full range of Linux deployments.
Nice values provide the primary user-accessible mechanism for influencing CFS scheduling. Despite their simplicity, they enable meaningful workload prioritization without the complexity or risks of real-time scheduling.
Module Complete
This concludes our deep dive into Linux scheduling. We've explored the fairness model and vruntime mathematics of CFS, the data structures that make it efficient, the real-time scheduling classes and their risks, and the nice values and cgroup weights that shape everyday workloads.
Together, these concepts provide a comprehensive understanding of how Linux decides what runs when—one of the most fundamental questions in operating systems.
You have completed the Linux Scheduling module. You now understand the full stack of Linux process scheduling—from the mathematical foundations of CFS, through the data structures that make it efficient, to the real-time and priority mechanisms that enable diverse workloads. This knowledge enables you to diagnose scheduling issues, optimize system performance, and design applications that work harmoniously with the scheduler.