We've traced the boot journey from BIOS/UEFI through bootloader, kernel loading, and init process creation. But a running init process is not yet a usable system. Between PID 1 starting and a user seeing a login prompt lies a complex orchestration of:
- Filesystem setup: /home, /var, /tmp, and the virtual filesystems must be mounted
- Device management: udev must populate /dev with device nodes
- Network initialization: interfaces brought up and addresses acquired
- Service startup: daemons launched in dependency order through systemd targets
- Session setup: login prompts or display managers, then user sessions

This final page examines how these components work together to produce a fully operational system—whether that means a text console, a graphical desktop, or a headless server awaiting network connections.
By the end of this page, you will understand: the virtual filesystem setup (/proc, /sys, /dev); udev and dynamic device management; network initialization; the systemd target progression; how login prompts and display managers are started; and how to analyze and optimize boot performance.
Before services can run, critical filesystems must be mounted. This happens in stages, starting in initramfs and completing in the main init system.
Stage 1: Initramfs Mounts:
Before switching to the real root filesystem, initramfs sets up:
/proc - Process information pseudo-filesystem
/sys - Sysfs (device/driver hierarchy)
/dev - Device nodes (initially tmpfs, populated by udev/mdev)
/run - Runtime data (tmpfs, doesn't persist)
Stage 2: Real Root Mounting:
The initramfs init script or systemd-in-initramfs mounts the real root filesystem (as specified by root= kernel parameter) and performs switch_root or pivot_root to make it the new /.
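The hand-off above can be sketched as a stripped-down initramfs /init. This is an illustrative sketch, not the script any particular distribution ships; it is wrapped in a function so the commands can be read without being executed (they require root and a real initramfs environment):

```shell
# Illustrative sketch of an initramfs /init - NOT a real distro script.
# Wrapped in a function so defining it has no side effects.
initramfs_init() {
    # Stage 1 mounts: pseudo-filesystems the rest of boot depends on
    mount -t proc     proc     /proc
    mount -t sysfs    sysfs    /sys
    mount -t devtmpfs devtmpfs /dev
    mount -t tmpfs    tmpfs    /run

    # Find the real root device from the root= kernel parameter
    root=$(tr ' ' '\n' < /proc/cmdline | sed -n 's/^root=//p')

    # Stage 2: mount the real root, then replace initramfs with it.
    # switch_root deletes the initramfs contents and execs the real init,
    # so PID 1 is preserved across the transition.
    mount "$root" /mnt/root
    exec switch_root /mnt/root /sbin/init
}
```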
Stage 3: Init System Mounts:
The main init system then mounts remaining filesystems according to /etc/fstab:
```
# /etc/fstab - Filesystem table
# <file system> <mount point> <type> <options> <dump> <pass>

# Root filesystem (mounted by initramfs, verified here)
UUID=a1b2c3d4-... /         ext4  errors=remount-ro 0 1

# Boot partition (may be ESP for UEFI)
UUID=ABCD-1234    /boot/efi vfat  umask=0077        0 1

# Swap partition
UUID=e5f6g7h8-... none      swap  sw                0 0

# Home partition (separate for easy reinstalls)
UUID=i9j0k1l2-... /home     ext4  defaults          0 2

# Data partition (optional)
UUID=m3n4o5p6-... /data     xfs   defaults,nofail   0 2

# Temporary filesystem (RAM-based)
tmpfs /tmp tmpfs defaults,noatime,nosuid,nodev,size=4G 0 0

# Virtual filesystems (usually auto-mounted, but can be explicit)
proc   /proc    proc   defaults       0 0
sysfs  /sys     sysfs  defaults       0 0
devpts /dev/pts devpts gid=5,mode=620 0 0

# Network mounts (with nofail to not block boot)
server:/exports /mnt/nfs nfs defaults,nofail,x-systemd.automount 0 0

# fstab columns:
# <pass> - fsck order: 0=skip, 1=first (root), 2=second
# Options: defaults = rw,suid,dev,exec,auto,nouser,async
```

| Filesystem | Mount Point | Purpose |
|---|---|---|
| proc | /proc | Process info, kernel tunables (/proc/sys), runtime stats |
| sysfs | /sys | Device/driver hierarchy, hotplug events, module parameters |
| devtmpfs | /dev | Device nodes (auto-created by kernel, managed by udev) |
| devpts | /dev/pts | Pseudo-terminal slaves (PTY) |
| tmpfs | /run | Runtime data (PIDs, sockets, locks) - volatile |
| tmpfs | /tmp | Temporary files - optionally RAM-based |
| cgroup2 | /sys/fs/cgroup | Control groups for resource management |
| securityfs | /sys/kernel/security | Security module interfaces (SELinux, AppArmor) |
| efivarfs | /sys/firmware/efi/efivars | UEFI variable access (UEFI systems only) |
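As a small worked example of the fstab <pass> column, the snippet below filters a canned fstab (not the real /etc/fstab, so it runs anywhere) down to the entries fsck will check, ordered by pass number:

```shell
# Print fsck candidates from an fstab: skip comments, blank lines, and
# pass=0 entries, then sort by the <pass> column (root first, then others).
fstab='
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=a1b2c3d4-... /         ext4  errors=remount-ro 0 1
UUID=ABCD-1234    /boot/efi vfat  umask=0077        0 1
tmpfs             /tmp      tmpfs defaults          0 0
UUID=i9j0k1l2-... /home     ext4  defaults          0 2
'
echo "$fstab" | awk 'NF && $1 !~ /^#/ && $6 > 0 { print $6, $2 }' | sort -n
```

On this sample it prints 1 / and 1 /boot/efi before 2 /home, matching fsck's root-first ordering; the tmpfs line is skipped because its pass is 0.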
systemd generates .mount units from /etc/fstab automatically (systemd-fstab-generator). You can also create explicit .mount units for fine-grained control. Mount units support dependencies (After=, Requires=), ordering, and failure handling that fstab cannot express.
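For illustration, a hand-written mount unit equivalent to the /data fstab line might look like the following. This is a hypothetical file (/etc/systemd/system/data.mount); note that the unit name must be derived from the mount point, so /data becomes data.mount:

```ini
# Hypothetical /etc/systemd/system/data.mount
[Unit]
Description=Data partition
# Ordering/dependency handling that fstab cannot express:
After=local-fs-pre.target

[Mount]
What=/dev/disk/by-uuid/m3n4o5p6-...
Where=/data
Type=xfs
Options=defaults,nofail

[Install]
WantedBy=multi-user.target
```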
udev (userspace /dev) is the Linux device manager responsible for populating /dev with device nodes, loading firmware, and executing actions when devices are added or removed. It's the bridge between the kernel's device discovery and user-space device access.
How udev Works:
When the kernel discovers a device it emits a uevent; systemd-udevd receives it, matches it against rules, creates device nodes and symlinks, and runs any configured programs. Rules are read from /etc/udev/rules.d/*.rules (local administrator rules) and /lib/udev/rules.d/*.rules (distribution defaults), processed in lexical filename order; a file in /etc overrides one with the same name in /lib.
```
# udev rule examples - /etc/udev/rules.d/99-custom.rules

# Rule format: KEY==value (match), KEY=value (assign)
# Rules are processed in filename order (99- runs late)

# Set permissions for USB devices (allow non-root access)
SUBSYSTEM=="usb", MODE="0666"

# Create named symlink for a specific USB device by vendor/product ID
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    SYMLINK+="arduino", MODE="0666", GROUP="dialout"

# Run a script when USB storage is added
ACTION=="add", SUBSYSTEM=="block", \
    ENV{ID_USB_DRIVER}=="usb-storage", \
    RUN+="/usr/local/bin/usb-backup.sh"

# Persistent network interface naming (bypass predictable names)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", \
    NAME="lan0"

# Set I/O scheduler for SSDs
ACTION=="add|change", SUBSYSTEM=="block", \
    ATTR{queue/rotational}=="0", \
    ATTR{queue/scheduler}="mq-deadline"

# Load firmware for specific device (prevent timeout with loading=-1)
ACTION=="add", SUBSYSTEM=="firmware", \
    ATTR{loading}="-1"
# (Firmware loading is usually automatic via kernel)

# Debug: Log all uevents
# ACTION=="*", KERNEL=="*", RUN+="/usr/bin/logger -t udev %k %A"

# Key operators:
# ==  Match (for conditions)
# !=  No match
# =   Assign (replace)
# +=  Append
# :=  Assign final (cannot be changed by later rules)
```
```
# udev administration commands

# View device properties (for writing rules)
$ udevadm info /dev/sda
P: /devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda
N: sda
E: DEVNAME=/dev/sda
E: DEVTYPE=disk
E: ID_SERIAL=WDC_WD10EZEX-00WN4A0_WD-WMC4T0123456
E: ID_MODEL=WDC_WD10EZEX-00WN4A0
...

# Monitor events in real time
$ udevadm monitor
KERNEL[12345.678] add /devices/pci.../block/sdb (block)
UDEV  [12345.681] add /devices/pci.../block/sdb (block)

# Test rules without applying them
$ udevadm test /sys/class/block/sda

# Trigger uevents (replay coldplug)
$ udevadm trigger                        # All devices
$ udevadm trigger --subsystem-match=net  # Only network devices

# Reload rules after editing
$ udevadm control --reload-rules

# Wait for the udev queue to drain
$ udevadm settle  # Used in scripts to ensure devices are ready

# systemd-udevd service
$ systemctl status systemd-udevd.service
```

Modern Linux uses 'predictable' network names based on firmware, topology, or MAC (enp0s31f6 instead of eth0). This prevents interface name changes when hardware is added. To revert to legacy names, add net.ifnames=0 biosdevname=0 to the kernel command line, or use udev rules to assign custom names.
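Scripts often scrape udevadm info output for a single property. The sketch below does that with sed against a canned sample (so it runs without real hardware); the serial number is the one from the example output above:

```shell
# Extract one property from the "E: KEY=value" lines of `udevadm info` output.
# In a real script the sample would come from: udevadm info /dev/sda | sed -n ...
sample='P: /devices/pci0000:00/.../block/sda
N: sda
E: DEVNAME=/dev/sda
E: DEVTYPE=disk
E: ID_SERIAL=WDC_WD10EZEX-00WN4A0_WD-WMC4T0123456'

serial=$(printf '%s\n' "$sample" | sed -n 's/^E: ID_SERIAL=//p')
echo "Disk serial: $serial"
```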
Network configuration is typically one of the slower parts of initialization, especially for DHCP which requires round-trip network communication. Different systems use different network management approaches:
Network Configuration Systems:
| System | Used By | Configuration Location |
|---|---|---|
| NetworkManager | Desktop distros (Ubuntu, Fedora) | /etc/NetworkManager/ |
| systemd-networkd | Servers, containers, embedded | /etc/systemd/network/*.network |
| netplan | Ubuntu 18.04+ | /etc/netplan/*.yaml (renders to NM or networkd) |
| ifupdown | Debian (legacy) | /etc/network/interfaces |
| wicked | openSUSE | /etc/sysconfig/network/ |
```
# systemd-networkd configuration
# /etc/systemd/network/20-wired.network

[Match]
Name=enp0s*

[Network]
DHCP=yes
# Or static:
# Address=192.168.1.100/24
# Gateway=192.168.1.1
# DNS=8.8.8.8

[DHCP]
UseDNS=yes
UseNTP=yes
RouteMetric=100

---

# netplan configuration (Ubuntu)
# /etc/netplan/01-network.yaml

network:
  version: 2
  renderer: networkd  # or NetworkManager
  ethernets:
    enp0s31f6:
      dhcp4: true
      # Or static:
      # addresses:
      #   - 192.168.1.100/24
      # gateway4: 192.168.1.1
      # nameservers:
      #   addresses: [8.8.8.8, 8.8.4.4]

# Apply: sudo netplan apply

---

# Traditional /etc/network/interfaces (Debian)

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
```

Network Boot Timing:
Network initialization creates boot timing challenges: some services need a working network before they can start, yet blocking boot until the network is up can add long delays (a slow DHCP server, an unplugged cable). systemd's solution is two separate targets:
- network.target — Network configuration has been submitted (not necessarily complete)
- network-online.target — Network is actually reachable (blocks waiting for the network)

Adding After=network-online.target to a service causes it to wait for full network connectivity, which can delay boot by 30+ seconds if DHCP is slow or the network is disconnected. Only use this for services that genuinely need the network at startup. Most services should use network.target (or nothing at all).
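If a service genuinely does need a reachable network, the systemd convention is to pair Wants= with After=. For example, in a hypothetical drop-in (/etc/systemd/system/myapp.service.d/network.conf, where myapp is a placeholder name):

```ini
[Unit]
# Wants= pulls network-online.target into the boot transaction;
# After= actually orders this service behind it. Both are needed.
Wants=network-online.target
After=network-online.target
```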
systemd boot progresses through a series of targets—synchronization points that group related units. Understanding this progression helps diagnose boot issues and optimize startup.
The Target Chain:
| Target | Purpose | Key Services |
|---|---|---|
| sysinit.target | Early system initialization | udev, tmpfiles, fsck, mount, swap |
| basic.target | Basic system ready, before network | sockets, timers, paths (activation points) |
| network.target | Network configuration submitted | NetworkManager, systemd-networkd |
| network-online.target | Network actually reachable | NetworkManager-wait-online, systemd-networkd-wait-online |
| multi-user.target | Non-graphical multi-user | Most daemons: SSH, cron, web servers |
| graphical.target | Graphical login | Display manager (GDM, SDDM, LightDM) |
| rescue.target | Single-user rescue mode | Minimal system for maintenance |
| emergency.target | Emergency shell | Just root shell, root fs may be read-only |
```
# Analyzing target progression

# View the target dependency chain
$ systemctl list-dependencies graphical.target
graphical.target
● ├─accounts-daemon.service
● ├─gdm.service
● ├─...
● └─multi-user.target
●   ├─cron.service
●   ├─sshd.service
●   └─...

# View the default target
$ systemctl get-default
graphical.target

# Change the default target (e.g., for a server)
$ sudo systemctl set-default multi-user.target

# Switch to a target immediately (current boot only)
$ sudo systemctl isolate rescue.target

# Boot to a specific target once: add to the kernel command line from GRUB
systemd.unit=rescue.target
# Or:
systemd.unit=emergency.target
# Or for a bare diagnostic shell before init:
init=/bin/bash

# List all active targets
$ systemctl list-units --type=target
UNIT               LOAD   ACTIVE SUB    DESCRIPTION
basic.target       loaded active active Basic System
multi-user.target  loaded active active Multi-User System
graphical.target   loaded active active Graphical Interface
...
```

The default target is a symlink at /etc/systemd/system/default.target pointing to the actual target unit. Desktop systems default to graphical.target; servers often default to multi-user.target. The symlink is managed by 'systemctl set-default'.
One of systemd's primary innovations is parallel service startup. Unlike SysVinit's sequential execution, systemd starts services concurrently as soon as their dependencies are satisfied.
How Parallelism Works:
```
# Analyzing parallel startup

# View the critical chain (longest sequential path)
$ systemd-analyze critical-chain
graphical.target @7.834s
└─gdm.service @7.672s +161ms
  └─plymouth-quit-wait.service @5.123s +2.548s
    └─systemd-user-sessions.service @5.098s +20ms
      └─network.target @5.091s
        └─NetworkManager.service @3.456s +1.634s
          └─dbus.service @3.234s +218ms
            └─basic.target @3.210s
              ...

# The + times are how long that unit took to start
# The @ times are when that unit started, relative to boot

# View startup time overview
$ systemd-analyze
Startup finished in 4.123s (firmware) + 3.456s (loader) + 2.345s (kernel)
    + 7.890s (userspace) = 17.814s
graphical.target reached after 7.834s in userspace

# List the slowest services
$ systemd-analyze blame
2.548s plymouth-quit-wait.service
1.634s NetworkManager.service
1.123s docker.service
 567ms snapd.service
...

# Generate a visual boot chart
$ systemd-analyze plot > boot.svg
# View in a browser to see parallel execution visually
```

The Critical Chain:
The critical chain shows the longest path through the dependency graph—the sequence of units that, if any were faster, would reduce total boot time. Optimizing services not on the critical chain has no effect on boot time.
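To see how the + durations along the critical chain add up, the snippet below sums a canned chain excerpt. The durations here are normalized to milliseconds for simplicity; real systemd-analyze output mixes s and ms:

```shell
# Sum per-unit startup durations from a simplified critical-chain listing.
chain='gdm.service +161ms
plymouth-quit-wait.service +2548ms
NetworkManager.service +1634ms
dbus.service +218ms'

total=$(printf '%s\n' "$chain" | awk '{ gsub(/[+ms]/, "", $2); t += $2 } END { print t }')
echo "Sequential time on this chain: ${total}ms"
```

Shaving time from any of these units shortens boot directly; the same saving in a unit off the critical chain would not.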
To speed up boot:
1. Identify services on the critical chain with systemd-analyze critical-chain.
2. Determine whether slow services can be deferred (socket activation, After=graphical.target).
3. Use Type=notify for services that can signal readiness faster than Type=forking allows.
4. Disable or mask unnecessary services.
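The socket activation mentioned above might look like the following hypothetical pair (myapp is a placeholder name): systemd opens the listening socket at boot, and the service itself is only started on the first connection, taking it off the boot path entirely.

```ini
# Hypothetical /etc/systemd/system/myapp.socket
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# Hypothetical /etc/systemd/system/myapp.service
[Service]
ExecStart=/usr/local/bin/myapp
# Type=notify lets the service signal readiness explicitly (via sd_notify)
Type=notify
```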
The final visible step of boot is presenting a login interface—either a text console or graphical login screen. Different systems handle this differently.
Text Console Login (getty):
The getty (get tty) program manages virtual console login prompts:
```
# getty manages console login

# systemd uses the getty@.service template
$ systemctl status getty@tty1.service
● getty@tty1.service - Getty on tty1
   Active: active (running)
 Main PID: 1234 (agetty)
   CGroup: /system.slice/system-getty.slice/getty@tty1.service
           └─1234 /sbin/agetty -o -p -- \u --noclear tty1 linux

# Default: 6 virtual consoles (tty1-tty6), switch with Alt+F1 to Alt+F6
# In a graphical session, use Ctrl+Alt+F1 to Ctrl+Alt+F6
# The graphical session itself is typically on tty1 or tty7

# Customize getty behavior
$ sudo systemctl edit getty@tty1.service
---
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin myuser --noclear %I $TERM
---
# Creates /etc/systemd/system/getty@tty1.service.d/override.conf

# Serial console getty (for servers with serial console access)
$ sudo systemctl enable serial-getty@ttyS0.service

# Configure getty allocation: /etc/systemd/logind.conf
NAutoVTs=6   # Number of virtual terminals to auto-spawn gettys on
ReserveVT=6  # Always keep a getty available on this VT
```

Graphical Login (Display Manager):
A display manager provides the graphical login screen, authenticates the user, and starts the desktop session; common implementations include GDM (GNOME), SDDM (KDE Plasma), and LightDM.
The active display manager is defined by display-manager.service, which is a symlink to the actual implementation's unit.
```
# Display manager configuration

# Check which display manager is active
$ ls -la /etc/systemd/system/display-manager.service
lrwxrwxrwx 1 root root 35 Jan 15 10:00 /etc/systemd/system/display-manager.service
    -> /lib/systemd/system/gdm.service

# Alternative: check installed packages
$ dpkg -l | grep -E "(gdm|sddm|lightdm)"

# Switch display managers (Debian/Ubuntu)
$ sudo dpkg-reconfigure gdm3  # Interactive prompt

# Manual switch
$ sudo systemctl disable gdm
$ sudo systemctl enable sddm
$ sudo reboot

# Display manager configuration files
# GDM:     /etc/gdm3/custom.conf
# SDDM:    /etc/sddm.conf, /etc/sddm.conf.d/*.conf
# LightDM: /etc/lightdm/lightdm.conf

# Auto-login configuration (GDM)
# /etc/gdm3/custom.conf
[daemon]
AutomaticLoginEnable=true
AutomaticLogin=username

# (SDDM) /etc/sddm.conf.d/autologin.conf
[Autologin]
User=username
Session=plasma
```

Modern display managers can start Wayland or X11 sessions. GDM defaults to Wayland for GNOME Shell. The session type is selected at the login screen. You can check your session type with: echo $XDG_SESSION_TYPE (returns 'wayland' or 'x11').
After successful authentication, the user's session is created. systemd manages user sessions through systemd-logind and per-user systemd instances.
User Session Components:
```
# User session management

# List active sessions
$ loginctl list-sessions
SESSION  UID USER  SEAT  TTY
      1 1000 alice seat0 tty1
      2 1001 bob         pts/0

# Session details
$ loginctl show-session 1
Id=1
User=1000
Name=alice
Seat=seat0
TTY=tty1
Remote=no
Service=gdm-password
Type=x11
Class=user
Active=yes
State=active
...

# List logged-in users
$ loginctl list-users

# User systemd instance
$ systemctl --user status
● alice-pc
    State: running
     Jobs: 0 queued
   Failed: 0 units
    Since: Mon 2024-01-15 09:00:00 UTC; 2h 30min ago

# User services
$ systemctl --user list-units
UNIT             LOAD   ACTIVE SUB     DESCRIPTION
dbus.socket      loaded active running D-Bus User Message Bus Socket
pipewire.socket  loaded active running PipeWire Multimedia System Socket
...

# Enable a user service (runs at login)
$ systemctl --user enable syncthing.service

# XDG autostart entries
$ ls ~/.config/autostart/
discord.desktop  slack.desktop  ...

# Environment at session start: ~/.pam_environment or systemd user env
$ systemctl --user show-environment
```

Shell Session Initialization Order:
For interactive login shells (SSH, console login):
1. /etc/profile
2. ~/.bash_profile OR ~/.bash_login OR ~/.profile (first found)
3. (For interactive non-login: ~/.bashrc)
4. (At logout: ~/.bash_logout)
For graphical sessions, shell initialization depends on the display manager and session type—many set up environment before the shell runs.
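The login vs. non-login distinction can be probed directly: bash sets a login_shell flag when started with -l (or when launched as a login shell by getty or sshd). The sketch below uses --noprofile so the result does not depend on this machine's /etc/profile:

```shell
# Report whether a bash invocation is a login shell (reads /etc/profile and
# ~/.bash_profile) or a non-login shell (reads only ~/.bashrc if interactive).
probe='shopt -q login_shell && echo login || echo non-login'

kind_plain=$(bash --noprofile -c  "$probe")
kind_login=$(bash --noprofile -lc "$probe")

echo "bash -c:  $kind_plain"
echo "bash -lc: $kind_login"
```

This prints non-login for the first invocation and login for the second.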
To run user services without being logged in (e.g., for background tasks), enable 'lingering': sudo loginctl enable-linger username. This starts the user's systemd instance at boot and keeps it running after logout.
Understanding what slows down boot is essential for optimization. Modern tools make it possible to precisely analyze and improve boot times.
Analysis Process:
```
# Boot Performance Analysis and Optimization

# 1. Get overall timing
$ systemd-analyze
Startup finished in 3.2s (firmware) + 2.1s (loader) + 1.8s (kernel)
    + 8.5s (userspace) = 15.6s
graphical.target reached after 8.5s in userspace

# 2. Identify slow services
$ systemd-analyze blame | head -20
3.456s docker.service
2.345s snapd.service
1.234s NetworkManager-wait-online.service
1.100s plymouth-quit-wait.service
 890ms systemd-udev-settle.service
...

# 3. View the critical chain
$ systemd-analyze critical-chain graphical.target
graphical.target @8.5s
└─gdm.service @8.2s +300ms
  └─plymouth-quit-wait.service @6.9s +1.1s
    └─systemd-user-sessions.service @6.8s +50ms
      └─network.target @6.7s

# 4. Generate a visual plot
$ systemd-analyze plot > boot.svg

# 5. Common optimizations:

# Disable unused services
$ sudo systemctl disable bluetooth.service  # If no Bluetooth
$ sudo systemctl mask lvm2-monitor.service  # If not using LVM

# Convert to socket activation
$ sudo systemctl disable docker.service
$ sudo systemctl enable docker.socket       # Start on first use

# Remove slow "wait" services if not needed
$ sudo systemctl mask NetworkManager-wait-online.service
$ sudo systemctl mask systemd-networkd-wait-online.service
# WARNING: Only if no services truly need network at boot

# systemd-readahead disk prefetch (note: removed from modern systemd releases)
$ sudo systemctl enable systemd-readahead-collect.service
$ sudo systemctl enable systemd-readahead-replay.service

# Profile boot with bootchart (alternative analysis)
# Add to kernel command line: init=/usr/lib/systemd/systemd-bootchart
```

| Symptom | Cause | Solution |
|---|---|---|
| Long firmware time | Slow POST, device init | BIOS settings: fast boot, disable unused ports |
| Slow kernel | Loading many modules | Compile critical drivers into kernel |
| NetworkManager-wait-online | Waiting for DHCP | Mask if network not needed early, or use static IP |
| plymouth-quit-wait | Waiting for boot animation | Consider disabling Plymouth (plymouth.enable=0) |
| docker.service | Cold starts all containers | Use socket activation, delay start |
| snapd.service | Refreshing snaps | Disable if not using snaps (mask) |
| systemd-udev-settle | Waiting for device events | Usually triggered by something else needing it |
Always test boot after making changes. Masking the wrong service can cause boot failures. Use a VM or have recovery media ready. The 'mask' operation is reversible with 'unmask', but you need to be able to boot to run that command.
Shutdown is essentially boot in reverse—services must stop gracefully, filesystems must sync and unmount, and the kernel must prepare for power-off or reboot. Incorrect shutdown can cause data loss.
Shutdown Sequence:
```
# Shutdown and reboot commands

# Immediate shutdown
$ sudo systemctl poweroff
$ sudo shutdown -h now
$ sudo poweroff

# Immediate reboot
$ sudo systemctl reboot
$ sudo shutdown -r now
$ sudo reboot

# Scheduled shutdown
$ sudo shutdown -h +10    # 10 minutes
$ sudo shutdown -h 23:00  # At 11 PM
$ sudo shutdown -c        # Cancel scheduled shutdown

# Custom message
$ sudo shutdown -r +5 "Rebooting for updates, please save work"

# Halt (stop but don't power off - rare)
$ sudo systemctl halt

# Emergency: bypass graceful shutdown (DANGER!)
$ sudo systemctl --force poweroff          # Skip services
$ sudo systemctl --force --force poweroff  # Immediate (like reset button)
# Or: Alt+SysRq+O (power off), Alt+SysRq+B (reboot)

# Boot to a specific target after reboot: add to the kernel command line
systemd.unit=rescue.target

# View the end of the previous boot's log (what blocked shutdown)
$ journalctl -b -1 -e

# Configure shutdown behavior
# /etc/systemd/system.conf
DefaultTimeoutStopSec=90s  # Max wait for services to stop
# Individual services can override with TimeoutStopSec= in their unit file
```

By default, systemd waits 90 seconds for each service to stop gracefully before sending SIGKILL. Services can specify TimeoutStopSec= in their unit file. Long-running data operations may need increased timeouts (e.g., databases). Setting TimeoutStopSec=infinity waits forever (use carefully).
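A per-service override for that timeout could be a drop-in like the following (hypothetical path; mydb is a placeholder service name):

```ini
# Hypothetical /etc/systemd/system/mydb.service.d/stop-timeout.conf
[Service]
# Allow up to 5 minutes for a clean shutdown before SIGKILL
TimeoutStopSec=300
```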
We've now traced the complete path from power button to login prompt, covering every major subsystem involved in bringing a modern Linux system to operational state.
Module Complete:
You've now mastered the complete boot process—from the first electrical signals after power-on to a fully operational system ready for user interaction. This knowledge is fundamental for system administration, troubleshooting, performance optimization, and understanding how modern operating systems work at a deep level.
Congratulations! You've completed the Boot Process module. You now understand: firmware (BIOS/UEFI) and Secure Boot; bootloaders (GRUB) and kernel handoff; kernel loading, decompression, and initialization; init systems (SysVinit, systemd) and service management; and full system initialization through login. This comprehensive knowledge prepares you for advanced system administration, debugging boot issues, and understanding the foundation upon which all OS operations depend.