When the bootloader executes its final boot command, something remarkable happens. Control transfers from the bootloader—a program designed only to load other programs—to the operating system kernel, the most privileged software that will run on the system.
In that instant, the kernel receives a system in a very specific state: CPU in protected or long mode, memory initialized but unmapped, devices waiting to be enumerated, and a compressed kernel image ready to be decompressed and executed. From this starting point, the kernel must:

- decompress itself and relocate to its final address,
- build page tables and take ownership of memory management,
- bring the boot CPU, and eventually every other CPU, into a known state,
- initialize devices and core subsystems, and
- create and launch the first user-space process.
This transformation from passive code loaded in memory to an active, running operating system is the kernel loading process—one of the most carefully orchestrated sequences in all of computing.
By the end of this page, you will understand: the kernel entry point and early execution; kernel decompression mechanics; memory initialization and page table setup; CPU and multiprocessor initialization; device and subsystem initialization; and how the kernel prepares to execute user-space code.
The bootloader transfers control to the kernel at a well-defined entry point. The exact mechanism differs between boot protocols (BIOS vs UEFI, 32-bit vs 64-bit), but the concept is universal: there's a specific address where kernel execution begins.
x86 BIOS Boot Entry (32-bit):
For legacy BIOS boot, the bootloader sets up the CPU in 32-bit protected mode and jumps to the kernel's startup_32 entry point:
Boot Flow:
bootloader → arch/x86/boot/compressed/head_32.S:startup_32
x86_64 BIOS Boot Entry (64-bit via 32-bit bootloader):
For 64-bit kernels loaded by 32-bit bootloaders, there's an intermediate step:
Boot Flow:
bootloader (32-bit) → startup_32 → transition to long mode → startup_64
x86_64 EFI Boot Entry:
For UEFI boot with EFI Stub, the kernel is loaded as an EFI application:
Boot Flow:
UEFI firmware → efi_pe_entry → efi_main → startup_64
```asm
# Linux kernel entry point (x86_64, simplified)
# arch/x86/boot/compressed/head_64.S

	.code32                         # We start in 32-bit mode
SYM_FUNC_START(startup_32)
	# Clear direction flag - string ops should increment
	cld

	# Calculate the offset we are loaded at
	# (kernel may not be at linked address)
	call	1f
1:	popl	%ebp
	subl	$1b, %ebp               # ebp = load offset

	# Check if we need to enable PAE (for > 4GB addressing)
	# Set up temporary page tables in 32-bit mode

	# Load the 64-bit GDT
	leal	gdt64(%ebp), %eax
	lgdt	(%eax)

	# Enable PAE and long mode setup...

	# Switch to 64-bit mode
	pushl	$__KERNEL_CS
	leal	startup_64(%ebp), %eax
	pushl	%eax
	lretl                           # Far return - loads CS, enters long mode
SYM_FUNC_END(startup_32)

	.code64
SYM_FUNC_START(startup_64)
	# Now in 64-bit long mode
	# Set up 64-bit segment registers
	xorl	%eax, %eax
	movl	%eax, %ds
	movl	%eax, %es
	movl	%eax, %ss
	movl	%eax, %fs
	movl	%eax, %gs

	# Set up BSS (zero-initialized data)

	# Decompress the kernel
	call	extract_kernel          # Returns entry point in %rax

	# Jump to the decompressed kernel
	jmp	*%rax                   # → arch/x86/kernel/head_64.S:startup_64
SYM_FUNC_END(startup_64)
```

Linux kernels are typically compressed to reduce boot time (smaller image loads faster from disk) and save space. Common compression formats: gzip, bzip2, LZMA, LZ4, zstd. The kernel includes a decompression stub that runs before the main kernel code. Modern fast storage makes decompression the bottleneck—fast algorithms like LZ4 trade compression ratio for speed.
The Boot Parameters:
The entry point code receives critical information from the bootloader:
A pointer to the boot_params structure, containing:

- the E820 memory map reported by the firmware,
- the location of the kernel command line,
- the address and size of the initrd/initramfs,
- video and console information, and
- the setup header describing the boot protocol in use.
This structure is the primary communication channel between bootloader and kernel. The kernel preserves it and references it throughout initialization.
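As a rough picture of what that handoff contains, here is a trimmed-down sketch. The field names mirror real boot_params/setup_header fields, but the layout is flattened and purely illustrative; the real structure in arch/x86/include/uapi/asm/bootparam.h is far larger, with offsets fixed by the boot protocol.

```c
#include <stdint.h>

/* Illustrative, flattened subset of the bootloader-to-kernel handoff data.
 * Not the real layout: struct boot_params has many more fields and its
 * offsets are dictated by the x86 boot protocol. */
struct boot_params_sketch {
	uint8_t  e820_entries;      /* number of valid firmware memory-map entries */
	/* struct boot_e820_entry e820_table[128];  the physical RAM map itself    */
	uint32_t cmd_line_ptr;      /* physical address of the kernel command line */
	uint32_t ramdisk_image;     /* physical address of the initrd/initramfs    */
	uint32_t ramdisk_size;      /* initrd/initramfs size in bytes              */
};
```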
The kernel image loaded by the bootloader is typically compressed. The first significant work the early kernel code performs is decompressing itself—a self-extracting operation where the decompression code runs from the compressed image and extracts the full kernel.
Kernel Image Formats:
| Image | Content | Use |
|---|---|---|
| vmlinux | Uncompressed ELF kernel image | Debugging, embedded systems with fast storage |
| zImage | Compressed kernel for older systems | Legacy, limited to 512KB |
| bzImage | Compressed kernel, can be larger | Standard for x86 BIOS boot |
| vmlinuz | Installed compressed kernel image (usually the bzImage, sometimes reached via symlink) | What bootloaders typically load |
| Image | Uncompressed binary (ARM) | ARM64 EFI requires this format |
| Image.gz | Gzip compressed (ARM) | ARM with decompressor in boot wrapper |
The Decompression Process:
```c
// Kernel decompression (simplified)
// arch/x86/boot/compressed/misc.c

asmlinkage __visible void *extract_kernel(
	void *rmode,                    // Real mode pointer (boot_params)
	memptr heap,                    // Heap for decompression workspace
	unsigned char *input_data,      // Compressed kernel data
	unsigned long input_len,        // Compressed size
	unsigned char *output,          // Output buffer
	unsigned long output_len)       // Expected decompressed size
{
	const unsigned long kernel_total_size = VO__end - VO__text;
	unsigned long virt_addr = LOAD_PHYSICAL_ADDR;  // Usually 0x1000000 (16MB)

	// Initialize boot parameters access
	boot_params = rmode;

	// Initialize heap
	free_mem_ptr = heap;
	free_mem_end_ptr = heap + BOOT_HEAP_SIZE;

	// Initialize console for early printing
	console_init();
	debug_putstr("Decompressing Linux...");

	// Validate output buffers
	if ((unsigned long)output & (MIN_KERNEL_ALIGN - 1))
		error("Destination not properly aligned");

	// Perform the decompression
	// __decompress is set at compile time: gunzip, lz4, etc.
	__decompress(input_data, input_len, NULL, NULL,
		     output, output_len, NULL, error);

	debug_putstr("done.\nBooting the kernel.\n");

	// Handle kernel address randomization (KASLR)
	return pick_random_location(output, output_len);
}

// Available decompression algorithms (selected at build time):
// - lib/decompress_inflate.c  (gzip)
// - lib/decompress_bunzip2.c  (bzip2)
// - lib/decompress_unlzma.c   (lzma/xz)
// - lib/decompress_unlz4.c    (lz4)
// - lib/decompress_unzstd.c   (zstd)
```

Modern kernels randomize their load address at boot to make exploits harder. pick_random_location() uses entropy from hardware (RDRAND) or boot parameters to choose a randomized base address. The kernel must be PIE (Position Independent Executable) compatible, with all absolute addresses resolved at runtime.
After decompression, the kernel must establish proper memory management. This happens in stages: first, minimal identity-mapped page tables enable basic operation; then, proper virtual memory with the full kernel mapping is established.
The Memory Map Discovery:
The kernel needs to know what physical memory exists and what it can use. On x86, this information comes from the E820 memory map provided by BIOS/UEFI through the bootloader:
```text
# E820 Memory Map (from dmesg)

[    0.000000] BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bffdffff] usable
[    0.000000] BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable

# Memory types:
#   usable        - Available for kernel/user processes
#   reserved      - Hardware mapped or BIOS reserved
#   ACPI data/NVS - ACPI tables, must be preserved
#   unusable      - Memory errors detected
```
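Once the map is available, the kernel hands every usable range to its early allocator so later stages have memory to work with. A conceptual sketch, modeled on e820__memblock_setup(); memblock_add() is the real kernel API, while the entry type and constant below are simplified stand-ins:

```c
#include <stdint.h>

/* Simplified stand-ins for the kernel's E820 entry type and RAM type code. */
struct e820_entry_sketch {
	uint64_t addr;   /* start of the region          */
	uint64_t size;   /* length in bytes              */
	uint32_t type;   /* usable, reserved, ACPI, ...  */
};
#define E820_TYPE_RAM_SKETCH 1

/* Real kernel API, declared here only so the sketch is self-contained. */
extern int memblock_add(uint64_t base, uint64_t size);

static void register_usable_ram(const struct e820_entry_sketch *table, int nr)
{
	for (int i = 0; i < nr; i++) {
		if (table[i].type != E820_TYPE_RAM_SKETCH)
			continue;                  /* skip reserved/ACPI/unusable */
		memblock_add(table[i].addr, table[i].size);
	}
}
```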
Page Table Setup:

The decompressed kernel at arch/x86/kernel/head_64.S:startup_64 sets up proper page tables:
Identity Mapping — Linear addresses equal physical addresses for the first few MB. Required because we're running at physical addresses but will switch to virtual.
Kernel Mapping — The kernel image (code and read-only data) is mapped into the top 2 GB of the address space at 0xffffffff80000000, which lets kernel code use sign-extended 32-bit address references (the kernel code model). This is the kernel text region.
Direct Mapping — All physical memory is mapped linearly starting at PAGE_OFFSET (0xffff888000000000 on current x86_64 kernels). This lets the kernel reach any physical address with simple offset math (see the sketch after the layout table below).
| Region | Starting Address | Purpose |
|---|---|---|
| User space | 0x0000000000000000 | User process virtual addresses (128TB) |
| Non-canonical hole | 0x0000800000000000 | Invalid addresses (hardware enforced) |
| Guard hole | 0xffff800000000000 | Protection gap |
| Direct mapping | 0xffff888000000000 | All physical RAM (up to 64TB) |
| vmalloc/ioremap | 0xffffc90000000000 | Dynamically mapped pages |
| vmemmap | 0xffffea0000000000 | struct page array |
| Kernel text | 0xffffffff80000000 | Kernel code and read-only data |
| Modules | 0xffffffffa0000000 | Loadable kernel modules |
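Because all of RAM appears at a fixed offset in the direct mapping, physical-to-virtual conversion is plain arithmetic. A minimal sketch of the idea behind the kernel's __va()/__pa() helpers, assuming the 0xffff888000000000 base from the table above (KASLR can shift the real base):

```c
#include <stdint.h>

/* Direct-map address arithmetic, for illustration only. */
#define DIRECT_MAP_BASE 0xffff888000000000UL

static inline void *phys_to_virt_sketch(uint64_t phys)
{
	return (void *)(phys + DIRECT_MAP_BASE);   /* any RAM byte is reachable this way */
}

static inline uint64_t virt_to_phys_sketch(const void *virt)
{
	return (uint64_t)virt - DIRECT_MAP_BASE;
}
```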
```asm
// Early page table initialization (conceptual, simplified)

// x86_64 uses 4-level page tables (or 5-level with LA57)
// PGD -> P4D -> PUD -> PMD -> PTE -> Page

// Initial page table (built in assembly, before mm is up)
// arch/x86/kernel/head_64.S

SYM_DATA_START_PAGE_ALIGNED(init_top_pgt)
	.quad	level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
	.org	init_top_pgt + PGD_INDEX(__START_KERNEL_map) * 8
	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
SYM_DATA_END(init_top_pgt)

// Enable paging (in assembly)
// ...
movq	$init_top_pgt, %rax
movq	%rax, %cr3          // Load page table base register
// ...
movq	%cr0, %rax
orq	$CR0_PG, %rax       // Enable paging bit
movq	%rax, %cr0          // Paging is now ON

// After this point, all memory accesses go through page tables
// %rip now holds virtual addresses, not physical
```

When paging is enabled, the instruction pointer (RIP) still contains a physical address, but the next instruction will be fetched using virtual addresses. The kernel handles this by first executing from identity-mapped regions (where virtual = physical), then jumping to the proper kernel virtual address. This trampoline prevents a crash at the moment paging is enabled.
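To make the PGD/PUD/PMD/PTE walk concrete, the sketch below splits a 48-bit virtual address into the four 9-bit table indices plus the 12-bit page offset used with 4 KB pages (with 4-level paging the P4D level is folded into the PGD). The example address and the printout are purely illustrative; the kernel uses helpers such as pgd_index() and pud_index() for this.

```c
#include <stdint.h>
#include <stdio.h>

/* Decompose a 48-bit x86_64 virtual address under 4-level paging, 4 KB pages. */
int main(void)
{
	uint64_t vaddr = 0xffffffff81000000UL;   /* example kernel-text address */

	unsigned pgd = (vaddr >> 39) & 0x1FF;    /* bits 47..39 -> PGD index   */
	unsigned pud = (vaddr >> 30) & 0x1FF;    /* bits 38..30 -> PUD index   */
	unsigned pmd = (vaddr >> 21) & 0x1FF;    /* bits 29..21 -> PMD index   */
	unsigned pte = (vaddr >> 12) & 0x1FF;    /* bits 20..12 -> PTE index   */
	unsigned off =  vaddr        & 0xFFF;    /* bits 11..0  -> page offset */

	printf("PGD %u, PUD %u, PMD %u, PTE %u, offset 0x%03x\n",
	       pgd, pud, pmd, pte, off);
	return 0;
}
```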
After architecture-specific setup completes, the kernel jumps to start_kernel()—the architecture-independent C entry point. This function, located in init/main.c, orchestrates all remaining kernel initialization.
start_kernel() is where Linux truly begins.
```c
// start_kernel() - The heart of kernel initialization
// init/main.c (heavily simplified)

asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
{
	char *command_line;
	char *after_dashes;

	// Put a canary at the end of init_task's stack (overflow detection)
	set_task_stack_end_magic(&init_task);

	// Record which CPU we are booting on
	smp_setup_processor_id();

	// Enable basic debugging infrastructure
	debug_objects_early_init();

	// Stack-protector canary and early cgroup bookkeeping
	boot_init_stack_canary();
	cgroup_init_early();

	local_irq_disable();            // Interrupts off until ready

	// Boot CPU initialization
	boot_cpu_init();
	page_address_init();

	// Print kernel banner
	// "Linux version 6.2.0-39-generic (builder@...) ..."
	pr_notice("%s", linux_banner);

	// Architecture-specific setup
	setup_arch(&command_line);

	// Memory zones and allocator
	mm_init_cpumask(&init_mm);
	setup_command_line(command_line);
	setup_nr_cpu_ids();
	setup_per_cpu_areas();

	// Boot CPU online
	smp_prepare_boot_cpu();
	boot_cpu_hotplug_init();

	// Build memory zone lists
	build_all_zonelists(NULL);
	page_alloc_init();

	// Parse kernel command line
	pr_notice("Kernel command line: %s", boot_command_line);
	parse_early_param();
	after_dashes = parse_args(...);

	// Jump into subsystem initialization...
	// (continued below)
}
```

The Initialization Sequence:
start_kernel() calls initialization functions in a carefully ordered sequence. Dependencies between subsystems require this specific ordering:
| Function | Subsystem | What It Does |
|---|---|---|
| setup_arch() | Architecture | CPU feature detection, memory setup, ACPI/device tree parsing |
| mm_init() | Memory | Initialize page allocator, slab allocator, vmalloc |
| sched_init() | Scheduler | Initialize runqueues, create idle task |
| init_IRQ() | Interrupts | Set up interrupt descriptor table, APIC |
| time_init() | Timekeeping | Initialize clocks, jiffies, timers |
| console_init() | Console | Initialize console for printk output |
| vfs_caches_init() | VFS | Initialize dentry cache, inode cache |
| fork_init() | Processes | Initialize task structures, fork limits |
| rest_init() | Final | Create init process, spawn kthreadd |
Throughout start_kernel(), the kernel runs as init_task (PID 0)—a static task structure compiled into the kernel. This "swapper" task becomes the idle loop after user space starts. It's the only task not created by fork(); it exists from the moment the kernel starts executing.
Modern systems have multiple CPU cores, each requiring initialization. The boot CPU (BSP - Bootstrap Processor) handles early initialization and then brings other CPUs (APs - Application Processors) online.
Boot CPU Initialization:
The BSP performs:

- CPU identification and feature detection (vendor, model, CPUID feature flags),
- FPU/SIMD state setup and MTRR/PAT configuration,
- speculative-execution mitigation selection (Spectre, Speculative Store Bypass, and related issues visible in the log below), and
- preparation of the per-CPU data and trampoline code needed to start the secondary CPUs.
```text
# CPU Initialization Messages (from dmesg)

[    0.000000] x86/fpu: x87 FPU will use FXSAVE
[    0.000000] signal: max sigframe size: 1040
[    0.000000] BIOS-provided physical RAM map:
...
[    0.000000] CPU MTRRs all blank - virtualized system
[    0.000000] x86/PAT: MTRRs disabled, skipping PAT init too.
[    0.000000] CPU: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
[    0.000000] CPU: Physical Processor ID: 0
[    0.000000] CPU: Processor Core ID: 0
[    0.000001] Spectre V1 : Mitigation: usercopy/swapgs barriers
[    0.000001] Spectre V2 : Mitigation: Retpolines
[    0.000001] Speculative Store Bypass: Mitigation: SSB disabled
[    0.000001] TAA: Vulnerable: Clear CPU buffers attempted
...
[    0.034567] smp: Bringing up secondary CPUs ...
[    0.034712] x86: Booting SMP configuration:
[    0.034713] .... node #0, CPUs:      #1 #2 #3 #4 #5
[    0.041234] smp: Brought up 1 node, 6 CPUs
[    0.041235] smpboot: Max logical packages: 1
[    0.041236] smpboot: Total of 6 processors activated
```

SMP Initialization (Bringing Up Secondary CPUs):
The BSP brings up Application Processors through this sequence (sketched in code below):

1. Copy the 16-bit real-mode trampoline code into low memory and record its physical address.
2. Send an INIT IPI to the target AP, resetting it into the wait-for-SIPI state.
3. Send two STARTUP (SIPI) IPIs whose vector encodes the trampoline's page address.
4. The AP wakes in real mode at the trampoline, switches to protected and then long mode, loads its own stack, GDT, and IDT, and calls start_secondary().
5. The AP marks itself online and enters the idle loop; the BSP waits for that flag before moving on to the next CPU.
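A conceptual sketch of that handshake follows. It is modeled on the logic in arch/x86/kernel/smpboot.c but heavily compressed: apic_icr_write(), udelay(), and the APIC_* flags are the kernel's own names, used here without the surrounding status polling, retries, and error handling.

```c
/* Conceptual INIT/SIPI/SIPI wake-up of one Application Processor
 * (kernel context assumed; not a drop-in replacement for smpboot.c). */
static void wake_ap_sketch(unsigned int apicid, unsigned long trampoline_phys)
{
	/* 1. INIT IPI: put the target AP into the wait-for-SIPI state */
	apic_icr_write(APIC_INT_LEVELTRIG | APIC_INT_ASSERT | APIC_DM_INIT, apicid);
	udelay(10000);                          /* let the AP finish resetting */

	/* 2.-3. Two STARTUP IPIs: the vector field carries the page number of
	 * the real-mode trampoline, so the AP begins executing there in
	 * 16-bit real mode */
	for (int i = 0; i < 2; i++) {
		apic_icr_write(APIC_DM_STARTUP | (trampoline_phys >> 12), apicid);
		udelay(300);
	}

	/* The trampoline then switches the AP to protected and long mode,
	 * loads its own stack and GDT/IDT, and calls start_secondary(). */
}
```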
Modern kernels support CPU hotplug—CPUs can be brought online and taken offline at runtime. This is essential for virtualization, power management, and testing. Write to /sys/devices/system/cpu/cpuN/online to control a CPU's state: echo 0 to take it offline, echo 1 to bring it back.
Before devices can function, the kernel must establish interrupt handling infrastructure. On x86, this means setting up the Interrupt Descriptor Table (IDT) and configuring the APIC (Advanced Programmable Interrupt Controller).
IDT Setup:
The IDT maps interrupt vectors (0-255) to handler functions:
| Vector Range | Type | Examples |
|---|---|---|
| 0-31 | CPU Exceptions | 0: Divide Error, 6: Invalid Opcode, 13: General Protection, 14: Page Fault |
| 32-47 | Legacy IRQs (if remapped) | Timer, Keyboard, Cascade, COM ports |
| 48-127 | Device IRQs | Storage, Network, USB, GPU |
| 128 | System Call (int 0x80) | Legacy Linux system call interface |
| 129-238 | Additional Device IRQs | MSI/MSI-X interrupts |
| 239 | Local APIC timer | Per-CPU timer interrupt |
| 251 | TLB shootdown | IPI for TLB flush |
| 252 | Call function | IPI for cross-CPU function calls |
| 253 | Reschedule | IPI to trigger scheduler |
| 254 | Error | APIC error interrupt |
| 255 | Spurious | Spurious interrupt handler |
```c
// IDT Setup (simplified)
// arch/x86/kernel/idt.c

// IDT entry structure
struct idt_entry {
	u16 offset_low;     // Handler address bits 0-15
	u16 segment;        // Kernel code segment selector
	u8  ist;            // Interrupt Stack Table index (0 = no switch)
	u8  type_attr;      // Gate type and attributes
	u16 offset_mid;     // Handler address bits 16-31
	u32 offset_high;    // Handler address bits 32-63
	u32 reserved;
};

// Install an IDT entry
static inline void idt_set_gate(int vector, void *handler, u8 ist, u8 type)
{
	struct idt_entry *entry = &idt_table[vector];
	unsigned long addr = (unsigned long)handler;

	entry->offset_low  = addr & 0xFFFF;
	entry->segment     = KERNEL_CS;
	entry->ist         = ist;
	entry->type_attr   = type;
	entry->offset_mid  = (addr >> 16) & 0xFFFF;
	entry->offset_high = (addr >> 32) & 0xFFFFFFFF;
}

// Exception handlers are set up in idt_setup_traps()
void __init idt_setup_traps(void)
{
	idt_set_gate(0,  divide_error,       0, GATE_INTERRUPT);
	idt_set_gate(1,  debug,              0, GATE_INTERRUPT);
	idt_set_gate(2,  nmi,                1, GATE_INTERRUPT);  // IST=1
	idt_set_gate(3,  int3,               0, GATE_INTERRUPT);
	idt_set_gate(6,  invalid_op,         0, GATE_INTERRUPT);
	idt_set_gate(8,  double_fault,       2, GATE_INTERRUPT);  // IST=2
	idt_set_gate(13, general_protection, 0, GATE_INTERRUPT);
	idt_set_gate(14, page_fault,         0, GATE_INTERRUPT);
	// ... and so on
}
```

APIC Initialization:
The legacy 8259 PIC is disabled, and the APIC (Advanced Programmable Interrupt Controller) architecture takes over: the local APIC built into each CPU handles that CPU's interrupts, its timer, and inter-processor interrupts (IPIs), while one or more I/O APICs route interrupts from external devices to the CPUs.
The kernel uses ACPI tables (MADT - Multiple APIC Description Table) to discover APIC configuration.
Interrupt distribution across CPUs significantly impacts performance. High-rate interrupts (network, NVMe) should be spread across cores. The irqbalance daemon dynamically adjusts IRQ affinities, or administrators can manually set /proc/irq/N/smp_affinity. On NUMA systems, binding IRQs to cores near the device's NUMA node improves performance.
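The same /proc interface can be driven from code as well as from the shell. A minimal user-space sketch; the IRQ number and mask are arbitrary examples and the write requires root:

```c
#include <stdio.h>

/* Pin an IRQ to a set of CPUs by writing a hex bitmask to its smp_affinity
 * file, e.g. mask "f" = CPUs 0-3. Minimal sketch without error recovery. */
static int set_irq_affinity(int irq, const char *hexmask)
{
	char path[64];
	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);

	FILE *f = fopen(path, "w");
	if (!f) {
		perror(path);
		return -1;
	}
	fprintf(f, "%s\n", hexmask);
	fclose(f);
	return 0;
}

int main(void)
{
	return set_irq_affinity(42, "f") ? 1 : 0;   /* example: IRQ 42 -> CPUs 0-3 */
}
```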
With memory management, scheduling, and interrupt infrastructure in place, the kernel can discover and initialize devices. Device discovery happens through multiple mechanisms depending on the bus type:
Discovery Mechanisms:
| Bus Type | Discovery Method | Description |
|---|---|---|
| PCI/PCIe | Configuration space scan | Kernel reads vendor/device IDs from config space at each bus:device:function |
| USB | Enumeration protocol | Host controller queries attached devices through standard USB descriptors |
| ACPI | DSDT/SSDT tables | Firmware-provided tables describe platform devices, power management |
| Device Tree | DTB parsing (ARM/embedded) | Binary tree structure describes hardware from bootloader |
| ISA | Probe/legacy | Try known I/O ports and IRQs (legacy, mostly extinct) |
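To make the PCI row concrete, here is a sketch of the legacy configuration-space read that a probe performs for every bus:device:function. outl()/inl() are the <sys/io.h> port-I/O calls (user-space use needs iopl() or ioperm()); modern kernels usually go through memory-mapped ECAM instead, so treat this as the conceptual mechanism rather than current practice.

```c
#include <stdint.h>
#include <sys/io.h>   /* outl()/inl(); user space needs iopl(3) or ioperm() */

/* Read one 32-bit register from PCI config space via the legacy
 * 0xCF8 (address) / 0xCFC (data) port pair. */
static uint32_t pci_config_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
	uint32_t addr = (1u << 31)              /* enable bit                     */
	              | ((uint32_t)bus << 16)
	              | ((uint32_t)dev << 11)
	              | ((uint32_t)fn  << 8)
	              | (off & 0xFC);           /* dword-aligned register offset  */

	outl(addr, 0xCF8);                      /* select bus:dev:fn and register */
	return inl(0xCFC);                      /* read the selected dword        */
}

/* A function exists if the ID dword at offset 0 is not 0xFFFFFFFF:
 * vendor = id & 0xFFFF, device = id >> 16. */
```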
The initcall Mechanism:
Kernel subsystems and drivers register initialization functions through the initcall mechanism. These functions are grouped by priority level and called in order during boot:
```c
// Initcall levels (include/linux/init.h)
//
// early_initcall()s run from do_pre_smp_initcalls(), before the secondary
// CPUs are brought up; the numbered levels below run in ascending order
// from do_initcalls(). Both are invoked from the kernel_init thread.

#define early_initcall(fn)      __define_initcall(fn, early)
#define pure_initcall(fn)       __define_initcall(fn, 0)
#define core_initcall(fn)       __define_initcall(fn, 1)
#define postcore_initcall(fn)   __define_initcall(fn, 2)
#define arch_initcall(fn)       __define_initcall(fn, 3)
#define subsys_initcall(fn)     __define_initcall(fn, 4)
#define fs_initcall(fn)         __define_initcall(fn, 5)
#define rootfs_initcall(fn)     __define_initcall(fn, rootfs)
#define device_initcall(fn)     __define_initcall(fn, 6)
#define late_initcall(fn)       __define_initcall(fn, 7)

// Most drivers use module_init(), which maps to device_initcall()
// when the driver is built into the kernel
#define module_init(fn)         device_initcall(fn)

// Example: PCI subsystem initialization
static int __init pci_driver_init(void)
{
	return bus_register(&pci_bus_type);
}
postcore_initcall(pci_driver_init);

// Example: A network driver
static int __init e1000_init_module(void)
{
	return pci_register_driver(&e1000_driver);
}
module_init(e1000_init_module);
```

Initcall ordering bugs are a common source of boot issues. If a driver tries to use a subsystem that isn't yet initialized (wrong initcall level), the kernel panics or the device fails silently. Reloading the module later often 'fixes' the issue because the subsystem is ready by then. Check initcall levels when debugging driver init failures.
At the end of start_kernel(), a single function call changes everything: rest_init(). This function creates the first kernel threads and, crucially, the first user-space process.
rest_init() accomplishes three critical tasks: it creates the kernel_init thread that will become PID 1, it creates the kthreadd daemon (PID 2) that parents all kernel threads, and it turns the boot task itself into the idle loop for CPU 0.
```c
// rest_init() - Creates the first real processes
// init/main.c (simplified)

noinline void __ref rest_init(void)
{
	int pid;

	rcu_scheduler_starting();

	// Create the kernel_init thread first so it receives PID 1
	// It runs kernel_init() and then tries to exec /sbin/init
	pid = kernel_thread(kernel_init, NULL, CLONE_FS);

	// Create the kthreadd task (PID 2)
	// kthreadd is the parent of all kernel threads
	pid = kernel_thread(kthreadd, NULL, CLONE_FS | CLONE_FILES);
	kthreadd_task = find_task_by_pid_ns(pid, &init_pid_ns);

	// Become the idle task for CPU 0
	// From here, the boot CPU just runs the idle loop
	init_idle_bootup_task(current);

	// Complete scheduler initialization
	schedule_preempt_disabled();

	// Call into the idle loop - never returns
	cpu_startup_entry(CPUHP_ONLINE);
}
```

| PID | Name | Role | Created By |
|---|---|---|---|
| 0 | swapper (idle) | Boot task, becomes idle loop | Statically defined in kernel, never forked |
| 1 | init (systemd) | First user process, ancestor of all user processes | kernel_thread(kernel_init) → exec(/sbin/init) |
| 2 | kthreadd | Kernel thread daemon, parent of all kernel threads | kernel_thread(kthreadd) |
kernel_init() — The Bridge to User Space:
The kernel_init() function (running as PID 1 in kernel mode) performs final initialization then executes the init program:
```c
// kernel_init() - Final kernel init, then exec to user space
// init/main.c

static int __ref kernel_init(void *unused)
{
	int ret;

	// Wait for kthreadd to be ready
	wait_for_completion(&kthreadd_done);

	// Run all remaining initcalls
	kernel_init_freeable();

	// Free init memory (memory marked __init can be released)
	free_initmem();
	mark_readonly();

	// If ramdisk_execute_command was set (rdinit=), try that first
	if (ramdisk_execute_command) {
		ret = run_init_process(ramdisk_execute_command);
		if (!ret)
			return 0;
		pr_err("Failed to execute %s (error %d)\n",
		       ramdisk_execute_command, ret);
	}

	// If init= was specified on the command line, try that
	if (execute_command) {
		ret = run_init_process(execute_command);
		if (!ret)
			return 0;
		panic("Requested init %s failed (error %d).",
		      execute_command, ret);
	}

	// Try standard init locations
	if (!try_to_run_init_process("/sbin/init") ||
	    !try_to_run_init_process("/etc/init") ||
	    !try_to_run_init_process("/bin/init") ||
	    !try_to_run_init_process("/bin/sh"))
		return 0;

	panic("No working init found. "
	      "Try passing init= option to kernel.");
}
```

When kernel_init() calls run_init_process(), it executes execve() to replace its kernel code with the /sbin/init binary. The task keeps PID 1, but now runs user-space code (systemd, SysVinit, or another init system). This is the precise moment the system transitions from kernel-only to kernel+user-space operation.
When the boot process fails, debugging can be challenging—errors occur before normal logging is available. Linux provides several tools and techniques for diagnosing boot issues.
Kernel Command Line Debug Options:
| Parameter | Effect | When to Use |
|---|---|---|
| debug | Enable kernel debug messages | Need more detail in dmesg |
| earlyprintk=vga | Print messages to VGA before console init | System hangs before console works |
| earlyprintk=serial,ttyS0,115200 | Print messages to serial port | Headless systems, VMs |
| initcall_debug | Print every initcall as it runs | Freeze during driver initialization |
| ignore_loglevel | Print all messages regardless of level | Important messages being filtered |
| no_console_suspend | Keep console active during suspend | Debug suspend/resume issues |
| pause_on_oops=N | Pause N seconds on oops (don't reboot) | Capture oops messages |
| panic=N | Reboot N seconds after panic (0=never) | Need time to read panic message |
```bash
# Boot Debugging Techniques

# 1. Add debug parameters via GRUB
#    Press 'e' at GRUB menu, add to linux line:
linux ... debug initcall_debug ignore_loglevel

# 2. View boot messages after boot
$ dmesg | less
$ journalctl -b        # systemd: current boot
$ journalctl -b -1     # Previous boot (if stored)

# 3. Analyze initcall timing
$ dmesg | grep "initcall.*returned"
[    0.134567] initcall pci_driver_init+0x0/0x20 returned 0 after 567 usecs
[    0.145678] initcall acpi_init+0x0/0x90 returned 0 after 12345 usecs

# 4. Boot chart (graphical boot analysis)
$ sudo systemd-analyze plot > boot.svg
$ sudo systemd-analyze blame        # List slowest units

# 5. Serial console debugging (add to kernel command line)
console=ttyS0,115200n8 console=tty0

# 6. Netconsole - send dmesg over network
netconsole=@192.168.1.10/eth0,6666@192.168.1.20/00:11:22:33:44:55

# 7. Debug with qemu (add -s flag for gdb)
$ qemu-system-x86_64 -kernel vmlinuz -initrd initrd.img -s -S
$ gdb vmlinux
(gdb) target remote :1234
(gdb) break start_kernel
(gdb) continue
```

If the system is hung but the kernel is running, Magic SysRq may help. Press Alt+SysRq (Print Screen) + letter: 's' syncs disks, 'b' reboots immediately, 'c' crashes (for debug), 't' dumps tasks. Enable with: echo 1 > /proc/sys/kernel/sysrq. The classic safe reboot sequence is REISUB (unRaw, tErminate, kIll, Sync, Unmount, reBoot).
The kernel loading process transforms a compressed binary into a fully operational operating system kernel. Every step is precisely orchestrated, from decompression through user-space handoff.
What's Next:
The kernel has created PID 1 and executed /sbin/init. But what happens next? The init process takes over responsibility for bringing up user space—starting services, managing runlevels or targets, and becoming the ancestor of all user processes. The next page explores how init systems (SysVinit, systemd, OpenRC) orchestrate user-space initialization.
You now understand kernel loading: the entry point mechanics, decompression, memory initialization, the start_kernel() sequence, CPU and interrupt setup, device discovery via initcalls, and the creation of the first processes. When kernel_init() execs /sbin/init, the kernel's initialization work is complete—user space takes over.