Unix presents users with a single, unified directory tree starting from /. Yet this tree is actually a composite assembled from many independent file systems—root on an NVMe SSD, /home on a separate partition, /mnt/usb on a flash drive, /proc generated dynamically by the kernel. The process that connects these independent file systems into one coherent hierarchy is called mounting.
Mounting is deceptively simple from a user perspective: run mount /dev/sdb1 /mnt/backup and suddenly files from that partition appear at /mnt/backup. But beneath this simplicity lies sophisticated kernel machinery—mount points, mount namespaces, bind mounts, recursive mounts, and complex visibility rules.
This page dissects the mounting mechanism: what happens when you mount a file system, how mount points work, how the kernel tracks mounts, and advanced mounting features used in modern containerized environments.
By the end of this page, you will understand mount points conceptually and structurally, the mount() system call interface, how VFS implements mount point traversal, bind mounts and mount propagation, mount namespaces for containerization, and practical mounting scenarios.
Mounting is the process of making a file system's contents accessible at a specific location in the directory hierarchy. The location where a file system is attached is called the mount point.
Key Concepts:
Mount Point: A directory (which must already exist) where an external file system's root becomes visible. Before mounting, /mnt/usb might be an empty directory on the root filesystem. After mounting a USB drive there, /mnt/usb shows the USB drive's contents.
Overlay Behavior: Mounting overlays the existing directory. Any contents of the mount point directory before mounting are temporarily hidden (but not deleted). Unmounting reveals them again.
Root File System: At boot, the kernel mounts the root file system at /. All other file systems are mounted relative to this root.
Mount Table: The kernel maintains a data structure tracking all mounts, their mount points, options, and relationships.
Analogy:
Think of mounting like connecting a power strip to a wall outlet. The wall outlet (mount point) is an established point in the infrastructure. The power strip (file system) has its own set of outlets. Once connected, devices plugged into the power strip are effectively connected to the same electrical system—they work transparently through the connection point.
Practical Example:
# Before mounting
$ ls /mnt/backup
# (empty or contains placeholder files)
# Mount an ext4 partition
$ sudo mount /dev/sdb1 /mnt/backup
# After mounting
$ ls /mnt/backup
documents/ photos/ videos/ important.txt
# Unmount
$ sudo umount /mnt/backup
# After unmounting, original contents (if any) reappear
$ ls /mnt/backup
# (back to original state)
Mount points are normally directories, not regular files; the main exception is a bind mount of a single file onto another file (loop devices, by contrast, let you mount a file-backed image, but the target is still a directory). The mount point directory must exist and ideally be empty—any existing contents become inaccessible while the mount is active.
The mount() system call is the kernel interface for mounting file systems. It's surprisingly flexible, handling everything from simple disk mounts to complex bind mount configurations.
System Call Signature:
int mount(const char *source, const char *target,
          const char *filesystemtype, unsigned long mountflags,
          const void *data);
Parameters:
| Parameter | Description | Example |
|---|---|---|
| source | Device or data source | /dev/sda1, server:/export, none |
| target | Mount point path | /mnt/disk, /home |
| filesystemtype | File system type | ext4, nfs, tmpfs, proc |
| mountflags | Mount options (bitmask) | MS_RDONLY, MS_NOEXEC |
| data | FS-specific options | "size=1G" for tmpfs |
/* Mount flags (from <sys/mount.h>) */
#define MS_RDONLY      1        /* Mount read-only */
#define MS_NOSUID      2        /* Ignore setuid bits */
#define MS_NODEV       4        /* Disallow device special files */
#define MS_NOEXEC      8        /* Disallow program execution */
#define MS_SYNCHRONOUS 16       /* Writes are synced immediately */
#define MS_REMOUNT     32       /* Alter existing mount */
#define MS_MANDLOCK    64       /* Allow mandatory locks */
#define MS_DIRSYNC     128      /* Directory mods are sync */
#define MS_NOATIME     1024     /* Don't update access times */
#define MS_NODIRATIME  2048     /* Don't update dir access times */
#define MS_BIND        4096     /* Bind mount */
#define MS_MOVE        8192     /* Move a mount point */
#define MS_REC         16384    /* Recursive mount operations */
#define MS_SILENT      32768    /* Suppress mount messages */
#define MS_POSIXACL    (1<<16)  /* VFS supports POSIX ACLs */
#define MS_PRIVATE     (1<<18)  /* Private mount */
#define MS_SLAVE       (1<<19)  /* Slave mount */
#define MS_SHARED      (1<<20)  /* Shared mount */
#define MS_RELATIME    (1<<21)  /* Update atime relative to mtime */
#define MS_LAZYTIME    (1<<25)  /* Update times lazily */
Common Use Cases:
// Mount ext4 partition read-only
mount("/dev/sdb1", "/mnt/backup", "ext4", MS_RDONLY, NULL);
// Mount tmpfs with 1GB size limit
mount("tmpfs", "/tmp", "tmpfs", 0, "size=1G,mode=1777");
// Mount proc filesystem
mount("proc", "/proc", "proc", 0, NULL);
// Bind mount (make directory appear at another location)
mount("/home/user/docs", "/var/www/docs", NULL, MS_BIND, NULL);
// Remount as read-only
mount("/dev/sda1", "/", NULL, MS_REMOUNT | MS_RDONLY, NULL);
// Mount NFS export
mount("server:/export", "/mnt/nfs", "nfs", 0,
"vers=4,sec=krb5");
The userspace 'mount' command wraps the mount() system call, parsing /etc/fstab, resolving labels/UUIDs, loading helper programs for specific file systems, and providing a friendly interface. Complex mounts (like NFS with Kerberos) often invoke helper binaries that the mount command orchestrates.
The kernel tracks mounts using two primary data structures: struct mount (representing an individual mount instance) and struct vfsmount (a subset exposed to file systems). Understanding these is crucial for kernel developers and advanced debugging.
/*
 * struct vfsmount - visible to file systems
 * Contains information file systems might need
 */
struct vfsmount {
    struct dentry *mnt_root;           /* Root of the mounted tree */
    struct super_block *mnt_sb;        /* Pointer to superblock */
    int mnt_flags;                     /* Mount flags */
};

/*
 * struct mount - internal to VFS
 * Full mount structure with hierarchy and namespace info
 */
struct mount {
    struct hlist_node mnt_hash;        /* Hash for lookups */
    struct mount *mnt_parent;          /* Parent mount */
    struct dentry *mnt_mountpoint;     /* Dentry of mount point */
    struct vfsmount mnt;               /* The vfsmount subset */

    /* Mount hierarchy */
    struct list_head mnt_mounts;       /* List of child mounts */
    struct list_head mnt_child;        /* Link to parent's mnt_mounts */

    /* Mount namespace */
    struct mnt_namespace *mnt_ns;      /* Containing namespace */

    /* Mount propagation */
    struct mount *mnt_master;          /* Master (for slave mounts) */
    struct list_head mnt_slave_list;   /* List of slave mounts */
    struct list_head mnt_slave;        /* Link in master's slave list */
    struct mount *mnt_group_leader;    /* Peer group leader */
    struct list_head mnt_group;        /* Peer group list */

    /* Identification */
    int mnt_id;                        /* Unique mount ID */
    int mnt_group_id;                  /* Peer group ID */
    int mnt_expiry_mark;               /* For auto-unmount */

    /* Device and path info */
    struct path mnt_ex_mountpoint;     /* For umount -l */
    const char *mnt_devname;           /* Device name */
};
Key Fields Explained:
| Field | Purpose |
|---|---|
| mnt_root | The root dentry of the mounted file system |
| mnt_sb | Pointer to the superblock containing all FS metadata |
| mnt_mountpoint | The dentry where this FS is mounted (in parent FS) |
| mnt_parent | The mount structure of the parent mount |
| mnt_mounts | List of child mounts (things mounted under this) |
| mnt_ns | The mount namespace this mount belongs to |
| mnt_master | For slave mounts, the master mount for propagation |
The Mount Hierarchy:
Mounts form a tree that mirrors (but is separate from) the directory tree. The root mount is at the top; mounts done under it become children. This hierarchy is essential for:
- Recursive unmounting: umount -R /mnt unmounts /mnt and everything mounted under it.
- Lazy unmounting: umount -l can detach a mount even if it has children.

VFS maintains a hash table mapping (mount point dentry, parent mount) pairs to mount structures. During path resolution, when VFS encounters a dentry, it checks this hash table to see if anything is mounted there. This O(1) lookup is crucial for performance—every path resolution step potentially crosses a mount point.
When path resolution crosses a mount point, VFS must seamlessly transition from one file system to another. This mount point traversal is automatic and transparent to applications.
How It Works:
During pathname resolution (the namei algorithm we discussed earlier), for each directory component:

- Look up the component in the current directory to obtain its dentry.
- Check the mount hash table to see whether a file system is mounted on that dentry.
- If so, switch to the root dentry of the (topmost) mount at that point.
- Continue resolving the remaining components inside the new file system.
Example Path Resolution:
Given mounts:
- / is ext4 on /dev/sda1
- /home is ext4 on /dev/sdb1
- /home/alice/cloud is NFS from server:/export

Resolving /home/alice/cloud/document.txt:
Resolving: /home/alice/cloud/document.txt

Step 1: Start at root ("/")
  - Current mount: ext4 on /dev/sda1
  - Current dentry: root dentry of /dev/sda1

Step 2: Resolve "home"
  - Lookup "home" in root directory → dentry for "home"
  - CHECK: Is "home" a mount point? YES! → /dev/sdb1 is mounted here
  - SWITCH: Replace dentry with root dentry of /dev/sdb1
  - Current mount: ext4 on /dev/sdb1

Step 3: Resolve "alice"
  - Lookup "alice" in /dev/sdb1's root → dentry for "alice"
  - CHECK: Is "alice" a mount point? NO
  - Continue with this dentry

Step 4: Resolve "cloud"
  - Lookup "cloud" in alice's directory → dentry for "cloud"
  - CHECK: Is "cloud" a mount point? YES! → NFS from server:/export is mounted here
  - SWITCH: Replace dentry with root dentry of NFS mount
  - Current mount: NFS server:/export

Step 5: Resolve "document.txt"
  - Lookup "document.txt" in NFS root → dentry + inode
  - CHECK: Is "document.txt" a mount point? NO
  - DONE: Return final dentry from NFS file system
The Follow-Mounts Algorithm:
/* Simplified mount traversal logic */
struct dentry *follow_mount(struct dentry *dentry,
struct mount **mnt) {
while (1) {
struct mount *mounted;
/* Look up in mount hash table */
mounted = lookup_mount(dentry, *mnt);
if (!mounted)
break; /* Not a mount point */
/* Switch to mounted file system's root */
*mnt = mounted;
dentry = mounted->mnt.mnt_root;
}
return dentry;
}
Notice the while(1) loop: mounts can stack! You can mount filesystem B on /mnt, then mount filesystem C on the same /mnt, hiding B. Each follow_mount iteration digs to the topmost mount.
Backward Traversal (".." from mount root):
What happens when you cd .. from a mounted filesystem's root? VFS detects you're at the mount root and switches back to the parent mount:
/home/alice$ cd cloud # Cross into NFS mount
/home/alice/cloud$ pwd
/home/alice/cloud
/home/alice/cloud$ cd .. # Cross back to /dev/sdb1
/home/alice$ pwd
/home/alice
This is handled by follow_dotdot(), which checks whether the current dentry equals the mount root and, if so, switches to the mnt_mountpoint of the parent mount before taking the ordinary ".." step.
Mount traversal is completely invisible to applications. A program opening /home/alice/cloud/document.txt has no idea it crossed three file systems (ext4 → ext4 → NFS). The VFS abstraction is complete.
Bind mounts allow a directory (or file) to appear at multiple locations in the filesystem hierarchy. Unlike regular mounts which attach a new file system, bind mounts make an existing directory tree visible at an additional location.
Creating a Bind Mount:
# Make /home/user/docs appear at /var/www/docs
$ sudo mount --bind /home/user/docs /var/www/docs
# Now both paths access the same files
$ echo "test" > /home/user/docs/file.txt
$ cat /var/www/docs/file.txt
test
# Changes through either path affect the same data
$ rm /var/www/docs/file.txt
$ ls /home/user/docs/file.txt
ls: cannot access '/home/user/docs/file.txt': No such file or directory
Use Cases for Bind Mounts:
- Web content: expose a user directory under /var/www without copying or symlinks that web servers might not follow.
- Chroot and container setup: bind host directories (/dev, /proc) instead of copying.
- Read-only views: mount --bind -o ro to create a read-only view of a directory.
- Per-file overrides: bind individual files (/etc/hosts, /etc/resolv.conf).

Recursive Bind Mounts:
By default, bind mounts do NOT include submounts. If /home has /home/alice/cloud mounted (NFS), a bind mount of /home to /mnt/home will NOT include the NFS mount.
# Regular bind mount - doesn't include submounts
$ mount --bind /home /mnt/home
$ ls /mnt/home/alice/cloud
# (empty, NFS mount not visible)
# Recursive bind mount - includes all submounts
$ mount --rbind /home /mnt/home
$ ls /mnt/home/alice/cloud
# (NFS contents visible)
How Bind Mounts Work Internally:
A bind mount creates a new struct mount that points to the same dentry and inode as the source. There's no data copying—both the original path and the bind mount path resolve to the exact same kernel objects.
Bind mounts and symbolic links both make content appear at multiple paths, but they differ fundamentally. Symlinks are file system objects resolved during path lookup; applications can detect and read them. Bind mounts are VFS-level constructs; applications cannot distinguish bound paths from original paths. Bind mounts also work across file system boundaries where symlinks might not.
Mount namespaces provide isolated views of the file system mount hierarchy. Different processes can have different mount namespaces, meaning they see different mount structures—even a completely different root file system.
This is the foundation of container isolation in Docker, Kubernetes, and other containerization technologies.
How Mount Namespaces Work:
- Each process references its mount namespace (via task_struct->nsproxy->mnt_ns).
- unshare(CLONE_NEWNS) or clone(CLONE_NEWNS) calls create a new mount namespace.
- A new namespace starts with a copy of its creator's mount table; what happens to later mounts is governed by the propagation rules below.

Example: Container Isolation:
# Host system
$ mount | grep home
/dev/sdb1 on /home type ext4 (rw,relatime)

# Create a new mount namespace (requires root)
$ sudo unshare --mount /bin/bash

# In new namespace: mount a tmpfs over /home
# This change is ONLY visible in this namespace
$ mount -t tmpfs tmpfs /home
$ mount | grep home
tmpfs on /home type tmpfs (rw,relatime)

# Create a file in the "new" /home
$ echo "namespace test" > /home/test.txt

# In original namespace (different terminal):
$ cat /home/test.txt
cat: /home/test.txt: No such file or directory

# The original /home with all user directories is unchanged
$ ls /home
alice bob charlie

# But in the containerized namespace:
$ ls /home
test.txt
Mount Propagation:
By default, mount namespaces are independent. But sometimes you want mount events to propagate between namespaces (e.g., when a new USB drive is mounted, you want all containers to see it). Mount propagation controls this:
| Propagation Type | Behavior |
|---|---|
| MS_SHARED | Mount/unmount events propagate bidirectionally within a peer group |
| MS_PRIVATE | No propagation; namespace has its own isolated copy |
| MS_SLAVE | Receives propagation from master but doesn't propagate back |
| MS_UNBINDABLE | Like private, but also cannot be bind mounted |
Peer Groups:
Shared mounts form peer groups. All mounts in a peer group receive propagated events. When a new filesystem is mounted under a shared mount, all peers see it.
# Make /mnt shared - events propagate
$ mount --make-shared /mnt
# Make /mnt private - isolated
$ mount --make-private /mnt
# Make /mnt a slave of its current peer group
$ mount --make-slave /mnt
Every Docker container runs in its own mount namespace. This is how containers can have their own root filesystem (their image), isolated /tmp, /proc, and /sys, while sharing selected host directories through bind mounts. The 'docker run -v /host/path:/container/path' command creates bind mounts that cross namespace boundaries.
Let's explore common real-world mounting scenarios that systems administrators and developers encounter.
Boot Process Mounts:
At boot, the kernel needs a root filesystem before init can run. This involves several stages:
initramfs (Initial RAM Filesystem):
The bootloader loads the kernel together with a small RAM-based file system containing the drivers and tools needed to locate the real root device. The kernel mounts this initramfs as its first root file system.
Switch Root:
Once the real root file system is mounted, the system pivots to it (via switch_root or pivot_root), making it the new /; the initramfs memory is then freed.
init/systemd Mounts:
init (or systemd) then mounts the remaining file systems listed in /etc/fstab, along with virtual file systems such as /proc and /sys.
# /etc/fstab example
UUID=abc123 / ext4 defaults 0 1
UUID=def456 /home ext4 defaults 0 2
tmpfs /tmp tmpfs size=4G,mode=1777 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
server:/exp /mnt/nfs nfs defaults,_netdev 0 0
We've explored the mounting mechanism that assembles diverse file systems into a unified hierarchy. Here are the key points:

- Mounting attaches a file system's root at a mount point directory; prior contents are hidden, not deleted, until unmount.
- The mount() system call takes a source, a target, a file system type, a flag bitmask, and FS-specific data.
- The kernel tracks each mount with struct mount (exposing struct vfsmount to file systems), linked into a hierarchy and hashed by (mount point dentry, parent mount) for fast lookup during path resolution.
- Bind mounts make an existing directory or file visible at additional paths without copying data.
- Mount namespaces give processes isolated mount tables; propagation types (shared, private, slave, unbindable) control how mount events flow between them.
What's Next:
We've seen how file systems are mounted. Next, we'll examine the VFS objects themselves—superblock, inode, dentry, and file objects—the data structures that represent mounted filesystems, files, directories, and open file handles.
You now understand file system mounting: how the unified namespace is assembled from independent file systems, how mount points work internally, and how mount namespaces enable container isolation. This knowledge is essential for systems administration and understanding modern containerization.