Block device doesn't actually mount and no errors are reported
Emily Wong
I'm debugging an issue with mounting a partition (from an EBS volume) on an AWS EC2 instance.
The device shows up as /dev/nvme1n1p1:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 67.6M 1 loop /snap/lxd/20326
loop1 7:1 0 55.4M 1 loop /snap/core18/2066
loop2 7:2 0 33.3M 1 loop /snap/amazon-ssm-agent/3552
loop3 7:3 0 32.3M 1 loop /snap/snapd/12159
nvme0n1 259:0 0 8G 0 disk
└─nvme0n1p1 259:1 0 8G 0 part /
nvme1n1 259:2 0 8G 0 disk
└─nvme1n1p1 259:3 0 8G 0 part
I can try mounting it:
sudo mount /dev/nvme1n1p1 /home/ubuntu/mystuff -v
and it will report:
mount: /dev/nvme1n1p1 mounted on /home/ubuntu/mystuff
But it's not actually mounted! I can't see any files, and the lsblk output doesn't change from above (i.e. no mountpoint).
The kernel log only shows:
[ 2158.436056] BTRFS info (device nvme1n1p1): disk space caching is enabled
[ 2158.436057] BTRFS info (device nvme1n1p1): has skinny extents
[ 2158.446309] BTRFS info (device nvme1n1p1): enabling ssd optimizations
How do I debug this? Where can I get more information or insight into what's going on?
1 Answer
You probably have (or had) an /etc/fstab entry for the same mountpoint but a different device. Due to a badly implemented feature in the systemd manager (intended to remove mounts when the backing device disappears), systemd automatically unmounts filesystems whenever it believes the device doesn't exist, and it sometimes prioritizes the stale information from /etc/fstab over the live mount table. The result is that a new mount can be unmounted again immediately after it succeeds.
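One way to see what systemd thinks is going on: it tracks each mountpoint as a .mount unit whose name is derived from the path. A minimal sketch of inspecting that unit (the path /home/ubuntu/mystuff is the one from the question; the systemctl commands must be run on the instance itself):

```shell
#!/bin/sh
# systemd names the mount unit after the mountpoint: strip the leading
# slash and turn the remaining slashes into dashes. (systemd-escape(1)
# handles the general case; this simple tr works for paths without
# special characters.)
mountpoint=/home/ubuntu/mystuff
unit="$(echo "${mountpoint#/}" | tr '/' '-').mount"
echo "$unit"    # prints: home-ubuntu-mystuff.mount

# Inspect what systemd thinks about this mountpoint (run on the instance):
# systemctl status "$unit"
# systemctl cat "$unit"   # shows whether the unit was generated from /etc/fstab
```

If `systemctl cat` shows a unit generated from /etc/fstab that points at a different device, that stale entry is the likely culprit.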
Check
journalctl -n 100
to see if this is the problem. Then remove the stale entry from /etc/fstab and run
systemctl daemon-reload
before mounting the device again. Alternatively, try mounting the device at a different location.
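After removing the fstab entry and reloading, you can confirm that the mount actually stuck by checking the kernel's own mount table rather than trusting the mount command's success message (a small sketch, using the mountpoint path from the question):

```shell
#!/bin/sh
# /proc/mounts is the kernel's authoritative view of what is mounted;
# if the entry vanishes from here right after mounting, something
# (e.g. systemd) unmounted it again.
if grep -q ' /home/ubuntu/mystuff ' /proc/mounts; then
    echo "still mounted"
else
    echo "not mounted"
fi

# findmnt (from util-linux) gives a friendlier view of the same data:
# findmnt /home/ubuntu/mystuff
```

If the entry appears briefly and then disappears, the journalctl output should show who unmounted it.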