deleting files but disk space is still full
Matthew Barrera
Dealing with an old CentOS 5.6 box with no LVM setup. My root filesystem / is full. I have cleared out many old log files and application files that I don't need, amounting to more than 2-5 GB, but the system still reports that the disk is full.
[root@tornms1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 130G 124G 0 100% /
/dev/sdb1 264G 188M 250G 1% /data
/dev/sda1 99M 24M 71M 26% /boot
tmpfs 2.0G 0 2.0G 0% /dev/shm
[root@tornms1 ~]# mount
/dev/sda3 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /data type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
Any idea what I should try next? Unfortunately, rebooting the box is not an option at this time.
49 Answers
Two things might be happening here.
First, your filesystem has reserved some space that only root can write to, so that critical system processes don't fall over when normal users run out of disk space. That's why you see 124G of 130G used but zero available: perhaps the files you deleted brought the utilisation down to this point, but not below the threshold for normal users.
If this is your situation and you're desperate, you may be able to reduce the amount of space reserved for root. To lower it to 1% (the default is 5%), the command would be:
# tune2fs -m 1 /dev/sda3
Second, the operating system won't release disk space for deleted files that are still open. If you've deleted (say) one of Apache's log files, you'll need to restart Apache in order to free the space.
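Before changing the reservation, it's worth checking the current figures. A sketch, using the device name from the df output above (the sample numbers in the comments are illustrative, not taken from this box):

```shell
# Inspect the current root reservation on the filesystem
tune2fs -l /dev/sda3 | grep -iE 'reserved block count|block size'
# reserved blocks x block size = bytes reserved for root; e.g. with a 4096-byte
# block size, ~1,703,670 reserved blocks is roughly 6.5 GB (5% of 130 GB),
# so dropping from 5% to 1% would hand back about 5 GB to non-root users
```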
If you delete a file that is being used by a process, you can no longer see it with ls, but the process keeps writing to that file until you stop the process.
To view those deleted files, simply run:
lsof | grep delete
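To estimate how much space those deleted-but-open files are pinning, you can sum their sizes. A sketch, assuming the SIZE field is column 7 of your lsof output (field positions can vary between lsof versions, so check a line of output first):

```shell
# Sum the sizes of files that are deleted but still held open by a process;
# $7 is the SIZE column in this lsof format (verify on your version)
lsof -nP | awk '/\(deleted\)/ { sum += $7 }
                END { printf "%.1f GB held by deleted files\n", sum/1024/1024/1024 }'
```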
Two other ways to get the "disk is full" issue:
1) Hidden under a mount point: Linux will show a full disk with files "hidden" under a mount point. If data was written to the drive and another filesystem was later mounted over that directory, Linux correctly counts the disk usage even though you can't see the files under the mount point. If you have NFS mounts, try unmounting them and checking whether anything was accidentally written into those directories before the mount.
2) Corrupted files: I see this occasionally on Windows-to-Linux file transfers via SMB. One file fails to close its file descriptor and you wind up with a 4 GB file of trash.
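For case 1, you can look under a mount point without unmounting anything by bind-mounting the root filesystem to a second location. A sketch, assuming /mnt/rootonly is free to create:

```shell
# Bind-mount / to a scratch directory; the bind view shows only what is
# stored on the root filesystem itself, not what is mounted over it
mkdir -p /mnt/rootonly
mount --bind / /mnt/rootonly
du -sh /mnt/rootonly/data   # reveals files hiding beneath the /data mount
umount /mnt/rootonly
```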
This can be more tedious to fix, because you need to find the subdirectory the file is in, but it's easy to fix because the file itself is readily removable. I use the du command on the root subdirectories to find out where the space is being used.
cd /
du -sh ./*
The number of top-level directories is usually limited, so I set the human-readable flag -h to see which subdirectory is the space hog.
Then you cd into the problem child and repeat the process for everything in it. To make the large items easy to spot, change the du slightly and couple it with a sort.
cd /<suspiciously large dir>
du -s ./* | sort -n
which produces a smallest-to-largest listing by disk usage for all files and directories:
4 ./bin
462220 ./Documents
578899 ./Downloads
5788998769 ./Grocery List
Once you spot the oversized file, you can usually just delete it.
You could find out which files are open with lsof. It can produce a lot of output, so in the example below I limited it to lines ending with "log":
# lsof | grep log$
rsyslogd 2109 syslog 0u unix 0xffff88022fa230c0 0t0 8894 /dev/log
rsyslogd 2109 syslog 1w REG 252,6 62393 26 /var/log/syslog
rsyslogd 2109 syslog 2w REG 252,6 113725 122 /var/log/auth.log
rsyslogd 2109 syslog 3u unix 0xffff88022fa23740 0t0 8921 /var/spool/postfix/dev/log
rsyslogd 2109 syslog 5w REG 252,6 65624 106 /var/log/mail.log
/usr/sbin 2129 root 2w REG 252,6 93602 38 /var/log/munin/munin-node.log
/usr/sbin 2129 root 4w REG 252,6 93602 38 /var/log/munin/munin-node.log
...
If files have been deleted but are still in use by a process, their space will not be released. In that case, either restart the process that is using the file or null the file. It's always good practice to null such files instead of deleting them. To find files that are deleted but still in use by a process:
# lsof +L1
This will give the process ID and file descriptor. To null a deleted file by its file descriptor:
# echo "" > /proc/$pid/fd/$fd
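The nulling approach can also be applied directly to the log path before you ever delete it, which avoids the stuck-space problem entirely. A sketch on a hypothetical path:

```shell
# Truncate a log in place: the file stays, its blocks are freed immediately,
# and a process holding it open keeps writing via the same descriptor
> /var/log/myapp.log      # hypothetical path; redirection truncates to 0 bytes
ls -s /var/log/myapp.log  # now reports 0 blocks allocated
```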
# lsof +L1
This will show the list of files, marked as deleted, that are still holding disk space.
Note the PID (process ID) of the process holding the file
Kill the process
# kill <pid>
The disk space will be released once the process exits.
Check it with:
# df -h
Actual problem observed in the wild:
Make sure you're deleting the actual files and not symlinks to the files. This can be the case for log files especially.
In most cases, once you delete a log file it is no longer visible with ls, but the process keeps writing to it until you stop that process.
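The symlink check above can be sketched as follows (the log path is a hypothetical example):

```shell
# Confirm whether a log path is a real file or a symlink before deleting it;
# removing the link does not free the space held by its target
ls -l /var/log/myapp.log         # a leading 'l' and '->' mark a symlink
readlink -f /var/log/myapp.log   # resolves to the real file that holds the space
```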
In addition to what has been explained, the issue could be that there is another mount point for the deleted file's directory on another disk attached to the same server. Check the current mounts and the fstab entries.
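One way to compare the two, using only tools available on an old box like this, is to diff the mounted paths against what fstab expects. A sketch (swap and none pseudo-entries in fstab will also show up in the result, so ignore those):

```shell
# Mountpoint is field 3 of `mount` output and field 2 of /etc/fstab
mount | awk '{ print $3 }' | sort > /tmp/mounted
awk '!/^#/ && NF { print $2 }' /etc/fstab | sort > /tmp/expected
# Lines unique to /tmp/expected: fstab entries not currently mounted,
# i.e. places where writes may have landed on the underlying disk
comm -13 /tmp/mounted /tmp/expected
```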