Solving disk space issues on EC2 AWS Linux instances

8:52 PM August 20 2020 devops

More often than not (sadly) I run into the following error message on my Amazon AWS instances:
unable to create '/something/something.tmp': No space left on device

So, a disk space issue... probably big log files that need to be wiped or something like that? Not always that easy.

How to diagnose the issue

When I run df -H, the output says that I still have some space left, so what is it?

Well, it may be inode exhaustion. Run df -i to check the inode usage.
If you find that you are at 100% or close to it, you need to free some inodes by deleting files.
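For example, a quick side-by-side check (assuming / is the affected mount point; adjust as needed):

df -H /    # block usage: may still show free bytes
df -i /    # inode usage: an IUse% at or near 100% means inode exhaustion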

How to locate the files to remove

Where are they? You can run this one-liner (which doesn't need to create a temp file, since we don't have space left for that):

cd /
# Count files per top-level directory; the one with the highest count likely holds the inode hogs
sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n
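Once a top-level directory stands out, you can repeat the same pipeline one level deeper to narrow it down; for example, if /usr shows the biggest count:

cd /usr
sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n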

Common cause: linux-headers inodes

But you can save some time if you first check the most common source of this issue: old linux-headers.

They are located in the /usr/src/ folder. Just run ls -l (or the ll alias) there and delete some to recover inodes (you won't be able to purge them using apt-get while you're at 100% usage). Don't delete them all, since they are used by Linux; just get rid of some old versions that may be lingering there.
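To figure out which versions are safe to delete, check the running kernel first and keep its headers (a minimal sketch; the version numbers you see will differ):

uname -r                          # the running kernel, e.g. 4.4.0-210-generic: keep its headers
ls -ld /usr/src/linux-headers-*   # anything older than the running kernel is a removal candidate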

So, for example, run:

sudo rm -rf /usr/src/linux-headers-4.4.0-130/

Then you can try to purge the rest of them using sudo apt-get -f autoremove (and then manually remove any old ones that weren't removed that way).
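If you'd rather purge specific versions by name once apt has room to work, something like this should do it (the 4.4.0-130 packages below reuse the example version from above; list yours first):

dpkg -l 'linux-headers-*' | grep ^ii    # list the header packages that are actually installed
sudo apt-get purge linux-headers-4.4.0-130 linux-headers-4.4.0-130-generic
sudo apt-get -f autoremove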

TL;DR

Check if the issue is inode exhaustion by running df -i.

If it's at 100% or close to it, try removing old linux-headers directories to recover some inodes. Don't go crazy on this step: remove only one of the oldest versions, then run sudo apt-get -f autoremove.

They are located in the /usr/src/ folder (you can remove them with sudo rm -rf).