Synology Insufficient Space to Upgrade v2

System partition cleanup…

We’ve had Synology updates fail, migrations fail, and just general trouble when trying to do simple things.

A lot of this stems from the fact that Synology limits the system partition to 2GB.
Here’s the official page for this issue.

This can be very difficult to solve, and we’ve made several attempts at writing up solutions ourselves; check here and here. But we’re not your Mum: if these bits of advice make your truck break down, your dog leave you, or inspire you to write country music, we feel sorry for you, but we’re not responsible.

Try These First

These tips come from the previous posts linked above; go there if you want more background and info.

Media Indexing
SSH into your box and you might find the /var/spool directory full of index files waiting to finish. These can usually be deleted without any bad effects (they should just start again if needed); see the above remarks about responsibility…
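
A minimal sketch for checking this, assuming the stalled files sit directly under /var/spool (the exact queue file names vary by DSM version, so always list before you delete)-

du -sh /var/spool
ls -lh /var/spool
# example only- remove a stalled indexing queue file once you’ve confirmed what it is
# rm -f /var/spool/syno_indexing_queue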

Synology Drive
Stuff like Synology Drive is amazing when it works, but if you played with it and left the broken bits strewn around your file system, do a bit of cleanup to help your future self. Sometimes it won’t release all the space you asked nicely about, and you may have to delete the app from Package Center.
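
One rough way to spot leftovers, assuming the usual DSM layout: properly installed packages mostly live out on a volume behind symlinks, so anything that is a real directory (not a symlink pointing at a volume) under /usr/local is sitting on the system partition-

# symlinks to /volume1/... are fine; real directories are taking up system space
ls -l /usr/local
du -sh /usr/local/* 2>/dev/null | sort -h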

Data Scrubbing
In Storage Manager, you can start a Data Scrubbing operation by clicking on the Storage Pool you want to target. This works by cleaning up data you’ve deleted; YMMV on whether it gives up enough space for you…

What’s Normal?

A ‘Normal’ DSM install might include directories like this-

bin
config
dev
etc
etc.defaults
initrd
lib
lib32
lib64
lost+found
mnt
opt
proc
root
run
sbin
sys
tmp
usr
var
var.defaults
volume1
volume2

But your system partition doesn’t include the /volume* directories (those are your storage volumes), and /proc just contains virtual process files. So to ignore these and get a quick scan of our Synology, SSH into the box, get root, move to the root (top) directory and issue this command-

du -h -d1 -X <(echo "volume*" && echo "proc") | sort -h

This means: report the size of each directory in human-readable form (-h), one level deep (-d1), excluding anything matching ‘volume*’ and the ‘proc’ directory (-X reads those exclusion patterns from the process substitution), then pipe the results through a human-readable sort (sort -h).
You will get a result like this-

0 ./config
0 ./sys
4.0K ./initrd
4.0K ./lost+found
4.0K ./mnt
4.0K ./.system_info
16K ./opt
28K ./root
44K ./.old_patch_info
2.5M ./etc.defaults
3.0M ./tmp
3.8M ./.log.junior
9.3M ./var.defaults
22M ./etc
23M ./run
35M ./.syno
315M ./var
1.2G ./usr
25G ./dev
26G .

Remember we need to get this under 2GB. Synology recommends you don’t install any 3rd party software for exactly this reason- anything can push you over the 2GB system partition limit. (Don’t panic about the 25G in /dev- we’ll get to that below.)
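
The simplest check of where you stand against that limit is to ask df about the root filesystem directly-

df -h /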

Removing 3rd party software


Look for stuff you installed first!
So let’s look only at the 3 biggest entries- /var, /usr and /dev- but don’t forget about /etc, because a lot of 3rd party packages get installed there.
After navigating around for a bit, we were able to delete the following files and folders-

/etc/netclient
/etc/netmaker
/usr/local/bin/netclient

Netclient is the client for Netmaker, a WireGuard-based mesh VPN we had been experimenting with.
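
Removal was just a matter of deleting those paths, e.g. (only do this if you’re sure you installed these yourself; the same pattern applies to the other leftovers below)-

rm -rf /etc/netclient /etc/netmaker
rm -f /usr/local/bin/netclient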

/etc/so-launcher
/var/so-launcher

Security Onion agent. This will need to be re-installed and relocated.

Well looky here-

/usr/local/bin/nebula

Nebula is another WireGuard-like secure mesh client we had been playing with.

Synology Software

/upd@te

This seems to contain the update we were trying to run; you can delete it, but this most likely isn’t the issue.
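
If you want to clear it anyway, something like this worked for us (our assumption- DSM recreates the directory the next time you download an update, but we haven’t tested that on every version)-

du -sh /upd@te
rm -rf /upd@te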

Virtual Machines- Shutdown to reclaim space

/dev/virtualization/libvirt/qemu

There was 16GB in this folder, with files named like ‘syno.qemu_back_mem.2Xc31f26-X036-X9a0-b2Xe-89eXXXXXXXXX’
Turns out it is a RAM cache for QEMU: if you stop your virtual machines, this will drop to zero. These files live in memory rather than on the system partition, and the OS knows it, so shutting VMs down before an upgrade probably won’t help- but it is worth mentioning because otherwise you might be chasing your tail looking for this…
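
You can sanity-check that for yourself- /dev is normally a RAM-backed devtmpfs/tmpfs mount, so nothing in it touches the 2GB system partition-

# expect devtmpfs/tmpfs entries here, not a disk partition
mount | grep ' /dev'
du -sh /dev/virtualization/libvirt/qemu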

Removing all of these things got us down to a reported 1.6GB of used space- is it enough?

Well, NO!

Removing old Logs

By getting rid of logs with a .tar or .xz suffix, you are only removing old logs- the OS compresses these rotated logs to save space.

cd /var/log

rm -rf logs.tar.*

rm -rf *.xz

Be very careful with these commands!
They should be safe if you execute them inside the /var/log/ directory, but think of the damage you could do elsewhere!
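
If a wildcard rm makes you nervous, a safer sketch with explicit paths (assuming your DSM’s find supports -delete) is to dry-run the match first-

# dry run- list what would be removed
find /var/log -maxdepth 1 \( -name 'logs.tar.*' -o -name '*.xz' \)

# then delete the same set
find /var/log -maxdepth 1 \( -name 'logs.tar.*' -o -name '*.xz' \) -delete
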
And that seemed to finally fix this issue for us- 138MB of logs, including 45MB of ‘SYNODISKLATENCYDB’ (this needs further research). However, we did need to do the other actions too- as you can see, this wouldn’t have fixed the space issue all on its own.


Got any more tips for finding space issues on Synology?
Let us know!
