I have a bunch of external and internal HDDs that I use on a Linux system. I only have Linux systems, so using a Linux filesystem would make the most sense, right? However, I'm currently using NTFS everywhere, because it gives me the most usable space out of my HDDs.

I would like to switch to Linux filesystems now though, mostly because of permissions and compatibility (e.g. I can't resize my LUKS-encrypted NTFS partition under Linux; it keeps telling me to run chkdsk under Windows).

However, when I formatted those HDDs I tried out a bunch of different filesystems, and every Linux filesystem, even ext2, which as far as I know has no journaling, used a lot of space for itself. I don't recall the exact values, but NTFS gave me over 100 GB more usable space on a 2 TB HDD, which is a lot.

So my question is: is there a way to make ext filesystems use less space for themselves? Or is there another filesystem (I've tried ext2, ext3, ext4, NTFS and vfat; none of them came even close to the usable space NTFS offered me) with perfect Linux support and great usable space?

I'd love to hear how and why filesystems (especially ext2, which has no journaling) use that much more space than NTFS, and I don't know where else to ask. Ideally I'd like a way to use ext4 without journaling and without anything else that uses up this much space, if that's possible.

  • 1
    Have you seen this thread? – JakeGould Aug 5 at 18:42
  • 4
    I have, and it explained what uses up the extra space but the difference between NTFS and ext is MUCH bigger than between reiserfs and ext, and I'm wondering if there is any way to make it smaller. For example on a 1TB HDD I'm able to use 989GB with NTFS. ext4 would give me around 909GB. – confetti Aug 5 at 18:48
  • Fair enough. Decent question and the answer is enlightening too. – JakeGould Aug 5 at 22:33
  • 3
    How do you actually measure what space is available? This is important because, depending on which values you look at, you may or may not see the effect of the 5% reservation mentioned in the linked question. – eMBee Aug 6 at 4:04
  • 2
    Keep in mind that journaling on filesystems such as ext3 and ext4 is a good thing. It's pretty easy to lose power to an external drive or unplug it by accident if it's USB. When that happens, it's often no big deal because the filesystem heals itself using the journal when it comes back up. Without that safety net, things would be much worse. It's not just a case of more is better. – Joe Aug 6 at 20:18
up vote 93 down vote accepted

By default, ext2 and its successors reserve 5% of the filesystem for use by the root user. This reduces fragmentation, and makes it less likely that the administrator or any root-owned daemons will be left with no space to work in.

These reserved blocks prevent programs not running as root from filling your disk. Whether these considerations justify the loss of capacity depends on what the filesystem is used for.

The 5% amount was set in the 1980s when disks were much smaller, but was just left as-is. Nowadays 1% is probably enough for system stability.

The reservation can be changed using the -m option of the tune2fs command:

tune2fs -m 0 /dev/sda1

This will set the reserved blocks percentage to 0% (0 blocks).

To get the current value (among others), use the command:

tune2fs -l <device> 
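
As a concrete sketch (assuming /dev/sdb1 is the data partition and /mnt/data its mount point; adjust both to your setup), you could check the current reservation and keep a 1% reserve rather than dropping it to zero:

tune2fs -l /dev/sdb1 | grep -i 'Reserved block count'
tune2fs -m 1 /dev/sdb1
df -h /mnt/data

After the change, the "Avail" column of df should grow by roughly 4% of the partition size. The same percentage can also be chosen at creation time with mkfs.ext4 -m 1.
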
  • 10
    This would explain the immense difference in usable space perfectly (as 5% of 2TB is 100GB). The disks won't be used for anything root- or system-related, so I think it would be safe to disable this. I've got a question though: how do root-owned programs know there is more free space than non-root programs? Running df as non-root vs. root shows no difference. – confetti Aug 5 at 18:50
  • 12
    @confetti: Because the VFS doesn't reject their attempts to write to the disk with an error (until the volume is actually full, of course). – Ignacio Vazquez-Abrams Aug 5 at 18:57
  • 1
    tune2fs -l <device> should give this value among others. The 5% amount was set in the 1980s when disks were much smaller, but was just left as-is. Nowadays 1% is probably enough for system stability. – harrymc Aug 5 at 19:04
  • 7
    XFS reserves the smaller of 5% or 8192 blocks (32 MiB), so the reserved amount is generally tiny compared to the size of the filesystem. – Michael Hampton Aug 5 at 20:41
  • 5
    Thank you very much everyone for the explanations. This helped me understand greatly. My disk used to fill up entirely, to its last byte, yet my system did not fail completely; now I understand why. – confetti Aug 6 at 4:34

If the data you intend to store on it is compressible, btrfs mounted with compress=zstd (or compress-force=zstd) will probably use significantly less disk space than ext*.
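
A minimal sketch, assuming a kernel recent enough for zstd support (roughly 4.14 or newer); /dev/sdb1 and /mnt/data are placeholder names:

mkfs.btrfs /dev/sdb1
mount -o compress=zstd /dev/sdb1 /mnt/data

or, to make it permanent, an /etc/fstab entry along the lines of:

/dev/sdb1  /mnt/data  btrfs  compress=zstd  0  0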

  • This will make btrfs transparently compress your data before writing it to disk, and transparently decompress it when reading it back. Also, ext4 pre-allocates all inodes at filesystem creation, while btrfs creates them as needed; I guess that might save some space too.
  • 1
    Do you mind adding more information to this answer? (How it works, what it does, maybe a reference, ...) – confetti Aug 6 at 8:04
  • @confetti like this? patchwork.kernel.org/patch/9817875 – hanshenrik Aug 6 at 8:06
  • I really like this idea, but more information about how this would impact speed and performance and such would be nice. – confetti Aug 6 at 19:53
  • 2
    @confetti, since you're using a hard drive, it'll probably improve performance. CPUs are so much faster than hard drives that the slow part of disk access is getting the data on and off the disk; the time spent compressing or decompressing won't be noticeable. – Mark Aug 7 at 1:55
  • 2
    On the other hand, most types of large files nowadays (e.g. images, audio, video, even most rich text document formats) tend to be already compressed, and generally do not benefit from additional compression. At least not of the simple general-purpose kind performed at the filesystem level. – Ilmari Karonen Aug 7 at 10:33

Another point which has not been mentioned yet is the number of inodes you reserve on your filesystem.

By default, mkfs creates a number of inodes which should make it possible to put a whole lot of very small files into your filesystem. If you know that the files will be very big and you will only put a small number of files on the FS, you can reduce the number of inodes.

Take care! This number (or rather the ratio between space and the number of inodes) can only be set at filesystem creation time. Even when extending the FS, the ratio remains the same.
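
As a rough sketch (the device name and mount point are just examples): mke2fs accepts -i to set the bytes-per-inode ratio and -N to set an absolute inode count, and the -T largefile usage type raises the ratio from the usual one inode per 16 KiB to one per 1 MiB. With the default 256-byte inodes that reclaims roughly 1.5% of the disk:

mkfs.ext4 -T largefile /dev/sdb1
df -i /mnt/data

The second command, run after mounting, shows the total, used and free inode counts; make sure the total still comfortably exceeds the number of files you ever expect to store, because it cannot be raised later.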

  • Alternatively, if you know you're going to store lots of really tiny files, you can increase the number of inodes and decrease the block size so that you don't waste as much space. (Each file must consume a minimum of one block, even if it's only 1 byte; use ls -ls to compare the file size with what's actually used on disk, as shown in the sketch after these comments.) – Perkins Aug 7 at 17:59
  • @Perkins You are right, but I think this only holds for VERY tiny files: the default block size is 4 KiB (IIRC) and the minimum is 1 KiB. So there is not very much to win, unless your disk is really full of those files. But nevertheless, I might dive into this tomorrow. – glglgl Aug 7 at 20:49
  • 2
    Or alternatively use btrfs, which creates inodes as needed. Whereas ext4's inodes are allocated at filesystem creation time, cannot be added afterwards and have a hard limit of 4 billion, btrfs creates inodes dynamically with a hard limit of 2^64 (around 18.4 quintillion), which is around 4.6 billion times higher than the hard limit of a maxed-out ext4 :p – hanshenrik Aug 7 at 22:53
  • Reminds me of setting up a usenet spool. ext4 will store tiny files (the limit is somewhere from 60 to 160 bytes, depending on lots of things) within the inode itself. – mr.spuratic Aug 8 at 8:43
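
To see the per-file overhead described in the comments above, here is a tiny experiment (the filename is arbitrary; the numbers assume a default 4 KiB block size):

printf x > tinyfile
ls -ls tinyfile

The first column of ls -ls shows the space actually allocated (in KiB), typically 4 for this 1-byte file, unless the filesystem can inline it in the inode as the comment just above mentions.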
