Why Btrfs?
I recently tested Btrfs as the file system for my /home partition (which was previously on ext4).
I was impressed by what this file system makes possible, but I also came to the conclusion that it is not for me.
As a quick reminder, the goal of this file system is to bring to Linux a fully featured file system similar to ZFS. Some of its features promise a lot of awesomeness: snapshots, native RAID, automatic defragmentation and repair, etc.
Wouldn’t it be cool to have such a file system for your data? Among these features, snapshotting really is the killer one. Think of it as a global git for all your data: you can track any file’s history, diff versions against each other, and revert to a chosen version, anytime and online.
Btrfs has been under development for a while, and development is still ongoing. However, the first stable version was finally released last year.
Many people warn that it is not production-ready yet. That seems obvious for critical production systems under heavy load, or for those using the most advanced features (e.g. RAID). But what about a simple /home, mainly using snapshots (which have been around for a while)?
You will see that there are still some issues with virtualization.
Disclaimer 1: this is in no way a review or a benchmark of Btrfs. Consider it simply as some feedback for my specific use case.
This chapter is a summary of procedures found in various resources, along with my feedback.
Disclaimer 2: first of all, make several backups of your entire /home, and make sure they are operational and complete. Either way, be aware that there is obviously some inherent risk for your data in manipulating your home partition. So do not come back and insult me if you lose any data.
First, note that there is a conversion utility, btrfs-convert, for converting an existing ext4 partition to Btrfs in place. While this sounds cool, it did not work well with my partition and left many corrupted inodes.
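For reference, the in-place conversion is invoked roughly as follows (a sketch for my device path; as far as I know, btrfs-convert keeps a rollback image that -r can restore, but given my corruption issues, treat this with caution):

```shell
# umount /home                                 # the partition must not be mounted
# btrfs-convert /dev/mapper/system-home        # convert ext4 -> btrfs in place
# btrfs-convert -r /dev/mapper/system-home     # roll back to ext4 if it goes wrong
```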
So my advice is to just make a good backup of your home:
% rsync -av /home /your/backup/
Then, log out and format the partition as root:
# mount | grep home
/dev/mapper/system-home on /home type ext4 (rw,noatime,data=ordered)
# umount /home
# mkfs.btrfs /dev/mapper/system-home
Change the file system and its options in /etc/fstab. For example:
/dev/system/home /home ext4 defaults,noatime 1 1
should become (also note the change on the last digit):
/dev/system/home /home btrfs defaults,noatime,ssd,space_cache,compress=lzo 1 0
Re-mount /home and you are done!
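To double-check that the new file system and mount options took effect, you can inspect the mount and the Btrfs space usage (a quick sanity check; the output will of course differ on your system):

```shell
# mount | grep home                              # should now show type btrfs with your options
# btrfs filesystem df /home                      # data/metadata allocation on the new fs
# btrfs filesystem show /dev/mapper/system-home  # device and usage summary
```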
My main purpose in testing Btrfs was the snapshot feature, in the hope of keeping a version history of each file and avoiding accidental deletions and changes.
Of course, one could use the Btrfs commands and implement snapshots manually. But why reinvent the wheel?
The people behind snapper have already made a service especially for that. It is basically a wrapper over Btrfs that takes automatic snapshots in the background, based on your frequency settings, and makes them easy to manage.
Once installed, it can be enabled with the following command:
# snapper -c home create-config /home
This creates a configuration file, where you can adjust the number of snapshots you want to keep per hour, day, week, month, etc. Of course, don’t keep too many snapshots, as they will eat free space, especially if you happen to move large amounts of data around. Hourly and daily snapshots are fine, as they are cleaned up quickly. But monthly or yearly snapshots would consume a lot of space and would be pretty useless for a /home.
Here is what I used, without consuming much more than 10 GB:
# subvolume to snapshot
# filesystem type
# users and groups allowed to work with config
# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
# start comparing pre- and post-snapshot in background after creating
# run daily number cleanup
# limit for number cleanup
# create hourly snapshots
# cleanup hourly snapshots after some time
# limits for timeline cleanup
# cleanup empty pre-post-pairs
# limits for empty pre-post-pair cleanup
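For illustration, a snapper configuration matching the comments above would look roughly like this (the file typically lives under /etc/snapper/configs/home; the values here are hypothetical, not necessarily the ones I used):

```shell
# subvolume to snapshot
SUBVOLUME="/home"
# filesystem type
FSTYPE="btrfs"
# users and groups allowed to work with config
ALLOW_USERS="phocean"
# start comparing pre- and post-snapshot in background after creating
BACKGROUND_COMPARISON="yes"
# run daily number cleanup
NUMBER_CLEANUP="yes"
# limit for number cleanup
NUMBER_LIMIT="50"
# create hourly snapshots
TIMELINE_CREATE="yes"
# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"
# limits for timeline cleanup
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"
# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"
```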
Now, let’s play a little. In the following sequence, we create a file containing “Hello World!”, take a manual snapshot, change the file, and display the differences:
# vim test.txt
# snapper -c home create --description "before test"
# vim test.txt
# snapper -c home list
Type | # | Pre # | Date | User | Cleanup | Description | Userdata
single | 0 | | | root | | current |
single | 1 | | Sun Mar 13 19:44:21 2016 | root | | before test |
single | 2 | | Sun Mar 13 19:45:12 2016 | root | | created test |
single | 3 | | Sun Mar 13 19:52:39 2016 | root | | update test |
single | 4 | | Sun Mar 13 20:00:01 2016 | root | timeline | timeline |
single | 5 | | Sun Mar 13 21:00:01 2016 | root | timeline | timeline |
single | 6 | | Sun Mar 13 22:00:01 2016 | root | timeline | timeline |
# snapper -c home diff 1..0
--- "/home/.snapshots/2/snapshot/phocean/test.txt" 2016-03-13 19:44:53.370641373 +0100
+++ "/home/phocean/test.txt" 2016-03-13 19:45:27.226586459 +0100
@@ -1 +1,2 @@
Neat, isn’t it? Now, what if we decide to restore the file to this snapshot:
# snapper -c home undochange 1..0 /home/phocean/test.txt
Note that all these operations can be done against the entire partition (no argument needed), a folder or a file.
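For instance, the same restore works at any of those granularities (the paths are from my setup; adjust them to yours):

```shell
# snapper -c home undochange 1..0                          # the entire subvolume
# snapper -c home undochange 1..0 /home/phocean/Documents  # a folder
# snapper -c home undochange 1..0 /home/phocean/test.txt   # a single file
```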
Regarding regular files, I had no issues at all. After a week of intensive use, I had already had the occasion to enjoy the benefits of having snapshots and being able to restore a file.
On the performance side, even though I haven’t run any benchmarks, it feels at least as fast as ext4. It is said that under some conditions, compression can be a big boost to read rates.
On the compression side, on my 400 GB partition it allowed me to reclaim around 20 GB of space. Of course, the gain you can expect depends entirely on the sorts of files you have (you won’t gain much on files that are already compressed or encrypted).
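If you want to measure the actual gain yourself, the compsize tool (a separate package, named btrfs-compsize on some distributions) reports how much space each compression algorithm is saving on a given path:

```shell
# compsize /home    # shows disk usage vs. uncompressed size, per algorithm
```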
As warned on the official wiki itself, you should not use Btrfs as-is with databases or virtualization solutions.
As the official wiki puts it:
Files with a lot of random writes can become heavily fragmented (10000+ extents) causing thrashing on HDDs and excessive multi-second spikes of CPU load on systems with an SSD or large amount of RAM.
Indeed, I quickly experienced some issues with VirtualBox. Under heavy I/O, with several machines running at a time, I had the guest file systems corrupted more than once, and so badly that the guest machine was unrecoverable (even with snapshots). Sometimes I got plenty of ext4 errors; sometimes it just froze while copying a bunch of files or running an apt-get upgrade...
The workarounds did not cut it for me:
- I did not even test disabling CoW for the whole partition, as it kills one of the main advantages of using Btrfs.
- I tried disabling CoW for the whole VM folder. While corruption became less frequent, it still occurred after a while.
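For reference, disabling CoW for a directory is done with chattr. Note that the attribute only takes effect for files created after it is set, so it should be applied to an empty directory before moving the disk images in (the path here is hypothetical):

```shell
# mkdir /home/phocean/vm-images
# chattr +C /home/phocean/vm-images    # 'C' = no copy-on-write for newly created files
# lsattr -d /home/phocean/vm-images    # verify the attribute is set
```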
So I would simply advise against putting any virtual machine on a Btrfs partition until this issue definitely gets sorted out. I use virtual machines intensively at work and need them to be reliable.
Btrfs is awesome and pretty stable at this time, unless you need to host virtual machines. You could still keep a dedicated ext4 partition for your VMs and enjoy Btrfs for the rest of your home.
To be honest, I did not bother (not wanting to manage several partitions) and switched back to ext4 for everything, in the expectation of better days. I am not sure whether this should be addressed on the Btrfs side, the VirtualBox side, or both.