Practical limit on the number of btrfs snapshots?

I am considering using btrfs on my data drive so that I can use snapper, or something like it, to take time-based snapshots. I believe this will let me browse old versions of my data. This would be in addition to my current off-site backup, since a drive failure would wipe out both the data and the snapshots.
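
For concreteness, a time-based snapshot of the kind snapper takes boils down to something like the following sketch (the mount point /mnt/data and the .snapshots directory are assumptions, not part of my actual setup):

    # Hypothetical sketch: take a read-only, timestamped snapshot of a
    # data subvolume; a tool like snapper does this on a timer.
    mkdir -p /mnt/data/.snapshots
    btrfs subvolume snapshot -r /mnt/data /mnt/data/.snapshots/$(date +%F-%H%M)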

From my understanding, btrfs snapshots do not take up much space (metadata and the blocks that have changed, plus maybe some overhead), so space doesn’t seem to be a constraint.

If I have a million snapshots (e.g., a snapshot every minute for two years), would that cause havoc, assuming I have enough disk space for the data, the changed data, and the metadata?

If there is a practical limit on the number of snapshots, does it depend on the number of files and/or size of files?

Asked By: StrongBad


As someone who has been using a btrfs filesystem with Arch Linux for almost two years now, I can safely say that there does not seem to be a practical limit on the number of snapshots that can easily be reached. There are some caveats, though. As a copy-on-write filesystem, btrfs is prone to fragmentation, so it is advisable to use the online defragmentation feature built into btrfs. Furthermore, one can make good use of btrfs’s compression feature. These measures should take care of most performance issues that could sensibly arise on a reasonably decent computer from creating a lot of snapshots.
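
To give a rough idea (the device and mount point names below are placeholders, and zstd is just one of the available compression algorithms), defragmentation and compression look something like this:

    # Recursively defragment the filesystem; -czstd recompresses the
    # files it touches with zstd. Beware: with many snapshots, defrag
    # can unshare reflinked extents and so increase space usage.
    btrfs filesystem defragment -r -v -czstd /mnt/data

    # Enable transparent compression via a mount option (or /etc/fstab).
    mount -o compress=zstd /dev/sdb1 /mnt/data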

As you might know, btrfs treats subvolumes as filesystems, and hence snapshots are subject to the same (very large) filesystem limits; according to the btrfs wiki, the maximum file size, for example, is 2^64 bytes == 16 EiB (that is, 2^4 × 2^60 bytes)[1].

Aside from these limitations, there can always be problems when you run out of space without noticing it immediately, because checking for free space on btrfs filesystems can be tricky: without understanding the different methods of measuring free space on a btrfs filesystem, one can easily lose track of how much space is actually left. One possible way to prevent this scenario is the use of quotas, which ensure that users (or the single user, if there is only one) can only use a certain amount of space. This concept is discussed very ably here and also here.
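
As a minimal sketch of the quota approach (the mount point, the subvolume path, and the 100G limit are made-up values):

    # Report free space the btrfs-aware way; plain df can be misleading.
    btrfs filesystem usage /mnt/data

    # Enable quota groups, cap one subvolume at 100 GiB, inspect usage.
    btrfs quota enable /mnt/data
    btrfs qgroup limit 100G /mnt/data/home
    btrfs qgroup show /mnt/data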

Last but not least, a warning: I am no expert on btrfs filesystems and only read about these things when I had the same question a while ago. Furthermore, there is always the problem that btrfs is a “fast moving target” (nice wording stolen, I think, from an Arch Linux wiki page), so things might change.

Answered By: lord.garbage

You can have a combined total of 2^64 snapshots and subvolumes.

The btrfs design wiki page says (emphasis mine):

Subvolumes are basically a named btree that holds files and directories. They have inodes inside the tree of tree roots and can have non-root owners and groups. Subvolumes can be given a quota of blocks, and once this quota is reached no new writes are allowed. All of the blocks and file extents inside of subvolumes are reference counted to allow snapshotting. *Up to 2^64 subvolumes may be created on the FS.*

Snapshots are identical to subvolumes, but their root block is initially shared with another subvolume. When the snapshot is taken, the reference count on the root block is increased, and the copy on write transaction system ensures changes made in either the snapshot or the source subvolume are private to that root. Snapshots are writable, and they can be snapshotted again any number of times. If read only snapshots are desired, their block quota is set to one at creation time.
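
To make the quoted behaviour concrete, here is a small sketch (all paths invented) showing that snapshots are ordinary writable subvolumes which can themselves be snapshotted:

    # Create a subvolume, snapshot it (writable by default), then
    # snapshot the snapshot; -r would create a read-only snapshot.
    btrfs subvolume create /mnt/data/work
    btrfs subvolume snapshot /mnt/data/work /mnt/data/work-snap1
    btrfs subvolume snapshot /mnt/data/work-snap1 /mnt/data/work-snap2

    # Both snapshots are listed as subvolumes in their own right.
    btrfs subvolume list /mnt/data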

Answered By: Tom Hale

While the technical limit on the number of snapshots is effectively unreachable, I asked on the BTRFS mailing list what the practical limit is:

The (practical) answer depends to some extent on how you use btrfs.

Btrfs does have scaling issues due to too many snapshots (or actually the reflinks snapshots use; dedup using reflinks can trigger the same scaling issues), and single to low double-digits of snapshots per snapshotted subvolume remains the strong recommendation for that reason.

But the scaling issues primarily affect btrfs maintenance commands themselves: balance, check, subvolume delete. While millions of snapshots will make balance, for example, effectively unworkable (it’ll sort of work but could take months), normal filesystem operations like reading and saving files don’t tend to be affected, except to the extent that fragmentation becomes an issue (though CoW filesystems such as btrfs are noted for fragmentation, unless steps like defrag are taken to reduce it).

It appears that using snapshots as an archival backup, similar to Time Machine or snapper, is not a good idea.
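
If you do end up using snapper anyway, a sketch of a retention policy that keeps the snapshot count in the recommended low double digits might look like this (the config name and the limit values are made-up, but the keys are standard snapper timeline settings):

    # Create a config for the data filesystem (path is a placeholder):
    #   snapper -c data create-config /mnt/data
    # Then in /etc/snapper/configs/data, thin the timeline aggressively:
    TIMELINE_CREATE="yes"
    TIMELINE_LIMIT_HOURLY="5"
    TIMELINE_LIMIT_DAILY="7"
    TIMELINE_LIMIT_WEEKLY="0"
    TIMELINE_LIMIT_MONTHLY="3"
    TIMELINE_LIMIT_YEARLY="0"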

Answered By: StrongBad