16 points · submitted 11 hours ago by needanke@feddit.org to c/selfhosted@lemmy.world

I recently moved my files to a new zfs-pool and used that chance to properly configure my datasets.

This led me to discovering zfs-deduplication.

Most of my storage is used by my Jellyfin library (~7–8 TB), which is mostly uncompressed Blu-ray rips, so I thought I might be able to save some storage by using deduplication in addition to compression.

Has anyone here used that for similar files before? What was your experience with it?

I am not too worried about performance. The dataset in question rarely changes; basically only when I add more media every couple of months. I also overshot my CPU target when originally configuring my server, so there is a lot of headroom there. I have 32 GB of RAM, which is not really fully utilized either (though I would not mind upgrading to 64 GB too much).

My main concern is that I am unsure it is useful. I suspect that, given the amount of data and the similarity in type, there would statistically be a lot of block-level duplication, but I could not find any real-world data or experiences on that.
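For reference, it looks like zdb can estimate this without enabling anything; something like the following (with `tank` standing in for the actual pool name) should print a simulated dedup table and an estimated ratio:

```sh
# simulate deduplication on the existing pool; read-only, nothing is
# changed on disk, but it walks all data so it can take a while
zdb -S tank
# the summary line reports the estimated dedup ratio, e.g. "dedup = 1.02"
```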

all 11 comments
[-] Moonrise2473@feddit.it 3 points 1 hour ago

Uncompressed Blu-ray rips are almost the same size when compressed with lossless compression. The binary content of an h264 file is close to random bits, so deduplication is almost a waste of CPU time. Maybe you can save space on the useless repeated media, like trailers and other ads, in the Blu-ray ISOs.
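You can spot-check that claim on one of your own rips; something like this (the filename is just an example) compares the losslessly compressed size against the original:

```sh
# compress one rip losslessly to stdout and count the bytes; for video
# payloads the two numbers usually land within a percent or two
zstd -19 --stdout movie.mkv | wc -c
stat -c %s movie.mkv   # original size in bytes, for comparison
```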

[-] pimeys@lemmy.nauk.io 14 points 10 hours ago

You should maybe read about the use cases for deduplication before using it. Here's one recent article:

https://despairlabs.com/blog/posts/2024-10-27-openzfs-dedup-is-good-dont-use-it/

If you mostly store legit Blu-ray rips, the answer is probably no, you should not use zfs deduplication.

[-] undefined@lemmy.hogru.ch 1 points 2 hours ago

I’m in almost the exact same situation as OP, 8 TB of raw Blu-ray dumps except I’m on XFS. I ran duperemove and freed ~200 GB.
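For anyone wanting to try the same, the invocation was roughly this (the path is a placeholder):

```sh
# hash file extents recursively, then ask the kernel to share the
# duplicates via reflinks (supported on XFS and Btrfs)
duperemove -rdh /srv/media
```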

[-] friend_of_satan@lemmy.world 2 points 10 hours ago* (last edited 8 hours ago)

I was also going to link this. I started using zfs 10-ish years ago and used dedup when it came out, and it was really not worth it except for archiving a bunch of stuff I knew had gigs of duplicate data. Performance was so poor.

[-] BCsven@lemmy.ca 2 points 7 hours ago

Something like fslint might be what you want: scan folders, list duplicates, and choose how to deal with them. It's more manual, but I think it's what you're actually trying to achieve.

[-] greyfox@lemmy.world 2 points 8 hours ago

Like most have said, it is best to stay away from ZFS deduplication. Especially if your dataset is media: the chance of an entire ZFS block being identical to any other is small, unless you somehow have multiple copies of the same content.

Imagine two MP3s with the exact same music content but slightly different artist metadata. Make one file a single bit longer or shorter at the beginning and, even if the file spans multiple blocks, ZFS won't be able to deduplicate a single byte. A single bit offsetting the rest of the file is enough to throw off the block checksums across every block in the file.
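A toy sketch of why that happens (the data and the tiny 8-byte "block" are made up, standing in for a ZFS record):

```python
import hashlib

BLOCK = 8  # stand-in for a ZFS recordsize

def block_hashes(data: bytes) -> set[str]:
    """Hash fixed-size blocks from offset 0, the way ZFS sees them."""
    return {hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)}

frames = bytes(range(48, 120)) * 4   # identical "audio" payload
a = b"X" + frames                    # one-byte metadata header
b = b"XY" + frames                   # same payload, header one byte longer

shared = block_hashes(a) & block_hashes(b)
print(len(shared))  # 0: one byte of offset misaligns every single block
```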

To contrast with ZFS, enterprise backup/NAS appliances with deduplication usually do a lot more than block level checks. They usually check for data with sliding window sizes/offsets to find more duplicate data.
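A minimal sketch of that sliding-window idea, using a made-up rolling hash (real appliances use something like Rabin fingerprints or buzhash): because cut points are chosen by content rather than by offset, an inserted byte only disturbs the chunks near the edit.

```python
def cdc_chunks(data: bytes, mask: int = 0x1F, min_len: int = 8) -> list[bytes]:
    """Toy content-defined chunking with a ~16-byte rolling window."""
    out, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFF   # older bytes shift out of the hash
        if (h & mask) == mask and i + 1 - start >= min_len:
            out.append(data[start:i + 1])  # content decided this cut point
            start = i + 1
    if start < len(data):
        out.append(data[start:])
    return out

payload = b"the same audio frames repeated many times over" * 4
a_chunks = set(cdc_chunks(b"TAG1" + payload))
b_chunks = set(cdc_chunks(b"TAG99" + payload))
print(len(a_chunks & b_chunks))  # typically nonzero: cuts realign after the header
```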

There are still some use cases where ZFS dedup can help, like multiple full backups of VMs. A VM image has a fixed size, so the offset issue above doesn't apply. But beware that enabling deduplication on even a single ZFS filesystem affects the entire pool, including filesystems that have deduplication disabled: the deduplication table is global to the pool, and once you have turned it on you really can't get rid of it. If you get into a situation where you don't have enough memory to keep the deduplication table in RAM, ZFS will grind to a halt, and the only way to completely remove deduplication is to copy all of your data to a new pool.
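To make the per-dataset vs. pool-wide split concrete (pool and dataset names here are placeholders):

```sh
# dedup is a per-dataset property, but the dedup table (DDT) it feeds
# lives at the pool level
zfs set dedup=on tank/vm-backups   # only writes to this dataset get deduped
zfs get -r dedup tank              # confirm which datasets have it enabled
zpool status -D tank               # -D prints the pool-wide DDT statistics
```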

If you think this feature would still be useful for you, you might want to wait for the 2.3 release (which isn't too far off) and its new fast-dedup feature, which fixes, or at least mitigates, a lot of the major issues with ZFS dedup.

More info on the fast dedup feature here https://github.com/openzfs/zfs/discussions/15896

[-] Decipher0771@lemmy.ca 4 points 10 hours ago

I think the universal consensus is that, outside of one very specific use case (multiple VDI desktops sharing the same base image), ZFS dedup is completely useless at best, and at worst will destroy your dataset by making it unmountable on any system with less RAM than needed. In every other use case, the savings are not worth the trouble.

Even in the VDI use case, unless you have MANY copies of said disk images(like 5+ copies of each), it’s still not worth the increase in system resources needed to use ZFS dedupe.

It’s one of those “oooh shiny” nice features that everyone wants to use, but will regret it nearly every time.

[-] nottelling@lemmy.world 4 points 10 hours ago

ZFS dedup is memory constrained, and memory use scales with the number of block hashes held in the dedup table.

If performance isn't a concern, you're better off compressing your media. You'll get similar storage efficiency with less crash consistency risk.
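As a back-of-the-envelope example, assuming the often-quoted figure of roughly 320 bytes per classic DDT entry and the default 128 KiB recordsize, OP's ~8 TiB works out to something like:

```python
# rough dedup-table memory estimate; both constants are assumptions:
# ~320 bytes per classic DDT entry, 128 KiB default recordsize
pool_bytes = 8 * 1024**4          # ~8 TiB of unique data
recordsize = 128 * 1024           # bytes per block
entries = pool_bytes // recordsize
ddt_bytes = entries * 320
print(f"{entries:,} blocks -> ~{ddt_bytes / 1024**3:.0f} GiB of dedup table")
# 67,108,864 blocks -> ~20 GiB of dedup table
```

On a 32 GB box that table alone would eat most of the RAM that ARC would otherwise use for caching.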

[-] IsoKiero@sopuli.xyz 3 points 10 hours ago

ZFS in general is pretty memory hungry. I set up my Proxmox server with ZFS pools a while ago and now I kind of regret it. ZFS itself is very nice and has a ton of useful features, but I just don't have the hardware or the usage pattern to benefit from it that much on my server. I'd rather have that thing running on LVM and/or software RAID to free up more usable memory for my VMs. That's one of the projects I've been planning for the server: replacing the ZFS pools with something that suits my usage pattern better. But that's a whole other story, and it requires some spare money and spare time, neither of which I really have at hand right now.

[-] AbouBenAdhem@lemmy.world 2 points 10 hours ago* (last edited 10 hours ago)

I haven’t tried it because I’ve read a lot of negative discussions of it—and because (by my understanding) the only reasonable use case would be if there were a large number of users and each user is likely to have copies of the same files (so you can't just manually de-dupe).

this post was submitted on 09 Jan 2025
16 points (90.0% liked)
