29
submitted 1 year ago* (last edited 1 year ago) by Kalcifer@lemmy.world to c/linux@lemmy.ml

I'm trying to find a good method of making periodic, incremental backups. I assume that the most minimal approach would be to have a Cronjob run rsync periodically, but I'm curious what other solutions may exist.

I'm interested in both command-line, and GUI solutions.

top 39 comments

[-] fckreddit@lemmy.ml 7 points 1 year ago

I don't. I lose my data like all the cool (read: fool) kids.

[-] xavier666@lemm.ee 3 points 1 year ago

I too rawdog linux like a chad

[-] inex@feddit.de 4 points 1 year ago

Timeshift is a great tool for creating incremental backups. Basically, it's a frontend for rsync, and it works great. If needed, you can also use it from the CLI.

[-] anarchyreloaded@lemmy.world 2 points 1 year ago

I use timeshift. It really is the best. For servers I go with restic.

[-] lord_ryvan@ttrpg.network 1 points 1 year ago* (last edited 1 year ago)

I use timeshift because it was pre-installed. But I can vouch for it; it works really well, and lets you choose and tweak every single thing in a legible user interface!

[-] NoXPhasma@lemmy.world 2 points 1 year ago

I use Back In Time to back up my important data to an external drive. And for snapshots I use Timeshift.

[-] garam@lemmy.my.id 0 points 1 year ago

Back In Time

Doesn't Timeshift have the same purpose, or is it just a matter of preference?

[-] NoXPhasma@lemmy.world 1 points 1 year ago

Yes, it's kind of the same purpose. But Timeshift runs as a cron job and allows for an easy rollback, while I use BIT for manual backups.

[-] knfrmity@lemmygrad.ml 2 points 1 year ago

I have scripts scheduled to run rsync on local machines, which save incremental backups to my NAS. The NAS in turn is incrementally backed up to a remote server with Borg.

Not all of my machines are on all the time so I also built in a routine which checks how old the last backup is, and only makes a new one if the previous backup is older than a set interval.

I also save a lot of my config files to a local git repo, the database of which is regularly dumped and backed up in the same way as above.
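The age-check routine described above can be sketched by comparing the mtime of a marker file touched after each successful run (the interval and marker path are made-up values for illustration; `stat -c` is the GNU coreutils form):

```shell
#!/bin/sh
# Only run the backup when the last one is older than a chosen interval.
# MAX_AGE and the marker path are illustrative.
MAX_AGE=$((24 * 3600))                    # one day, in seconds

backup_is_due() {
    marker="$1"
    [ -e "$marker" ] || return 0          # no previous backup: due
    last=$(stat -c %Y "$marker")          # mtime of last successful run
    now=$(date +%s)
    [ $((now - last)) -ge "$MAX_AGE" ]
}

run_backup() {
    marker="$1"
    if backup_is_due "$marker"; then
        # ... rsync invocation would go here ...
        touch "$marker"                   # record successful completion
    fi
}
```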

[-] HarriPotero@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

I rotate between a few computers. Everything is synced between them with syncthing and they all have automatic btrfs snapshots. So I have several physical points to roll back from.

For a worst case scenario everything is also synced offsite weekly to a pCloud share. I have a little script that mounts it with pcloudfs, encfs and then rsyncs any updates.

[-] HumanPerson@sh.itjust.works 2 points 1 year ago

When I do something really dumb I typically just use dd to create an iso. I should probably find something better.

[-] CrabAndBroom@lemmy.ml 2 points 1 year ago

I use Borg backup with Vorta for a GUI. Hasn't let me down yet.

[-] akash_rawal@lemmy.world 2 points 1 year ago

I use rsync+btrfs snapshot solution.

  1. Use rsync to incrementally collect all data into a btrfs subvolume
  2. Deduplicate using duperemove
  3. Create a read-only snapshot of the subvolume

I don't have a backup server, just an external drive that I only connect during backup.

Deduplication is mediocre; I am still looking for a snapshot-aware duperemove replacement.
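The three numbered steps above might look something like this as a script; the paths and subvolume names are assumptions, and it is shown as a function rather than something run here:

```shell
#!/bin/sh
# Sketch of the rsync -> deduplicate -> read-only snapshot cycle.
# All paths and subvolume names are illustrative.
backup_to_btrfs() {
    src="/home"
    subvol="/mnt/backup/data"             # btrfs subvolume collecting backups
    snapdir="/mnt/backup/snapshots"

    # 1. Incrementally collect all data into the subvolume
    rsync -a --delete "$src/" "$subvol/"

    # 2. Deduplicate (duperemove is not snapshot-aware, hence the complaint above)
    duperemove -rd --hashfile="$subvol/.hashes" "$subvol"

    # 3. Freeze the state as a read-only snapshot
    btrfs subvolume snapshot -r "$subvol" "$snapdir/$(date +%Y-%m-%d)"
}
```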

[-] JoMiran@lemmy.ml 2 points 1 year ago

I'm not trying to start a flame war, but I'm genuinely curious. Why do people like btrfs over zfs? Btrfs seems very much so "not ready for prime time".

[-] EddyBot@feddit.de 5 points 1 year ago

btrfs is included in the Linux kernel; zfs is not on most distros. An external kernel module borking on a kernel upgrade does happen sometimes, and even the small chance of that is probably scary enough for a lot of people.

[-] JoMiran@lemmy.ml 1 points 1 year ago

Fair enough

[-] akash_rawal@lemmy.world 5 points 1 year ago

Features necessary for most btrfs use cases are all stable, plus btrfs is readily available in the Linux kernel, whereas for zfs you need an additional kernel module. The availability advantage of btrfs is a big plus in case of a disaster, i.e. no additional work is required to recover your files.

(All the above only applies if your primary OS is Linux, if you use Solaris then zfs might be better.)

[-] Rockslide0482@discuss.tchncs.de 2 points 1 year ago

I've only ever run ZFS on a Proxmox/server system, but doesn't it require a not-insignificant amount of resources? BTRFS is not flawless, but it does have a pretty good feature set.

[-] JoMiran@lemmy.ml 2 points 1 year ago

At the core it has always been rsync and cron. Sure, I add a NAS and things like rclone+Cryptomator to have extra copies of synchronized data (mostly documents and media files) spread around, but it's always rsync+cron at the core.

[-] PhilBro@lemmy.world 1 points 1 year ago

I run OpenMediaVault and back up using BorgBackup. Super easy to set up, use, and modify.

[-] useless@lemmy.ml 1 points 1 year ago* (last edited 1 year ago)

I use btrbk to send btrfs snapshots to a local NAS. Consistent backups with no downtime. The only annoyance (for me at least) is that both send and receive ends must use the same SELinux policy or labels won't match.
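For anyone curious, a btrbk setup is mostly one config file describing retention and targets; a minimal sketch (the volume, subvolume, and target paths are assumptions, not a tested config):

```
# /etc/btrbk/btrbk.conf (illustrative paths)
timestamp_format        long
snapshot_preserve_min   2d
snapshot_preserve       14d
target_preserve_min     no
target_preserve         20d 10w

volume /mnt/pool
  snapshot_dir btrbk_snapshots
  subvolume home
    target send-receive ssh://nas/mnt/backup/btrbk
```

A cron job or systemd timer then just runs `btrbk run`.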

[-] happyhippo@feddit.it 1 points 1 year ago

Vorta + borgbase

The yearly subscription is cheap and fits my storage needs by quite some margin. It gives me peace of mind to have an off-site backup.

I also store my documents on Google Drive.

[-] SethranKada@lemmy.ca 1 points 1 year ago

I use Pika Backup, which uses borg backup under the hood. It's pretty good, with amazing documentation. The main issue I have with it is that it's really finicky and kind of a pain to set up, even if it "just works" after that.

[-] nikodunk@lemmy.ml 2 points 1 year ago

Can you restore from it? That's the part I've always struggled with.

[-] SethranKada@lemmy.ca 1 points 1 year ago

The way pika backup handles it, it loads the backup as a folder you can browse. I've used it a few times when hopping distros to copy and paste stuff from my home folder. Not very elegant, but it works and is very intuitive, even if I wish I could just hit a button and reset everything to the snapshot.

[-] Jajcus@kbin.social 1 points 1 year ago

Kopia or Restic. Both do incremental, deduplicated backups and support many storage services.

Kopia provides a UI for the end user and has integrated scheduling. Restic is a powerful CLI tool that you build your backup system on, but usually one does not need more than a cron job for that. I use a set of custom systemd jobs and generators for my restic backups.
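As an illustration of how little is needed for the cron-job case, a restic job can be as small as this (the repository path, password file location, and retention values are placeholders):

```shell
#!/bin/sh
# Minimal nightly restic job for cron; all paths are illustrative.
restic_job() {
    export RESTIC_REPOSITORY="/mnt/backup/restic-repo"
    export RESTIC_PASSWORD_FILE="/root/.restic-password"

    # Back up, skipping directories tagged with CACHEDIR.TAG
    restic backup /home /etc --exclude-caches

    # Thin out old snapshots according to a retention policy
    restic forget --prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
}

# crontab entry, e.g. nightly at 01:00:
# 0 1 * * * /usr/local/bin/restic-job.sh
```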

Keep in mind that backups on local, constantly connected storage are hardly backups. When the machine fails hard, the backups are lost together with the original data. So Timeshift alone is not really a solution. Also: test your backups.

[-] darcy@sh.itjust.works 1 points 1 year ago

dont keep anything u would be upset to lose /s

[-] A1kmm@lemmy.amxl.com 1 points 1 year ago* (last edited 1 year ago)

I use Restic, called from cron, with a password file containing a long randomly generated key.

I back up with Restic to a repository on a different local hard drive (not part of my main RAID array), with --exclude-caches as well as excluding lots of files that can easily be re-generated/re-installed/re-downloaded (so my backups are focused on important data). I make sure to include all important data including /etc (and also back up the output of dpkg --get-selections as part of my backup). I auto-prune my repository to apply a policy on how far back I keep (de-duplicated) Restic snapshots.

Once the backup completes, my script runs du -s on the backup and emails me if it is unexpectedly too big (e.g. I forgot to exclude some new massive file), otherwise it uses rclone sync to sync the archive from the local disk to Backblaze B2.
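The size sanity check described above can be sketched as a small predicate around `du -s`; the threshold, paths, bucket name, and the mail command in the usage comment are all assumptions:

```shell
#!/bin/sh
# After a backup, decide whether the repository grew beyond an
# expected size. Threshold and paths are illustrative.
repo_too_big() {
    repo="$1"
    max_kb="$2"                           # alert threshold in KiB
    size_kb=$(du -sk "$repo" | cut -f1)   # du -sk: total size in KiB
    [ "$size_kb" -gt "$max_kb" ]
}

# Usage after the backup completes (mail/rclone invocations illustrative):
# if repo_too_big /mnt/backup/restic-repo 50000000; then
#     echo "backup unexpectedly large" | mail -s "backup size alert" admin@example.com
# else
#     rclone sync /mnt/backup/restic-repo B2:my-bucket/backup
# fi
```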

I backup my password for B2 (in an encrypted password database) separately, along with the Restic decryption key. Restore procedure is: if the local hard drive is intact, restore with Restic from the last good snapshot on the local repository. If it is also destroyed, rclone sync the archive from Backblaze B2 to local, and then restore from that with Restic.

Postgres databases I do something different (they aren't included in my Restic backups, except for config files): I back them up with pgbackrest to Backblaze B2, with archive_mode on and an archive_command to archive WALs to Backblaze. This allows me to do PITR recovery (back to a point in accordance with my pgbackrest retention policy).

For Docker containers, I create them with docker-compose, and keep the docker-compose.yml so I can easily re-create them. I avoid keeping state in volumes, and instead use volume mounts to a location on the host, and back up the contents for important state (or use PostgreSQL for state instead where the service supports it).

[-] HR_Pufnstuf@lemmy.world 0 points 1 year ago

ZFS send / receive and snapshots.

[-] pound_heap@lemm.ee 1 points 1 year ago

Does this method let you pick what to back up, or is it the entire filesystem?

[-] HR_Pufnstuf@lemmy.world 2 points 1 year ago

It allows me to copy select datasets inside the pool.

So I can choose rpool/USERDATA/so-n-so_123xu4 for user so-n-so. I can also choose to copy some or all of rpool/ROOT/ubuntu_abcdef and its nested datasets.

I settle for backing up users and rpool/ROOT/ubuntu_abcdef, ignoring the stuff in var datasets. This gets me my users' homes, root's home, and /opt. 'Tis all I need. I have snapshots and mirrored M.2 SSDs for handling most other problems (which I've not yet had).
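For anyone unfamiliar, replicating a selected dataset like that boils down to snapshot + incremental send/receive. A sketch with made-up host and pool names, wrapped in a function rather than run:

```shell
#!/bin/sh
# Incremental ZFS replication of one dataset; all names are illustrative.
replicate_dataset() {
    ds="rpool/USERDATA/so-n-so_123xu4"
    prev="$1"                             # previous snapshot name, e.g. backup-2023-07-24
    new="backup-$(date +%Y-%m-%d)"

    zfs snapshot "${ds}@${new}"
    # Send only the delta since the previous snapshot;
    # receive -u keeps the received dataset unmounted on the target
    zfs send -i "@${prev}" "${ds}@${new}" \
        | ssh backuphost zfs receive -u backuppool/users
}
```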

The only bugger is /boot (on bpool). Kernel updates accumulate in there and fill it up, even if you remove them via apt... because snapshots. So I have to be careful to clean its snapshots.

Used to use Duplicati but it was buggy and would often need manual intervention to repair corruption. I gave up on it.

Now use Restic to Backblaze B2. I've been very happy.

[-] jamiehs@lemmy.ml 2 points 1 year ago

I’ve used restic in the past; it’s good but requires a great deal of setup if memory serves me correctly. I’m currently using Duplicati on both Ubuntu and Windows and I’ve never had any issues. Thanks for sharing your experience though; I’ll be vigilant.

[-] sxan@midwest.social -2 points 1 year ago

Restic to B2 is made of win.

The quick, change-only backups in a single executable intrigued me; the ability to mount snapshots to get at, e.g., a single file hooked me. The wide, effortless support for services like Backblaze made me an advocate.

I back up nightly to a local disk, and twice a week to B2. Everywhere. I have some 6 machines I do this on; one holds the family photos and our music library, and is near a TB by itself. I still pay only a few dollars per month to B2; it's a great service.

[-] mariom@lemmy.world 0 points 1 year ago

Is it just me, or does the backup topic recur every few days on !linux@lemmy.ml and !selfhosted@lemmy.world?

To stay on topic as well: I use a restic+autorestic combo. Pretty simple; I made a repo with a small script to generate configs for different machines, and that's it. Storing between machines and B2.

[-] grue@lemmy.ml 0 points 1 year ago

It hasn't succeeded in nagging me to properly back up my data yet, so I think it needs to be discussed even more.

[-] j12345@boulder.ly -1 points 1 year ago

Get a Mac, use Time Machine. Go all in on the ecosystem: phone, watch, iPad, TV. I resisted for years, but it's so good, man, and Apple silicon is just leaps beyond everything else.

[-] redcalcium@lemmy.institute -1 points 1 year ago

Good ol' fashioned rsync once a day to a remote server with ZFS and daily ZFS snapshots (rsync.net). Very fast because it only needs to send changed/new files, and it has saved my hide several times when I needed to access deleted files or old versions of some files from the ZFS snapshots.

this post was submitted on 25 Jul 2023
29 points (100.0% liked)
