What's my what lmao?
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
Backups? What backups?
Ik it's bad but I can't be bothered.
Dis me
🤞
Cross my fingers 🤞
Are cyanide tablets a backup strategy?
Restic using resticprofile for scheduling and configuring it. I do frequent backups to my NAS and have a second schedule that pushes to Backblaze B2.
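resticprofile is just a wrapper around restic, so the two schedules boil down to roughly the commands below; the repo path, bucket name, and credentials are placeholders.

```
# Local backup to the NAS (frequent schedule)
export RESTIC_PASSWORD_FILE=/root/.restic-pass
restic -r /mnt/nas/restic-repo backup /home /etc --tag frequent

# Off-site push to Backblaze B2 (second schedule)
export B2_ACCOUNT_ID=xxxxxxxx    # placeholder
export B2_ACCOUNT_KEY=xxxxxxxx   # placeholder
restic -r b2:my-backup-bucket:restic backup /home /etc --tag offsite

# Thin out old snapshots
restic -r /mnt/nas/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune
```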
Another +1 for restic. To simplify the backups, however, I'm using https://autorestic.vercel.app/, which is triggered from systemd timers for automated backups.
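For anyone wanting to copy the systemd part, here's a rough sketch of a timer/service pair; the unit names, paths, and exact autorestic flags are assumptions and may differ by version.

```
# Sketch: a oneshot service plus a daily timer for autorestic
cat > /etc/systemd/system/autorestic.service <<'EOF'
[Unit]
Description=autorestic backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/autorestic backup -a -c /root/.autorestic.yml
EOF

cat > /etc/systemd/system/autorestic.timer <<'EOF'
[Unit]
Description=Run autorestic daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

systemctl daemon-reload && systemctl enable --now autorestic.timer
```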
I run a restic backup to a local backup server that syncs most of the data (except the movie collection because it's too big). I also keep compressed config/db backups on the live server.
I eventually want to add a cloud platform to the mix, but for now this setup works fine
Restic is great! I run it in a container using the mazzolino/restic image, hooked up to Backblaze for all my important stuff!
Highly recommend borgbackup, I've been using it for years and it's always been smooth
3-2-1 with Restic and B2
Restic is so awesome, and in combination with Backblaze it's probably the most cost-effective solution.
I realized at one point that the amount of data that is truly irreplaceable to me comes to only ~500GB. So for this important data I back up to my NAS, then from there back up to Backblaze. I also create M-Discs: two sets, one for home and one I keep at a friend's place. Then, because "why not" and I already had them sitting around, I also back up to two SD cards and keep one on site and one off site.
I also back up my other data like TV/movies/music/etc., but the sheer volume of data leaves me with one option: a couple of USB hard drives I back up to from my NAS.
Irreplaceable media: NAS -> Backblaze, and NAS -> JBOD via Duplicacy for versioning.
Large ISOs that can be downloaded again: NAS -> JBOD and/or NAS -> offline disks.
Stuff that's critical leaves the house, stuff that would just cost me a hell of a lot of personal time to rebuild just gets a copy or two.
Can anyone ELI5 or link a decent reference? I'm pretty new to self hosting and now that I've finally got most of my services running the way I want, I live in constant fear of my system crashing
I use borgbackup + zabbix for monitoring.
At home, I have all my files backed up to rsync.net since the price is lower for borg repos.
At work, I have a dedicated backup server running borgbackup that pulls backups from my servers and stores them locally as well as uploading to rsync.net. The local backup means restoring is faster, unless of course that dies.
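For anyone new to borg, a single run looks roughly like this (push-style shown for simplicity; repo paths, hosts, and retention numbers are placeholders):

```
export BORG_REPO=/srv/backups/web01.borg   # repo on the backup server
export BORG_PASSPHRASE='...'               # placeholder

# Create a dated archive; --stats prints size and dedup numbers
borg create --stats --compression zstd \
    ::'{hostname}-{now:%Y-%m-%d}' /etc /home /var/www

# Thin out old archives
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6

# Second copy to rsync.net (borg over ssh; they may need --remote-path borg1)
borg create --stats ssh://user@user.rsync.net/./web01.borg::'{hostname}-{now:%Y-%m-%d}' /etc /home /var/www
```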
I usually write my own scripts with rsync for backups, since I already have my OS installs pretty much automated, also with scripts.
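In the same spirit, a bare-bones rsync backup script might look like this; the source list and destination host are placeholders.

```
#!/usr/bin/env bash
set -euo pipefail

SRC="/home /etc /var/lib/docker/volumes"      # what to back up (placeholder)
DEST="backup@nas.local:/backups/$(hostname)"  # where it goes (placeholder)

# -a preserves permissions/times, -H keeps hard links, -x stays on one filesystem;
# --delete mirrors deletions (add --link-dest for snapshot-style history).
# $SRC is intentionally unquoted so it expands into multiple source paths.
rsync -aHx --delete --numeric-ids $SRC "$DEST/latest/"
```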
I'm paying Google for their enterprise gSuite, which is still "unlimited", and using rclone's encrypted drive target to back up everything. I have a couple of scripts that make tarballs of each service's files and do a full backup daily.
It's probably excessive, but nobody was ever mad about the fact they had too many backups if they needed them, so whatever.
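A rough sketch of that kind of script, assuming an rclone crypt remote named gdrive-crypt is already configured (all names and paths are placeholders):

```
#!/usr/bin/env bash
set -euo pipefail

STAMP=$(date +%F)
WORK=/tmp/backup-$STAMP
mkdir -p "$WORK"

# One tarball per service directory
for svc in /srv/services/*/; do
    tar czf "$WORK/$(basename "$svc")-$STAMP.tar.gz" -C "$svc" .
done

# Push everything through the encrypted remote
rclone copy "$WORK" "gdrive-crypt:backups/$STAMP" --transfers 4

rm -rf "$WORK"
```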
Large/important volumes on SAN-> B2.
Desktop Macs -> Time Machine on SAN & Backblaze (for a few)
Borgbackup is great and what we used for all our servers when they were pets. It's a great tool, very easy to script and use.
I have a Raspberry Pi with an external drive and scripts that rsync each morning. Then I have S3 Glacier Deep Archive backups for off-site.
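The Glacier leg can be as simple as an aws-cli sync straight into the Deep Archive storage class (bucket name is a placeholder); just remember that restores from Deep Archive take hours and cost extra.

```
# Mirror the backup drive into S3, stored directly as Glacier Deep Archive
aws s3 sync /mnt/backup s3://my-offsite-backups/ \
    --storage-class DEEP_ARCHIVE --delete
```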
I have an external hard drive that I keep in the car. I bring it in once a month and sync it with the server. The data partition is encrypted so that even if it were to get stolen, the data itself is safe.
I have a similar 3-2-1 strategy without using someone else's server or needing to traverse the internet. I keep my drive in the pool shed, since if my house were to blow up or get robbed, the shed would probably be fine.
restic + rclone crypt + whatever storage server/service is good enough. Currently using Hetzner storage for my backups, because they have auto snapshots on top of my backups.
I also use this setup for backups on servers, not only at home
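Restic can talk to an rclone remote directly, so with a crypt remote on the storage box the whole thing looks roughly like this (remote and path names are placeholders):

```
# One-time: initialise the repository behind the rclone crypt remote
restic -r rclone:hetzner-crypt:restic init

# Regular run
restic -r rclone:hetzner-crypt:restic backup /home /etc
restic -r rclone:hetzner-crypt:restic forget --keep-daily 7 --keep-monthly 6 --prune
```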
Personal files: Syncthing between all devices and a TrueNAS Scale NAS. TrueNAS does snapshots 4 times a day, with a retention policy of 30 days. From there, a nightly sync to Backblaze B2 happens, also with a 30 day retention policy. Occasional manual backups to external drives too.
Homelab/Servers: Proxmox VM and LXC container exports nightly to TrueNAS, with a retention policy of 7 days. A separate weekly export happens to a separate TrueNAS share, which gets synced to B2 weekly with a retention policy of 30 days. Also has occasional external drive backups.
Am I the only one using kopia :)?
I'm quite new to self-hosting and backups. I went with Duplicati and it worked fine, but I heard bad stories, so now I use Kopia for daily backups to another drive and also to B2. Duplicati is still doing daily backups, but only a few important folders to Google Drive.
I've heard only good stories about Kopia, yet no one mentioned it.
there are dozens of us, dozens!
A kind of "extended" 3-2-1, more a 4-3-2. As nearly everything I host runs on Docker, I usually pause the stack, .tar.bz everything and back that up on several devices (NAS, off-site machine, external HDD).
The neat thing about keeping every database in its own container is the resulting backup "package", which can easily be restored as a whole without having to mess with db dumps, permissions, etc.
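A rough sketch of that workflow, assuming compose stacks live under /srv/stacks with their configs and volumes alongside (paths are placeholders; stop/start is used here instead of pause/unpause, either works):

```
#!/usr/bin/env bash
set -euo pipefail

STAMP=$(date +%F)

for stack in /srv/stacks/*/; do
    name=$(basename "$stack")
    ( cd "$stack"
      docker compose stop                                  # quiesce containers and DBs
      tar cjf "/mnt/nas/backups/$name-$STAMP.tar.bz2" .    # configs + bind-mounted data
      docker compose start )
done

# Extra copies to the off-site machine and the external HDD
rsync -a /mnt/nas/backups/ offsite:/backups/
rsync -a /mnt/nas/backups/ /mnt/external-hdd/backups/
```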
I use Borgbackup 1.2.x. It works really well. Significantly faster than Duplicity. Borg uses block-level deduplication instead of doing incremental backups, meaning the backup won't grow indefinitely like with duplicity (this is why you have to periodically do a full backup with Duplicity). The Borg server has an "append-only" mode meaning the client can only add data to the backup and not remove it - this is useful because if an attacker were to gain access to the client, they can't delete all your backups. This is a common issue with other backup systems - the client has full access to the backup, so there's nothing stopping an attacker from erasing the client system plus all its backups.
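The append-only part is enforced on the server side, typically as a forced command on the client's SSH key (repo path and key are placeholders); pruning then has to be done from the trusted side, since the client can no longer delete anything.

```
# ~/.ssh/authorized_keys on the borg server, one line per client
command="borg serve --append-only --restrict-to-path /srv/borg/web01",restrict ssh-ed25519 AAAA... root@web01
```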
For storing the backups, I have two storage VPSes - one with HostHatch in Los Angeles ($10/month for 10TB space) and one with Servarica in Montreal, Canada (3.5TB space for $84/year).
Each system being backed up performs the backup twice - Once to each VPS. Borgbackup recommends this approach over only performing one backup then rsyncing it to a different server. The idea is that if one backup gets corrupted (or deleted by an attacker, etc), the other one should still be OK as it's entirely separate.
All devices backup to my NAS either in realtime or at short intervals throughout the day. I use recycling bins for easy restores for accidentally deleted files.
My NAS is set up on a RAID for drive redundancy (Synology RAID) and does regular backups to the cloud for active files.
Once a day I do a hyperbackup to an external HDD.
Once a month I backup to an external drive that lives offsite.
Backups to these external HDDs have versioning, so I can restore files from multiple months ago, if needed.
The biggest challenge is that as my NAS grows, it costs significantly more to expand my backups space. Cloud storage and new external drives aren't cheap. If I had an easy way to keep a separate NAS offsite, that would considerably reduce ongoing costs.
Daily off-site to a backup server via restic (+ a self-written wrapper for multiple targets). Restic can also work with anything else (SFTP, S3 APIs, etc.). Kind of a modern duplicity/borg. Fully encrypted and incremental.
I backup locally to a second NAS (daily).
I use rclone crypt to backup to the cloud (Hetzner storage box, weekly).
The most important stuff I also backup to an external hard disk (from time to time, whenever I'm in the mood / have some spare time).
For PCs: daily incremental backups to local storage, daily syncs to my main unRAID server, and weekly off-site copies to a Raspberry Pi with a large external HDD running at a family member's place. The unRAID server itself has its config backed up to the unRAID servers, and all the local docker stores also go to the off-site Pi. The most important stuff (pictures, recovery phrases, etc.) is further backed up in Google Drive.
All Nextcloud data gets mirrored with rsync to a second drive, so it's in three places: the original source and twice on the server.
Databases are backed up nightly by Webmin to the second drive.
Then installations, databases, etc. are sent to Backblaze storage with Duplicati.
I use:
- Timeshift -> local backup onto my RAID array
- borgbackup -> BorgBase online backup
- GlusterFS -> experimenting with replicating certain apps across two Raspberry Pis
Holy crap. Duplicity is what I've been missing my entire life. Thank you for this.
Personally I do:
- Daily snapshots of my data + Daily restic backup on-site on a different machine
- Daily VM/containers snapshot locally and on a different machine, keeping at least 2 monthly, 2 weekly and 2 daily backups
- Weekly incremental data backup in an immutable B2 bucket, with a new bucket every month and 6-month immutability (so data can't be changed/erased for 6 months)
- Weekly incremental data backup on an other off-site machine
- Monthly (but I should start doing it weekly) backup of important data (mainly documents and photos) on removable media that I keep offline in a fire-proof safe
Maybe it's overkill, maybe it's not enough; I'll know when something fails and I am screwed, ahah.
As a note, everybody should test/check their backups frequently. I once had an issue after changing an IP address and figured out half my backups were not working 6 months later...
I run all of my services in containers, and intentionally leave my Docker host as barebones as possible so that it's disposable (I don't backup anything aside from data to do with the services themselves, the host can be launched into the sun without any backups and it wouldn't matter). I like to keep things simple yet practical, so I just run a nightly cron job that spins down all my stacks, creates archives of everything as-is at that time, and uploads them to Wasabi, AWS S3, and Backblaze B2. Then everything just spins back up, rinse and repeat the next night. I use lifecycle policies to keep the last 90 days worth of backups.
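A stripped-down sketch of that nightly job, assuming stacks live under /opt/stacks and rclone remotes named wasabi, aws, and b2 are already configured (all names are placeholders; bucket lifecycle rules handle the 90-day retention):

```
#!/usr/bin/env bash
set -euo pipefail

STAMP=$(date +%F)
OUT=/var/backups/stacks-$STAMP.tar.gz

# Spin everything down so data is quiescent on disk
for stack in /opt/stacks/*/; do (cd "$stack" && docker compose down); done

tar czf "$OUT" -C /opt stacks

# Spin everything back up
for stack in /opt/stacks/*/; do (cd "$stack" && docker compose up -d); done

# Three off-site copies
for remote in wasabi:backups aws:backups b2:backups; do
    rclone copy "$OUT" "$remote"
done
```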
My critical files and folders are synced from my NAS to my desktop using Syncthing. From there I use Backblaze to do a full desktop backup nightly.
My NAS is in RAID 5, but that's technically not a backup.
All my servers use ZFS for data storage, and they have VPNs between each other (just /30 point-to-points). I use zfs-snapshot to take snapshots every 15 minutes, plus nightly jobs that do a ZFS send to dump everything to another machine with some storage.
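The nightly replication leg of a setup like that boils down to roughly this (pool, dataset, and host names are placeholders; the first run needs a full send instead of an incremental one):

```
# Take today's snapshot and send it incrementally to the other box
TODAY=tank/data@nightly-$(date +%F)
YESTERDAY=tank/data@nightly-$(date -d yesterday +%F)

zfs snapshot "$TODAY"
zfs send -i "$YESTERDAY" "$TODAY" | ssh backup-host zfs receive -F backup/data
```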
I just wanted to say that I appreciate the input from everyone here. I really need to work on my backup solution and this will be helpful.
Main NAS backs up to secondary NAS (onsite, 10G link). Secondary NAS backs up to offsite (Hetzner server) weekly. Only important data, not Linux ISOs etc.
I usually just use Restic (not just for servers). For big databases I pipe pg_dump directly into it, and for even bigger ones I recently moved to pgBackRest.
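Piping pg_dump into restic looks roughly like this; restic stores stdin as a single file in the snapshot (database and repo names are placeholders):

```
pg_dump -U postgres mydb \
    | restic -r /srv/restic-repo backup --stdin --stdin-filename mydb.sql

# Restore later by streaming it back out:
#   restic -r /srv/restic-repo dump latest mydb.sql | psql mydb
```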
I ping a self-hosted Healthchecks instance to see if my backups still run (or the other way around).
On my main desktop (which recently became a Mac, I am sorry) I currently use Autorestic for multiple locations... it's nice to have that YAML, but, well, I am used to bash scripts anyway, so it's not that big of a benefit, I guess.
I back up everything to my home server... then I run out of money and cross my fingers that it doesn't fail.
Honestly though my important data is backed up on a couple of places, including a cloud service. 90% of my data is replaceable, so the 10% is easy to keep safe.
I use a BackupPC instance hosted on an off-site server with a 1TB drive. It connects through SSH to all my VMs and backs up /home and any other folders I may need. It handles full and incremental backups, deduplication, and compression.