mobi_2.0: created 31.12.2020 00:28 by Pascal Suter; removed 01.01.2021 18:45 (current)
====== Mobi 2.0 ======
Mobi stands for "My Own Backup Implementation".

You can read all about my first aim on creating the "

Mobi 2.0 will probably be a completely new set of scripts and it won't contain much, if anything, of the first Mobi script I wrote.

===== Why a completely new Mobi? =====

The main reason for a complete re-write of my old Mobi tool is that I want to better protect my data from ransomware and similar attacks, as well as from targeted hacking attacks, where a hacker (or a group of hackers) manually hijacks a server and then tries to cause damage to the owner of the server by messing with the data on it.

Mobi is not very secure in this regard, as it usually runs on a backup server that needs full root access to all the backup clients it pulls backups from. So if someone hijacks the backup server, that person automatically has password-less root access to every Linux system it backs up, which is really bad, to be honest!

On my private server, Mobi runs on the client server itself and the backup is stored on another set of local disks. That's also not very good, as ransomware would then encrypt both my data and my backup at once, so even the backup would be rendered useless. Even worse: since unchanged files between backups are hardlinked instead of copied, encrypting all the backups would be extremely fast, as one only has to encrypt each version of a file once.

Mobi does a great job of protecting against accidental data loss, data loss due to hardware issues like multiple disk failures, loss of complete RAID sets etc., and also in cases where the client server is hacked but the backup server is not. Since the backup is completely controlled from the backup server, there isn't anything a hacker can do on a client server (the one being backed up) to mess up the backup from there.

Since there has recently been an increasing number of reports of targeted hacking attacks on companies in my vicinity (meaning Switzerland in general, or similar fields of operation, customers or direct competitors of customers etc.), I realized it's time to re-think how I do backups and to set new goals as far as security goes.

The main goal of Mobi was to be as portable and simple as possible and to provide incremental backups, with full snapshots of each backup in a simple folder structure to facilitate restores and make browsing of backups easy.

===== Design Goals for Mobi 2.0 =====
* No more root access should be necessary between any of the involved machines!
* Untrusted client machines --> we have to assume that the machine we are creating a backup for could be hacked, so we have to make sure that not even root on a client machine could delete or tamper with previous backups. We will assume, however, that the retention time for the backups is longer than the time the client machine has been hacked; meaning, we don't need to detect whether the client is hacked or not. We will continue to make backups for as long as the hacker wants, and hope that the admin of the hacked client notices the attack before his oldest backup is purged from the repository.
* Untrusted servers: the backup server can't be trusted either. We have to assume that the backup server could be hacked and have to take precautions,
* Backups of a backup should be possible and should follow the same principles as mentioned above: Mobi 2.0 should at least make it possible to create a secondary backup from the backup server to an offsite backup server, so that a local backup can be kept on a local network and a remote backup can be kept for disaster recovery, for example after a fire, water damage or similar. The secondary backup may lag slightly behind the main backup. It should be created by copying data from the backup server to the secondary backup server, not by just creating a second backup from the client directly.
* The shortest expected backup intervals are daily backups; however, it wouldn'
* We only want to back up from Linux to Linux.
* We only back up servers, so we can assume that both the client and the server are online 24/7, so scheduling by simple cron jobs or similar is enough. However, a failed backup should be resumable the next time the cron job runs.
* Backups should be incremental,
* There should be a way to mount or at least view any backup as if it were a full backup. Most probably this will be achieved once again through hard-linking unchanged files.
* The tool should be as simple to install as possible: either by just copying a simple script and installing some very common Linux tools like rsync etc., or by making it available as a Docker container or similar.
* We accept the fact that there will most probably be a client-side and a server-side script that need to be put in place; running it all from one side won't be possible due to the security concerns above.

===== Discussion of available tools and solutions =====
I have already discussed some tools, mainly regarding encrypted backups, in [[encrypted_backups_to_the_cloud]]. In addition to that I have looked at some other tools for this project:

==== Borg ====
[[https://
==== Burp ====
[[https://
So maybe my final solution could be writing an offsite backup for Burp to complete the required feature set for me :)
==== Restic ====
[[https://
===== Possible solutions =====
==== Burp ====
As stated above: use Burp, maybe in hardlink mode or with a BTRFS storage underneath, and then create a custom off-site backup (using BTRFS snapshots, or some fancy rsync methods).
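The BTRFS variant of the off-site step could use incremental snapshot replication. A rough sketch, assuming the backup storage is a BTRFS subvolume mounted at ''/backup'' and ''offsite'' is a hypothetical remote host (these commands need root and a real BTRFS filesystem, so they are illustrative only):

```shell
# read-only snapshot of today's state of the backup volume
btrfs subvolume snapshot -r /backup /backup/snapshots/2021-01-01

# send only the difference against yesterday's snapshot to the
# off-site box; -p names the parent snapshot both sides already have
btrfs send -p /backup/snapshots/2020-12-31 /backup/snapshots/2021-01-01 \
    | ssh offsite btrfs receive /offsite-backup/snapshots
```

Because ''btrfs send -p'' transfers only the delta between two snapshots, the off-site copy can lag behind the primary backup and still catch up cheaply, which matches the secondary-backup goal above.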

==== Self-made collection of other tools ====
So "self-made" is a bit flexible here. What I mean is a larger script that uses a combination of several tools together:
* rsync as the main tool to copy data from the client to the server
* [[https://
* Use the rsync daemon on the server to provide access to the backup repos for each of the clients.
* use ''
* Provide a read-only share via the rsync daemon where the client can access all its backups to restore files from. **--> this needs some more thinking / research, as the backups will contain encrypted file and directory names as well as data. So we would need some other means of sharing the backups in read-only mode that still retains the original Linux permissions upon restore. The share should be mountable on the client, so that we can again use gocryptfs to decrypt the backup before restoring files. Maybe NFS piped through SSH or something similar might be a solution.**
* Use the same set of tools again to create backups from the primary backup server to the secondary one.
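The per-client rsync daemon modules sketched above could roughly look like this. The module names, paths and user are made up for illustration; ''read only'', ''write only'' and ''refuse options'' are standard ''rsyncd.conf'' settings, and refusing the ''delete'' options is one way to keep even a hacked client from wiping its own history through the daemon:

```ini
# /etc/rsyncd.conf (illustrative sketch)

[client1-backup]
    # client pushes new (client-side encrypted) backups here
    path = /backup/client1
    auth users = client1
    secrets file = /etc/rsyncd.secrets
    read only = false
    write only = true           ; client can upload but not read back
    refuse options = delete     ; reject rsync's --delete* options

[client1-restore]
    # separate read-only view of the same repository for restores
    path = /backup/client1
    auth users = client1
    secrets file = /etc/rsyncd.secrets
    read only = true
```

This still doesn't stop a hacked client from overwriting files it uploaded earlier, so the server-side rotation into dated, hardlinked snapshots would have to run under the server's control, outside any module the client can write to.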
- | |||