rethinking_my_backup_strategy

  * use rsync daemon on the server to provide access to the backup repos for each of the clients.
  * use ''rsync --link-dest'' or ''cp -alx'' to first create a fully hard-linked copy of the last successful backup and then share it via rsync daemon, so the client can update the changed files in this repo. This should result in a backup structure similar to what my current moby script produces, but with the added separation of client and server.
  * provide a read-only share via rsync daemon where the client can access all its backups to restore files from --> **this needs some more thinking / research**, as the backups will contain encrypted file and directory names as well as data, so we would need some other means of sharing the backups in read-only mode that still retains the original Linux permissions upon restore. The share should be mountable on the client, so that we can again use gocryptfs to decrypt the backup before restoring files. Maybe NFS piped through ssh or something similar might be a solution.
  * use the same set of tools again to create backups from the primary backup server to the secondary.
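The hard-link step from the second bullet can be sketched as follows. This is only a minimal sketch, not the actual moby script; the repository path and snapshot naming are made up for the demo (a temporary directory stands in for the real per-client repo):

```shell
#!/bin/sh
# Sketch: before the client connects, the server clones the last successful
# backup with cp -alx so the new tree shares all file data via hard links.
# The rsync daemon would then expose $NEW for the client to update in place.
set -eu

REPO=$(mktemp -d)                      # stand-in for e.g. a per-client repo
mkdir -p "$REPO/2021.01.01-1200"
echo "hello" > "$REPO/2021.01.01-1200/file.txt"

LAST=$(ls -1d "$REPO"/2* | tail -n 1)  # newest finished backup
NEW="$REPO/$(date +%Y.%m.%d-%H%M)"

# fully hard-linked copy: same inodes, no extra space for unchanged files
cp -alx "$LAST" "$NEW"

# both directory entries now point at the same inode (link count 2)
stat -c '%h' "$NEW/file.txt"
```

The client would then run something like ''rsync -aAXH --delete /data/ rsync://server/module/'' against the module pointing at ''$NEW'', touching only changed files and thereby breaking their hard links while unchanged files stay shared.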
  
Unsolved issues of this solution:
  * **file ownership** is retained on all files, so a file belonging to root on the client will belong to root on the backup server. This brings some security issues: for example, a privilege escalation could be made possible by backing up a copy of bash that belongs to root and has the suid bit set. Once an attacker gets unprivileged user access to the backup server, he could start this shell and become root. So it would be preferable to at least change file ownership to a dedicated user and limit the possibilities for an attack. [[https://github.com/linuxkit/linuxkit/tree/master/projects/shiftfs|shiftfs]] in combination with ''unshare'' to create a linux user namespace could be a solution here.
  * **restoring files and browsing backups** needs to be simple. For example, it should be possible to either use normal ''rsync -l'' or, even better, to mount complete backups from the backup server onto the client and then browse through them. However, this is currently not so simple because:
    * backups are encrypted before rsync lays a hand on the files, so ''rsync -l'' will list encrypted file and directory names and will download encrypted files which then need to be decrypted. Finding the latest version of a file that contains a string X, for example, is therefore very cumbersome.
    * it would be nice to be able to mount an entire backup, or even all backups at once, via for example sshfs. One could then remount it using gocryptfs on the client to see a decrypted representation. However, this brings another issue: the mount should be read-only, so that a hacked client can't destroy existing backups on the backup server. So we either find a way to create a read-only share using for example NFS (possibly tunnelled over ssh), or we find a way to make the backups read-only on the backup server before sharing them through sshfs.
    * I have found [[https://github.com/zaddach/fuse-rsync|fuse-rsync]], which allows mounting an rsync module via a fuse mount. However, this is merely a proof of concept that has not been developed any further in the past 7 years, so it is not really an option here.
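The ownership concern above can be demonstrated in miniature: archive-style copies (''cp -a'', and likewise ''rsync -a'') preserve mode bits including suid. In this sketch the file belongs to the current user rather than root, which keeps it harmless, but the preserved ''s'' bit is exactly what would make a root-owned backup copy dangerous:

```shell
#!/bin/sh
# Mini demo: an archive-mode copy keeps the setuid bit, so a backed-up
# suid binary stays suid inside the backup repo. (Owned by the current
# user here; in the real scenario it would be root-owned.)
set -eu

SRC=$(mktemp -d); DST=$(mktemp -d)
cp /bin/sh "$SRC/shell"
chmod u+s "$SRC/shell"            # simulate a suid shell binary

cp -a "$SRC/shell" "$DST/shell"   # what an archive-style backup does

stat -c '%A' "$DST/shell"         # mode string still contains the "s" bit
```

Mapping everything to an unprivileged owner on the server (as suggested above) removes this class of attack, at the cost of having to restore ownership some other way.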

===== First POC - Burp + rsync =====
With all the arguments above considered, I decided to proceed with a burp-based solution and just add off-site capabilities to burp. Here is the targeted setup:
  * "Local" backup server running burp in server mode with the following key settings:
    * ''hardlinked_archive = 1''
    * ''client_can_delete = 0''
    * ''user=jdoe'' and ''group=jdoe'' where ''jdoe'' is some unprivileged non-root user
    * one needs to make sure that all the necessary paths mentioned in ''burp-server.conf'' and ''CA.cnf'' are writable and/or readable by the unprivileged user who runs burp
  * "Remote" backup server running an rsyncd service which shares a single directory, i.e. ''/backups/current''
  * clients run the burp client and use client-side encryption with a strong password. The following additional core settings are used:
    * ''server_can_restore = 0''
    * ''server_can_override_includes = 0''
  * a script on the burp server uses ''rsync -aAhHvXxR --numeric-ids --delete /var/spool/burp/./*/current/ rsync://user@offsite/current0'' to write backups to the offsite server
  * on the offsite server, a script is called (somehow, haven't figured out yet how exactly this will be done) after the rsync from the burp server has finished successfully. The script will use ''cp -alx /backups/current /backups/`date +%Y.%m.%d-%H%M`'' to create hard-linked copies of the current directory. By using this script, we can avoid using the ''--link-dest'' option of rsync, which would in turn make it necessary to also include at least the latest completed backup in the writable share.
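The offsite-side snapshot step can be sketched as follows. The trigger mechanism is still open as noted above; a temporary directory stands in here for the real ''/backups'' path so the sketch is self-contained:

```shell
#!/bin/sh
# Sketch of the script the offsite server would run after a successful
# rsync from the burp server: clone /backups/current into a timestamped,
# fully hard-linked snapshot (same idea as burp's hardlinked_archive).
set -eu

BACKUPS=$(mktemp -d)                  # stand-in for /backups on the offsite box
mkdir -p "$BACKUPS/current"
echo payload > "$BACKUPS/current/file.txt"   # stands in for synced burp data

SNAP="$BACKUPS/$(date +%Y.%m.%d-%H%M)"
cp -alx "$BACKUPS/current" "$SNAP"

# current/ and the snapshot now share one inode per unchanged file,
# so a snapshot costs almost no additional disk space
stat -c '%h' "$SNAP/file.txt"         # prints 2
```

On the next rsync run, ''--delete'' and in-place updates only touch ''current/'', while each dated snapshot keeps the hard links to the file versions that existed when it was taken.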
  
  
  • rethinking_my_backup_strategy.txt
  • Last modified: 12.08.2021 17:42
  • by Pascal Suter