mobi_backup, by Pascal Suter (last updated 15.03.2022)
In most cases, this proves to be simple but still efficient enough, rather than trying block-level incrementals.
One specialty of mobi is that it runs multiple backup jobs in parallel. The advantage of running multiple backup jobs at once is that you can usually reach a much higher overall throughput with multiple rsyncs running in parallel than by running them one after the other, because rsync is single threaded and the overhead for ssh, file checking etc. is huge. So it usually makes no sense to wait for one host to complete before backing up a second host. \\
You could also define multiple backup jobs for the same host but for different directories on the host, to increase the speed of large backups. \\
if you are looking for a solution to speed up an rsync copy process with parallel rsync invocations, …

at the end of a successful backup, a rotation is made and old backups are deleted where appropriate. also a summary email is sent to the admin.
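The parallel job model described above can be sketched in plain bash. The ''backup_host'' function and the host names below are illustrative assumptions, not mobi's actual code:

```shell
#!/bin/bash
# minimal sketch of running backup jobs in parallel -- not taken from mobi.sh
backup_host() {
    # a real job would run rsync here, e.g.
    #   rsync -a "root@$1:/" "/backup/$1/..."
    echo "backing up $1"
}

for host in web1 web2 db1; do
    backup_host "$host" &   # each job runs in its own background process
done
wait                        # block until every parallel job has finished
echo "all jobs finished"
```

Because each rsync spends much of its time waiting on ssh and file-list comparison, several such background jobs overlap their idle phases and raise the overall throughput.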
===== Configuration =====
to configure, simply edit the lines or add more blocks after the …
<code>…</code>
the script will write a hidden file named ''.lastdst'' to the backup base directory for each backup job. this file always contains the folder name of the sub directory of the last successful backup.
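This bookkeeping can be sketched as follows. The ''do_backup'' helper, the directory layout and the use of ''--link-dest'' are illustrative assumptions, not mobi's actual code: the idea is to read the previous sub directory from ''.lastdst'', let rsync hard-link unchanged files against it, and rewrite ''.lastdst'' only after rsync exits successfully.

```shell
#!/bin/bash
# illustrative sketch of the .lastdst bookkeeping -- not mobi's real code
# usage: do_backup <backup base dir> <rsync source>
do_backup() {
    local base="$1" src="$2"
    local new linkdest=()
    new="$(date +%Y-%m-%d_%H%M%S)"
    if [ -f "$base/.lastdst" ]; then
        # hard-link unchanged files against the last successful backup
        linkdest=(--link-dest="$base/$(cat "$base/.lastdst")")
    fi
    mkdir -p "$base/$new"
    if rsync -a "${linkdest[@]}" "$src" "$base/$new/"; then
        # record this run only after rsync succeeded
        echo "$new" > "$base/.lastdst"
    fi
}
```

A failed rsync leaves ''.lastdst'' untouched, so the next run links against the last known-good backup instead of a partial one.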
===== Logs and Debugging =====
the script writes multiple log files. First of all it writes a new log file for every invocation to /…
===== Known Issues =====
On systems with old rsync versions (e.g. 3.0.6), and if your data contains extended attributes or ACLs, you may get lots of ''…''
===== the script =====
so here is the script. use it at your own risk and let me know if you find bugs or have contributions to make. simply send me an email to contact at psuter dot ch.
<code bash mobi.sh>
…
rm -f /…
</code>
===== run daily =====
in order to run the backup daily, run ''…''
<code>
00 1 * * * /…
</code>
using the redirects of both stdout and stderr to ''/…''
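The redirect idiom used in the crontab line (send stderr to the same target as stdout) can be checked quickly. The temporary file here is only for demonstration:

```shell
#!/bin/bash
# demonstrate that "> target 2>&1" sends stdout and stderr to one target
tmp=$(mktemp)
{ echo "on stdout"; echo "on stderr" >&2; } > "$tmp" 2>&1
cat "$tmp"   # the file contains both lines
rm -f "$tmp"
```

Note that the order matters: ''2>&1 > target'' would duplicate stderr to the old stdout first and only then redirect stdout, so the stderr line would not land in the file.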
===== Error due to old flock version =====
when this script is run on an older linux distribution such as CentOS 6.5 for example, the provided version of flock is too old to know the ''-E'' option. in that case, run
<code>
sed -i 's/-E 66 //' mobi.sh
</code>
this will make the script work on those systems. however, since now the exit code of flock is ''…''
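The effect of the sed command can be illustrated on a stand-in line (the flock invocation shown is only an example, not the exact line from mobi.sh). Without ''-E'', flock signals a held lock with its default exit code of ''1'' instead of the custom ''66'':

```shell
#!/bin/bash
# demonstrate what the sed fix does to a flock invocation
line='flock -n -E 66 /var/lock/mobi.lock -c "do backup"'
fixed=$(printf '%s\n' "$line" | sed 's/-E 66 //')
printf '%s\n' "$fixed"   # -> flock -n /var/lock/mobi.lock -c "do backup"
```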
===== Migration from RUBI =====
- ''…''
- ''…''
- go to the backup directory for each host and run this command: <code>…</code>
- remove the ":" …
- cleanup old log files and ''…''
- remember to come back and delete the old backups when it's time. old RUBI backups won't be rotated using mobi, this needs to be done manually. mobi will only include backups in the rotation for which it finds a log file of a successful backup job.