parallel_rsync [20.05.2020 19:44] (current) – Pascal Suter
How many jobs should run in parallel and how many directory levels deep you want to parallelize your jobs really depends on your specific situation. If you have several terabytes of data and you do a complete sync, it makes sense to dive deeper into the structure than when you just want to update an already existing copy of the same data. In that case it might be faster to only dive 1 to 2 levels deep into your structure, or even not use this script at all, when most of the time is spent by "
In the second version I have added the possibility to optionally pass a 5th and 6th argument. A filename can be passed as $5: if the file does not exist, the initial directory list resulting from the directory scan will be saved to it, so a later run can reuse it without re-scanning.

A second filename can be passed optionally as $6. prsync will save its progress to that file. If prsync is re-run, this file will be checked before the start of each rsync process; in case the directory that was supposed to be rsynced is already on the list, it will be skipped. This prevents re-running rsync for a large number of already synced directories and speeds up resuming after an interrupted previous prsync run.

These two optional arguments should only be used if the source does not change between prsync runs. They are especially beneficial if the source storage is unstable and may crash after a certain period of time: using these two files helps to avoid unnecessary file scanning and comparing when resuming the prsync operation after a crash, and hence helps the copy advance faster by minimizing unnecessary load on the storage.
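The progress-file idea can be sketched as follows. ''syncdir'' and the variable names are hypothetical stand-ins, not the script's own code, and the actual rsync call is replaced by an ''echo'' for brevity:

```shell
# Hypothetical sketch of the $6 progress-file check: skip a directory
# if it is already recorded, otherwise sync it and record it.
progressfile=$(mktemp)        # stand-in for the file passed as $6
echo "photos/2019" >> "$progressfile"

syncdir() {                   # placeholder helper, not from the script
    local dir=$1
    if grep -qxF "$dir" "$progressfile"; then
        echo "skip $dir"      # already synced in a previous run
    else
        echo "sync $dir"      # the real script would run rsync here
        echo "$dir" >> "$progressfile"
    fi
}

syncdir photos/2019   # prints: skip photos/2019
syncdir photos/2020   # prints: sync photos/2020
```

''grep -qxF'' matches the whole line as a fixed string, so directory names containing regex metacharacters are compared literally.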
==== the code ====
#
# version 1: initial release in 2017
# version 2: May 2020, removed the need to escape filenames by using
#            null delimiter + xargs to run commands such as mkdir and rsync,
#            added ability to resume without rescanning (argument $5) and to
#            skip already synced directories (argument $6)
#
# $4 = numjobs
# $5 = dirlist file (optional) --> will allow to resume without re-scanning the entire directory structure
# $6 = progress file (optional) --> already synced directories will be skipped
source=$1
destination=$2
rm -rf /
===== Doing it manually =====
Initially I did this manually to copy data from an old storage to a new one. When I later had to write a script to archive large directories with lots of small files, I decided to write the above function. So for those who are interested in reading more about the basic method and don't like my bash script, here is the manual way this all originated from :)
+ | |||
==== Before we get started ====
One important note right at the beginning: while parallelizing is certainly nice, we have to consider that spinning hard disks don't like concurrent file access. So be prepared to never see your hard disk's theoretical throughput reached if you copy lots of small files.
Make sure you don't run too many parallel rsyncs by checking your CPU load with ''top''. If you see the "
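If you prefer a non-interactive check over watching ''top'', one Linux-specific option is to read the aggregate counters from ''/proc/stat''. Note this prints the cumulative I/O-wait share since boot, not a live rate, so compare two readings a few seconds apart to see a trend:

```shell
# print the cumulative I/O-wait share of all CPU time since boot;
# on the "cpu" line of /proc/stat, field 6 is the iowait jiffies
awk '/^cpu / { total = 0
               for (i = 2; i <= NF; i++) total += $i
               printf "iowait: %.1f%%\n", 100 * $6 / total }' /proc/stat
```

A persistently high I/O-wait share while your rsync jobs run is a sign the disks, not the CPUs, are the bottleneck and you should reduce the number of parallel jobs.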