I use CrashPlan for cloud backups. In 2018 they discontinued their Home solution, so I switched to their Business plan.
It works very well on Linux, Windows and Mac, but it was always a bit fickle on my QNAP NAS. There is a qpkg package for CrashPlan, and there are lots of posts on the QNAP support forum. After 2018, none of the solutions to run a backup on the NAS itself worked anymore. So I gave up, and I didn’t have a backup for almost 4 years.
Now that I have mounted most of the network shares on my local filesystem, I can just run the backup on my PC. I made 3 different backup sets, one for each of the shares. There’s only one thing that I had to fix: if CrashPlan runs when the shares aren’t mounted, it thinks that the directories are empty, and it will delete the backup on the cloud storage. As soon as the shares come back online, the files are backed up again. It doesn’t have to upload all files again, because CrashPlan doesn’t purge the files in its cloud immediately, but the file verification still happens. That takes time and bandwidth.
I contacted CrashPlan support about this issue, and this was their reply:
I do not believe that this scenario can be avoided with this product – at least not in conjunction with your desired setup. If a location within CrashPlan’s file selection is detached from the host machine, then the program will need to rescan the selection. This is an inherent drawback of including network drives within your file selection. Your drives need to retain a stable connection in order to avoid the necessity of the software to run a new scan when it sees the drives attached to the device (so long as they’re within the file selection) detach and reattach.
Since the drive detaching will send a hardware event from the OS to CrashPlan, CrashPlan will see that that hardware event lies within its file selection – due to the fact that you mapped your network drives into a location which you’ve configured CrashPlan to watch. A hardware event pointing out that a drive within the /home/amedee/Multimedia/ file path has changed its connection status will trigger a scan. CrashPlan will not shut down upon receiving a drive detachment or attachment hardware event. The program needs to know what (if anything) is still there, and is designed firmly to track those types of changes, not to give up and stop monitoring the locations within its file selection.
There’s no way around this, aside from ensuring that you keep a stable connection. This is an unavoidable negative consequence of mapping a network drive to a location which you’ve included in CrashPlan’s file selection. The only solution would be for you to engineer your network so as not to interrupt the connection.
Nathaniel, Technical Support Agent, Code42
I thought as much already. No problem, Nathaniel! I found a workaround: a shell script that checks if a certain marker file on the network share exists; if it doesn’t, the script stops the CrashPlan service, which prevents CrashPlan from scanning the file selection. As soon as the file becomes available again, the CrashPlan service is started. This workaround works and is good enough for me. It may not be the cleanest solution, but I’m happy with it.
I first considered using inotifywait, which listens for filesystem events such as file modifications, deletions, or unmounts. However, when the network connection simply drops for whatever reason, inotifywait doesn’t get an event, so I have to resort to checking whether a file exists.
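This is roughly what the abandoned inotifywait approach would have looked like (just a sketch, assuming the inotify-tools package is installed). It catches a clean unmount, but a silently dropped connection never generates an event:

#!/bin/bash
# Reacts to a clean unmount of the share, but a dropped network
# connection produces no inotify event at all.
inotifywait -m -e unmount /home/amedee/Multimedia |
while read -r path events file; do
    echo "$path: $events, stopping CrashPlan"
    /etc/init.d/code42 stop
done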
#!/bin/bash
# Watchdog for CrashPlan: the service only runs while all marker
# files on the network shares are reachable.

file_list="/home/amedee/bin/file_list.txt"

# Succeeds only if every file listed in $file_list exists.
all_files_exist () {
    while read -r line; do
        if [ ! -f "$line" ]; then
            echo "$line not found!"
            return 1
        fi
    done < "$file_list"
}

start_crashplan () {
    /etc/init.d/code42 start
}

stop_crashplan () {
    /etc/init.d/code42 stop
}

# Poll every minute.
while true; do
    if all_files_exist; then
        start_crashplan
    else
        stop_crashplan
    fi
    sleep 60
done
file_list.txt contains a list of testfiles on different shares that I want to check. They all have to be present; if even one of them is missing or can’t be reached, the service must be stopped.
/home/amedee/Downloads/.testfile
/home/amedee/Multimedia/.testfile
/home/amedee/backup/.testfile
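Each testfile can be nothing more than an empty hidden file on its share, created once with touch, for example:

touch /home/amedee/Downloads/.testfile
touch /home/amedee/Multimedia/.testfile
touch /home/amedee/backup/.testfile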
- I can add or remove shares without needing to modify the script; I only need to edit file_list.txt – even while the script is still running.
- Starting (or stopping) the service when it is already started (or stopped) is very much OK: the startup script itself checks whether the service has already started (or stopped).
- This script needs to be run at startup as root, so I call it from cron (sudo crontab -u root -e):
@reboot /home/amedee/bin/test_cifs_shares.sh
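After a reboot, a quick way to verify that the watchdog is actually running is pgrep, for example:

sudo pgrep -af test_cifs_shares.sh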
This is what CrashPlan support replied when I told them about my workaround:
Hello Amedee,
That is excellent to hear that you have devised a solution which fits your needs!
This might not come in time to help smooth out your experience with your particular setup, but I can mark this ticket with a feature request tag. These tags help give a resource to our Product team to gauge customer interest in various features or improvements. While there is no way to use features within the program itself to properly address the scenario in which you unfortunately find yourself, as an avenue for adjustments to how the software currently operates in regards to the attachment or detachment of network drives, it’s an entirely valid request for changes in the future.
Nathaniel, Technical Support Agent, Code42
That’s very nice of you, Nathaniel! Thank you very much!
I’m in a similar situation. I just updated my setup with ZFS encryption on some datasets that are backed up by CrashPlan. I thought to myself: if I reboot the server, I’ll just unlock those volumes the first time I need them. Then (a few days later, but not long after I got CrashPlan working again) I realised that CrashPlan will see the empty directory when the ZFS dataset is not decrypted and mounted, and think I deleted all my files.
I was actually thinking something similar, but the other way around: stop the CrashPlan service if a marker file is present in the mount folder (this file would disappear if the dataset was actually mounted over it).
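Something like this minimal sketch, where /tank/data stands in for an encrypted dataset and .not-mounted is a marker file I’d create in the underlying (unmounted) directory:

# Marker is only visible while the dataset is NOT mounted on top of it.
if [ -f /tank/data/.not-mounted ]; then
    /etc/init.d/code42 stop
else
    /etc/init.d/code42 start
fi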
It would be SO easy for CrashPlan to look for a marker file of their choosing (maybe !!crashplan-unmounted!!) to pause backups of each backup set.
Maybe I’ll open a ticket in the hope it gets a feature request tag!