Paramiko backend: delete() always fails if --num-retries > 1

Bug #1115715 reported by Tilman Blumenbach
This bug affects 9 people
Affects: Duplicity
Status: Fix Released
Importance: Medium
Assigned to: Unassigned
Milestone: 0.6.22

Bug Description

Duplicity version: 0.6.21
Python version: 2.7.3
OS: Arch Linux
Target filesystem: Linux/ext4

The paramiko backend's delete() method always fails if --num-retries is set to a value greater than 1 (which it is by default). This means that actions like remove-all-but-n-full always fail as well, making them useless: they do not delete all files, because they fail after the first set of files passed to delete() has been deleted.

Explanation: delete() gets passed a list of files, which it successfully deletes. However, it then tries to delete all of them *again* ((num-retries - 1) times), which obviously fails since they have just been deleted.
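
For illustration, a minimal sketch of the failure pattern (simplified, not the actual duplicity source; sftp and num_retries are stand-ins for the backend's SFTP connection and the retry setting):

    # Nothing stops the retry loop after a successful pass, so the same
    # files are removed again on attempt 2 and paramiko raises
    # IOError: "No such file".
    def delete(filename_list):
        for attempt in range(num_retries):   # repeats even after success
            for fn in filename_list:
                sftp.remove(fn)              # paramiko SFTPClient.remove()

A correct implementation has to stop retrying a file once its removal succeeds, retrying only after an actual failure.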

I have linked a branch with a fix and attached a logfile.

Kelly Black (kelly-2) wrote :

I have this issue, and found the same thing. I do a full backup every 30 days and clean up anything older than 30 days. The problem with the default of --num-retries > 1 is that we only get to clean up one day's worth before we either error out or (sometimes) segfault. Old backups keep stacking up: a daily run adds one backup per run, but we never remove more than one per run.

I have changed my script to use --num-retries 1, and now things work fine.

I am using:
Duplicity version 0.6.21
scp back end.

I have tried with --extra-clean --force and found that they are not enough to get the job done. If I do the clean-up on the disk where the files reside using the file back end, things work. This seems specific to the scp back end.

Here are examples I tried:
duplicity --allow-source-mismatch --ssh-askpass remove-older-than 30D --extra-clean --no-encryption --force scp://user@host//path/host//filesystem

This would return to the shell with exit code 50 (file not found error), or 139 if the program segfaulted (which happened sometimes, but not always).

If I added --num-retries 1 after the --extra-clean, it works fine.
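
For example, the same command as above with the workaround added (placeholder host and paths unchanged):
duplicity --allow-source-mismatch --ssh-askpass remove-older-than 30D --extra-clean --num-retries 1 --no-encryption --force scp://user@host//path/host//filesystem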

Andreas Nüßlein (nutz) wrote :

Gee... finally found that asshole of a bug.

What happened was that "delete" tried to delete a file global.num_retries times, even when it was deleted successfully the first time.

Attached a patch. Please include it soon - I know a bunch of people have this problem.
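
The gist of such a fix is to leave the retry loop as soon as a removal succeeds, so only genuinely failed deletions are retried (a simplified sketch of that idea, not the attached patch itself; sftp and num_retries are again stand-ins):

    def delete(filename_list):
        for fn in filename_list:
            for attempt in range(num_retries):
                try:
                    sftp.remove(fn)
                    break                # success: stop retrying this file
                except Exception as e:
                    print("delete of %s failed (attempt %d): %s"
                          % (fn, attempt + 1, e))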

Die Antwort (b-reg) wrote :

Nice catch!
--num-retries 1
worked for me as a temporary fix: remove-all-but-n-full finally worked as expected on an scp-backend.

Changed in duplicity:
status: New → Fix Committed
Changed in duplicity:
importance: Undecided → Medium
milestone: none → 0.6.22
Horst Schirmeier (horst) wrote :

The patch works for me; unfortunately it hasn't found its way into Ubuntu 13.04's duplicity-0.6.21 package yet. Thanks, Andreas!

Horst Schirmeier (horst) wrote :

Unfortunately still present in Ubuntu 13.10 (0.6.21-0ubuntu4.1).

Changed in duplicity:
status: Fix Committed → Fix Released