cinder ceph incremental backup taking full backup instead of incremental

Bug #1747601 reported by maestropandy
This bug affects 2 people
Affects: Cinder
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

This issue still seems to persist on Pike (devstack). I first took a full backup named "first", then took another full backup named "second", and finally took an incremental backup named "third"; see below.
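
For reference, here is a rough python-cinderclient equivalent of the steps above (a sketch only; the authenticated keystoneauth session "sess" and the client setup are assumptions, and the volume ID is taken from the listing below):

    # Sketch only: assumes an existing keystoneauth session named "sess".
    from cinderclient import client

    cinder = client.Client('3', session=sess)
    vol_id = '1ad556f3-6190-4658-9bb3-8ec72eca6ead'

    first = cinder.backups.create(vol_id, name='first')                    # full backup
    second = cinder.backups.create(vol_id, name='second')                  # full backup
    third = cinder.backups.create(vol_id, name='third', incremental=True)  # requested as incremental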

ubuntu@slave:~/devstack$ cinder backup-list
+--------------------------------------+--------------------------------------+-----------+--------+------+--------------+-----------+
| ID | Volume ID | Status | Name | Size | Object Count | Container |
+--------------------------------------+--------------------------------------+-----------+--------+------+--------------+-----------+
| 1c208286-d646-42ba-a641-3f794ed828ca | 1ad556f3-6190-4658-9bb3-8ec72eca6ead | available | second | 1 | 0 | backups |
| 8258e3bd-e045-48d6-9fb3-660b5689fafa | aaa08d8f-9015-4817-a033-c4b41ea36f1f | available | pandy1 | 1 | 0 | backups |
| 84a4a180-f4a1-419f-9e79-67ada8b5b4d7 | 1ad556f3-6190-4658-9bb3-8ec72eca6ead | available | first | 1 | 0 | backups |
| dc29b3aa-914b-41a1-b45f-333a95d90df2 | 1ad556f3-6190-4658-9bb3-8ec72eca6ead | available | third | 1 | 0 | backups |
+--------------------------------------+--------------------------------------+-----------+--------+------+--------------+-----------+
ubuntu@slave:~/devstack$ cinder backup-show third
+-----------------------+--------------------------------------+
| Property | Value |
+-----------------------+--------------------------------------+
| availability_zone | nova |
| container | backups |
| created_at | 2018-02-05T18:19:52.000000 |
| data_timestamp | 2018-02-05T18:19:52.000000 |
| description | None |
| fail_reason | None |
| has_dependent_backups | False |
| id | dc29b3aa-914b-41a1-b45f-333a95d90df2 |
| is_incremental | True |
| name | third |
| object_count | 0 |
| size | 1 |
| snapshot_id | None |
| status | available |
| updated_at | 2018-02-05T18:33:21.000000 |
| volume_id | 1ad556f3-6190-4658-9bb3-8ec72eca6ead |
+-----------------------+--------------------------------------+

When I checked in Ceph RBD by running sudo rbd du -p backups:

NAME PROVISIONED USED
volume-1ad556f3-6190-4658-9bb3-8ec72eca6ead.backup.1c208286-d646-42ba-a641-3f794ed828ca 1024M 1024M
<email address hidden> 1024M 0
volume-1ad556f3-6190-4658-9bb3-8ec72eca6ead.backup.base 1024M 0
volume-1ad556f3-6190-4658-9bb3-8ec72eca6ead.backup.dc29b3aa-914b-41a1-b45f-333a95d90df2 1024M 1024M
<email address hidden> 1024M 28672k
volume-aaa08d8f-9015-4817-a033-c4b41ea36f1f.backup.base 1024M 0
<TOTAL> 4096M 2076M

Note: "Third" backup along with original volume ID is 1024M, as below

volume-1ad556f3-6190-4658-9bb3-8ec72eca6ead.backup.dc29b3aa-914b-41a1-b45f-333a95d90df2 1024M 1024M

"Second" Backup also same

volume-1ad556f3-6190-4658-9bb3-8ec72eca6ead.backup.1c208286-d646-42ba-a641-3f794ed828ca 1024M 1024M

Revision history for this message
Gorka Eguileor (gorka) wrote :

This works just fine in master, and the only fix we have added to the Ceph backup driver since the latest stable/pike release addresses the problem that occurs when you delete one of the backups [1]; that fix is still waiting for reviews for the Pike backport.

Here are the 3 backups I've done from the same Ceph volume.

+--------------------------------------+--------------------------------------+-----------+-------+------+--------------+-----------+
| ID | Volume ID | Status | Name | Size | Object Count | Container |
+--------------------------------------+--------------------------------------+-----------+-------+------+--------------+-----------+
| 1594389c-0b96-4e01-92b3-bd4b2fb2d7c2 | 4bab675e-eeb8-4cdf-a2b0-9452bd732f5b | available | back2 | 1 | 0 | backups |
| 64dd5016-b1f1-489b-a5c2-2499bd361ea4 | 4bab675e-eeb8-4cdf-a2b0-9452bd732f5b | available | back3 | 1 | 0 | backups |
| a992d783-4ce8-4165-be70-de7b843865aa | 4bab675e-eeb8-4cdf-a2b0-9452bd732f5b | available | back | 1 | 0 | backups |
+--------------------------------------+--------------------------------------+-----------+-------+------+--------------+-----------+

And here is the storage info:

NAME PROVISIONED USED
<email address hidden> 1024M 32768k
<email address hidden> 1024M 0
<email address hidden> 1024M 0
volume-4bab675e-eeb8-4cdf-a2b0-9452bd732f5b.backup.base 1024M 0
<TOTAL> 1024M 32768k

[1] https://review.openstack.org/#/q/Ia9c29bb720152d42bec273202fa49ca4b6a41ce2

tags: added: ceph rbd
Revision history for this message
maestropandy (maestropandy) wrote :

@gorka: For Ocata, if I apply those commit changes to https://github.com/openstack/cinder/blob/stable/ocata/cinder/backup/drivers/ceph.py#L587 myself, will it work? It seems it should, but the backport was abandoned.

Can we use this in the Ocata release with those changes, or will there be compatibility issues?

Revision history for this message
Gorka Eguileor (gorka) wrote :

I think it would work, and I don't think there will be any compatibility issues, but you should test this in a lab setup and validate that basic creation and deletion work as expected.

Revision history for this message
Boxiang Zhu (bxzhu-5355) wrote :

@maestropandy, did you back up a volume that is in-use? If you request an incremental backup of an in-use volume, it will currently end up doing a full backup.
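
A quick way to check this is to look at the source volume's status; this sketch reuses the hypothetical authenticated client from the sketch in the bug description:

    # Sketch only: "cinder" is an authenticated cinderclient instance.
    vol = cinder.volumes.get('1ad556f3-6190-4658-9bb3-8ec72eca6ead')
    print(vol.status)  # 'in-use' would mean the backup was taken while attached,
                       # which per this thread currently results in a full backup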

Revision history for this message
Boxiang Zhu (bxzhu-5355) wrote :

BTW, is your backup driver Ceph?

Revision history for this message
Boxiang Zhu (bxzhu-5355) wrote :

Someone has fixed it recently.
Refer to https://bugs.launchpad.net/cinder/+bug/1578036

Revision history for this message
leemiracle (leemiracle) wrote :

If backup.snapshot_id is not None, the Ceph backup driver always does a full backup (do_full_backup).
File path: cinder/backup/drivers/ceph.py
Method: CephBackupDriver.backup
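
To make the reported condition concrete, here is a minimal sketch of the decision being described; the function name and its arguments are illustrative placeholders, not the driver's actual methods:

    # Illustrative sketch only -- not the real CephBackupDriver code.
    def choose_backup_mode(backup, source_is_rbd):
        """Return 'full' or 'incremental' following the behaviour described above."""
        # Incremental backups are only attempted when the source is an RBD volume.
        if not source_is_rbd:
            return 'full'
        # Reported behaviour: a backup based on a snapshot (backup.snapshot_id set,
        # e.g. the temporary snapshot taken for an in-use volume) is always full.
        if backup.snapshot_id is not None:
            return 'full'
        return 'incremental'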

information type: Public → Public Security
Revision history for this message
Jeremy Stanley (fungi) wrote :

When switching a bug from Public to Public Security, please explain why you suspect it represents a security vulnerability so the OpenStack Vulnerability Management team can assess the need for a possible security advisory.

information type: Public Security → Public