cinder ceph incremental backup taking full backup instead of incremental
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | New | Undecided | Unassigned |
Bug Description
It seems the issue still persists on Pike/devstack. I first took a full backup named "first", then took another full backup named "second", and then took a third backup with --incremental named "third"; see below:
ubuntu@
+------
| ID | Volume ID | Status | Name | Size | Object Count | Container |
+------
| 1c208286-
| 8258e3bd-
| 84a4a180-
| dc29b3aa-
+------
ubuntu@
+------
| Property | Value |
+------
| availability_zone | nova |
| container | backups |
| created_at | 2018-02-
| data_timestamp | 2018-02-
| description | None |
| fail_reason | None |
| has_dependent_
| id | dc29b3aa-
| is_incremental | True |
| name | third |
| object_count | 0 |
| size | 1 |
| snapshot_id | None |
| status | available |
| updated_at | 2018-02-
| volume_id | 1ad556f3-
+------
When I checked in Ceph RBD, I ran sudo rbd du -p backups:
NAME PROVISIONED USED
volume-
<email address hidden> 1024M 0
volume-
volume-
<email address hidden> 1024M 28672k
volume-
<TOTAL> 4096M 2076M
Note: the "third" backup, along with the original volume ID, shows 1024M, as below:
volume-
The "second" backup is the same:
volume-
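The symptom in the rbd du listing above can be checked mechanically: an "incremental" backup whose USED size equals the full provisioned size is suspect. A minimal sketch that parses rbd du -p backups style output and flags such entries (the sample names below are illustrative stand-ins, not the exact image names from this report):

```python
# Sketch: scan `rbd du`-style output and report entries whose USED size
# equals their PROVISIONED size -- the symptom reported here, where an
# "incremental" backup consumes as much space as a full one.
# Sample data is illustrative, not copied from the bug output.

def parse_size(text):
    """Convert rbd du sizes like '1024M', '28672k', or '0' to bytes."""
    units = {"k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if text[-1].isdigit():
        return int(text)
    return int(text[:-1]) * units[text[-1]]

def suspicious_backups(du_output):
    """Yield image names whose USED equals PROVISIONED (full-sized 'incrementals')."""
    for line in du_output.strip().splitlines()[1:]:  # skip the NAME/PROVISIONED/USED header
        name, provisioned, used = line.split()
        if name == "<TOTAL>":
            continue
        if parse_size(used) == parse_size(provisioned):
            yield name

sample = """\
NAME PROVISIONED USED
backup-one 1024M 0
backup-two 1024M 28672k
backup-three 1024M 1024M
<TOTAL> 3072M 1052M
"""

print(list(suspicious_backups(sample)))  # -> ['backup-three']
```

On the reporter's output, the "third" backup would be flagged this way, since its USED matches the 1024M volume size.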
This works just fine in master, and the only fix we have added to the Ceph backup driver since the latest stable/pike is for the problem when you delete one of the backups [1], which is still waiting for reviews on the Pike backport.
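For context, the Ceph backup driver only performs a differential (incremental) backup when it can find both an existing base image in the backup pool and a restore-point snapshot from the previous backup; otherwise it quietly falls back to a full copy. A minimal sketch of that decision, with the function name and flags being illustrative assumptions rather than the driver's real API:

```python
# Illustrative sketch (not the actual cinder code) of the fall-back
# behaviour in cinder's Ceph backup driver (cinder/backup/drivers/ceph.py):
# an incremental backup is only possible when a base image from a prior
# backup exists AND a matching restore-point snapshot can be found.
# Function name and arguments are assumptions for illustration.

def choose_backup_mode(base_image_exists: bool, restore_point_found: bool) -> str:
    if base_image_exists and restore_point_found:
        return "incremental"  # rbd export-diff from the last backup snapshot
    return "full"             # silently degrades to a full backup

# If the restore point is missing (for example after a backup snapshot
# was deleted), an "incremental" request still produces a full backup:
print(choose_backup_mode(True, False))  # -> full
```

This is consistent with the symptom above: when the driver cannot locate a usable restore point, the requested incremental backup ends up the same size as a full one.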
Here are the 3 backups I've done from the same Ceph volume.
+--------------------------------------+--------------------------------------+-----------+-------+------+--------------+-----------+
| ID                                   | Volume ID                            | Status    | Name  | Size | Object Count | Container |
+--------------------------------------+--------------------------------------+-----------+-------+------+--------------+-----------+
| 1594389c-0b96-4e01-92b3-bd4b2fb2d7c2 | 4bab675e-eeb8-4cdf-a2b0-9452bd732f5b | available | back2 | 1    | 0            | backups   |
| 64dd5016-b1f1-489b-a5c2-2499bd361ea4 | 4bab675e-eeb8-4cdf-a2b0-9452bd732f5b | available | back3 | 1    | 0            | backups   |
| a992d783-4ce8-4165-be70-de7b843865aa | 4bab675e-eeb8-4cdf-a2b0-9452bd732f5b | available | back  | 1    | 0            | backups   |
+--------------------------------------+--------------------------------------+-----------+-------+------+--------------+-----------+
And here is the storage info:
NAME PROVISIONED USED
4bab675e-eeb8-4cdf-a2b0-9452bd732f5b.backup.base 1024M 0
<email address hidden> 1024M 32768k
<email address hidden> 1024M 0
<email address hidden> 1024M 0
volume-
<TOTAL> 1024M 32768k
[1] https://review.openstack.org/#/q/Ia9c29bb720152d42bec273202fa49ca4b6a41ce2