[SRU] error deleting cloned volumes and parent at the same time when using ceph

Bug #2083061 reported by Rodrigo Barbieri
This bug affects 1 person
Affects                Status        Importance   Assigned to   Milestone
Cinder                 New           Undecided    Unassigned
Ubuntu Cloud Archive   New           Undecided    Unassigned
  Antelope             New           Undecided    Unassigned
  Bobcat               New           Undecided    Unassigned
  Yoga                 In Progress   Undecided    Unassigned
cinder (Ubuntu)        New           Undecided    Unassigned
  Jammy                New           Undecided    Unassigned

Bug Description

******* SRU TEMPLATE AT THE BOTTOM **********

Affects: bobcat and older

A race condition when deleting cloned volumes at the same time as their parent leaves the volumes in the error_deleting state. It happens because the code that looks for the parent in [3] may find either the original volume or the "<volume>.deleted" renamed volume if the parent has already been marked for deletion. When the parent and the child are deleted at the same time, the child may see the parent volume before it is marked for deletion, and then in [4] it fails to find it again because it is gone (renamed to "<volume>.deleted").
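
A minimal, self-contained sketch of that time-of-check/time-of-use window (plain Python that only simulates the flow; the in-memory pool dict and helper names are hypothetical stand-ins, not the driver code):

    # Thread A deletes the child: it looks up the parent name, then later uses
    # that name again. Thread B deletes the parent in between; because the parent
    # still has children it is only renamed to "<volume>.deleted", so thread A's
    # second lookup fails -- the equivalent of the rbd.ImageNotFound raised at [4].
    import threading
    import time

    pool = {
        "volume-parent": {"children": ["volume-child"]},
        "volume-child": {"parent": "volume-parent"},
    }

    def delete_parent():
        time.sleep(0.1)  # runs while the child deletion is in flight
        pool["volume-parent.deleted"] = pool.pop("volume-parent")  # "marked for deletion"

    def delete_child():
        parent_name = pool["volume-child"]["parent"]  # check: parent seen as "volume-parent"
        time.sleep(0.2)                               # window in which the parent is renamed
        pool.pop("volume-child")
        try:
            pool[parent_name]                         # use: the name is now stale
        except KeyError:
            print("equivalent of ImageNotFound: %s is gone" % parent_name)

    threads = [threading.Thread(target=delete_child), threading.Thread(target=delete_parent)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()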

Steps to reproduce:

1) openstack volume create --size 1 v1

Wait for the volume to be created and available

2) for i in {1..9}; do openstack volume create d$i --source v1 --size 1;done

Wait for all volumes to be created and available

3) openstack volume delete $(openstack volume list --format value -c ID | sort | xargs)

Some volumes may be in error_deleting state.

Workaround: Reset volume state and try to delete again.

Solutions:

a) The issue does not happen in Caracal and newer because of commit [1], which refactors the code. I tried to reproduce it in Caracal with 50 volumes, including grandparent volumes, and could not. If we could backport this fix as far back as Yoga, it would address the problem for our users.

b) A single line of code at [2] can address the problem in Bobcat and older releases by adding a retry decorator:

    @utils.retry(rbd.ImageNotFound, 2)
    def delete_volume(self, volume: Volume) -> None:

The retry causes delete_volume to be re-run when the ImageNotFound exception is thrown at [4]; on the second attempt the lookup at [3] finds the "<volume>.deleted" image, resolving the race condition. It is simpler than adding more complex handling directly at [4] where the error happens.
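
For illustration, the retry pattern can be approximated in plain Python as follows (a generic sketch of the pattern only, not the actual cinder.utils.retry implementation):

    import functools

    def retry(exc_type, attempts):
        """Re-run the decorated function up to `attempts` times on `exc_type`."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                for attempt in range(1, attempts + 1):
                    try:
                        return func(*args, **kwargs)
                    except exc_type:
                        if attempt == attempts:
                            raise  # attempts exhausted: let the error propagate
            return wrapper
        return decorator

On the second attempt delete_volume starts over, re-runs the parent lookup at [3], finds the "<volume>.deleted" image and cleans up the references instead of failing.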

[1] https://github.com/openstack/cinder/commit/1a675c9aa178c6d9c6ed10fd98f086c46d350d3f

[2] https://github.com/openstack/cinder/blob/5b3717f8bfa69c142778ffeabfc4ab91f1f23581/cinder/volume/drivers/rbd.py#L1371

[3] https://github.com/openstack/cinder/blob/5b3717f8bfa69c142778ffeabfc4ab91f1f23581/cinder/volume/drivers/rbd.py#L1401

[4] https://github.com/openstack/cinder/blob/5b3717f8bfa69c142778ffeabfc4ab91f1f23581/cinder/volume/drivers/rbd.py#L1337

===================================================
SRU TEMPLATE
============

[Impact]

Due to a race condition, attempting to delete multiple volumes that include both a parent and its children can result in one or more volumes being stuck in error_deleting. The reason is that the parent image is renamed as it is deleted, so if the deletion of a child is already in flight, the parent reference it looked up becomes stale halfway through and the deletion fails. The volumes can still be deleted afterwards by resetting their state and retrying, but the user experience is cumbersome.

Upstream has fixed the issue in Caracal by refactoring the delete method with significant behavioural changes (see comment #2), and has backported the refactor to Antelope. Although the refactored code also applies to Yoga, it is preferable to implement a simpler fix for this specific problem in Yoga. The simpler fix is a retry decorator that forces the delete method to re-run, picking up the updated reference of the parent being deleted and therefore successfully deleting the children.

[Test case]

1) Deploy Cinder with Ceph
2) Create a parent volume

openstack volume create --size 1 v1

3) Create the child volumes

for i in {1..9}; do openstack volume create d$i --source v1 --size 1;done

4) Wait for all volumes to be created and available

5) Delete all the volumes

for item in $(openstack volume list --format value -c ID | sort | xargs); do openstack volume delete $item & done

6) Check for volumes stuck in error_deleting; if there are none, repeat steps 2-5

7) Confirm error message rbd.ImageNotFound in the logs

8) Install fixed package

9) Repeat steps 2-5 and confirm that no volumes are stuck in error_deleting and that the following messages appear in the log:

"no longer exists in backend"

"Retrying cinder.volume.drivers.rbd.RBDDriver.delete_volume.<locals>._delete_volume_with_retry ... as it raised ImageNotFound: [errno 2] RBD image not found (error opening image ... at snapshot None)"

If the messages are not in the logs after the new delete command, retry steps 2-5 until they appear while still not having any volumes stuck in error_deleting (in other words, all volumes deleted successfully). A sketch automating steps 2-5 is included below.
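
The sketch below automates steps 2-5 (assumptions: the openstack CLI is installed, credentials are exported in the environment, and crude sleeps stand in for proper status polling; adjust the waits to your deployment):

    import subprocess
    import time

    def openstack(*args):
        """Run an openstack CLI command and return its stdout."""
        return subprocess.run(("openstack",) + args, check=True,
                              capture_output=True, text=True).stdout

    # Steps 2-4: one parent volume and nine clones.
    openstack("volume", "create", "--size", "1", "v1")
    time.sleep(15)  # crude wait for v1 to become available
    for i in range(1, 10):
        openstack("volume", "create", "--source", "v1", "--size", "1", "d%d" % i)
    time.sleep(60)  # crude wait for the clones to become available

    # Step 5: delete everything in parallel to provoke the race.
    ids = sorted(openstack("volume", "list", "--format", "value", "-c", "ID").split())
    procs = [subprocess.Popen(["openstack", "volume", "delete", vid]) for vid in ids]
    for p in procs:
        p.wait()

    # Step 6: report anything stuck in error_deleting.
    time.sleep(30)
    rows = openstack("volume", "list", "--format", "value",
                     "-c", "ID", "-c", "Status").splitlines()
    print("stuck volumes:", [r for r in rows if "error_deleting" in r])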

[Regression Potential]

For Bobcat and Antelope there is reasonable regression potential because of the complexity of refactor [1] (see comment #2); however, discussions in previous upstream meetings and upstream CI runs of the Caracal, Bobcat and Antelope backports, which exercise the refactor, provide some level of reassurance. For Yoga, we consider there to be no regression potential with the simpler retry decorator fix.

[Other Info]

None.

Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote :

To backport [1] cleanly to Yoga, commits [2], [3] and [4] would also need to be applied; however, [3] and [4] are not actually needed because the methods they touch are completely removed by [1]. Therefore only [2] is needed, to avoid unit test conflicts.

[1] https://github.com/openstack/cinder/commit/1a675c9aa178c6d9c6ed10fd98f086c46d350d3f

[2] https://github.com/openstack/cinder/commit/e1138a126f80f9cf5a38cd49066133baba1a0fef

[3] https://github.com/openstack/cinder/commit/5179e4f6bf49795d2aa4c8d0807a502ee1561f60

[4] https://github.com/openstack/cinder/commit/b235048d6dc49fee0f3ad83d3216a42c23b69a20

Revision history for this message
Dan Hill (hillpd) wrote :

My main concern with backporting [1] is that it changes the volume delete behavior to use `trash_move()` as a fallback mechanism. Existing workflows and operators may not be aware that the trash namespace needs to be monitored for clean-up.
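
A rough illustration of that fallback idea (not the code from [1]; it only sketches how the RBD Python bindings' trash_move() could be used when a plain remove is not possible, and the helper name and already-opened ioctx are assumptions):

    import rbd

    def delete_or_trash(ioctx, name, delay=0):
        """Hypothetical helper: try a plain remove, else park the image in the RBD trash."""
        try:
            rbd.RBD().remove(ioctx, name)
        except (rbd.ImageBusy, rbd.ImageHasSnapshots):
            # The image cannot be removed right now (e.g. watchers or linked
            # clones); moving it to the trash namespace defers the clean-up,
            # which is why operators then need to monitor and purge the trash.
            rbd.RBD().trash_move(ioctx, name, delay)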

I'll also note that [1] changes delete behavior to perform additional flatten operations when necessary. This will make some delete operations much more costly in terms of performance. The implementation mitigates this risk by limiting the number of concurrent flatten operations, but it still introduces the potential for unexpected backend load.

After reviewing both options, I agree that the retry mechanism makes more sense as a backport candidate. Adding the retry decorator around `delete_volume` provides a straightforward solution to the reported race condition and avoids significant changes to the delete volume behavior that could potentially cause utilization and performance issues with existing deployments.

Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote :

For reference this is the error and stack trace:

parent: 2078a5c0-4272-46f3-b95b-d89d62da67af
child: 12d02019-17b1-45a3-9026-aed36873edaf

2024-11-20 15:43:43.583 38606 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/cinder/volume/manager.py", line 981, in delete_volume
2024-11-20 15:43:43.583 38606 ERROR oslo_messaging.rpc.server self.driver.delete_volume(volume)
2024-11-20 15:43:43.583 38606 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py", line 1350, in delete_volume
2024-11-20 15:43:43.583 38606 ERROR oslo_messaging.rpc.server self._delete_clone_parent_refs(client, parent, parent_snap)
2024-11-20 15:43:43.583 38606 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py", line 1231, in _delete_clone_parent_refs
2024-11-20 15:43:43.583 38606 ERROR oslo_messaging.rpc.server parent_rbd = self.rbd.Image(client.ioctx, parent_name)
2024-11-20 15:43:43.583 38606 ERROR oslo_messaging.rpc.server File "rbd.pyx", line 2896, in rbd.Image.__init__
2024-11-20 15:43:43.583 38606 ERROR oslo_messaging.rpc.server rbd.ImageNotFound: [errno 2] RBD image not found (error opening image b'volume-2078a5c0-4272-46f3-b95b-d89d62da67af' at snapshot None)
2024-11-20 15:43:43.583 38606 ERROR oslo_messaging.rpc.server
2024-11-20 15:43:43.593 38606 DEBUG cinder.volume.drivers.rbd [req-5c4144c1-c310-4f05-ba15-b05dda6d61af 70958fca143047a583e91795ff460152 5c20c2e1c8ed4948923449807a40b3e7 - - -] volume is a clone so cleaning references delete_volume /usr/lib/python3/dist-packages/cinder/volume/drivers/rbd.py:1348

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (unmaintained/yoga)

Fix proposed to branch: unmaintained/yoga
Review: https://review.opendev.org/c/openstack/cinder/+/935976

Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote :

forgot the DEP3 tags. Deleting debdiffs and re-creating
