test_create_server_from_volume_snapshot (intermittently?) fails in devstack-plugin-ceph job - Delete snapshot failed, due to snapshot busy.: SnapshotIsBusy: deleting snapshot snapshot-cf654962-f53a-43e5-87eb-4c73bfd70804 that has dependent volumes
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | New | Undecided | Unassigned |
Bug Description
Seen here:
traceback-1: {{{
Traceback (most recent call last):
File "tempest/
return func(*args, **kwargs)
File "tempest/
resp, body = self.delete(url)
File "tempest/
return self.request(
File "tempest/
method, url, extra_headers, headers, body, chunked)
File "tempest/
self.
File "tempest/
raise exceptions.
tempest.
Details: {u'message': u'Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer.', u'code': 400}
}}}
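The 400 above is Cinder refusing to delete a volume that still has dependents (a snapshot, or an attachment to the test server). Below is a minimal sketch, not the tempest code, of a teardown that removes resources in reverse dependency order and waits for each delete to finish. It assumes openstacksdk with a clouds.yaml entry named "devstack"; the server and volume IDs are placeholders.
{{{
import openstack

# Assumptions (not from the bug report): a clouds.yaml entry named "devstack"
# and placeholder SERVER_ID / VOLUME_ID values.
conn = openstack.connect(cloud="devstack")

server = conn.compute.get_server("SERVER_ID")
volume = conn.block_storage.get_volume("VOLUME_ID")

# 1. Remove the server first so the boot volume is no longer attached.
conn.compute.delete_server(server)
conn.compute.wait_for_delete(server)

# 2. Remove snapshots of the volume before the volume itself; otherwise Cinder
#    returns the "must not ... have snapshots" 400 seen above.
for snap in conn.block_storage.snapshots():
    if snap.volume_id == volume.id:
        conn.block_storage.delete_snapshot(snap)
        conn.block_storage.wait_for_delete(snap)

# 3. Only now delete the volume.
conn.block_storage.delete_volume(volume)
conn.block_storage.wait_for_delete(volume)
}}}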
traceback-2: {{{
Traceback (most recent call last):
File "tempest/
raise exceptions.
tempest.
Details: (TestVolumeBoot
}}}
In the c-vol logs:
Jul 05 20:36:03.843973 ubuntu-
11 hits in 7 days, all failures. I'm not sure if this is a new regression on master or not; I don't see anything obvious related to rbd and snapshots in cinder/
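For context on the SnapshotIsBusy error: with the RBD backend, a volume created from a snapshot is a copy-on-write clone, and librbd will not allow a snapshot with live clones to be unprotected or removed until those clones are flattened (or deleted). The sketch below only illustrates that behaviour with the python rados/rbd bindings; it is not the Cinder RBD driver code, and the pool, image, and snapshot names are placeholders.
{{{
import rados
import rbd

POOL = "volumes"             # assumed Cinder RBD pool name
PARENT = "volume-parent"     # placeholder parent image name
SNAP = "snapshot-cf654962"   # placeholder snapshot name (protected, as Cinder does)

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        # Open the parent image at the snapshot to see which clones depend on it.
        with rbd.Image(ioctx, PARENT, snapshot=SNAP) as snap_view:
            children = snap_view.list_children()  # e.g. [("volumes", "volume-clone")]

        with rbd.Image(ioctx, PARENT) as parent:
            if children:
                # While these clones exist the snapshot cannot be unprotected or
                # removed -- this is what Cinder surfaces as SnapshotIsBusy.
                # Flattening each clone copies the shared data into it and
                # detaches it from the parent snapshot (assuming the clones
                # live in the same pool).
                for _pool, child_name in children:
                    with rbd.Image(ioctx, child_name) as child:
                        child.flatten()
            parent.unprotect_snap(SNAP)
            parent.remove_snap(SNAP)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
}}}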
Tags added: ceph gate-failure rbd