Volumes can't be managed (e.g. attached) if the cinder-volume host that originally created them becomes unavailable
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Confirmed | Undecided | Unassigned |
Bug Description
Steps to reproduce:
1. Configure a cinder node with the NFS driver (or any other shared-storage driver)
2. Configure an additional cinder node (running cinder-volume only) with the same driver and point it at the same share
3. Create a number of new volumes (6 or 8) so that the cinder-scheduler distributes requests across both cinder-volume hosts; both will create volumes in the same share
4. Switch off cinder-volume on one of the cinder-volume hosts
At this point, despite all volumes being accessible from the remaining cinder-volume host, some volumes will not be usable. I suspect the scheduler issues each request to the host that originally created the volume, as recorded in the database.
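For reference, the shared-storage setup in the steps above might look roughly like this on both cinder-volume nodes; the backend name, file paths, and share address are hypothetical and will vary per deployment:

```ini
# /etc/cinder/cinder.conf on BOTH cinder-volume nodes (illustrative values)
[DEFAULT]
enabled_backends = nfs1

[nfs1]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
volume_backend_name = nfs1
```

with `/etc/cinder/nfs_shares` on both nodes listing the same export (hypothetical address), e.g. `192.168.0.10:/export/cinder`. Even though both nodes can reach every volume file on the share, each volume row in the database records only the one host that created it.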
Changed in cinder:
status: Invalid → Confirmed
Correct, this is by design at the moment. An H/A (active/active or active/passive) volume manager has been discussed but not yet implemented.
There's a hack where you can force the host field of two volume-manager services to be the same and you get a sort of H/A, but this is not a supported/recommended configuration. Google 'ceph cinder H/A' to find details on the hack, if memory serves.
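For completeness, the hack mentioned above amounts to overriding the per-node host identity in cinder.conf so that both services register under the same name in the database; the value shown is arbitrary, and this is a sketch of an unsupported configuration, not a recommendation:

```ini
# /etc/cinder/cinder.conf on BOTH cinder-volume nodes
[DEFAULT]
# Force both cinder-volume services to report the same host identity,
# so volumes created by either node can be managed by whichever node
# is still running (unsupported hack).
host = cinder-cluster-1
```

Because volume rows then all carry the same host value, requests are no longer pinned to the node that created the volume; the trade-off is that both services now consume from the same message queue, with the coordination problems that implies.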