Unable to have multiple cinder-volume services in same availability zone

Bug #1091481 reported by Sam Morrison
This bug affects 3 people
Affects: Cinder
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

I have 2 cinder-volume services in the same availability zone. If I create a volume, its record is populated with the host of the cinder-volume service that handled the request.

If that host is dead or no longer in service, you can no longer perform operations on this volume.

To fix this, I had to manually edit the DB and change the host field for the volume.
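
For reference, a minimal sketch of that manual workaround, assuming Cinder's default SQL schema has a "volumes" table with a "host" column and that the second node can actually reach the same backend; the connection string and host names below are placeholders, so verify everything against your own deployment first:

# Hedged sketch only; not taken from the bug report.
# Moves volumes owned by the dead "cinder-1" service over to "cinder-2".
import sqlalchemy

engine = sqlalchemy.create_engine("mysql://cinder:password@dbhost/cinder")  # placeholder DSN
with engine.begin() as conn:
    conn.execute(
        sqlalchemy.text("UPDATE volumes SET host = :new_host WHERE host = :old_host"),
        {"new_host": "cinder-2", "old_host": "cinder-1"},
    )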

Tags: ops
Revision history for this message
Huang Zhiteng (zhiteng-huang) wrote :

Hi Sam,

I'm not quite sure I understand your problem. Are you saying that an out-of-service cinder-volume isn't able to serve new requests?

Changed in cinder:
status: New → Incomplete
Revision history for this message
Sam Morrison (sorrison) wrote :

I'll be a bit clearer:

I have 2 cinder-volume servers, cinder-1 and cinder-2; they both talk to the same backend (in our case NetApp).

I create a volume and it gets picked up by cinder-1. Everything works fine. In the DB the volume entry has cinder-1 recorded in the host field. I'm not sure why this is needed; why does it matter which cinder-volume host handled the request?

I shut down cinder-1

Now if I try, for example, to delete the volume, I can't.

Please let me know if you need more info.

Changed in cinder:
status: Incomplete → New
Revision history for this message
John Griffith (john-griffith) wrote :

Hi Sam,

Yeah, considering your setup it seems odd, but that is how it's designed to work. Each cinder node has its own backend; even if in reality it's the same device, the Cinder service doesn't *know* this.

Your expectation is reasonable though, and I think it would tie in with some other interesting HA type ideas that have been kicking around. Maybe we could get something in before Grizzly releases that would provide the sort of behavior you expected here.

As it stands right now though this isn't really a bug, perhaps a feature request...

Revision history for this message
Michael Still (mikal) wrote : Re: [Bug 1091481] Re: Unable to have multiple cinder-volume services in same availability zone

Sam -- what about setting up some sort of HA arrangement in front of
the cinder servers?

Revision history for this message
Sam Morrison (sorrison) wrote :

OK, I will get a blueprint up next year.

I think the way it works now only really makes sense for people using LVM on the local cinder-volume host; is that correct?

So we could possibly switch on the use_local_volume flag for this.
Anyway, I'll look into this more next year.

Michael: I don't think it needs another layer. With some slight code modifications we should be able to make it HA in certain setups/drivers.

Revision history for this message
Huang Zhiteng (zhiteng-huang) wrote :

The host field in the volumes table indicates which cinder-volume service serves requests related to that volume. It is needed since cinder-volume is, in most cases, the only interface through which cinder-api talks to volume back-ends.

So from your description, I think the behavior is expected. It's not a bug.

Why are you using two cinder-volume servers with the same back-end? Is it for HA purposes? Then I guess you should somehow make these two volume servers transparent to the cinder scheduler (e.g. make one primary and the other secondary, and use things like dynamic DNS to switch between them).
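
To illustrate why a dead host blocks operations, here is a rough, illustrative sketch (not actual Cinder code) of the routing behaviour described above: volume operations are assumed to be cast to a per-host queue derived from the volume's host field, so only the service whose configured host matches will pick them up.

# Illustrative only, under the assumption of per-host message routing.
def queue_for_volume(volume, topic="cinder-volume"):
    # Direct casts are assumed to go to a queue named "<topic>.<host>",
    # e.g. "cinder-volume.cinder-1", which only the cinder-volume service
    # whose configured "host" matches will consume.
    return "%s.%s" % (topic, volume["host"])

# A delete request for a volume created by cinder-1 targets this queue;
# if cinder-1 is down, nothing consumes the message and the operation stalls.
print(queue_for_volume({"host": "cinder-1"}))  # cinder-volume.cinder-1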

Changed in cinder:
status: New → Invalid
Revision history for this message
Vish Ishaya (vishvananda) wrote :

Another option is to give both cinder servers the same name for host:

<in both cinder.confs>
host=netappproxy

I think this will allow direct messages to be picked up by either worker. Note that this will only work if both servers are talking to the same backend. I would NOT try this using the normal iSCSI backend. There may also be subtle issues with this, so I suggest you test it first.
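
Expanding that into a minimal sketch (assuming both nodes really do point at the same NetApp backend; "netappproxy" is just the shared name suggested above):

# /etc/cinder/cinder.conf on BOTH cinder-volume nodes
[DEFAULT]
# Same shared value on both nodes, so messages addressed to the volume's
# host can be picked up by whichever cinder-volume worker is running.
host=netappproxy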

Revision history for this message
Michael Still (mikal) wrote :

This bug looks to be a duplicate of 1028718 to me.

tags: added: ops