volume detaching / deleting error after controller shutdown

Bug #1267460 reported by mauro
Affects: Fuel for OpenStack
Status: Incomplete
Importance: Undecided
Assigned to: Miroslav Anashkin
Milestone: 4.1

Bug Description

Fuel deployment 4.0
Scenario: 3 controllers (node-4, node-5, node-6; Ceph) + 3 compute nodes (node-1, node-2, node-3)

Added tenant "mauro", and several VMs and volumes were created (volumes attached to the VMs).
Basic features (DHCP, volume attach/detach and create/delete, instance attach/detach, router) work well.

To check cluster behaviour, node-6 was manually powered off (simulated crash).

The expectation is that OpenStack functionality is not affected by a single controller crash, since 2 controllers remain active, but:

The existing volumes cannot be detached or deleted, via either the CLI or the Horizon dashboard.

Running cinder force-delete <volume-id> doesn't help.

Manually restarting the openstack-cinder-* processes on the remaining nodes doesn't help.

Snapshot in attachment.

Revision history for this message
mauro (maurof) wrote :

Cinder log in attachment.

Snapshot available.

Revision history for this message
mauro (maurof) wrote :

[root@node-5 log]# ceph -s
  cluster 3e672054-8c1d-4b60-bb93-bf4d03d2d82f
   health HEALTH_WARN 1 mons down, quorum 0,1 node-4,node-5
   monmap e3: 3 mons at {node-4=192.168.121.6:6789/0,node-5=192.168.121.7:6789/0,node-6=192.168.121.8:6789/0}, election epoch 34, quorum 0,1 node-4,node-5
   osdmap e157: 18 osds: 12 up, 12 in
    pgmap v49284: 492 pgs: 492 active+clean; 17744 MB data, 59848 MB used, 29721 GB / 29780 GB avail; 26535B/s wr, 2op/s
   mdsmap e1: 0/0/1 up
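The `ceph -s` output above shows a degraded but still quorate cluster (2 of 3 mons in quorum, 12 of 18 OSDs up), so Ceph itself should still serve I/O. As an aside, this can be checked mechanically from a saved dump; a minimal sketch, where the heredoc simply reproduces the lines pasted above and the file path and parsing are illustrative assumptions, not a Ceph tool:

```shell
# Count monitors in quorum from a saved `ceph -s` dump (format as pasted above).
# Illustration only: the heredoc reproduces the relevant lines from this report.
cat > /tmp/ceph_status.txt <<'EOF'
 health HEALTH_WARN 1 mons down, quorum 0,1 node-4,node-5
 monmap e3: 3 mons at {node-4=192.168.121.6:6789/0,node-5=192.168.121.7:6789/0,node-6=192.168.121.8:6789/0}, election epoch 34, quorum 0,1 node-4,node-5
EOF

# The token after "quorum" on the health line lists the member ranks, e.g. "0,1".
in_quorum=$(awk '/health/ {for (i = 1; i <= NF; i++) if ($i == "quorum") print $(i + 1)}' /tmp/ceph_status.txt | tr ',' '\n' | wc -l | tr -d ' ')
# The number before "mons" on the monmap line is the configured monitor count.
total=$(awk '/monmap/ {for (i = 1; i < NF; i++) if ($(i + 1) == "mons") print $i}' /tmp/ceph_status.txt)
echo "$in_quorum of $total monitors in quorum"
```

With the output above this prints "2 of 3 monitors in quorum", i.e. the monitor majority survived the node-6 shutdown.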

Binary Host Zone Status State Updated_At
nova-cert node-4.prisma internal enabled :-) 2014-01-09 14:13:07
nova-consoleauth node-4.prisma internal enabled :-) 2014-01-09 14:13:07
nova-scheduler node-4.prisma internal enabled :-) 2014-01-09 14:13:01
nova-conductor node-4.prisma internal enabled :-) 2014-01-09 14:13:07
nova-cert node-5.prisma internal enabled :-) 2014-01-09 14:13:03
nova-consoleauth node-5.prisma internal enabled :-) 2014-01-09 14:13:03
nova-scheduler node-5.prisma internal enabled :-) 2014-01-09 14:13:03
nova-conductor node-5.prisma internal enabled :-) 2014-01-09 14:13:07
nova-cert node-6.prisma internal enabled XXX 2014-01-09 10:20:59
nova-consoleauth node-6.prisma internal enabled XXX 2014-01-09 10:20:56
nova-scheduler node-6.prisma internal enabled XXX 2014-01-09 10:20:58
nova-conductor node-6.prisma internal enabled XXX 2014-01-09 10:20:55
nova-compute node-2.prisma nova enabled :-) 2014-01-09 14:13:06
nova-compute node-3.prisma nova enabled :-) 2014-01-09 14:13:06
nova-compute node-1.prisma nova enabled :-) 2014-01-09 14:13:06
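The service listing above can likewise be filtered mechanically to isolate the dead services (state "XXX", all on node-6). A small sketch; the heredoc reproduces a few rows from the listing, and the file path and column positions are assumptions:

```shell
# List services reported down (state "XXX") in a saved service-list dump
# of the format pasted above: binary, host, zone, status, state, timestamp.
cat > /tmp/services.txt <<'EOF'
nova-scheduler node-5.prisma internal enabled :-) 2014-01-09 14:13:03
nova-cert node-6.prisma internal enabled XXX 2014-01-09 10:20:59
nova-conductor node-6.prisma internal enabled XXX 2014-01-09 10:20:55
EOF

# Column 5 holds the state; print "binary@host" for every down service.
down=$(awk '$5 == "XXX" {print $1 "@" $2}' /tmp/services.txt)
echo "$down"
```

Run against the full listing, this reports exactly the four node-6.prisma services, consistent with only the powered-off controller being down.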

Revision history for this message
Vladimir Kuklin (vkuklin) wrote :

We need full diagnostic snapshot of the cluster. Please post it here.

Changed in fuel:
status: New → Incomplete
Revision history for this message
Mike Scherbakov (mihgen) wrote :

Miroslav, can you try to reproduce it please?

Changed in fuel:
assignee: nobody → Miroslav Anashkin (manashkin)
milestone: none → 4.1
Revision history for this message
Miroslav Anashkin (manashkin) wrote : Re: [Bug 1267460] Re: volume detaching / deleting error after controller shutdown

A bit later.

My workstation is currently loaded with Expedia issues.

Kind regards,
Miroslav

On Fri, Jan 31, 2014 at 11:43 AM, Mike Scherbakov <
<email address hidden>> wrote:

> Miroslav, can you try to reproduce it please?

