Server in status "Shutoff" and power status "Shutdown" after blade reboot

Bug #1384198 reported by Willie Nelligan
Affects            | Status       | Importance | Assigned to       | Milestone
-------------------|--------------|------------|-------------------|----------
Mirantis OpenStack | Fix Released | Medium     | Alexander Gubanov |
5.0.x              | Won't Fix    | Medium     | MOS Nova          |
5.1.x              | Won't Fix    | Medium     | MOS Nova          |
6.0.x              | Won't Fix    | Medium     | Alexander Gubanov |
6.1.x              | Fix Released | Medium     | Alexander Gubanov |

Bug Description

A VM is created on node-7 with the nova boot command.
root@node-2:/var/log# nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+-------------------+
| cc70be0e-fc5e-4f4f-b260-01a055dff09c | MOS51 | ACTIVE | - | Running | private=10.0.10.2 |
+--------------------------------------+---------+--------+------------+-------------+-------------------+

The VM is in the correct state.

Log in to node-7 and execute a "reboot" of the blade.
The blade recovers and new VMs can be created, but the VM that existed at the time of the reboot goes into status Shutoff and power state Shutdown.
The VM does not recover on its own.
root@node-2:/var/log# nova list
+--------------------------------------+---------+---------+------------+-------------+-------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+---------+------------+-------------+-------------------+
| cc70be0e-fc5e-4f4f-b260-01a055dff09c | MOS51 | SHUTOFF | - | Shutdown | private=10.0.10.2 |
+--------------------------------------+---------+---------+------------+-------------+-------------------+

Workaround:
Execute a hard reboot of the VM and it recovers to the correct status and power state.
nova reboot --hard cc70be0e-fc5e-4f4f-b260-01a055dff09c

Configuration data:
HP C7000 node with MOS 5.1 installed.

[root@fuel ~]# fuel node list
id | status | name | cluster | ip | mac | roles | pending_roles | online
---|--------|------------------|---------|---------------|-------------------|---------------------------|---------------|-------
6 | ready | Untitled (00:18) | 1 | 192.168.3.105 | 72:f0:a1:89:ae:4b | compute | | True
1 | ready | Untitled (00:04) | 1 | 192.168.3.100 | 4e:90:fb:ee:ef:4d | cinder, controller, mongo | | True
5 | ready | Untitled (00:14) | 1 | 192.168.3.104 | b2:1a:8f:ad:d6:4f | compute | | True
2 | ready | Untitled (00:08) | 1 | 192.168.3.101 | 72:f5:c6:c7:12:48 | cinder, controller, mongo | | True
4 | ready | Untitled (00:10) | 1 | 192.168.3.103 | 2a:1b:0b:7c:6f:45 | compute | | True
3 | ready | Untitled (00:0c) | 1 | 192.168.3.102 | 22:63:ec:e1:b0:40 | cinder, controller, mongo | | True
7 | ready | Untitled (00:1c) | 1 | 192.168.3.106 | ae:59:66:7b:31:48 | compute | | True
[root@fuel ~]#

Tags: nova
Revision history for this message
Willie Nelligan (willie-nelligan) wrote :

Setting the resume_guests_state_on_host_boot=True parameter in /etc/nova/nova.conf on your host(s) and restarting the nova services fixes the issue.
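As a sketch (not part of the original report), the file change could also be applied programmatically. This assumes nova.conf follows standard INI syntax with the option in the [DEFAULT] section, as shown in the comments below; the helper name is hypothetical:

```python
import configparser
import io

def enable_resume_on_boot(conf_text):
    """Return nova.conf contents with resume_guests_state_on_host_boot
    forced to True in the [DEFAULT] section (nova.conf is INI-formatted)."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    # Idempotent: adds the key if missing, overwrites it if already present.
    cfg["DEFAULT"]["resume_guests_state_on_host_boot"] = "True"
    out = io.StringIO()
    cfg.write(out)
    return out.getvalue()

# Example: a minimal [DEFAULT] section gains the option; existing keys survive.
print(enable_resume_on_boot("[DEFAULT]\nverbose = True\n"))
```

After rewriting the file, the nova services on the host must still be restarted for the option to take effect.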

Changed in mos:
status: New → Triaged
tags: added: nova
Revision history for this message
Alexander Gubanov (ogubanov) wrote :

resume_guests_state_on_host_boot=True seems to be a better default value

Changed in mos:
importance: Undecided → Medium
milestone: none → 6.0
assignee: nobody → MOS Nova (mos-nova)
Revision history for this message
Dmitry Borodaenko (angdraug) wrote :

Not high enough priority to backport to maintenance releases.

Changed in mos:
status: Triaged → Won't Fix
Changed in mos:
milestone: 6.0 → 6.0.1
Revision history for this message
Roman Podoliaka (rpodolyaka) wrote :
no longer affects: mos/7.0.x
Revision history for this message
Alexander Gubanov (ogubanov) wrote :

I have verified this option on MOS 6.1; the change is already merged.

root@node-5-new:~# grep 'resume' /etc/nova/nova.conf
resume_guests_state_on_host_boot=True
