Instance boot fails if Nova exceeds a quota in Cinder, leaving created volumes in an attached state
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Mirantis OpenStack | Status tracked in 10.0.x | | |
10.0.x | Confirmed | Medium | MOS Nova |
9.x | Confirmed | Medium | MOS Nova |
Bug Description
Detailed bug description:
Before starting the boot process, Nova does not check whether it has enough resources to finish booting the new instance. For example, when a Cinder quota limit is in place and we boot a new instance with several volumes, Nova might exceed the allowed number of Cinder volumes. After the failed instance is deleted, the newly created volumes cannot be deleted properly. On the Ceph backend, the newly created volume gets stuck in the "attached" status even though the failed instance is already deleted. On the EMC VNX backend, volumes cannot be deleted either.
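The failure mode can be sketched with a minimal model (illustrative Python only; the class and function names are hypothetical, not Nova or Cinder internals). Because volumes are reserved one request at a time rather than checked up front, a multi-volume boot can pass the first reservation and fail the second, leaving the first volume behind in "attached" status:

```python
class QuotaExceeded(Exception):
    pass

class FakeCinder:
    """Tracks a per-tenant volume quota the way the bug report describes."""
    def __init__(self, limit, in_use):
        self.limit = limit
        self.in_use = in_use
        self.volumes = []  # volumes that were actually created

    def create_volume(self, name):
        # The quota is checked per request, not per boot operation.
        if self.in_use >= self.limit:
            raise QuotaExceeded(f"VolumeLimitExceeded creating {name}")
        self.in_use += 1
        self.volumes.append({"name": name, "attach_status": "attached"})
        return self.volumes[-1]

def boot_instance_with_volumes(cinder, volume_names):
    """Mimics a boot with several volumes and no up-front quota check."""
    created = []
    try:
        for name in volume_names:
            created.append(cinder.create_volume(name))
        return "active", created
    except QuotaExceeded:
        # The instance goes to ERROR, but the volumes created so far
        # keep their "attached" status.
        return "error", created

cinder = FakeCinder(limit=10, in_use=9)  # 9 of 10 volumes used, as above
state, leftovers = boot_instance_with_volumes(cinder, ["vol-a", "vol-b"])
print(state)                                          # error
print(len(leftovers), leftovers[0]["attach_status"])  # 1 attached
```

The second volume request trips the quota, yet the first volume already exists and counts against the quota, which matches the `In_use` jump from 9 to 10 shown below.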
Steps to reproduce:
Set Cinder quotas so that only one more volume may be created. Here 9 of the 10 allowed volumes already exist, so only one more can be created:
root@node-4:~# cinder quota-usage 58fbda8b4a9448f
+------
| Type | In_use | Reserved | Limit |
+------
| backup_gigabytes | 0 | 0 | 1000 |
| backups | 0 | 0 | 10 |
| gigabytes | 9 | 0 | 1000 |
| gigabytes_
| per_volume_
| snapshots | 0 | 0 | 10 |
| snapshots_
| volumes | 9 | 0 | 10 |
| volumes_
+------
Boot a new instance and simultaneously create two additional Cinder volumes:
root@node-4:~# nova boot --flavor m1.micro --block-device source=
I omit the output of the previous command since it is not very informative. The output of the following commands is more interesting:
root@node-4:~# nova show b02062dd-
+------
| Property | Value |
+------
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-
| OS-EXT-STS:vm_state | error |
| OS-SRV-
| OS-SRV-
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2017-03-
| description | - |
| fault | {"message": "Build of instance b02062dd-
| | filter_properties) |
| | File \"/usr/
| | 'create.error', fault=e) |
| | File \"/usr/
| | self.force_
| | File \"/usr/
| | six.reraise(
| | File \"/usr/
| | block_device_
| | File \"/usr/
| | return self.gen.next() |
| | File \"/usr/
| | reason=
| | ", "created": "2017-03-
| flavor | m1.micro (723364db-
| hostId | |
| host_status | |
| id | b02062dd-
| image | Attempt to boot from volume - no image supplied |
| key_name | - |
| locked | False |
| metadata | {} |
| name | cirros-
| os-extended-
| status | ERROR |
| tenant_id | 58fbda8b4a9448f
| updated | 2017-03-
| user_id | 6df7e27730934dc
+------
root@node-4:~# cinder quota-usage 58fbda8b4a9448f
+------
| Type | In_use | Reserved | Limit |
+------
| backup_gigabytes | 0 | 0 | 1000 |
| backups | 0 | 0 | 10 |
| gigabytes | 10 | 0 | 1000 |
| gigabytes_
| per_volume_
| snapshots | 0 | 0 | 10 |
| snapshots_
| volumes | 10 | 0 | 10 |
| volumes_
+------
root@node-4:~# cinder list
+------
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+------
| 138096e2-
| 403aeeda-
| 817846aa-
| 81d0b0c2-
| 97933759-
| bf7b83ee-
| c16dddf0-
| d57b916a-
| dc14a001-
| f8022272-
+------
As you can see, the instance is in the "error" state, although one of the volumes was created successfully and is marked as "attached". If I delete the problematic instance and then try to delete the volume, I get an error message saying that the volume cannot be deleted because it is in an "attached" state.
root@node-4:~# nova delete b02062dd-
Request to delete server b02062dd-
root@node-4:~# cinder list
+------
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+------
| 138096e2-
| 403aeeda-
| 817846aa-
| 81d0b0c2-
| 97933759-
| bf7b83ee-
| c16dddf0-
| d57b916a-
| dc14a001-
| f8022272-
+------
root@node-4:~# cinder delete bf7b83ee-
Delete for volume bf7b83ee-
ERROR: Unable to delete any of the specified volumes.
The only way to delete this volume is to reset its state:
root@node-4:~# cinder reset-state --attach-status detached bf7b83ee-
root@node-4:~# cinder delete bf7b83ee-
Request to delete volume bf7b83ee-
root@node-4:~# cinder list
+------
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+------
| 138096e2-
| 403aeeda-
| 817846aa-
| 81d0b0c2-
| 97933759-
| c16dddf0-
| d57b916a-
| dc14a001-
| f8022272-
+------
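The delete failure and the workaround both follow from Cinder's attach-status guard: a volume recorded as "attached" is refused deletion until that status is reset, regardless of whether the instance still exists. A minimal sketch of that guard (illustrative Python, not Cinder's code; the class and method names are hypothetical):

```python
class VolumeAttached(Exception):
    pass

class Volume:
    def __init__(self, attach_status="attached"):
        self.attach_status = attach_status
        self.deleted = False

    def delete(self):
        # The guard: deletion is refused while the recorded attach
        # status is "attached", even if the instance is already gone.
        if self.attach_status == "attached":
            raise VolumeAttached("Unable to delete: volume is attached")
        self.deleted = True

    def reset_state(self, attach_status):
        # Equivalent of `cinder reset-state --attach-status detached <id>`:
        # an administrative override of the recorded attach status.
        self.attach_status = attach_status

vol = Volume()               # leftover volume from the failed boot
try:
    vol.delete()
except VolumeAttached as exc:
    print(exc)               # Unable to delete: volume is attached

vol.reset_state("detached")  # the workaround from this report
vol.delete()
print(vol.deleted)           # True
```

This is why `cinder delete` fails on the leftover volume while the same command succeeds immediately after `cinder reset-state --attach-status detached`.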
Expected results:
I think Nova should not start booting a new instance unless it can obtain all the resources needed to finish the boot process.
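The suggested behavior amounts to a pre-flight check: before creating any volume, compare the remaining Cinder quota with the number of volumes the boot request needs. A hedged sketch of such a check (hypothetical function, not Nova's API):

```python
def can_boot(limit, in_use, reserved, requested_volumes):
    """Hypothetical pre-flight check: allow the boot only if every
    requested volume fits within the remaining Cinder volume quota."""
    remaining = limit - in_use - reserved
    return requested_volumes <= remaining

# With 9 of 10 volumes used (the scenario above), a two-volume boot
# should be refused before any volume is created:
print(can_boot(10, 9, 0, 2))  # False
print(can_boot(10, 9, 0, 1))  # True
```

With such a check the boot in this report would fail fast with a quota error and no leftover volumes, instead of failing halfway through.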
Workaround:
Remove the problematic resources (volumes) manually.
Description of the environment:
MOS 9.2
Changed in mos: | |
status: | New → Confirmed |
importance: | Undecided → Medium |
assignee: | nobody → MOS Nova (mos-nova) |
tags: | added: area-nova |
summary: | During the boot process Nova might overuse resources limited by quotas → Instance boot fails if Nova exceeds a quota in Cinder leaving created volumes in attached state |