Fuel allows you to remove all controllers and add new controllers in the same task
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Fix Committed | High | Ihor Kalnytskyi |
Bug Description
```
{"build_id": "2014-09-
"ostf_sha": "4dcd99cc4bfa19
"build_number": "492",
"auth_required": true,
"api": "1.0",
"nailgun_sha": "d8d2d3ced87bc1
"production": "docker",
"fuelmain_sha": "d99f32aa95f1ff
"astute_sha": "78963a5415935f
"feature_groups": ["mirantis"],
"release": "5.1",
"fuellib_sha": "2cfa83119ae90b
```
Steps to reproduce:
1) Deploy an HA cluster
2) Create a test instance
3) Stage changes to remove all controllers and add their replacements in the same task
4) Deploy changes

This leads to the active controllers being removed and reset before the new controllers come online, so all service state (database, RabbitMQ, etc.) is lost.
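The failure ordering described above can be sketched in a few lines. This is an illustrative model only; the function and variable names are hypothetical and do not correspond to Nailgun's actual deployment code.

```python
def deploy_changes_naive(current, to_remove, to_add):
    """Model the naive ordering: tear down the old controllers first,
    then bring up the new ones.

    Returns the resulting controller list and whether any controller
    survived the transition to carry the cluster state (DB, RabbitMQ).
    """
    # Step 1: old controllers are removed and reset.
    controllers = [c for c in current if c not in to_remove]
    # If nothing is left at this point, all service state is gone.
    state_preserved = bool(controllers)
    # Step 2: new controllers come online, starting empty.
    controllers = controllers + list(to_add)
    return controllers, state_preserved
```

When every controller is in `to_remove`, `state_preserved` is `False`, which is exactly the data-loss scenario this bug describes; option b2 below amounts to swapping the two steps.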
Expected result:
a) The test instance would still be listed in the deployment
b1) Controllers would be blocked from being removed and added at the same time, _OR_
b2) Controllers marked for removal would not be removed until deployment of the new controllers had completed

For 5.1, either b1 or b2 can be implemented, whichever is easier; however, b2 is the preferred implementation.
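Option b1 is essentially a pre-deployment validation check. A minimal sketch of such a check follows; the function name and the `pending` action map are assumptions for illustration, not Nailgun's real validation API.

```python
def validate_controller_changes(pending):
    """Reject a single deployment that removes every existing controller
    while also adding new ones (option b1).

    `pending` maps a controller node name to its staged action:
    'add', 'remove', or 'keep'.
    """
    removing = [n for n, action in pending.items() if action == 'remove']
    adding = [n for n, action in pending.items() if action == 'add']
    keeping = [n for n, action in pending.items() if action == 'keep']

    # Block only the dangerous combination: additions and removals in one
    # task with no surviving controller to carry the cluster state.
    if removing and adding and not keeping:
        raise ValueError(
            "Cannot remove all existing controllers and add new ones in a "
            "single deployment; deploy the new controllers first, then "
            "remove the old ones.")
```

A change that keeps at least one existing controller, or that only adds or only removes, passes the check unchanged.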
description: updated
Changed in fuel:
assignee: Fuel Python Team (fuel-python) → Fuel UI Team (fuel-ui)
Changed in fuel:
assignee: Fuel UI Team (fuel-ui) → Igor Kalnitsky (ikalnitsky)
I think it is not critical, because it does not block anything at all.
If you need to delete controllers, just delete them, and then deploy the new ones.
Actually, is it enough to simply redeploy the controllers to reattach resource nodes to them?
Is there any specific configuration that uses individual controller IPs rather than the VIP?