After deletion of 2 controllers from HA-cluster /etc/hosts wasn't cleaned up
Bug #1513401 reported by Vladimir Khlyunev
This bug affects 4 people
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Fix Committed | High | Matthew Mosesohn |
8.0.x | Won't Fix | High | Fuel Python (Deprecated) |
Mitaka | Fix Released | High | Matthew Mosesohn |
Bug Description
I hit this bug on ISO 7.0-301 + MU1, but a clean 7.0-301 may be affected too.
Steps to reproduce:
1) Deploy a cluster: 3 controllers + 1 compute, Neutron VLAN
2) Remove 2 controllers and re-deploy
3) Check /etc/hosts on all nodes
4) Run OSTF
Result:
/etc/hosts on the remaining nodes was not updated, and the nodes became unreachable by DNS name.
I will keep my environment; feel free to request it.
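Step 3 above can be scripted. A minimal sketch (the hostnames and addresses are illustrative, not from the reporter's environment) that flags /etc/hosts lines still referring to removed controllers:

```python
# Sketch: verify that /etc/hosts no longer lists removed nodes.
# Hostnames/IPs below are illustrative; Fuel names nodes like "node-2".

def stale_entries(hosts_text, removed_hostnames):
    """Return /etc/hosts lines that still mention a removed node."""
    stale = []
    for line in hosts_text.splitlines():
        tokens = line.split("#", 1)[0].split()  # drop comments, tokenize
        if len(tokens) > 1 and set(tokens[1:]) & set(removed_hostnames):
            stale.append(line)
    return stale

hosts = """\
127.0.0.1 localhost
10.109.1.4 node-1.test.domain.local node-1
10.109.1.5 node-2.test.domain.local node-2
10.109.1.6 node-3.test.domain.local node-3
"""

# Controllers node-2 and node-3 were deleted; any lines returned here
# are stale entries that should have been cleaned up.
print(stale_entries(hosts, ["node-2", "node-3"]))
```

With the sample data above, the two node-2/node-3 lines come back as stale, which is exactly the symptom this bug describes.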
summary:
- After deletion of 2 nodes from HA-cluster several errors occures (OSTF test failure, /etc/hosts wasn't cleaned up, nova-service doesn't migrated)
+ After deletion of 2 controllers from HA-cluster several errors occures (OSTF test failure, /etc/hosts wasn't cleaned up, nova-service doesn't migrated)
summary:
- After deletion of 2 controllers from HA-cluster several errors occures (OSTF test failure, /etc/hosts wasn't cleaned up, nova-service doesn't migrated)
+ After deletion of 2 controllers from HA-cluster s/etc/hosts wasn't cleaned up
description: updated
tags: added: life-cycle-management
no longer affects: fuel/8.0.x
Changed in fuel:
status: New → Confirmed
tags: added: 70-mu1-new-bug
Changed in fuel:
assignee: Fuel Library Team (fuel-library) → Michael Polenchuk (mpolenchuk)
Changed in fuel:
status: Confirmed → In Progress
Changed in fuel:
assignee: Michael Polenchuk (mpolenchuk) → Bogdan Dobrelya (bogdando)
Changed in fuel:
status: New → Confirmed
Changed in fuel:
assignee: Fuel Python Team (fuel-python) → Michael Polenchuk (mpolenchuk)
status: Confirmed → In Progress
Changed in fuel:
assignee: Michael Polenchuk (mpolenchuk) → Fuel Python Team (fuel-python)
Changed in fuel:
status: In Progress → Confirmed
Changed in fuel:
assignee: Fuel Python Team (fuel-python) → Michael Polenchuk (mpolenchuk)
status: Confirmed → In Progress
Changed in fuel:
assignee: Michael Polenchuk (mpolenchuk) → Fuel Python Team (fuel-python)
status: In Progress → Confirmed
no longer affects: fuel/mitaka
summary:
- After deletion of 2 controllers from HA-cluster s/etc/hosts wasn't cleaned up
+ After deletion of 2 controllers from HA-cluster /etc/hosts wasn't cleaned up
tags: added: release-notes
tags: added: 8.0 release-notes-done; removed: release-notes
tags: added: keep-in-9.0
Changed in fuel:
importance: Medium → High
Changed in fuel:
milestone: 9.0 → 10.0
tags: added: on-verification
Nodes that are being deleted are not included in astute.yaml, so the hosts.pp granular module cannot delete their entries; it can only add entries for the nodes that are present in astute.yaml.
We could introduce an array of the hosts being deleted so we can perform whatever cleanup tasks are necessary. I believe it needs an entire "nodes hash" for each one, so we can identify its roles, IPs, network roles, etc., and ensure all the proper migrations can take place.
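The cleanup proposed above can be sketched as a post-deletion task. Assuming deployment data gained a `deleted_nodes` list shaped loosely like the astute.yaml nodes hash (`fqdn` / `name` / `ip` / `roles` — this structure and the function name are assumptions for illustration, not actual Fuel code), the host-file side of it might look like:

```python
# Sketch of the proposed cleanup, assuming a hypothetical "deleted_nodes"
# list modeled on the astute.yaml nodes hash. Illustrative only; this is
# not the actual hosts.pp/Fuel implementation.

def prune_hosts(hosts_text, deleted_nodes):
    """Drop /etc/hosts lines that belong to deleted nodes."""
    doomed = set()
    for node in deleted_nodes:
        doomed.update({node["fqdn"], node["name"], node["ip"]})
    kept = []
    for line in hosts_text.splitlines():
        tokens = line.split("#", 1)[0].split()
        if tokens and doomed & set(tokens):
            continue  # this entry belongs to a deleted node
        kept.append(line)
    return "\n".join(kept) + "\n"

deleted = [{"fqdn": "node-2.test.domain.local", "name": "node-2",
            "ip": "10.109.1.5", "roles": ["controller"]}]
hosts = ("127.0.0.1 localhost\n"
         "10.109.1.4 node-1.test.domain.local node-1\n"
         "10.109.1.5 node-2.test.domain.local node-2\n")
print(prune_hosts(hosts, deleted))
```

Carrying the full nodes hash, rather than just hostnames, is what would let other cleanup tasks (service migration, network-role teardown) reuse the same data.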