wrongly disabled used nodes
Bug #1710535 reported by
suzhengwei
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
watcher | Triaged | Medium | Unassigned | queens-1
Bug Description
When testing server consolidation, I found that Watcher occasionally disables nodes that are still in use. The reason is as follows.
If new instances are created in the window between the audit and the action plan execution, there is a high probability that the scheduler places them on a node the audit found unused. Watcher then disables that node even though it now hosts running instances, which is not what the strategy expects.
description: updated
Changed in watcher:
milestone: none → queens-1
The Watcher team has discussed this and found the following solution:
1. Advise users to enable the auto-trigger option for the consolidation strategy.
2. When starting the Action Plan, check for new instances.
3. Have Watcher consume notifications from Nova during Action Plan execution.
4. If either of steps 2 or 3 detects new instances, Watcher sends a notification that the cluster state has changed.