As per the above comment by Ram, this bug was supposed to take care of K8s HA provisioning in cases where I intend to spawn contrail-kube-manager containers on multiple nodes:
```
instances:
  server1:
    ip: 10.0.0.4
    provider: bms
    roles:
      analytics: null
      analytics_database: null
      config: null
      config_database: null
      control: null
      kubemanager: null
      webui: null
  server2:
    ip: 10.0.0.5
    provider: bms
    roles:
      analytics: null
      analytics_database: null
      config: null
      config_database: null
      control: null
      kubemanager: null
      webui: null
  server3:
    ip: 10.0.0.6
    provider: bms
    roles:
      analytics: null
      analytics_database: null
      config: null
      config_database: null
      control: null
      k8s_master: null
      kubemanager: null
      webui: null
```
Right now, I am working around it by specifying "k8s_master" on all the servers as well, like this:
```
instances:
  server1:
    ip: 10.0.0.4
    provider: bms
    roles:
      analytics: null
      analytics_database: null
      config: null
      config_database: null
      control: null
      k8s_master: null
      kubemanager: null
      webui: null
  server2:
    ip: 10.0.0.5
    provider: bms
    roles:
      analytics: null
      analytics_database: null
      config: null
      config_database: null
      control: null
      k8s_master: null
      kubemanager: null
      webui: null
  server3:
    ip: 10.0.0.6
    provider: bms
    roles:
      analytics: null
      analytics_database: null
      config: null
      config_database: null
      control: null
      k8s_master: null
      kubemanager: null
      webui: null
```
I am still facing the same issue if I don't use the workaround.
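For reference, this is roughly how I check whether the contrail-kube-manager container actually came up on each controller node (a minimal sketch, assuming a containerized deployment where `contrail-status` and Docker are available on the nodes; the exact container name may differ):
```
# Run on each of server1 / server2 / server3.
contrail-status | grep -i kube                      # kube-manager should be listed as active
docker ps --format '{{.Names}}' | grep -i kubemanager   # the kubemanager container should be running
```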
Please let me know if this bug takes care of the above scenario, or whether I need to raise a new one.