On controller failover, VM objects on the control node are inconsistent with those on contrail-api
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Juniper Openstack | Status tracked in Trunk | | |
R4.0 | Fix Committed | High | kamlesh parmar |
R4.1 | Fix Committed | High | kamlesh parmar |
Trunk | Fix Committed | High | kamlesh parmar |
Bug Description
R4.0.1.0 Build 36, Ubuntu 16.04.2 containers, with the fix for bug 1712003.
This is an HA setup with three controllers: nodec1, nodec2 and nodec3.
In a k8s environment, a pod, pod-6 (VM ID 54cf30a5-…), was recreated during a kube-manager failover.
A new kube-manager was elected fine, and the new pod's VM object showed up in contrail-api (the old pod's VM object was deleted).
But on the control node, the old VM ID continues to exist and the new VM ID is missing. Thus the pod is not active.
A gcore of the control node has been taken. It will be at http://…
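For reference, one way to confirm the mismatch from the CLI is to query both components directly. This is only a sketch: the node name, the placeholder UUID, and the ports (8082 for the contrail-api REST interface, 8083 for the control-node introspect) are assumed defaults and may differ on the actual setup.

# Placeholder; substitute the full VM UUID reported by contrail-api.
VM_UUID=<vm-uuid>

# Is the VM object present in contrail-api? (VNC REST API, default port 8082)
curl -s http://nodec1:8082/virtual-machine/$VM_UUID | python -m json.tool

# List all VM objects contrail-api knows about.
curl -s http://nodec1:8082/virtual-machines | python -m json.tool

# The control node's view is exposed on its introspect port (default 8083);
# the index page lists the Sandesh requests available on this release.
curl -s http://nodec1:8083/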
root@nodec3(…):~# rabbitmqctl list_queues
Listing queues ...
contrail-
contrail-
contrail-dns.nodec2 0
contrail-dns.nodec3 0
root@nodec3(…):~#
root@nodec2(…):~# rabbitmqctl list_queues
Listing queues ...
device_
kube_manager.nodec2 0
nodec1:
nodec2:
nodec3:
schema_
svc_monitor.nodec3 0
vnc_config.
vnc_config.
root@nodec2(…):~#
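All the queue depths above read 0, so there is no visible backlog; the next question is whether the expected consumers reattached after the failover. A hedged sketch using stock rabbitmqctl subcommands (nothing Contrail-specific is assumed):

# Show each queue with its message count and number of attached consumers.
rabbitmqctl list_queues name messages consumers

# List consumers and connections to check that the config daemons
# (vnc_config, kube_manager, schema, svc_monitor, ...) reconnected.
rabbitmqctl list_consumers
rabbitmqctl list_connections peer_host state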
Changed in juniperopenstack:
assignee: Sachin Bansal (sbansal) → Pramodh D'Souza (psdsouza)
tags: added: server-manager; removed: contrail-control
In order to figure out where the issue is, I would need access to the setup, introspect command output, logs, and traces.
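As a rough checklist for collecting that, the usual artifacts on a Contrail node are the per-service status, the introspect pages, and the daemon logs; a minimal sketch, assuming default ports and log locations:

# Overall state of the Contrail services on this node.
contrail-status

# Save the control-node introspect index (default port 8083) plus any
# relevant Sandesh request output for the bug report.
curl -s http://nodec1:8083/ > nodec1-introspect-index.html

# Daemon logs live under /var/log/contrail/ by default.
ls -l /var/log/contrail/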