On controller failover, VM objects in control node are inconsistent with that on contrail-api

Bug #1714150 reported by Vedamurthy Joshi
Affects            Status         Importance  Assigned to      Milestone
Juniper Openstack  (status tracked in Trunk)
  R4.0             Fix Committed  High        kamlesh parmar
  R4.1             Fix Committed  High        kamlesh parmar
  Trunk            Fix Committed  High        kamlesh parmar

Bug Description

R4.0.1.0 Build 36 Ubuntu 16.04.2 containers with fix for bug 1712003

This is an HA setup with 3 controllers: nodec1/nodec2/nodec3

In a k8s environment, a pod, pod-6 (VM UUID 54cf30a5-8df9-11e7-aeae-002590c30af2), was initially created. On nodec1, all contrail containers (i.e. the active kube-manager, controller, analytics and analyticsdb) were stopped. The pod was then immediately deleted and recreated, coming up with VM UUID 5ff4aaf7-8df9-11e7-aeae-002590c30af2

A new kube-manager was elected fine and the new pod's VM object showed up in contrail-api (the old pod's VM object was deleted).
But in the control node, the old VM UUID continues to exist and the new VM UUID is missing. Thus the pod is not active.
A gcore of the control node was taken. It will be in http://10.204.216.50/Docs/bugs/#
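
In shell terms, the reproduction boils down to roughly the following sequence (a sketch only; the container names and pod manifest path are assumptions, not exact commands from this setup):

# On nodec1: stop all contrail containers, including the active kube-manager (names are assumptions).
docker stop kube-manager controller analytics analyticsdb

# Immediately delete and recreate the pod; it comes back with a new VM UUID.
kubectl delete pod pod-6
kubectl create -f pod-6.yaml   # hypothetical manifest for the same pod

# Then compare the VM UUID known to contrail-api with the one in the control node introspect.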

root@nodec3(controller):/# rabbitmqctl list_queues
Listing queues ...
contrail-control.nodec2 0
contrail-control.nodec3 0
contrail-dns.nodec2 0
contrail-dns.nodec3 0
root@nodec3(controller):/#

root@nodec2(controller):/# rabbitmqctl list_queues
Listing queues ...
device_manager.nodec2 0
kube_manager.nodec2 0
nodec1:contrail-alarm-gen:0 33
nodec2:contrail-alarm-gen:0 0
nodec3:contrail-alarm-gen:0 0
schema_transformer.nodec2 0
svc_monitor.nodec3 0
vnc_config.nodec2-8082 0
vnc_config.nodec3-8082 0
root@nodec2(controller):/#

Sachin Bansal (sbansal)
Changed in juniperopenstack:
assignee: Sachin Bansal (sbansal) → Pramodh D'Souza (psdsouza)
Revision history for this message
Pramodh D'Souza (psdsouza) wrote :

In order to figure out where the issue is, I would need access to the setup / introspect command output / logs / traces.

Revision history for this message
Vedamurthy Joshi (vedujoshi) wrote :

Pramodh, I thought the gcore would provide some insight w.r.t. introspect/traces?

Revision history for this message
Pramodh D'Souza (psdsouza) wrote :

I took a look at the core and extracted the IfMapTraceBuf.

Observations:
- It's not the same issue as bug 1715075, since there is no read error in the trace
- The new object doesn't seem to have been seen by the control node

We would need to reproduce the issue and take a look at the setup.
We would also need to check the api-server logs to see whether any rabbit messages are seen.

A few clarifications are needed. I'm assuming you had a set of 3 api-servers, 3 control nodes, etc. to begin with and ended up with 2.
If so, did you see the same issue on both the control nodes that remained?

(Trace attached)

Revision history for this message
Vedamurthy Joshi (vedujoshi) wrote :

Yes, there were 3 control nodes and 3 api-servers in the beginning.

I was able to repro the issue. Will send you the details by mail.

Revision history for this message
Ignatious Johnson Christopher (ijohnson-x) wrote :

From: Pramodh D'Souza
Date: Wednesday, September 20, 2017 at 3:05 AM
To: Vedamurthy Ananth Joshi
Cc: Rudra Rugge, Sachchidanand Vaidya, Yuvaraja Mariappan, Chhandak Mukherjee
Subject: Re: Bug 1714150

Hi Vedu,

The issue is that there are two rabbit clusters. The rabbit configuration files seem to have been modified after starting the nodes.
You can try restarting to have them read the updated configuration.

Node 1
root@nodec1(controller):/# rabbitmqctl cluster_status
Cluster status of node rabbit@nodec1 ...
[{nodes,[{disc,[rabbit@nodec1,rabbit@nodec2]}]},
 {running_nodes,[rabbit@nodec2,rabbit@nodec1]},
 {cluster_name,<<"<email address hidden>">>},

Cluster status of node rabbit@nodec2 ...
[{nodes,[{disc,[rabbit@nodec1,rabbit@nodec2]}]},
 {running_nodes,[rabbit@nodec1,rabbit@nodec2]},
 {cluster_name,<<"<email address hidden>">>},
 {partitions,[]}]

root@nodec3(controller):/# rabbitmqctl cluster_status
Cluster status of node rabbit@nodec3 ...
[{nodes,[{disc,[rabbit@nodec3]}]},
 {running_nodes,[rabbit@nodec3]},
 {cluster_name,<<"<email address hidden>">>},
 {partitions,[]}]

The reason you see it on node2 and node3 is that they stopped receiving rabbit messages when they were split into a different rabbit cluster.

Regards,
Pramodh
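
For anyone who ends up with the same split, a typical manual way to fold nodec3 back into the nodec1/nodec2 cluster would be along these lines (a sketch only, assuming the Erlang cookie and rabbitmq.config are already consistent across the nodes; note that the reset step wipes nodec3's local rabbit state):

# Run on nodec3 (sketch; assumes cookie/config already match the other nodes)
rabbitmqctl stop_app                     # stop the rabbit application, keep the Erlang node running
rabbitmqctl reset                        # discard nodec3's standalone state (queues/exchanges on this node)
rabbitmqctl join_cluster rabbit@nodec1   # join the cluster that nodec1 and nodec2 already form
rabbitmqctl start_app
rabbitmqctl cluster_status               # should now list all three nodes under running_nodes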

Revision history for this message
Ignatious Johnson Christopher (ijohnson-x) wrote :

Issue:
=======

As per the logs, rabbitmq was started on nodec2 and nodec3 at the same time, which caused two different clusters to form.

NODEC2 LOGS
===========
root@nodec2(controller):/# cat /var/log/rabbitmq/rabbit\@nodec2.log

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
Starting RabbitMQ 3.5.7 on Erlang 18.3
Copyright (C) 2007-2015 Pivotal Software, Inc.
Licensed under the MPL. See http://www.rabbitmq.com/

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
node : rabbit@nodec2
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash : tBHK3P2Wa419qrcxZV75DQ==
log : /<email address hidden>
sasl log : /<email address hidden>
database dir : /var/lib/rabbitmq/mnesia/rabbit@nodec2

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
Memory limit set to 12859MB of 32147MB total.

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
Disk free limit set to 50MB

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
Limiting to approx 65436 file handles (58890 sockets)

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
FHC read buffering: ON
FHC write buffering: ON

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
Database directory at /var/lib/rabbitmq/mnesia/rabbit@nodec2 is empty. Initialising from scratch...

=WARNING REPORT==== 19-Sep-2017::12:41:10 ===
Could not auto-cluster with rabbit@nodec1: {badrpc,nodedown}

=WARNING REPORT==== 19-Sep-2017::12:41:10 ===
Could not auto-cluster with rabbit@nodec3: {error,tables_not_present}

=WARNING REPORT==== 19-Sep-2017::12:41:10 ===
Could not find any node for auto-clustering from: [rabbit@nodec1,
                                                   rabbit@nodec2,
                                                   rabbit@nodec3]
Starting blank node...

NODEC3 LOGS
===========

root@nodec3(controller):/# cat /var/log/rabbitmq/rabbit\@nodec3.log

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
Starting RabbitMQ 3.5.7 on Erlang 18.3
Copyright (C) 2007-2015 Pivotal Software, Inc.
Licensed under the MPL. See http://www.rabbitmq.com/

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
node : rabbit@nodec3
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash : tBHK3P2Wa419qrcxZV75DQ==
log : /<email address hidden>
sasl log : /<email address hidden>
database dir : /var/lib/rabbitmq/mnesia/rabbit@nodec3

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
Memory limit set to 12859MB of 32147MB total.

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
Disk free limit set to 50MB

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
Limiting to approx 65436 file handles (58890 sockets)

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
FHC read buffering: ON
FHC write buffering: ON

=INFO REPORT==== 19-Sep-2017::12:41:10 ===
Database directory at /var/lib/rabbitmq/mnesia/rabbit@nodec3 is empty. Initialising from scratch...

=WARNING REPORT==== 19-Sep-2017::12:41:10 ===
Could not auto-cluster with rabbit@nodec1: {badrpc,nodedown}

=WARNING REPORT==== 19-Sep-2017::12:41:10 ===
Could not auto-cluster with rabbit@nodec2: {error,tables_not_present}

=WARNING REPORT==== 19-Sep-2017::12:41:10 ===
Could not find any n...


Revision history for this message
Ignatious Johnson Christopher (ijohnson-x) wrote :

Analysis:
==========

As per the code at https://github.com/Juniper/contrail-ansible-internal/blob/master/playbooks/roles/rabbitmq/tasks/cluster.yml#L3, nodec2 and nodec3 should have waited for nodec1 (the leader node) to be started first.

However, they are already started before this step, at https://github.com/Juniper/contrail-ansible-internal/blob/a87da7191e8f010ef84a56e14273923a34c88761/playbooks/roles/rabbitmq/tasks/setup.yml#L12

This is because service.yml is included during the configuration phase, as we call contrailctl with all three tags (configure, service and provision) in https://github.com/Juniper/contrail-ansible-internal/blob/master/playbooks/roles/contrail/config/files/systemd/contrail-ansible.service#L7

Revision history for this message
Ignatious Johnson Christopher (ijohnson-x) wrote :

Fix:
=====

The fix will be:

1. Let contrailctl call the playbook with any tags, as it does today.
2. Move the service.yml include to after cluster.yml (https://github.com/Juniper/contrail-ansible-internal/blob/a87da7191e8f010ef84a56e14273923a34c88761/playbooks/roles/rabbitmq/tasks/setup.yml#L18).
3. Include service.yml in cluster.yml (https://github.com/Juniper/contrail-ansible-internal/blob/a87da7191e8f010ef84a56e14273923a34c88761/playbooks/roles/rabbitmq/tasks/cluster.yml#L1) when rabbitmq_cluster_node_role is 'leader'; see the sketch below.
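
Conceptually, the ordering that this fix enforces amounts to the following (a shell sketch of the intent only; the real change is in the Ansible task ordering linked above, and the role/leader variables here are illustrative assumptions):

# Sketch of the startup ordering the fix enforces (illustrative variables, not the actual playbook).
if [ "$RABBITMQ_CLUSTER_NODE_ROLE" = "leader" ]; then
    # The leader (first node) starts rabbitmq-server before any other node.
    service rabbitmq-server restart
else
    # Non-leader nodes wait until the leader's AMQP port answers, so that
    # auto-clustering joins the existing cluster instead of starting a blank node.
    until nc -z "$RABBITMQ_LEADER_IP" 5672; do sleep 5; done
    service rabbitmq-server restart
fi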

Revision history for this message
Ignatious Johnson Christopher (ijohnson-x) wrote :

Patched the fix in this setup and verified:

NODEC1 CONTRAIL-ANSIBLE SERVICE LOGS:
======================================
Sep 21 00:04:54 nodec1 contrailctl[24799]: TASK [rabbitmq : Start rabbitmq-server in the leader node] *********************
Sep 21 00:04:54 nodec1 contrailctl[24799]: included: /contrail-ansible-internal/playbooks/roles/rabbitmq/tasks/service.yml for localhost
Sep 21 00:04:54 nodec1 contrailctl[24799]: TASK [rabbitmq : Retry starting rabbitmq-server in RedHat platforms] ***********
Sep 21 00:04:54 nodec1 contrailctl[24799]: skipping: [localhost]
Sep 21 00:04:54 nodec1 contrailctl[24799]: TASK [rabbitmq : Make sure rabbitmq-server service is in desired state] ********
Sep 21 00:04:59 nodec1 contrailctl[24799]: changed: [localhost]
Sep 21 00:04:59 nodec1 contrailctl[24799]: TASK [rabbitmq : Wait for master node to be up] ********************************
Sep 21 00:04:59 nodec1 contrailctl[24799]: skipping: [localhost]
Sep 21 00:04:59 nodec1 contrailctl[24799]: TASK [rabbitmq : Check rabbitmq cluster] ***************************************
Sep 21 00:04:59 nodec1 su[25542]: Successful su for rabbitmq by root
Sep 21 00:04:59 nodec1 su[25542]: + ??? root:rabbitmq
Sep 21 00:04:59 nodec1 su[25542]: pam_env(su:session): Unable to open env file: /etc/default/locale: No such file or directory
Sep 21 00:04:59 nodec1 su[25542]: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Sep 21 00:04:59 nodec1 su[25542]: pam_unix(su:session): session closed for user rabbitmq
Sep 21 00:04:59 nodec1 contrailctl[24799]: changed: [localhost]
Sep 21 00:04:59 nodec1 contrailctl[24799]: TASK [rabbitmq : join cluster] *************************************************
Sep 21 00:04:59 nodec1 contrailctl[24799]: skipping: [localhost]
Sep 21 00:04:59 nodec1 contrailctl[24799]: TASK [rabbitmq : include] ******************************************************
Sep 21 00:04:59 nodec1 contrailctl[24799]: included: /contrail-ansible-internal/playbooks/roles/rabbitmq/tasks/service.yml for localhost
Sep 21 00:05:00 nodec1 contrailctl[24799]: TASK [rabbitmq : Retry starting rabbitmq-server in RedHat platforms] ***********
Sep 21 00:05:00 nodec1 contrailctl[24799]: skipping: [localhost]
Sep 21 00:05:00 nodec1 contrailctl[24799]: TASK [rabbitmq : Make sure rabbitmq-server service is in desired state] ********
Sep 21 00:05:00 nodec1 contrailctl[24799]: ok: [localhost]

NODEC2 CONTRAIL-ANSIBLE SERVICE LOGS:
======================================
Sep 21 00:04:51 nodec2 contrailctl[25652]: TASK [rabbitmq : Start rabbitmq-server in the leader node] *********************
Sep 21 00:04:51 nodec2 contrailctl[25652]: skipping: [localhost]
Sep 21 00:04:51 nodec2 contrailctl[25652]: TASK [rabbitmq : Wait for master node to be up] ********************************
Sep 21 00:05:02 nodec2 contrailctl[25652]: ok: [localhost]
Sep 21 00:05:02 nodec2 contrailctl[25652]: TASK [rabbitmq : Check rabbitmq cluster] ***************************************
Sep 21 00:05:02 nodec2 su[26121]: Successful su for rabbitmq by root
Sep 21 00:05:02 nodec2 su[26121]: + ??? root:rabbitmq
Sep 21 00:05:02 nodec2 su[26121]: pam_env(su:session): Unable to open env file: /etc/default/locale: No ...


Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] master

Review in progress for https://review.opencontrail.org/35785
Submitter: Ignatious Johnson Christopher (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] R4.0

Review in progress for https://review.opencontrail.org/35786
Submitter: Ignatious Johnson Christopher (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : A change has been merged

Reviewed: https://review.opencontrail.org/35786
Committed: http://github.com/Juniper/contrail-ansible-internal/commit/255753483ced083dfb5a01672f9186a01d45e9e6
Submitter: Zuul (<email address hidden>)
Branch: R4.0

commit 255753483ced083dfb5a01672f9186a01d45e9e6
Author: Ignatious Johnson Christopher <email address hidden>
Date: Wed Sep 20 11:54:30 2017 -0700

Make sure the rabbitmq is started in the leader

node(first node) of the cluster before starting it in
the other node, to avoid forming of two different clusters
when started simultaneously.

Change-Id: Ic1db95f1e4522623cd1e5efa139914a1c58a8574
closes-Bug: 1714150

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote :

Reviewed: https://review.opencontrail.org/35785
Committed: http://github.com/Juniper/contrail-ansible-internal/commit/ccdc3a5fcc28947bdc21de51e015de2c5ca2e14f
Submitter: Zuul (<email address hidden>)
Branch: master

commit ccdc3a5fcc28947bdc21de51e015de2c5ca2e14f
Author: Ignatious Johnson Christopher <email address hidden>
Date: Wed Sep 20 11:54:30 2017 -0700

Make sure the rabbitmq is started in the leader

node(first node) of the cluster before starting it in
the other node, to avoid forming of two different clusters
when started simultaneously.

Change-Id: Ic1db95f1e4522623cd1e5efa139914a1c58a8574
closes-Bug: 1714150

Revision history for this message
Sandip Dey (sandipd) wrote :

Build:vcenter o16.04 4.0.1.0-52

The bug is not completely fixed. A wrong entry in /etc/hosts is still causing the issue.

Hi Sachin

Looks like it's part of preconfig. The same thing is done for ubuntu14.04, but this issue does not seem to be there.

Let me know if I need to raise a new bug or update the existing bug.

Regards
Sandip

root@nodei27:/var/log/contrail# grep -r 'hosts' *
install_logs/install_2017_09_23__09_33_22.log:Enabling conf other-vhosts-access-log.
install_logs/install_2017_09_23__09_33_22.log:Updating /etc/hosts.allow, adding "sendmail: all".
install_logs/install_2017_09_23__09_33_22.log:Please edit /etc/hosts.allow and check the rules location to
sm_provisioning/2017_09_23__09_33_22/preconfig.log:[2017-09-23 09:40:11,055: preconfig.py:228:exec_cmd: INFO] [10.204.217.139]: grep puppet /etc/hosts | grep -v "^[ ]*#"
sm_provisioning/2017_09_23__09_33_22/preconfig.log:[2017-09-23 09:40:11,055: preconfig.py:229:exec_cmd: DEBUG] [10.204.217.139]: grep puppet /etc/hosts | grep -v "^[ ]*#" && echo 001902803704605506407308209100
sm_provisioning/2017_09_23__09_33_22/preconfig.log:[2017-09-23 09:40:11,126: preconfig.py:228:exec_cmd: INFO] [10.204.217.139]: sed -i 's/10.204.217.158 puppet/10.204.217.139 puppet/g' /etc/hosts
sm_provisioning/2017_09_23__09_33_22/preconfig.log:[2017-09-23 09:40:11,127: preconfig.py:229:exec_cmd: DEBUG] [10.204.217.139]: sed -i 's/10.204.217.158 puppet/10.204.217.139 puppet/g' /etc/hosts && echo 001902803704605506407308209100
sm_provisioning/2017_09_23__09_33_22/preconfig.log:[2017-09-23 09:40:11,221: preconfig.py:228:exec_cmd: INFO] [10.204.217.139]: grep nodei27 /etc/hosts | grep -v "^[ ]*#"
sm_provisioning/2017_09_23__09_33_22/preconfig.log:[2017-09-23 09:40:11,222: preconfig.py:229:exec_cmd: DEBUG] [10.204.217.139]: grep nodei27 /etc/hosts | grep -v "^[ ]*#" && echo 001902803704605506407308209100
sm_provisioning/2017_09_23__09_33_22/preconfig.log:[2017-09-23 09:44:17,585: preconfig.py:228:exec_cmd: INFO] [10.204.217.144]: grep puppet /etc/hosts | grep -v "^[ ]*#"
sm_provisioning/2017_09_23__09_33_22/preconfig.log:[2017-09-23 09:44:17,586: preconfig.py:229:exec_cmd: DEBUG] [10.204.217.144]: grep puppet /etc/hosts | grep -v "^[ ]*#" && echo 001902803704605506407308209100
sm_provisioning/2017_09_23__09_33_22/preconfig.log:[2017-09-23 09:44:17,658: preconfig.py:228:exec_cmd: INFO] [10.204.217.144]: sed -i 's/10.204.217.158 puppet/10.204.217.139 puppet/g' /etc/hosts
sm_provisioning/2017_09_23__09_33_22/preconfig.log:[2017-09-23 09:44:17,659: preconfig.py:229:exec_cmd: DEBUG] [10.204.217.144]: sed -i 's/10.204.217.158 puppet/10.204.217.139 puppet/g' /etc/hosts && echo 001902803704605506407308209100
sm_provisioning/2017_09_23__09_33_22/preconfig.log:[2017-09-23 09:44:17,817: preconfig.py:228:exec_cmd: INFO] [10.204.217.144]: grep nodei32 /etc/hosts | grep -v "^[ ]*#"
sm_provisioning/2017_09_23__09_33_22/preconfig.log:[2017-09-23 09:44:17,817: preconfig.py:229:exec_cmd: DEBUG] [10.204.217.144]: grep nodei32 /etc/hosts | grep -v "^[ ]*#" && echo 001902803704605506407308209100
sm_provisioning/2017_09_23__09_33_22/preconfig.log:[2017-09-23 09:50:26,379: preconfig.py:228:exec_cmd: INFO] [10.204.217.140]: grep puppet /etc/hosts |...


Revision history for this message
Sachchidanand Vaidya (vaidyasd) wrote : Re: [Bug 1714150] Re: On controller failover, VM objects in control node are inconsistent with that on contrail-api

Pls open a new bug.

Thanks
Sachin


Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] R4.0

Review in progress for https://review.opencontrail.org/35917
Submitter: kamlesh parmar (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] master

Review in progress for https://review.opencontrail.org/35919
Submitter: kamlesh parmar (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : [Review update] R4.1

Review in progress for https://review.opencontrail.org/35920
Submitter: kamlesh parmar (<email address hidden>)

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote : A change has been merged

Reviewed: https://review.opencontrail.org/35917
Committed: http://github.com/Juniper/contrail-server-manager/commit/caf91484657c6bc354a3fdb070aedc89ebcc0a63
Submitter: Zuul (<email address hidden>)
Branch: R4.0

commit caf91484657c6bc354a3fdb070aedc89ebcc0a63
Author: Kamlesh Parmar <email address hidden>
Date: Mon Sep 25 15:11:50 2017 -0700

For 1604 vcenter-only provisioning, do not add puppet entry in
/etc/hosts file.

Change-Id: Iab5d994f66302bc4744611f35fa5a94e7052030d
Closes-Bug: #1714150
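
As a quick check on an affected node, one can verify whether preconfig left such an entry behind and drop it by hand until the provisioning fix is deployed (a sketch; the entry format is taken from the preconfig.log lines quoted earlier in this report):

# Check for and remove a stale puppet entry in /etc/hosts (manual workaround only).
grep -n ' puppet$' /etc/hosts          # e.g. "10.204.217.139 puppet", as written by preconfig's sed
sed -i.bak '/ puppet$/d' /etc/hosts    # keep a .bak copy, then drop the entry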

Nischal Sheth (nsheth)
tags: added: server-manager
removed: contrail-control
Revision history for this message
OpenContrail Admin (ci-admin-f) wrote :

Reviewed: https://review.opencontrail.org/35920
Committed: http://github.com/Juniper/contrail-server-manager/commit/05b0f31b4e355b588518edc4457f97ae6475d2aa
Submitter: Zuul (<email address hidden>)
Branch: R4.1

commit 05b0f31b4e355b588518edc4457f97ae6475d2aa
Author: Kamlesh Parmar <email address hidden>
Date: Mon Sep 25 15:11:50 2017 -0700

For 1604 vcenter-only provisioning, do not add puppet entry in
/etc/hosts file.

Change-Id: Iab5d994f66302bc4744611f35fa5a94e7052030d
Closes-Bug: #1714150

Revision history for this message
OpenContrail Admin (ci-admin-f) wrote :

Reviewed: https://review.opencontrail.org/35919
Committed: http://github.com/Juniper/contrail-server-manager/commit/8e6944e9440cd336ea6edafb67ea2b5629d70387
Submitter: Zuul (<email address hidden>)
Branch: master

commit 8e6944e9440cd336ea6edafb67ea2b5629d70387
Author: Kamlesh Parmar <email address hidden>
Date: Mon Sep 25 15:11:50 2017 -0700

For 1604 vcenter-only provisioning, do not add puppet entry in
/etc/hosts file.

Change-Id: Iab5d994f66302bc4744611f35fa5a94e7052030d
Closes-Bug: #1714150
