R5.0.1-micro-services provision - haproxy fails to come up.

Bug #1773425 reported by Ritam Gangopadhyay
This bug affects 1 person
Affects: Juniper Openstack (status tracked in Trunk)
  R5.0 series:  Invalid / Critical / Assigned to: Ramprakash R
  Trunk series: Invalid / Critical / Assigned to: Ramprakash R

Bug Description

Setup:-
nodem14:/tmp/ansible.D1tXjq_contrail/contrail-ansible-deployer/config/instances.yaml

Provisioning fails with error:-
TASK [haproxy : Waiting for virtual IP to appear] **************************************************************************************************************************************************
fatal: [10.204.216.103]: FAILED! => {"changed": false, "elapsed": 301, "msg": "Timeout when waiting for 10.10.10.20:3306"}
fatal: [10.204.216.96]: FAILED! => {"changed": false, "elapsed": 301, "msg": "Timeout when waiting for 10.10.10.20:3306"}
fatal: [10.204.216.95]: FAILED! => {"changed": false, "elapsed": 301, "msg": "Timeout when waiting for 10.10.10.20:3306"}
        to retry, use: --limit @/tmp/ansible.D1tXjq_contrail/contrail-ansible-deployer/playbooks/install_contrail.retry

PLAY RECAP *****************************************************************************************************************************************************************************************
10.204.216.103 : ok=100 changed=29 unreachable=0 failed=1
10.204.216.95 : ok=99 changed=28 unreachable=0 failed=1
10.204.216.96 : ok=99 changed=28 unreachable=0 failed=1
10.204.216.97 : ok=69 changed=10 unreachable=0 failed=0
10.204.216.98 : ok=69 changed=10 unreachable=0 failed=0
10.204.216.99 : ok=76 changed=13 unreachable=0 failed=0
localhost : ok=7 changed=2 unreachable=0 failed=0

Revision history for this message
Andrey Pavlov (apavlov-e) wrote :

Ritam: When we configure OpenStack on the mgmt/API interface of a setup, as below in instances.yaml:-

contrail_configuration:
 CONTROLLER_NODES: 10.204.216.103,10.204.216.95,10.204.216.96
 KEYSTONE_AUTH_HOST: 10.204.216.140
 CONTROL_NODES: 10.10.10.14,10.10.10.6,10.10.10.7
 OPENSTACK_NODES: 10.204.216.103,10.204.216.95,10.204.216.96

kolla_config:
 kolla_globals:
   kolla_internal_vip_address: 10.10.10.20
   kolla_external_vip_address: 10.204.216.140
   contrail_api_interface_address: 10.204.216.103

Revision history for this message
Andrey Pavlov (apavlov-e) wrote :

2018-05-28 14:11:03,052 p=10623 u=root | TASK [create_openstack_config : debug] *****************************************
2018-05-28 14:11:03,173 p=10623 u=root | ok: [10.204.216.99] => {
    "msg": "DEBUG network_interface eno1"
}
2018-05-28 14:11:03,193 p=10623 u=root | ok: [10.204.216.103] => {
    "msg": "DEBUG network_interface eno1"
}
2018-05-28 14:11:03,218 p=10623 u=root | ok: [10.204.216.96] => {
    "msg": "DEBUG network_interface eno1"
}
2018-05-28 14:11:03,250 p=10623 u=root | ok: [10.204.216.98] => {
    "msg": "DEBUG network_interface eno1"
}
2018-05-28 14:11:03,251 p=10623 u=root | ok: [10.204.216.95] => {
    "msg": "DEBUG network_interface eno1"
}
2018-05-28 14:11:03,292 p=10623 u=root | ok: [10.204.216.97] => {
    "msg": "DEBUG network_interface eno1"
}
2018-05-28 14:11:03,313 p=10623 u=root | TASK [create_openstack_config : debug] *****************************************
2018-05-28 14:11:03,397 p=10623 u=root | ok: [10.204.216.99] => {
    "msg": "DEBUG kolla_external_vip_interface eno1"
}
2018-05-28 14:11:03,451 p=10623 u=root | ok: [10.204.216.103] => {
    "msg": "DEBUG kolla_external_vip_interface eno1"
}
2018-05-28 14:11:03,471 p=10623 u=root | ok: [10.204.216.96] => {
    "msg": "DEBUG kolla_external_vip_interface eno1"
}
2018-05-28 14:11:03,506 p=10623 u=root | ok: [10.204.216.95] => {
    "msg": "DEBUG kolla_external_vip_interface eno1"
}
2018-05-28 14:11:03,510 p=10623 u=root | ok: [10.204.216.98] => {
    "msg": "DEBUG kolla_external_vip_interface eno1"
}
2018-05-28 14:11:03,527 p=10623 u=root | ok: [10.204.216.97] => {
    "msg": "DEBUG kolla_external_vip_interface eno1"
}
2018-05-28 14:11:03,547 p=10623 u=root | TASK [create_openstack_config : debug] *****************************************
    "host_intf_dict": {
        "10.204.216.103": "eno1",
        "10.204.216.95": "eno1",
        "10.204.216.96": "eno1",
        "10.204.216.97": "eno1",
        "10.204.216.98": "eno1",
        "10.204.216.99": "eno1"
    }

Revision history for this message
Ramprakash R (ramprakash) wrote :

The OPENSTACK_NODES parameter should be the set of IP addresses on which the OpenStack services are desired. So this should have been:

OPENSTACK_NODES: 10.10.10.X,10.10.10.Y,10.10.10.Z

Please let me know if this works.
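
For this setup, that would presumably be the controllers' ctl-data addresses (the same hosts' addresses listed under CONTROL_NODES):

OPENSTACK_NODES: 10.10.10.14,10.10.10.6,10.10.10.7

That keeps OPENSTACK_NODES on the same network as the kolla_internal_vip_address (10.10.10.20).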

Revision history for this message
Ritam Gangopadhyay (ritam) wrote :

From: Ritam Gangopadhyay
Sent: Thursday, May 31, 2018 12:09 AM
To: Ramprakash Ram Mohan <email address hidden>
Cc: Michael Henkel <email address hidden>; Andrey Pavlov <email address hidden>; Sudheendra Rao <email address hidden>; Vimal Appachan <email address hidden>; Abhay Joshi <email address hidden>
Subject: Re: Multi Interface Setup with Openstack HA.

Hi Ram,

       The configuration mentioned in your email is the one for which the bug was filed.

OpenStack nodes on the mgmt/API network
Internal VIP on the ctl-data network
External VIP on the mgmt/API network

With this we see haproxy failing to come up while waiting for VIP:3306.

In the bug you asked me to configure the OpenStack nodes on 10.10.10.X, i.e. the ctl-data network. With this I see the problem illustrated in my first mail.

Apart from these configurations, we have also tried setting network_interface, omitting the OpenStack nodes in instances.yaml, and a few other variations, without any success.

Regards,
Ritam.

On May 30, 2018 23:11, Ramprakash Ram Mohan <email address hidden> wrote:
+Abhay

Hi Ritam,

In your setup, if you want OpenStack to run its services on the mgmt network, then you need to change the following in your config, as I’ve mentioned in the bug:

contrail_configuration:
    OPENSTACK_NODES: 10.204.216.103,10.204.216.95,10.204.216.96
kolla_config:
   kolla_globals:
      kolla_internal_vip_address: 10.204.216.140
      kolla_external_vip_address: 10.10.10.20

Basic theory is this:
OpenStack runs its services on the “kolla_internal_vip_address”, which should correspond to an IP address on the interface named by the “network_interface” parameter.
kolla_external_vip_address is the address you want to access the services from. In “kolla” terminology this should be your management network, but in Contrail we want to run the services themselves on the management network, hence the swapped configs. (If you do not want access to OpenStack services on 10.10.10.20 via haproxy, then you can avoid that altogether - I have not tested this though.)
“network_interface” can be derived from the OPENSTACK_NODES parameter, to avoid having to specify interface names, so make sure kolla_internal_vip_address and OPENSTACK_NODES are on the same network.
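
A minimal annotated sketch of that rule, using the addresses from this setup (the comments are explanatory notes, not kolla options):

contrail_configuration:
  OPENSTACK_NODES: 10.204.216.103,10.204.216.95,10.204.216.96  # network_interface is derived from these
kolla_config:
  kolla_globals:
    kolla_internal_vip_address: 10.204.216.140  # services bind here; same network as OPENSTACK_NODES
    kolla_external_vip_address: 10.10.10.20     # access address; optional if haproxy access from that network is not needed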

Let me know if this helps.

Thanks,
Ram

From: Ritam Gangopadhyay <email address hidden>
Date: Wednesday, May 30, 2018 at 9:55 AM
To: Michael Henkel <email address hidden>, Ramprakash Ram Mohan <email address hidden>, Andrey Pavlov <email address hidden>
Cc: Sudheendra Rao <email address hidden>, Vimal Appachan <email address hidden>
Subject: Multi Interface Setup with Openstack HA.

instances.yaml_with_openstack_on_mgmt-api_subnet.yaml instances.yaml_with_opnstack_nodes_set_to_ctl_data_subnet.yaml

Hi,

       We are trying to get clarity on how the services (Contrail and OpenStack) are supposed to come up on a multi-interface OpenStack HA setup, and what the right deployment scenario would be.

       Ideally we should have mgmt, API, and CTL/DATA as three separate networks in Contrail, but as of now we have mgmt/API in one subnet and CTL/DATA in the other. With that in mind, what we understand of a multi-interface setup as of today is:-

1. OpenStack services should open up north-bound conne...


Jeba Paulaiyan (jebap)
tags: added: beta-blocker
tags: added: fabric
summary: - R5.0.1-micro-services provision - MariaDB fails to come up.
+ R5.0.1-micro-services provision - haproxy fails to come up.
Revision history for this message
Ritam Gangopadhyay (ritam) wrote :

instances.yaml used:-

global_configuration:
   REGISTRY_PRIVATE_INSECURE: True
   CONTAINER_REGISTRY: 10.204.217.152:5000
provider_config:
  bms:
    domainsuffix: englab.juniper.net
    ntpserver: 10.204.217.158
    ssh_pwd: c0ntrail123
    ssh_user: root

instances:
  nodem10:
      ip: 10.204.216.99
      provider: bms
      roles:
          openstack_compute: null
          vrouter:
              PHYSICAL_INTERFACE: ens2f1
  nodem14:
      ip: 10.204.216.103
      provider: bms
      roles:
          analytics: null
          analytics_database: null
          config: null
          config_database: null
          control: null
          openstack: null
          webui: null
  nodem6:
      ip: 10.204.216.95
      provider: bms
      roles:
          analytics: null
          analytics_database: null
          config: null
          config_database: null
          control: null
          openstack: null
          webui: null
  nodem7:
      ip: 10.204.216.96
      provider: bms
      roles:
          analytics: null
          analytics_database: null
          config: null
          config_database: null
          control: null
          openstack: null
          webui: null
  nodem8:
      ip: 10.204.216.97
      provider: bms
      roles:
          openstack_compute: null
          vrouter:
              PHYSICAL_INTERFACE: bond0
  nodem9:
      ip: 10.204.216.98
      provider: bms
      roles:
          openstack_compute: null
          vrouter:
              PHYSICAL_INTERFACE: ens2f1

contrail_configuration:
  CLOUD_ORCHESTRATOR: openstack
  OPENSTACK_NODES: 10.204.216.103,10.204.216.95,10.204.216.96
  CONTROLLER_NODES: 10.204.216.103,10.204.216.95,10.204.216.96
  CONTROL_NODES: 10.10.10.14,10.10.10.6,10.10.10.7
  CONTAINER_REGISTRY: 10.204.217.152:5000
  REGISTRY_PRIVATE_INSECURE: True
  CONTRAIL_VERSION: ocata-master-117
  RABBITMQ_NODE_PORT: 5673
  AUTH_MODE: keystone
  KEYSTONE_AUTH_HOST: 10.204.216.140
  KEYSTONE_AUTH_URL_VERSION: /v3
  KEYSTONE_AUTH_ADMIN_PASSWORD: c0ntrail123
  AAA_MODE: rbac
  VROUTER_GATEWAY: 10.10.10.101
  ENCAP_PRIORITY: VXLAN,MPLSoUDP,MPLSoGRE
  JVM_EXTRA_OPTS: -Xms1g -Xmx2g
  IPFABRIC_SERVICE_HOST: 10.204.216.140

kolla_config:
  kolla_globals:
    kolla_internal_vip_address: 10.204.216.140
    kolla_external_vip_address: 10.10.10.20
    contrail_api_interface_address: 10.204.216.103
    docker_registry: docker.io
    docker_namespace: kolla
    enable_haproxy: "yes"
    enable_ironic: "no"
    enable_swift: "no"
  kolla_passwords:
    metadata_secret: c0ntrail123
    keystone_admin_password: c0ntrail123

Revision history for this message
Andrey Pavlov (apavlov-e) wrote :

I see incorrect behavior with interface derivation for kolla:

[root@nodem14 ~]# cat contrail-kolla-ansible/etc/kolla/globals.yml
---
# You can use this file to override _any_ variable throughout Kolla.
# Additional options can be found in the
# 'kolla-ansible/ansible/group_vars/all.yml' file.

neutron_opencontrail_init_image_full: 10.204.217.152:5000/contrail-openstack-neutron-init:ocata-master-117
openstack_release: ocata
ironic_notification_manager_image_full: 10.204.217.152:5000/contrail-openstack-ironic-notification-manager:ocata-master-117
enable_swift: no
storage_nodes: 10.204.216.103,10.204.216.96,10.204.216.95
heat_opencontrail_init_image_full: 10.204.217.152:5000/contrail-openstack-heat-init:ocata-master-117
contrail_api_interface_address: 10.204.216.103
enable_barbican: True
enable_opencontrail_rbac: yes
customize_etc_hosts: False
enable_ironic: no
kolla_external_vip_address: 10.10.10.20
enable_haproxy: yes
nova_compute_opencontrail_init_image_full: 10.204.217.152:5000/contrail-openstack-compute-init:ocata-master-117
docker_namespace: kolla
kolla_internal_vip_address: 10.204.216.140
neutron_plugin_agent: opencontrail

[root@nodem14 ~]# cat contrail-kolla-ansible/ansible/host_vars/10.204.216.103.yml
---

network_interface: eno1
kolla_external_vip_interface: eno1

current interfaces/addresses on host:
[root@nodem14 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:7a:dc:42:c8 brd ff:ff:ff:ff:ff:ff
    inet 10.204.216.103/24 brd 10.204.216.255 scope global dynamic eno1
       valid_lft 858691sec preferred_lft 858691sec
    inet6 fe80::ec4:7aff:fedc:42c8/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 0c:c4:7a:dc:42:c9 brd ff:ff:ff:ff:ff:ff
4: ens2f0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 0c:c4:7a:ea:b3:64 brd ff:ff:ff:ff:ff:ff
5: ens2f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:7a:ea:b3:65 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.14/24 brd 10.10.10.255 scope global ens2f1
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:feea:b365/64 scope link
       valid_lft forever preferred_lft forever
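
A sketch of what the derived host_vars file would presumably need to contain here, matching the manual workaround in the next comment:

---
network_interface: eno1               # carries 10.204.216.103, same network as the internal VIP 10.204.216.140
kolla_external_vip_interface: ens2f1  # carries 10.10.10.14, same network as the external VIP 10.10.10.20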

Revision history for this message
Andrey Pavlov (apavlov-e) wrote :

There is a workaround that may work here - explicitly add both interfaces to each controller:

  nodem14:
      ip: 10.204.216.103
      provider: bms
      roles:
          analytics: null
          analytics_database: null
          config: null
          config_database: null
          control: null
          openstack:
            kolla_external_vip_interface: ens2f1
            network_interface: eno1
          webui: null

Revision history for this message
Ritam Gangopadhyay (ritam) wrote :

Changed config:-

   kolla_internal_vip_address: 10.204.216.140
   keepalived_virtual_router_id: "151"
   contrail_api_interface_address: 10.204.216.103

I am still running into issues after adding the keepalived virtual router id.
This is the error I see during OpenStack provisioning while bringing up the VIP:

*********************************************
*********************************************

2018-06-01 23:24:28,346 p=32710 u=root | TASK [haproxy : Waiting for virtual IP to appear] **************************************************************************************************************************************************
2018-06-01 23:24:58,834 p=32710 u=root | fatal: [10.204.216.95]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to 10.204.216.95 closed.\r\n", "unreachable": true}
2018-06-01 23:25:58,992 p=32710 u=root | fatal: [10.204.216.96]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to 10.204.216.96 closed.\r\n", "unreachable": true}
2018-06-01 23:29:29,553 p=32710 u=root | fatal: [10.204.216.103]: FAILED! => {"changed": false, "elapsed": 300, "msg": "Timeout when waiting for 10.204.216.140:3306"}
2018-06-01 23:29:29,558 p=32710 u=root | to retry, use: --limit @/root/contrail-ansible-deployer/playbooks/install_openstack.retry

*****************************************************

On looking at nodem7 - 10.204.216.96 I see this:-
[root@nodem7 ~]# ip addr
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
   link/ether 0c:c4:7a:dc:44:3a brd ff:ff:ff:ff:ff:ff
   inet 10.204.216.96/24 brd 10.204.216.255 scope global dynamic eno1
      valid_lft 862969sec preferred_lft 862969sec
   inet 10.204.216.140/32 scope global eno1
      valid_lft forever preferred_lft forever
   inet 10.204.216.103/32 scope global eno1 <<<<<<<<<<<<<<<<<<< Nodem14 IP address configured on this interface along with the VIP
      valid_lft forever preferred_lft forever
   inet6 fe80::ec4:7aff:fedc:443a/64 scope link
      valid_lft forever preferred_lft forever

*********************************************************

The keepalived docker logs are available on nodem14 (10.204.216.103) at /root/keepalived_docker.logs.

Revision history for this message
Ritam Gangopadhyay (ritam) wrote :

It seems we need to set the external VIP to the same address as the internal VIP; otherwise it picks the ssh IP and creates the issue mentioned in comment #8.

So changed config looks like:-

   kolla_internal_vip_address: 10.204.216.140
   kolla_external_vip_address: 10.204.216.140
   keepalived_virtual_router_id: "151"
   contrail_api_interface_address: 10.204.216.103

Revision history for this message
Ritam Gangopadhyay (ritam) wrote :

The final config that works for the multi-interface setup is the following instances.yaml file:-

global_configuration:
   REGISTRY_PRIVATE_INSECURE: True
   CONTAINER_REGISTRY: 10.204.217.152:5000
provider_config:
  bms:
    domainsuffix: englab.juniper.net
    ntpserver: 10.204.217.158
    ssh_pwd: c0ntrail123
    ssh_user: root

instances:
  nodem10:
      ip: 10.204.216.99
      provider: bms
      roles:
          openstack_compute: null
          vrouter:
              PHYSICAL_INTERFACE: ens2f1
  nodem14:
      ip: 10.204.216.103
      provider: bms
      roles:
          analytics: null
          analytics_database: null
          config: null
          config_database: null
          control: null
          openstack: null
          webui: null
  nodem6:
      ip: 10.204.216.95
      provider: bms
      roles:
          analytics: null
          analytics_database: null
          config: null
          config_database: null
          control: null
          openstack: null
          webui: null
  nodem7:
      ip: 10.204.216.96
      provider: bms
      roles:
          analytics: null
          analytics_database: null
          config: null
          config_database: null
          control: null
          openstack: null
          webui: null
  nodem8:
      ip: 10.204.216.97
      provider: bms
      roles:
          openstack_compute: null
          vrouter:
              PHYSICAL_INTERFACE: bond0
  nodem9:
      ip: 10.204.216.98
      provider: bms
      roles:
          openstack_compute: null
          vrouter:
              PHYSICAL_INTERFACE: ens2f1

contrail_configuration:
  CLOUD_ORCHESTRATOR: openstack
  OPENSTACK_NODES: 10.204.216.103,10.204.216.95,10.204.216.96
  CONTROLLER_NODES: 10.204.216.103,10.204.216.95,10.204.216.96
  CONTROL_NODES: 10.10.10.14,10.10.10.6,10.10.10.7
  CONTAINER_REGISTRY: 10.204.217.152:5000
  REGISTRY_PRIVATE_INSECURE: True
  CONTRAIL_VERSION: ocata-master-117
  RABBITMQ_NODE_PORT: 5673
  AUTH_MODE: keystone
  KEYSTONE_AUTH_HOST: 10.204.216.140
  KEYSTONE_AUTH_URL_VERSION: /v3
  KEYSTONE_AUTH_ADMIN_PASSWORD: c0ntrail123
  AAA_MODE: rbac
  VROUTER_GATEWAY: 10.10.10.101
  ENCAP_PRIORITY: VXLAN,MPLSoUDP,MPLSoGRE
  JVM_EXTRA_OPTS: -Xms1g -Xmx2g
  IPFABRIC_SERVICE_HOST: 10.204.216.140

kolla_config:
  kolla_globals:
    kolla_internal_vip_address: 10.204.216.140
    kolla_external_vip_address: 10.204.216.140
    keepalived_virtual_router_id: 151
    contrail_api_interface_address: 10.204.216.103
    docker_registry: docker.io
    docker_namespace: kolla
    enable_haproxy: "yes"
    enable_ironic: "no"
    enable_swift: "no"
  kolla_passwords:
    metadata_secret: c0ntrail123
    keystone_admin_password: c0ntrail123

Revision history for this message
Ramprakash R (ramprakash) wrote :

Closing this bug based on comment #10

Revision history for this message
Soumil Kulkarni (soumilk) wrote :

Saw this bug again today.

2018-06-06 07:59:29,325 p=8742 u=root | TASK [haproxy : Waiting for virtual IP to appear] ******************************

2018-06-06 08:04:31,986 p=8742 u=root | fatal: [10.87.74.210]: FAILED! => {"changed": false, "elapsed": 301, "msg": "Timeout when waiting for 10.87.74.223:3306"}
2018-06-06 08:04:31,990 p=8742 u=root | fatal: [10.87.74.212]: FAILED! => {"changed": false, "elapsed": 301, "msg": "Timeout when waiting for 10.87.74.223:3306"}
2018-06-06 08:04:31,992 p=8742 u=root | fatal: [10.87.74.211]: FAILED! => {"changed": false, "elapsed": 301, "msg": "Timeout when waiting for 10.87.74.223:3306"}
2018-06-06 08:04:31,997 p=8742 u=root | to retry, use: --limit @/var/tmp/contrail_cluster/5fdbfd80-f0d0-481a-a9a8-fd2859b4e83a/contrail-ansible-deployer/playbooks/install_openstack.retry

2018-06-06 08:04:31,998 p=8742 u=root | PLAY RECAP *********************************************************************
2018-06-06 08:04:31,998 p=8742 u=root | 10.87.74.210 : ok=100 changed=18 unreachable=0 failed=1
2018-06-06 08:04:31,999 p=8742 u=root | 10.87.74.211 : ok=99 changed=16 unreachable=0 failed=1
2018-06-06 08:04:31,999 p=8742 u=root | 10.87.74.212 : ok=99 changed=16 unreachable=0 failed=1
2018-06-06 08:04:31,999 p=8742 u=root | 10.87.74.213 : ok=4 changed=0 unreachable=0 failed=0
2018-06-06 08:04:31,999 p=8742 u=root | 10.87.74.214 : ok=4 changed=0 unreachable=0 failed=0
2018-06-06 08:04:32,000 p=8742 u=root | 10.87.74.215 : ok=4 changed=0 unreachable=0 failed=0
2018-06-06 08:04:32,000 p=8742 u=root | 10.87.74.216 : ok=69 changed=11 unreachable=0 failed=0
2018-06-06 08:04:32,000 p=8742 u=root | 10.87.74.217 : ok=76 changed=15 unreachable=0 failed=0
2018-06-06 08:04:32,000 p=8742 u=root | localhost : ok=7 changed=2 unreachable=0 failed=0

Here is the config that has been used.

global_configuration:
  CONTAINER_REGISTRY: 10.84.5.81:5000
  REGISTRY_PRIVATE_INSECURE: True
provider_config:
  bms:
    ssh_user: root
    ssh_pwd: c0ntrail123
    ntp_server: 10.84.5.100
    domainsuffix: local
instances:
  5c3s3-node3-vm4:
    ip: 10.87.74.213
    provider: bms
    roles:
      config:
      config_database:
      control:
      webui:
      analytics:
      analytics_database:
  5c3s3-node3-vm6:
    ip: 10.87.74.215
    provider: bms
    roles:
      config:
      config_database:
      control:
      webui:
      analytics:
      analytics_database:
  5c3s3-node3-vm5:
    ip: 10.87.74.214
    provider: bms
    roles:
      config:
      config_database:
      control:
      webui:
      analytics:
      analytics_database:
  5c3s3-node4-vm1:
    ip: 10.87.74.216
    provider: bms
    roles:
      vrouter:
        VROUTER_GATEWAY: 192.168.103.254
      openstack_compute:
  5c3s3-node4-vm2:
    ip: 10.87.74.217
    provider: bms
    roles:
      vrouter:
        VROUTER_GATEWAY: 192.168.103.254
      openstack_compute:
  5c3s3-node3-vm1:
    ip: 10.87.74.210
    provider: bms
    roles:
      openstack_control:
...


Revision history for this message
Andrey Pavlov (apavlov-e) wrote :

@Soumil, looks like it's not a problem of the ansible-deployer. It looks like another problem with keepalived/VRRP in your network.

Revision history for this message
Soumil Kulkarni (soumilk) wrote :

@apavlov-e yes. Thank you for pointing it out. I changed the keepalived_virtual_router_id in the config and the provisioning went ahead. Marking this bug as invalid.
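
For reference, a minimal sketch of that change in kolla_globals (the virtual router id must be unique among all VRRP/keepalived instances sharing the same L2 segment; valid VRRP values are 1-255):

kolla_config:
  kolla_globals:
    keepalived_virtual_router_id: 151  # any 1-255 value not already used by another VRRP instance on this network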
