"pcs status" gives a traceback

Bug #1652196 reported by Dmitry Sutyagin
This bug affects 2 people
Affects: Mirantis OpenStack
Status: Invalid
Importance: Undecided
Assigned to: Dmitry Sutyagin

Bug Description

Reproduced in a newly deployed 9.1 environment (deployed with the Fuel 9.0 ISO; Fuel itself not updated to 9.1):

root@node-1:~# pcs status
Cluster name:
WARNING: corosync and pacemaker node names do not match (IPs used in setup?)
Last updated: Fri Dec 23 01:15:22 2016 Last change: Fri Dec 23 00:34:28 2016 by root via crm_attribute on node-1.domain.tld
Stack: corosync
Current DC: node-1.domain.tld (version 1.1.14-70404b0) - partition with quorum
3 nodes and 46 resources configured

Online: [ node-1.domain.tld node-2.domain.tld node-4.domain.tld ]

Full list of resources:

 Clone Set: clone_p_vrouter [p_vrouter]
     Started: [ node-1.domain.tld node-2.domain.tld node-4.domain.tld ]
 vip__management (ocf::fuel:ns_IPaddr2): Started node-1.domain.tld
 vip__vrouter_pub (ocf::fuel:ns_IPaddr2): Started node-2.domain.tld
 vip__vrouter (ocf::fuel:ns_IPaddr2): Started node-2.domain.tld
 vip__public (ocf::fuel:ns_IPaddr2): Started node-4.domain.tld
 Clone Set: clone_p_haproxy [p_haproxy]
     Started: [ node-1.domain.tld node-2.domain.tld node-4.domain.tld ]
 Clone Set: clone_p_mysqld [p_mysqld]
     Started: [ node-1.domain.tld node-2.domain.tld node-4.domain.tld ]
 sysinfo_node-4.domain.tld (ocf::pacemaker:SysInfo): Started node-4.domain.tld
 sysinfo_node-2.domain.tld (ocf::pacemaker:SysInfo): Started node-2.domain.tld
 Master/Slave Set: master_p_conntrackd [p_conntrackd]
     Masters: [ node-2.domain.tld ]
     Slaves: [ node-1.domain.tld node-4.domain.tld ]
 Master/Slave Set: master_p_rabbitmq-server [p_rabbitmq-server]
     Masters: [ node-1.domain.tld ]
     Slaves: [ node-2.domain.tld node-4.domain.tld ]
 Clone Set: clone_neutron-openvswitch-agent [neutron-openvswitch-agent]
     Started: [ node-1.domain.tld node-2.domain.tld node-4.domain.tld ]
 Clone Set: clone_neutron-l3-agent [neutron-l3-agent]
     Started: [ node-1.domain.tld node-2.domain.tld node-4.domain.tld ]
 Clone Set: clone_neutron-metadata-agent [neutron-metadata-agent]
     Started: [ node-1.domain.tld node-2.domain.tld node-4.domain.tld ]
 Clone Set: clone_p_heat-engine [p_heat-engine]
     Started: [ node-1.domain.tld node-2.domain.tld node-4.domain.tld ]
 Clone Set: clone_neutron-dhcp-agent [neutron-dhcp-agent]
     Started: [ node-1.domain.tld node-2.domain.tld node-4.domain.tld ]
 Clone Set: clone_p_dns [p_dns]
     Started: [ node-1.domain.tld node-2.domain.tld node-4.domain.tld ]
 sysinfo_node-1.domain.tld (ocf::pacemaker:SysInfo): Started node-1.domain.tld
 Clone Set: clone_ping_vip__public [ping_vip__public]
     Started: [ node-1.domain.tld node-2.domain.tld node-4.domain.tld ]
 Clone Set: clone_p_ntp [p_ntp]
     Started: [ node-1.domain.tld node-2.domain.tld node-4.domain.tld ]

PCSD Status:
  node-1.domain.tld member (192.168.0.5): Offline
  node-2.domain.tld member (192.168.0.6): Offline
  node-4.domain.tld member (192.168.0.7): Offline

Daemon Status:
  corosync: unknown/Failed to issue method call: No such file or directory
  pacemaker: unknown/
Traceback (most recent call last):
  File "/usr/sbin/pcs", line 219, in <module>
    main(sys.argv[1:])
  File "/usr/sbin/pcs", line 159, in main
    cmd_map[command](argv)
  File "/usr/lib/python2.7/dist-packages/pcs/status.py", line 16, in status_cmd
    full_status()
  File "/usr/lib/python2.7/dist-packages/pcs/status.py", line 64, in full_status
    utils.serviceStatus(" ")
  File "/usr/lib/python2.7/dist-packages/pcs/utils.py", line 1961, in serviceStatus
    print prefix + daemons[i] + ": " + status[i] + "/" + enabled[i]
IndexError: list index out of range
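The traceback points at `serviceStatus` in `pcs/utils.py`, which prints three parallel lists (daemon names, their statuses, and their enabled states) by shared index. The output above shows the failure mode: the `corosync` line prints, the `pacemaker` line ends at `unknown/` with no enabled value, and then the index runs past the end of the shorter list. A simplified sketch of that pattern and a defensive rewrite (hypothetical; the real function queries the init system, and the actual fix shipped in the mos2 build is not shown here):

```python
def service_status_broken(prefix, daemons, status, enabled):
    """Mimics the failing pattern: parallel lists indexed by position.

    Raises IndexError as soon as status or enabled is shorter than daemons,
    e.g. when one init-system query fails and returns nothing.
    """
    lines = []
    for i in range(len(daemons)):
        lines.append(prefix + daemons[i] + ": " + status[i] + "/" + enabled[i])
    return lines


def service_status_fixed(prefix, daemons, status, enabled):
    """Pads missing entries with "unknown" so every daemon still prints."""
    lines = []
    for i, daemon in enumerate(daemons):
        st = status[i] if i < len(status) else "unknown"
        en = enabled[i] if i < len(enabled) else "unknown"
        lines.append("%s%s: %s/%s" % (prefix, daemon, st, en))
    return lines
```

With equal-length lists both versions behave the same; the difference only shows when one backend query comes up empty, which is exactly the situation the `Failed to issue method call` output above suggests.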

root@node-1:~# dpkg -l | grep pcs
ii pcs 0.9.144-1~u14.04+mos1 all Pacemaker command line interface

Tags: support
Changed in mos:
milestone: none → 9.2
Revision history for this message
Denis Meltsaykin (dmeltsaykin) wrote :

Cannot confirm the issue; it works for me without any tracebacks. Please provide exact steps to reproduce and a diagnostic snapshot of the affected environment.

Changed in mos:
assignee: MOS Maintenance (mos-maintenance) → Dmitry Sutyagin (dsutyagin)
status: New → Incomplete
Dmitry Sutyagin (dsutyagin) wrote :

Confirmed: the same issue is present in 9.2.

Dmitry Sutyagin (dsutyagin) wrote :

My pcs package is the mos1 build:
root@node-9:~# apt-cache policy pcs
pcs:
  Installed: 0.9.144-1~u14.04+mos1
  Candidate: 0.9.144-1~u14.04+mos1
  Version table:
 *** 0.9.144-1~u14.04+mos1 0
       1050 http://10.20.16.2:8080/mitaka-9.0/ubuntu/x86_64/ mos9.0/main amd64 Packages
        100 /var/lib/dpkg/status

Eugene Nikanorov (enikanorov) wrote :

Not reproduced with the pcs 0.9.144-1~u14.04+mos2 package.

Dmitry Sutyagin (dsutyagin) wrote :
Changed in mos:
status: Incomplete → Invalid