hook failed: "ceph-access-relation-changed" after removing and re-adding cinder-ceph application
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Nova Compute Charm | Triaged | Medium | Unassigned |
Bug Description
The nova-compute charm does not properly handle the situation where the cinder-ceph application is removed, redeployed, and re-related with the compute charm; the ceph-access-relation-changed hook fails, complaining that the virsh secret already exists.
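For illustration, the failure mode and a guard that would avoid it might look roughly like the sketch below; the variable names, file path, and key handling are placeholders, not the charm's actual hook code:

# illustrative sketch only -- not the nova-compute charm's real hook code
SECRET_UUID="$1"      # uuid handed over on the ceph-access relation (placeholder)
CEPH_KEY="$2"         # base64 ceph key for the secret (placeholder)
if virsh secret-list | grep -q "$SECRET_UUID"; then
    # a secret left behind by the previous cinder-ceph deployment;
    # undefine (or simply reuse) it instead of failing on secret-define
    virsh secret-undefine "$SECRET_UUID"
fi
virsh secret-define --file "/tmp/secret-${SECRET_UUID}.xml"
virsh secret-set-value --secret "$SECRET_UUID" --base64 "$CEPH_KEY"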
Steps to reproduce:
ubuntu@OrangeBox84 ~ » juju remove-application cinder-ceph
removing application cinder-ceph
...
ubuntu@OrangeBox84 ~ » juju deploy cs:cinder-ceph
Located charm "cs:cinder-
Deploying charm "cs:cinder-
ubuntu@OrangeBox84 ~ » juju add-relation cinder-ceph cinder
ubuntu@OrangeBox84 ~ » juju add-relation cinder-ceph ceph-mon-cinder
ubuntu@OrangeBox84 ~ » juju add-relation cinder-ceph nova-compute
...
nova-compute/0 error idle 1 172.27.84.202 hook failed: "ceph-access-relation-changed"
All of the units are showing the following:
[unit-nova-compute log excerpt truncated in the original report]
The charm was able to proceed and transitioned to the "active/idle" state after I removed the secret from virsh on each nova-compute host (a scripted form of the same workaround is sketched after the status output below):
ubuntu@OrangeBox84 ~/fce-demo (fce-on-orange-box *%) » juju run 'virsh secret-undefine 919fa72e-
- Stdout: |+
Secret 919fa72e-
UnitId: nova-compute/0
- Stdout: |+
Secret 919fa72e-
UnitId: nova-compute/1
- Stdout: |+
Secret 919fa72e-
UnitId: nova-compute/2
ubuntu@OrangeBox84 ~/fce-demo (fce-on-orange-box *%) » juju resolve nova-compute/0
ubuntu@OrangeBox84 ~/fce-demo (fce-on-orange-box *%) » juju resolve nova-compute/1
ubuntu@OrangeBox84 ~/fce-demo (fce-on-orange-box *%) » juju resolve nova-compute/2
ubuntu@OrangeBox84 ~/fce-demo (fce-on-orange-box *%) » juju status nova-compute
Model Controller Cloud/Region Version SLA Timestamp
openstack foundations-maas maas_cloud 2.7.0 unsupported 19:55:40Z
App Version Status Scale Charm Store Rev OS Notes
ceph-osd waiting 0 ceph-osd jujucharms 294 ubuntu
neutron-openvswitch 12.1.0 active 3 neutron-openvswitch jujucharms 269 ubuntu
nova-compute 17.0.12 active 3 nova-compute jujucharms 309 ubuntu
ntp 4.2.8p4+dfsg active 3 ntp jujucharms 35 ubuntu
Unit Workload Agent Machine Public address Ports Message
nova-compute/0 active idle 1 172.27.84.202 Unit is ready
neutron-
ntp/2 active idle 172.27.84.202 123/udp ntp: Ready
nova-compute/1* active idle 2 172.27.84.203 Unit is ready
neutron-
ntp/3 active idle 172.27.84.203 123/udp ntp: Ready
nova-compute/2 active idle 3 172.27.84.204 Unit is ready
neutron-
ntp/0* active idle 172.27.84.204 123/udp ntp: Ready
full juju status: https:/
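For reference, the same workaround can presumably be scripted against the whole nova-compute application instead of unit by unit; <uuid> below is a placeholder for the secret id shown in the hook error:

juju run --application nova-compute 'virsh secret-undefine <uuid>'
for unit in nova-compute/0 nova-compute/1 nova-compute/2; do
    juju resolve "$unit"
done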
Changed in charm-nova-compute:
status: New → Triaged
importance: Undecided → Medium
tags: added: scaleback
Changed in charm-nova-compute:
milestone: none → 20.04
Changed in charm-nova-compute:
milestone: 20.04 → none
I just hit this bug in a xenial-queens OpenStack cloud when relating to a new Ceph cluster (bionic-queens, Luminous) using ceph-proxy.
Removing the nova-compute to cinder-ceph relation did not remove the libvirt secret.
My workaround was to remove <uuid>.base64 and <uuid>.xml in /etc/libvirt/secrets and also run:
sudo virsh secret-undefine <uuid>
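For anyone hitting the same thing, the cleanup above amounts to roughly the following on each affected compute host; <uuid> is a placeholder, confirm the actual value with virsh secret-list first:

UUID=<uuid>
sudo rm -f /etc/libvirt/secrets/${UUID}.base64 /etc/libvirt/secrets/${UUID}.xml
sudo virsh secret-undefine ${UUID}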