swap_volume failed

Bug #1825354 reported by ye
Affects: OpenStack Compute (nova)
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

Description
===========
Volume migration failed but gave no feedback to the user.

Steps to reproduce
==================
* I tried to migrate an attached volume.

    $ cinder migrate 7de63101-7616-47ce-b6ed-39d4df8e2907 control.az01.rocky1@ceph-rbd

* Then I showed the status of the new and old volumes:

    new volume:
    | migration_status | target:7de63101-7616-47ce-b6ed-39d4df8e2907 |
    | os-vol-mig-status-attr:migstat | target:7de63101-7616-47ce-b6ed-39d4df8e2907 |
    | status | available |

    old volume:
    | migration_status | migrating |
    | os-vol-mig-status-attr:migstat | migrating |
    | status | in-use |

* Then I searched nova-compute.log and found:

    2019-04-18 14:56:26.288 27737 ERROR nova.compute.manager [instance: d111dfa2-61e2-4b4d-8574-6527acf1ab97] Traceback (most recent call last):
    2019-04-18 14:56:26.288 27737 ERROR nova.compute.manager [instance: d111dfa2-61e2-4b4d-8574-6527acf1ab97] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5723, in _swap_volume
    2019-04-18 14:56:26.288 27737 ERROR nova.compute.manager [instance: d111dfa2-61e2-4b4d-8574-6527acf1ab97] mountpoint, resize_to)
    2019-04-18 14:56:26.288 27737 ERROR nova.compute.manager [instance: d111dfa2-61e2-4b4d-8574-6527acf1ab97] File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1556, in swap_volume
    2019-04-18 14:56:26.288 27737 ERROR nova.compute.manager [instance: d111dfa2-61e2-4b4d-8574-6527acf1ab97] raise NotImplementedError(_("Swap only supports host devices"))
    2019-04-18 14:56:26.288 27737 ERROR nova.compute.manager [instance: d111dfa2-61e2-4b4d-8574-6527acf1ab97] NotImplementedError: Swap only supports host devices
    2019-04-18 14:56:26.288 27737 ERROR nova.compute.manager [instance: d111dfa2-61e2-4b4d-8574-6527acf1ab97]
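The NotImplementedError comes from a guard in the libvirt driver: a network-backed disk such as rbd exposes no host block device path, so swap_volume refuses it. A minimal sketch of that check (simplified stand-ins, not the actual Nova code):

```python
class DiskConfig:
    """Simplified stand-in for the libvirt guest disk config object."""
    def __init__(self, source_type, source_path=None):
        self.source_type = source_type   # e.g. "block", "file", "network"
        self.source_path = source_path   # host device path; None for network disks

def check_swappable(conf):
    """Mimic the guard in the libvirt driver's swap_volume: only disks
    that expose a host device path can be swapped via block rebase."""
    if not conf.source_path:
        raise NotImplementedError("Swap only supports host devices")
    return True

# An iSCSI/FC volume appears as a host block device and passes the check.
check_swappable(DiskConfig("block", "/dev/sdb"))

# An rbd volume is attached over the network with no host path, so the
# driver raises -- exactly the traceback seen in nova-compute.log above.
try:
    check_swappable(DiskConfig("network"))
except NotImplementedError as exc:
    print(exc)  # Swap only supports host devices
```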

Expected result
===============
    the new volume is automatically deleted,
    old volume:
    | migration_status | None |
    | os-vol-mig-status-attr:migstat | None |
    | status | in-use |

Actual result
=============
    Both volumes wait forever for the migration to succeed, while nova-compute.log records the ERROR above.

Environment
===========
1. Exact version of OpenStack you are running:
    $ rpm -qa | grep nova
    openstack-nova-conductor-18.1.0-1.el7.noarch
    openstack-nova-console-18.1.0-1.el7.noarch
    python2-novaclient-11.0.0-1.el7.noarch
    openstack-nova-compute-18.1.0-1.el7.noarch
    openstack-nova-api-18.1.0-1.el7.noarch
    openstack-nova-novncproxy-18.1.0-1.el7.noarch
    openstack-nova-common-18.1.0-1.el7.noarch
    openstack-nova-scheduler-18.1.0-1.el7.noarch
    python-nova-18.1.0-1.el7.noarch
    openstack-nova-placement-api-18.1.0-1.el7.noarch

2. Which storage type did you use?
   Ceph

3. Which networking type did you use?
   Neutron

Tags: cinder volumes
Revision history for this message
Matt Riedemann (mriedem) wrote :

There should be a failed instance action event associated with the instance but the server won't be in ERROR status. The NotImplementedError from the libvirt driver should also be handled, but maybe migrate_volume_completion isn't getting called because of another known bug being fixed here:

https://review.openstack.org/#/c/637224/

Revision history for this message
Lee Yarwood (lyarwood) wrote :

While this looks the same, the logs show that it isn't a duplicate of bug #1803961. The issue here is that c-vol has attempted to migrate an attached rbd volume, while the libvirt Nova driver has no support for swapping between two attached rbd volumes at present. It should be easy to block this on the c-api side, or to roll back on c-vol given the error reported from n-cpu.

Revision history for this message
ye (dakele) wrote :

Description
===========
Volume update has no effect and gives no feedback.

Steps to reproduce
==================
* I tried to update an attached volume.

    $ nova volume-update d111dfa2-61e2-4b4d-8574-6527acf1ab97 29fe0cb4-ce13-4b2c-8725-18e8f90afc56 c9032004-148c-481c-97db-30e0b1cee482

* Then I showed the status of the new and old volumes:

    new volume is still available

    old volume still in-use.

* I found the same error in nova-compute.log but the nova-client didn't show anything.

I think Nova should give feedback to inform the user that an error occurred somewhere.
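As Matt notes, failures like this surface as a failed instance action event rather than putting the server into ERROR, which is why the client call appears to succeed silently. A toy model of that pattern (these names are illustrative assumptions, not Nova code):

```python
# Toy model of the instance-actions pattern: the server status is left
# untouched on a volume-swap failure, and only a per-action event record
# (queryable via the os-instance-actions API) shows the Error result.
class InstanceActions:
    def __init__(self):
        self.events = []

    def record(self, action, result, message=None):
        self.events.append({"action": action, "result": result,
                            "message": message})

server_status = "ACTIVE"
actions = InstanceActions()

try:
    # The failure from the libvirt driver in this bug.
    raise NotImplementedError("Swap only supports host devices")
except NotImplementedError as exc:
    # The server stays ACTIVE; only the action log records the failure,
    # so a user who never checks instance actions sees nothing at all.
    actions.record("swap_volume", "Error", str(exc))

print(server_status)                 # ACTIVE
print(actions.events[-1]["result"])  # Error
```

In a real deployment the equivalent check is listing the server's instance actions and drilling into the failed request, rather than relying on the server status alone.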

Revision history for this message
Lee Yarwood (lyarwood) wrote :

Apologies, my previous comment was totally wrong; this is due to bug #1803961, thanks to the use of is_cinder_migration in the finally block below:

https://github.com/openstack/nova/blob/fc3890667e4971e3f0f35ac921c2a6c25f72adec/nova/compute/manager.py#L5918-L5920

We should actually be calling the else block here that calls back to Cinder to reset the volume state when failed=True:

https://github.com/openstack/nova/blob/fc3890667e4971e3f0f35ac921c2a6c25f72adec/nova/compute/manager.py#L5921-L5937

FWIW I agree with Matt's point here, this method really needs to get refactored and cleaned up in Train.

https://github.com/openstack/nova/blob/fc3890667e4971e3f0f35ac921c2a6c25f72adec/nova/compute/manager.py#L5885-L5889
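A simplified sketch of the control-flow bug described above (hypothetical names, not the actual _swap_volume code): when the swap fails during a Cinder-initiated migration, the cleanup path never calls back to Cinder, so both volumes are left stuck in "migrating".

```python
def finish_swap(failed, is_cinder_migration, cinder):
    """Hypothetical condensation of the cleanup logic in the finally block."""
    if is_cinder_migration:
        # Buggy path: Nova assumes Cinder will finish the migration and
        # does nothing, even though the swap has already failed.
        return "left to cinder"
    if failed:
        # Intended rollback path: tell Cinder the migration failed so it
        # can reset the volume states (migration_status back to None).
        cinder.migrate_volume_completion(error=True)
        return "rolled back"
    cinder.migrate_volume_completion(error=False)
    return "completed"

class FakeCinder:
    """Records whether Nova ever called back to Cinder."""
    def __init__(self):
        self.calls = []
    def migrate_volume_completion(self, error):
        self.calls.append(error)

# Reproduces the stuck state from this bug: a failed swap during a
# cinder-initiated migration never calls back, so nothing is reset.
c = FakeCinder()
print(finish_swap(failed=True, is_cinder_migration=True, cinder=c))
print(c.calls)  # [] -- Cinder is never told about the failure
```

The fix Lee points at is to take the rollback branch when failed is True, even for a Cinder-initiated migration, so Cinder can reset the volume states.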
