Turning on 'nas_secure_operations' causes migration to fail

Bug #1402745 reported by Yogesh
This bug affects 1 person
Affects: Cinder | Status: New | Importance: Low | Assigned to: Unassigned

Bug Description

When attempting a migration from cmodeNFS to cmodeiSCSI while 'nas_secure_operations' is set to true, the migration fails with the "permission denied" error below:

2014-12-11 10:57:02.068 DEBUG oslo_concurrency.processutils [req-053024c7-db31-409b-8b6f-c4ced74e96ef 798fe77714104ec08036432f225dae1a b9d3fed2e495441891e15094be5ba16a] CMD "dd if=/dev/disk/by-path/ip-172.20.124.19:3260-iscsi-iqn.1992-08.com.netapp:sn.ef091da3e69811e3a5a1123478563412:vs.8-lun-0 of=/dev/null count=1" returned: 1 in 0.015221118927s from (pid=3405) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:216
2014-12-11 10:57:02.070 DEBUG oslo_concurrency.processutils [req-053024c7-db31-409b-8b6f-c4ced74e96ef 798fe77714104ec08036432f225dae1a b9d3fed2e495441891e15094be5ba16a] u'dd if=/dev/disk/by-path/ip-172.20.124.19:3260-iscsi-iqn.1992-08.com.netapp:sn.ef091da3e69811e3a5a1123478563412:vs.8-lun-0 of=/dev/null count=1' failed. Not Retrying. from (pid=3405) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:249
2014-12-11 10:57:02.071 ERROR cinder.brick.initiator.connector [req-053024c7-db31-409b-8b6f-c4ced74e96ef 798fe77714104ec08036432f225dae1a b9d3fed2e495441891e15094be5ba16a] Failed to access the device on the path /dev/disk/by-path/ip-172.20.124.19:3260-iscsi-iqn.1992-08.com.netapp:sn.ef091da3e69811e3a5a1123478563412:vs.8-lun-0: dd: failed to open '/dev/disk/by-path/ip-172.20.124.19:3260-iscsi-iqn.1992-08.com.netapp:sn.ef091da3e69811e3a5a1123478563412:vs.8-lun-0': Permission denied

2014-12-11 10:57:02.072 ERROR cinder.volume.driver [req-053024c7-db31-409b-8b6f-c4ced74e96ef 798fe77714104ec08036432f225dae1a b9d3fed2e495441891e15094be5ba16a] Failed to attach volume a8abbd5d-6944-44f3-9d48-063c73a4bdd9
2014-12-11 10:57:02.125 ERROR cinder.volume.manager [req-053024c7-db31-409b-8b6f-c4ced74e96ef 798fe77714104ec08036432f225dae1a b9d3fed2e495441891e15094be5ba16a] Failed to copy volume 498394ed-f5c7-499f-80d2-7441c46302b1 to a8abbd5d-6944-44f3-9d48-063c73a4bdd9
2014-12-11 10:57:02.167 DEBUG oslo_concurrency.lockutils [req-053024c7-db31-409b-8b6f-c4ced74e96ef 798fe77714104ec08036432f225dae1a b9d3fed2e495441891e15094be5ba16a] Acquired file lock "/opt/stack/data/cinder/cinder-a8abbd5d-6944-44f3-9d48-063c73a4bdd9-delete_volume" after waiting 0.000s from (pid=3402) acquire /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:198
2014-12-11 10:57:02.167 DEBUG oslo_concurrency.lockutils [req-053024c7-db31-409b-8b6f-c4ced74e96ef 798fe77714104ec08036432f225dae1a b9d3fed2e495441891e15094be5ba16a] Lock "a8abbd5d-6944-44f3-9d48-063c73a4bdd9-delete_volume" acquired by "lvo_inner2" :: waited 0.001s from (pid=3402) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:430
2014-12-11 10:57:02.206 INFO cinder.volume.manager [req-053024c7-db31-409b-8b6f-c4ced74e96ef 798fe77714104ec08036432f225dae1a b9d3fed2e495441891e15094be5ba16a] volume a8abbd5d-6944-44f3-9d48-063c73a4bdd9: deleting
2014-12-11 10:57:02.208 DEBUG cinder.volume.manager [req-053024c7-db31-409b-8b6f-c4ced74e96ef 798fe77714104ec08036432f225dae1a b9d3fed2e495441891e15094be5ba16a] volume a8abbd5d-6944-44f3-9d48-063c73a4bdd9: removing export from (pid=3402) delete_volume /opt/stack/cinder/cinder/volume/manager.py:446
2014-12-11 10:57:02.208 DEBUG cinder.volume.manager [req-053024c7-db31-409b-8b6f-c4ced74e96ef 798fe77714104ec08036432f225dae1a b9d3fed2e495441891e15094be5ba16a] volume a8abbd5d-6944-44f3-9d48-063c73a4bdd9: deleting from (pid=3402) delete_volume /opt/stack/cinder/cinder/volume/manager.py:448
2014-12-11 10:57:02.237 ERROR oslo.messaging.rpc.dispatcher [req-053024c7-db31-409b-8b6f-c4ced74e96ef 798fe77714104ec08036432f225dae1a b9d3fed2e495441891e15094be5ba16a] Exception during message handling: The device in the path /dev/disk/by-path/ip-172.20.124.19:3260-iscsi-iqn.1992-08.com.netapp:sn.ef091da3e69811e3a5a1123478563412:vs.8-lun-0 is unavailable: Unable to access the backend storage via the path /dev/disk/by-path/ip-172.20.124.19:3260-iscsi-iqn.1992-08.com.netapp:sn.ef091da3e69811e3a5a1123478563412:vs.8-lun-0.
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 137, in _dispatch_and_reply
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher incoming.message))
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 180, in _dispatch
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 126, in _do_dispatch
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args)
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher return f(*args, **kwargs)
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher return f(*args, **kwargs)
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/cinder/cinder/volume/manager.py", line 1212, in migrate_volume
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher self.db.volume_update(ctxt, volume_ref['id'], updates)
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in __exit__
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/cinder/cinder/volume/manager.py", line 1203, in migrate_volume
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher new_type_id)
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/cinder/cinder/volume/manager.py", line 1068, in _migrate_volume_generic
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher new_volume['migration_status'] = None
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in __exit__
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/cinder/cinder/volume/manager.py", line 1047, in _migrate_volume_generic
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher remote='dest')
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/cinder/cinder/volume/driver.py", line 386, in copy_volume_data
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher {'status': dest_orig_status})
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/utils/excutils.py", line 82, in __exit__
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/cinder/cinder/volume/driver.py", line 380, in copy_volume_data
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher remote=dest_remote)
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/cinder/cinder/volume/driver.py", line 521, in _attach_volume
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher {'path': host_device}))
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher DeviceUnavailable: The device in the path /dev/disk/by-path/ip-172.20.124.19:3260-iscsi-iqn.1992-08.com.netapp:sn.ef091da3e69811e3a5a1123478563412:vs.8-lun-0 is unavailable: Unable to access the backend storage via the path /dev/disk/by-path/ip-172.20.124.19:3260-iscsi-iqn.1992-08.com.netapp:sn.ef091da3e69811e3a5a1123478563412:vs.8-lun-0.
2014-12-11 10:57:02.237 TRACE oslo.messaging.rpc.dispatcher
2014-12-11 10:57:02.259 DEBUG cinder.volume.drivers.netapp.dataontap.client.client_base [req-053024c7-db31-409b-8b6f-c4ced74e96ef 798fe77714104ec08036432f225dae1a b9d3fed2e495441891e15094be5ba16a] Destroyed LUN volume-a8abbd5d-6944-44f3-9d48-063c73a4bdd9 from (pid=3402) destroy_lun /opt/stack/cinder/cinder/volume/drivers/netapp/dataontap/client/client_base.py:97
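
For reference, the failure is triggered when the source NFS backend has the secure NAS option enabled. A minimal, illustrative cinder.conf fragment is shown below; the backend section name is a placeholder, other required NetApp/NFS options are omitted, and the relevant line is nas_secure_file_operations (the option referred to above as 'nas_secure_operations'):

[cmode-nfs]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
nfs_shares_config = /etc/cinder/nfs_shares
nas_secure_file_operations = true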

Tags: drivers nfs
affects: testrepository → cinder
Revision history for this message
Eric Harney (eharney) wrote:

The nas_secure_file_operations option determines whether some operations are run as root or as the Cinder user. It looks like the default of "auto" may be problematic due to issues like the one reported here -- some operations aren't going to work if you don't run commands as root.

Should we really be defaulting this option to auto for all drivers?
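
For illustration only, the decision Eric describes can be sketched roughly as follows (simplified, hypothetical Python; not the actual Cinder remotefs/NFS driver code, and the helper names are made up for this sketch):

# Hypothetical sketch of the behaviour described above -- not Cinder code.
# When the secure NAS option is enabled, file operations run as the
# unprivileged cinder service user instead of through rootwrap, so opening
# a root-owned block device (such as the destination iSCSI by-path device
# during migration) fails with "Permission denied".

def _execute_as_root(nas_secure_file_operations):
    # 'true'  -> run commands as the unprivileged cinder user (no rootwrap)
    # 'false' -> run commands as root via rootwrap
    # The real option also accepts 'auto'; its resolution logic is omitted here.
    return str(nas_secure_file_operations).lower() == 'false'

def copy_volume_data(execute, src_path, dest_path, nas_secure_file_operations):
    # With nas_secure_file_operations=true this dd is unprivileged and
    # cannot open the destination device, reproducing the error above.
    execute('dd', 'if=%s' % src_path, 'of=%s' % dest_path, 'bs=1M',
            run_as_root=_execute_as_root(nas_secure_file_operations))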

Changed in cinder:
importance: Undecided → Low
Gorka Eguileor (gorka)
tags: added: drivers nfs