[cinder][netapp] incorrect snapshot and volume files owner

Bug #1610991 reported by Ivan Kolodyazhny
This bug affects 2 people
Affects: Mirantis OpenStack
Status tracked in: 10.0.x
Assigned to: Ivan Kolodyazhny

Bug Description

Steps to reproduce:

1. Deploy an environment with the NetApp Data ONTAP NFS driver
2. Boot an instance from an image (with the 'create new volume' option)
3. Create a snapshot of the volume created in step #2
4. Create a volume from the snapshot created in step #3
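For reference, the steps above can be sketched with the OpenStack CLI. The image, flavor, names, and sizes below are placeholders, not values from the report, and the commands assume the Cinder backend is the NetApp NFS driver:

```shell
# Steps 1-2: boot an instance from an image, creating a new bootable volume.
# 'cirros', 'm1.tiny', and the resource names are placeholder values.
openstack volume create --image cirros --size 1 --bootable boot-vol
openstack server create --volume boot-vol --flavor m1.tiny test-vm

# Step 3: snapshot the volume created above.
openstack volume snapshot create --volume boot-vol snap-1

# Step 4: create a new volume from that snapshot; this is the step
# that ended with the volume in the 'error' state in this bug.
openstack volume create --snapshot snap-1 --size 1 vol-from-snap
```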

Expected result:
The volume is created successfully.

Actual result:
Volume is in error state:

Ivan Kolodyazhny (e0ne)
Changed in mos:
milestone: none → 8.0-updates
no longer affects: mos/9.x
tags: added: customer-found
tags: added: area-cinder
Revision history for this message
Ivan Kolodyazhny (e0ne) wrote :

We decided that it's a configuration-related issue.

Revision history for this message
Roman Sokolkov (rsokolkov) wrote :

Is this still an issue?

What do you mean by configuration-related issue? NetApp side? or OpenStack side?

Revision history for this message
Ivan Kolodyazhny (e0ne) wrote :

Roman, the following parameters should be added to cinder.conf for the NetApp driver:
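The specific parameters were not preserved in this report. Based on the later comment about setting up a nas_secure_* environment, a plausible cinder.conf sketch is shown below; the option names are real Cinder NFS-driver options, but the section name and values are deployment-dependent assumptions, not taken from the report:

```ini
[netapp-nfs]
# Assumed backend section name. The nas_secure_* options control whether
# the NFS driver performs file operations as root and how it sets file
# permissions; 'auto' (the default) detects the mode on startup, while
# 'false' forces root operations. Values here are assumptions -- consult
# the NetApp deployment guide for your environment.
nas_secure_file_operations = false
nas_secure_file_permissions = false
```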

Revision history for this message
Denis Meltsaykin (dmeltsaykin) wrote :

Moving to Invalid, as this is a configuration-related issue and it has been Incomplete for over a month. Please feel free to re-open if needed.

Revision history for this message
Goutham Pacha Ravi (gouthamr) wrote :

Jumping in after nearly a year, this is a configuration issue, but the solution mentioned on this bug isn't really complete.

For setting up a nas_secure_* environment, please refer to the NetApp deployment guide:

