manila_tempest_tests.tests.scenario.test_share_extend fails with CephFS
Affects: OpenStack Shared File Systems Service (Manila)
Status: In Progress
Importance: Low
Assigned to: Goutham Pacha Ravi
Bug Description
Description
===========
The scenario test manila_tempest_tests.tests.scenario.test_share_extend fails with CephFS backends. The test fails at a stage after creating a share and attempting to write past its size.
The size in Ceph is a quota, and it is enforced by the Ceph client rather than the server. The Ceph documentation states:
"""
Once processes that write data to the file system reach the configured limit, a short period of time elapses between when the amount of data reaches the quota limit, and when the processes stop writing data. The time period generally measures in the tenths of seconds. However, processes continue to write data during that time. The amount of additional data that the processes write depends on the amount of time elapsed before they stop.
"""
So there is likely a sync delay during which the client can write more data than the quota permits.
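To illustrate the race, here is a minimal, self-contained Python simulation (not Ceph code; the LazyQuotaFile class, the chunk sizes, and the deferred check interval are all invented for illustration): when enforcement lags behind writes, a client can overshoot the quota before it sees an error.

```python
import io


class LazyQuotaFile:
    """Simulates a file whose quota is enforced lazily by the client.

    The size check only runs every `check_interval` writes, mimicking the
    short window in which a CephFS client keeps writing past the limit.
    """

    def __init__(self, quota_bytes, check_interval=4):
        self.quota_bytes = quota_bytes
        self.check_interval = check_interval
        self.buffer = io.BytesIO()
        self.writes = 0

    def write(self, chunk):
        self.writes += 1
        # Enforcement is deferred: the quota check happens only periodically,
        # so several over-quota writes can succeed before the error surfaces.
        if (self.writes % self.check_interval == 0
                and self.buffer.tell() >= self.quota_bytes):
            raise OSError("EDQUOT: quota exceeded")
        self.buffer.write(chunk)


def fill(f, chunk=b"x" * 1024):
    """Write chunks until the quota error fires; return bytes written."""
    written = 0
    try:
        while True:
            f.write(chunk)
            written += len(chunk)
    except OSError:
        return written


f = LazyQuotaFile(quota_bytes=8 * 1024, check_interval=4)
overshoot = fill(f) - f.quota_bytes
print(overshoot)  # positive: data was written past the quota
```

The overshoot depends on how long enforcement lags, which matches the Ceph documentation's point that the extra data written depends on the elapsed time before the client stops.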
The failure isn't occurring on the cephadm-based NFS job (yet).
Ceph Jobs:
"Standalone" NFS: https:/
Cephadm based CephFS/Native: https:/
Cephadm based CephFS/NFS: https:/
Steps to reproduce
==================
A chronological list of steps which will help reproduce the issue you hit:
* Create a CephFS share of size 1 GiB (Native or with "Standalone" NFS-Ganesha)
* Fill up the share, write past 1 GiB
Expected result
===============
You're prevented from writing past 1 GiB
Actual result
=============
No failure occurs; writes past 1 GiB succeed
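One way a scenario test could tolerate the client-side lag (a sketch only; wait_for_quota_error, the stand-in writer, and the timings below are made up, not the actual tempest fix) is to keep issuing writes and poll for the quota error within a deadline, instead of expecting the very first over-quota write to fail:

```python
import time


def wait_for_quota_error(write_chunk, timeout=2.0, interval=0.05):
    """Keep issuing writes until one raises OSError (e.g. EDQUOT),
    tolerating the short window in which the client still accepts data.

    Returns True if enforcement kicked in before the deadline.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            write_chunk()
        except OSError:
            return True
        time.sleep(interval)
    return False


# Usage with a stand-in writer that only starts failing after a few
# calls, mimicking delayed quota enforcement:
calls = {"n": 0}


def flaky_write():
    calls["n"] += 1
    if calls["n"] > 3:  # enforcement kicks in after a lag
        raise OSError("EDQUOT: quota exceeded")


print(wait_for_quota_error(flaky_write))  # True
```

The same polling idea applies to a real mount: the assertion becomes "the write eventually fails within N seconds" rather than "the write fails immediately".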
Environment
===========
1. Exact version of OpenStack Manila you are running: master
2. Which storage backend did you use? Ceph, Reef (CephFS-Native)
tags: added: temp
tags: added: cephfs tempest; removed: temp
Changed in manila:
assignee: nobody → Goutham Pacha Ravi (gouthamr)
milestone: none → dalmatian-rc1
importance: Undecided → Low
Sample failure log: https://paste.opendev.org/show/bmYaWampdRFUwGgGkbd9/