Unable to create volumes

Bug #1922138 reported by Mario Chirinos
This bug affects 2 people
Affects: OpenStack Bundles
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

I recently got the following message while trying to create volumes in
openstack base #73

"schedule allocate volume:Could not find any available weighted backend. "

But all Juju services are up, and I was able to create volumes before.

2021-03-31 23:36:34.933 56108 ERROR cinder.scheduler.flows.create_volume [req-915b912c-7749-4f81-a5c2-f68eba8c10bd 0b03f22042a54f8488ef2a4b785c24d5 c37066870a494056bbacc67d519d7558 - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. No weighed backends available: cinder.exception.NoValidBackend: No valid backend was found. No weighed backends available
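
(For reference, one way to see what the scheduler thinks of its backends is to ask cinder directly as an admin; this is only a sketch and assumes the standard clients are available:)

openstack volume service list    # are cinder-scheduler and cinder-volume up?
cinder get-pools --detail        # per-pool capacity as seen by the scheduler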

summary: - Unable to vreate volumes
+ Unable to create volumes
description: updated
description: updated
description: updated
Revision history for this message
Alex Kavanagh (ajkavanagh) wrote :

Thank you for your bug report. Unfortunately, there's not enough information to go on. OpenStack systems are complex, so to even begin to understand what the issue is, we need to know:

1. Version of Ubuntu
2. Version of OpenStack
3. openstack-bundle / configuration used to deploy the system
4. logs from the affected services (nova, cinder)
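
For example, something along these lines should collect most of it (only a sketch; adjust unit names to your deployment):

juju status                                                           # deployed charms and versions
juju ssh cinder/0 sudo tail -n 200 /var/log/cinder/cinder-scheduler.log
juju ssh cinder/0 sudo tail -n 200 /var/log/cinder/cinder-volume.log
juju ssh nova-cloud-controller/0 sudo tail -n 200 /var/log/nova/nova-scheduler.log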

Changed in openstack-bundles:
status: New → Incomplete
Revision history for this message
Mario Chirinos (mario-chirinos) wrote :

I am attaching the information. The problem seems to be the space ("Insufficient free virtual space (0.0GB) to accommodate thin provisioned 1GB volume on host cinder@cinder-ceph#cinder-ceph."), but ceph status shows 123 TiB available.
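
(As far as I understand, raw cluster capacity and the space cinder can actually use are different things: the cinder-ceph backend reports the pool's MAX AVAIL from ceph df, and that drops towards zero once any OSD backing the pool hits its full ratio.) A sketch of the per-pool check:

juju ssh ceph-mon/0 sudo ceph df    # per-pool USED / MAX AVAIL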
1. Focal
2. openstack 4.0.0
   openstack-dashboard 18.6.1
   keystone 18.0.0
   nova-cloud-controller 22.0.1
   nova-compute 22.0.1
   nova-mysql-router 8.0.23

3. openstack base #73

4. tail /var/log/cinder/cinder-scheduler.log
2021-04-01 07:04:07.506 207586 INFO cinder.message.api [req-3fbd8c81-77cb-4cf5-acb2-5628380965e9 3921d264a0fb4c75a7bd0ebdde1dd5ab a76bc58217ee40349a60e9d658ad80b9 - - -] Creating message record for request_id = req-3fbd8c81-77cb-4cf5-acb2-5628380965e9
2021-04-01 07:04:07.551 207586 ERROR cinder.scheduler.flows.create_volume [req-3fbd8c81-77cb-4cf5-acb2-5628380965e9 3921d264a0fb4c75a7bd0ebdde1dd5ab a76bc58217ee40349a60e9d658ad80b9 - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. No weighed backends available: cinder.exception.NoValidBackend: No valid backend was found. No weighed backends available
2021-04-01 07:14:37.396 207586 WARNING py.warnings [req-992bd199-3347-4693-ac8a-d811ecd7fc1c 3921d264a0fb4c75a7bd0ebdde1dd5ab a76bc58217ee40349a60e9d658ad80b9 - - -] /usr/lib/python3/dist-packages/pymysql/cursors.py:170: Warning: (3719, "'utf8' is currently an alias for the character set UTF8MB3, but will be an alias for UTF8MB4 in a future release. Please consider using UTF8MB4 in order to be unambiguous.")
  result = self._query(query)

2021-04-01 07:14:37.411 207586 WARNING cinder.scheduler.filters.capacity_filter [req-992bd199-3347-4693-ac8a-d811ecd7fc1c 3921d264a0fb4c75a7bd0ebdde1dd5ab a76bc58217ee40349a60e9d658ad80b9 - - -] Insufficient free virtual space (0.0GB) to accommodate thin provisioned 1GB volume on host cinder@cinder-ceph#cinder-ceph.
2021-04-01 07:14:37.413 207586 INFO cinder.scheduler.base_filter [req-992bd199-3347-4693-ac8a-d811ecd7fc1c 3921d264a0fb4c75a7bd0ebdde1dd5ab a76bc58217ee40349a60e9d658ad80b9 - - -] Filtering removed all hosts for the request with volume ID 'e0d0a425-97e7-4fda-b865-d3ae493aa983'. Filter results: AvailabilityZoneFilter: (start: 1, end: 1), CapacityFilter: (start: 1, end: 0), CapabilitiesFilter: (start: 0, end: 0)
2021-04-01 07:14:37.413 207586 WARNING cinder.scheduler.filter_scheduler [req-992bd199-3347-4693-ac8a-d811ecd7fc1c 3921d264a0fb4c75a7bd0ebdde1dd5ab a76bc58217ee40349a60e9d658ad80b9 - - -] No weighed backend found for volume with properties: {'id': 'de03abac-2e8b-4d70-a960-bda031d7b1dc', 'name': '__DEFAULT__', 'description': 'Default Volume Type', 'is_public': True, 'projects': [], 'extra_specs': {}, 'qos_specs_id': None, 'created_at': '2021-02-21T07:11:47.000000', 'updated_at': '2021-02-21T07:11:47.000000', 'deleted_at': None, 'deleted': False}
2021-04-01 07:14:37.413 207586 INFO cinder.message.api [req-992bd199-3347-4693-ac8a-d811ecd7fc1c 3921d264a0fb4c75a7bd0ebdde1dd5ab a76bc58217ee40349a60e9d658ad80b9 - - -] Creating message record for request_id = req-992bd199-3347-4693-ac8a-d811ecd7fc1c
2021-04-01 07:14:37.463 207586 ERROR cinder.scheduler.flows.create_vol...


Revision history for this message
Mario Chirinos (mario-chirinos) wrote :

ubuntu@juju-191a8d-1-lxd-1:~$ tail /var/log/cinder/cinder-scheduler.log
2021-04-02 02:49:10.717 207586 INFO cinder.message.api [req-cc778ea8-64ce-4870-abd9-4c7d8ef2e18a 0b03f22042a54f8488ef2a4b785c24d5 c37066870a494056bbacc67d519d7558 - - -] Creating message record for request_id = req-cc778ea8-64ce-4870-abd9-4c7d8ef2e18a
2021-04-02 02:49:10.755 207586 ERROR cinder.scheduler.flows.create_volume [req-cc778ea8-64ce-4870-abd9-4c7d8ef2e18a 0b03f22042a54f8488ef2a4b785c24d5 c37066870a494056bbacc67d519d7558 - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. No weighed backends available: cinder.exception.NoValidBackend: No valid backend was found. No weighed backends available
2021-04-02 02:51:57.704 207586 WARNING py.warnings [req-783aafef-c024-4da0-b5ee-b2628b04a01f 0b03f22042a54f8488ef2a4b785c24d5 c37066870a494056bbacc67d519d7558 - - -] /usr/lib/python3/dist-packages/pymysql/cursors.py:170: Warning: (3719, "'utf8' is currently an alias for the character set UTF8MB3, but will be an alias for UTF8MB4 in a future release. Please consider using UTF8MB4 in order to be unambiguous.")
  result = self._query(query)

2021-04-02 02:51:57.721 207586 WARNING cinder.scheduler.filters.capacity_filter [req-783aafef-c024-4da0-b5ee-b2628b04a01f 0b03f22042a54f8488ef2a4b785c24d5 c37066870a494056bbacc67d519d7558 - - -] Insufficient free virtual space (0.0GB) to accommodate thin provisioned 1024GB volume on host cinder@cinder-ceph#cinder-ceph.
2021-04-02 02:51:57.721 207586 INFO cinder.scheduler.base_filter [req-783aafef-c024-4da0-b5ee-b2628b04a01f 0b03f22042a54f8488ef2a4b785c24d5 c37066870a494056bbacc67d519d7558 - - -] Filtering removed all hosts for the request with volume ID '17e13104-29ae-4564-8b3d-167d3fe24236'. Filter results: AvailabilityZoneFilter: (start: 1, end: 1), CapacityFilter: (start: 1, end: 0), CapabilitiesFilter: (start: 0, end: 0)
2021-04-02 02:51:57.721 207586 WARNING cinder.scheduler.filter_scheduler [req-783aafef-c024-4da0-b5ee-b2628b04a01f 0b03f22042a54f8488ef2a4b785c24d5 c37066870a494056bbacc67d519d7558 - - -] No weighed backend found for volume with properties: {'id': 'de03abac-2e8b-4d70-a960-bda031d7b1dc', 'name': '__DEFAULT__', 'description': 'Default Volume Type', 'is_public': True, 'projects': [], 'extra_specs': {}, 'qos_specs_id': None, 'created_at': '2021-02-21T07:11:47.000000', 'updated_at': '2021-02-21T07:11:47.000000', 'deleted_at': None, 'deleted': False}
2021-04-02 02:51:57.722 207586 INFO cinder.message.api [req-783aafef-c024-4da0-b5ee-b2628b04a01f 0b03f22042a54f8488ef2a4b785c24d5 c37066870a494056bbacc67d519d7558 - - -] Creating message record for request_id = req-783aafef-c024-4da0-b5ee-b2628b04a01f
2021-04-02 02:51:57.759 207586 ERROR cinder.scheduler.flows.create_volume [req-783aafef-c024-4da0-b5ee-b2628b04a01f 0b03f22042a54f8488ef2a4b785c24d5 c37066870a494056bbacc67d519d7558 - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. No weighed backends available: cinder.exception.NoValidBackend: No valid backend was found. No weighed backends available

Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :

@Mario, thanks for reporting! I'm looking into it now. It's not intuitive, I know, but next time please set the bug back to New when you reply (after we have set it to Incomplete while waiting for more info); otherwise we may miss your reply. Thanks!

Changed in openstack-bundles:
status: Incomplete → New
assignee: nobody → Aurelien Lourot (aurelien-lourot)
Revision history for this message
Aurelien Lourot (aurelien-lourot) wrote :

@Mario, did you adapt the osd-devices value [0] in the bundle to the machines you're deploying on?

Could it be that on one of your 3 machines where ceph-osd is running, that device or partition is full? Thanks! (As explained, I'm setting this to Incomplete; please set it back to New when answering.)

[0] https://jaas.ai/openstack-base
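
A quick way to check both, as a sketch (adjust unit numbers to your deployment):

juju config ceph-osd osd-devices    # devices the charm was told to use
juju ssh ceph-osd/0 lsblk           # do those devices exist, and how big are they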

Changed in openstack-bundles:
status: New → Incomplete
assignee: Aurelien Lourot (aurelien-lourot) → nobody
Revision history for this message
Mario Chirinos (mario-chirinos) wrote :

@Aurelien thanks, I didn't know I had to set the bug back to New; I will do it from now on.

I didn't change the osd-devices configuration; I used the default configuration and added another ceph-osd unit (130 TB) after deployment with:

juju add-unit ceph-osd
juju run-action --wait ceph-osd/3 add-disk osd-devices=/dev/sdc
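
To confirm that the extra unit and disk actually joined the cluster, something like this should list them (only a sketch):

juju ssh ceph-mon/0 sudo ceph osd tree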

The space available on each machine is:
geoint@maas:~$ juju ssh 0
ubuntu@PowerEdge-9R0DH13:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 63G 0 63G 0% /dev
tmpfs 13G 3.0M 13G 1% /run
/dev/sda2 274G 49G 212G 19% /
tmpfs 63G 0 63G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/sda1 511M 7.9M 504M 2% /boot/efi
/dev/loop2 56M 56M 0 100% /snap/core18/1988
tmpfs 1.0M 0 1.0M 0% /var/snap/lxd/common/ns
/dev/loop7 73M 73M 0 100% /snap/lxd/19766
/dev/loop4 69M 69M 0 100% /snap/lxd/19823
/dev/loop6 33M 33M 0 100% /snap/snapd/11402
/dev/loop1 56M 56M 0 100% /snap/core18/1997
/dev/loop8 33M 33M 0 100% /snap/snapd/11588
tmpfs 13G 0 13G 0% /run/user/1000

geoint@maas:~$ juju ssh 1
ubuntu@PowerEdge-9R0FH13:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 63G 0 63G 0% /dev
tmpfs 13G 3.1M 13G 1% /run
/dev/sda2 274G 45G 215G 18% /
tmpfs 63G 0 63G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/sda1 511M 7.9M 504M 2% /boot/efi
/dev/loop0 56M 56M 0 100% /snap/core18/1988
tmpfs 1.0M 0 1.0M 0% /var/snap/lxd/common/ns
/dev/loop7 73M 73M 0 100% /snap/lxd/19766
/dev/loop8 69M 69M 0 100% /snap/lxd/19823
/dev/loop6 33M 33M 0 100% /snap/snapd/11402
/dev/loop1 56M 56M 0 100% /snap/core18/1997
/dev/loop9 33M 33M 0 100% /snap/snapd/11588
tmpfs 13G 0 13G 0% /run/user/1000

geoint@maas:~$ juju ssh 2
ubuntu@PowerEdge-9R0CH13:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 63G 0 63G 0% /dev
tmpfs 13G 3.1M 13G 1% /run
/dev/sda2 274G 52G 208G 20% /
tmpfs 63G 0 63G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/sda1 511M 7.9M 504M 2% /boot/efi
/dev/loop0 56M 56M 0 100% /snap/core18/1988
tmpfs 1.0M 0 1.0M 0% /var/snap/lxd/common/ns
/dev/loop7 73M 73M 0 100% /snap/lxd/19766
/dev/loop4 69M 69M 0 100% /snap/lxd/19823
/dev/loop6 33M 33M 0 100% /snap/snapd/11402
/dev/loop1 56M 56M 0 100% /snap/core18/1997
/dev/loop8 33M 33M 0 100% /snap/snapd/11588
tmpfs 13G 0 13G 0% /run/user/1000

geoint@maas:~$ juju ssh 3
ubuntu@NX3240-4XF2613:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.6G 0 7.6G 0% /dev
tmpfs 1.6G 1.6M 1.6G 1% /run
/dev/sda2 1...


Changed in openstack-bundles:
status: Incomplete → New
Revision history for this message
Mario Chirinos (mario-chirinos) wrote :

geoint@maas:~$ juju ssh ceph-mon/2 sudo ceph health detail
HEALTH_ERR 1 full osd(s); 18 pool(s) full
[ERR] OSD_FULL: 1 full osd(s)
    osd.0 is full
[WRN] POOL_FULL: 18 pool(s) full
    pool 'device_health_metrics' is full (no space)
    pool 'default.rgw.buckets.data' is full (no space)
    pool 'default.rgw.control' is full (no space)
    pool 'default.rgw.data.root' is full (no space)
    pool 'default.rgw.gc' is full (no space)
    pool 'default.rgw.log' is full (no space)
    pool 'default.rgw.intent-log' is full (no space)
    pool 'default.rgw.meta' is full (no space)
    pool 'default.rgw.usage' is full (no space)
    pool 'default.rgw.users.keys' is full (no space)
    pool 'default.rgw.users.email' is full (no space)
    pool 'default.rgw.users.swift' is full (no space)
    pool 'default.rgw.users.uid' is full (no space)
    pool 'default.rgw.buckets.extra' is full (no space)
    pool 'default.rgw.buckets.index' is full (no space)
    pool '.rgw.root' is full (no space)
    pool 'cinder-ceph' is full (no space)
    pool 'glance' is full (no space)
Connection to 192.168.221.12 closed.
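
A single full OSD puts the cluster into HEALTH_ERR and blocks writes to every pool mapped onto it, which is why cinder reports 0.0GB free virtual space even though the cluster as a whole still has raw capacity. Per-OSD utilisation can be checked with something like:

juju ssh ceph-mon/2 sudo ceph osd df    # %USE column; osd.0 will be at or above the full ratio (default ~95%)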

Revision history for this message
Zarki Salleh (zarkisalleh1) wrote :

I recently updated to openstack base #75, and this affects me specifically when creating a volume from an image, e.g. creating a volume from the Ubuntu focal cloud image.

The message is the same: "schedule allocate volume:Could not find any available weighted backend."

I can, however, create normal volumes and they work fine.

I can create a volume from image on openstack base #70 and #73.

I have tried fresh & clean deploys with focal multiple times and have adapted osd-devices. I can replicate the issue.
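
For reference, the failing operation is roughly the following (a sketch; image and volume names are placeholders):

openstack volume create --image focal-cloudimg --size 20 test-vol-from-image

while a plain volume, e.g. openstack volume create --size 20 test-vol, still works.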
