Performance issue related to the S3 API and/or Keystone

Bug #1570797 reported by Alexander Petrov
This bug affects 1 person
Affects: Mirantis OpenStack (status tracked in 10.0.x), assigned to Alexander Petrov
Affects: MOS Keystone, assigned to Alexander Petrov

Bug Description

Detailed bug description:

Rally load tests related to the S3 API show low performance, probably closely connected to Keystone. It is necessary to find out the cause of this behavior. A link to a more detailed report is here.

Description of the environment:

 MOS 8.0 with 6 nodes (CPU: 1 (1), HDD: 150.0 GB, RAM: 3.0 GB)
 3 nodes - Controller
 3 nodes - Compute, Storage - Ceph OSD

tags: added: area-keystone keystone
Changed in mos:
milestone: none → 8.0-updates
Changed in mos:
assignee: nobody → MOS Keystone (mos-keystone)
importance: Undecided → High
description: updated
Revision history for this message
Bug Checker Bot (bug-checker) wrote : Autochecker

(This check performed automatically)
Please make sure that the bug description contains the following sections, filled in with the appropriate data related to the bug you are describing:

actual result

expected result

steps to reproduce

For more detailed information on the contents of each of the listed sections, see

tags: added: need-info
Revision history for this message
Alexander Petrov (apetrov-n) wrote :

I have performed a simple test. The keystoneclient generates concurrent queries from a single user (admin). Each query is only a connection, with no other operations. In Rally terms, this scenario looks like this:
 "KeystoneAPI.test_01": [
      "runner": {
        "type": "constant",
        "concurrency": 600,
        "times": 4000

3 controllers, 3 compute+storage
OpenStack Release: Mitaka on Ubuntu 14.04
Compute: QEMU
Network: Neutron with VLAN segmentation
Storage Backends:
   Ceph RBD for volumes (Cinder)
   Ceph RadosGW for objects (Swift API)
   Ceph RBD for ephemeral volumes (Nova)
   Ceph RBD for images (Glance)
Capacity: CPU (Cores): 3 (3) RAM 8.9 GB HDD 450.0 GB

The results of the test are shown here:

In short, I see that Keystone is not able to service (in that configuration) more than ~500 connections.
I don't know whether that is normal behavior for Keystone under such a workload.
If a bottleneck exists, then we should figure out where the problem is.

tags: added: ct1
Revision history for this message
Leontiy Istomin (listomin) wrote :

It seems to be the same issue as described here.
The root cause is the rsyslog configuration.
Please make sure that you have this fix applied.
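The referenced fix itself is not quoted in this comment. Purely as an illustration of the kind of change involved (these directives and values are assumptions, not the actual MOS patch), an rsyslog action can be decoupled from the logging application with an in-memory queue so that a slow or blocked log sink does not stall Keystone:

```
# Illustrative rsyslog action queue settings (not the MOS fix itself)
$ActionQueueType LinkedList    # buffer messages asynchronously in memory
$ActionQueueSize 10000         # hold up to 10k messages before dropping
$ActionResumeRetryCount -1     # retry a failed action indefinitely
```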

Revision history for this message
Dina Belova (dbelova) wrote :

Alexander, per the comment above, it looks like you need to try applying the fix on 8.0 (the fix was developed for 9.0). Please contact the Maintenance team to find out whether the bug can be fixed in 8.0 as well.

Revision history for this message
Dina Belova (dbelova) wrote :

One more thing: is it possible for you to reproduce the same issue on a 9.0 ISO with the fix included?

Revision history for this message
Dina Belova (dbelova) wrote :

Marking this Invalid for 9.0 (at least for now), as the Keystone team is highly confident that the fix makes this issue go away in MOS 9.0. Alex will recheck it against 9.0 after June 14th. He will also communicate with the maintenance team about whether the fix can be backported to 8.0 as well.

Revision history for this message
Dina Belova (dbelova) wrote :

Per my previous comment, I'm marking this bug as Confirmed for 8.0 (unless we prove otherwise) and Invalid for 10.0, where the same fix applies.

Revision history for this message
Rodion Tikunov (rtikunov) wrote :

The 8.0 code does not contain the lines that would be deleted by patch [0].
As per [1], it is dangerous to backport the other patches.

