/v3/users is disproportionately slow

Bug #1689888 reported by Sam Morrison
This bug affects 2 people
Affects: OpenStack Identity (keystone)
Status: Confirmed
Importance: Medium
Assigned to: Kristi Nikolla
Milestone: victoria-2

Bug Description

We have 11,000 users, and doing a `client.users.list()` takes around 14-20 seconds.

We have 14,000 projects, and doing a `client.projects.list()` takes around 7-10 seconds.

So you can see we have more projects, yet it takes about double the time to list users.

I should mention we are running Mitaka and our keystone is deployed under Apache.
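
For reference, the numbers above come from timing calls roughly like the following (a minimal sketch, assuming a keystoneauth1 session; the auth URL and credentials are placeholders, not our real values):

import time

from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

# Placeholder auth details -- substitute real ones.
auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                   username='admin', password='secret',
                   project_name='admin',
                   user_domain_id='default',
                   project_domain_id='default')
ks = client.Client(session=session.Session(auth=auth))

for name, call in (('projects', ks.projects.list), ('users', ks.users.list)):
    start = time.time()
    entries = call()
    print('%s: %d entries in %.2f seconds' % (name, len(entries), time.time() - start))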

Tags: performance
Revision history for this message
Lance Bragstad (lbragstad) wrote :

Hi Sam,

Would you be able to share your caching configuration? There were several things we cleaned up in the caching implementation around the Mitaka timeframe.

tags: added: performance
Revision history for this message
Lance Bragstad (lbragstad) wrote :

I was able to recreate this locally, but I'm still tinkering with some caching settings so I can provide benchmarks with and without caching.

Revision history for this message
Lance Bragstad (lbragstad) wrote :

Ok - after doing some additional research I've confirmed the issue. I set up keystone locally and installed master (239bc3627cfb0546148e9d496f9e1536057052a7). I used a script to generate a bunch of data [0]. I ended up putting 10,000 projects and 15,000 users in the system. Granted, I'm doing everything on my laptop (keystone running with uwsgi, mysql, the client, etc.), so my numbers won't be good representations of a real deployment, but I was able to notice a delta in response time between the two APIs.

I created a script to time GET /v3/users and GET /v3/projects [1]. An additional script timed GET /v3/users/{user_id} for an arbitrary user in the system [2].
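
A minimal version of that timing approach looks something like this (not the linked gists; the endpoint and token below are placeholders):

import time

import requests

KEYSTONE = 'http://localhost:5000'          # placeholder endpoint
HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}   # placeholder: a valid admin token

for path, key in (('/v3/projects', 'projects'), ('/v3/users', 'users')):
    start = time.time()
    resp = requests.get(KEYSTONE + path, headers=HEADERS)
    resp.raise_for_status()
    elapsed = time.time() - start
    print('GET %s: %d entries in %.2f seconds' % (path, len(resp.json()[key]), elapsed))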

I published the results without caching configured [3] and with caching configured [4].

Listing projects typically took 1 second, while listing all users took between 3 and 4 seconds on average. While there are slightly more users in the system, the difference still seems disproportionate. I think additional investigation can be done to see where that time is being spent. If it's somewhere in the identity controller or manager, we might be able to optimize the Python in that area (maybe we're doing unnecessary looping).

As far as the caching goes, you'll notice the timings for listing projects and users didn't differ when caching was enabled. It turns out we don't actually cache these result sets. Keystone caches based on arguments, which is why you see the results of GET /v3/users/{user_id} improve after the first few calls (it goes from ~0.42 seconds to ~0.27 seconds).
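
Roughly speaking, the per-argument caching behaves like this toy example (this uses dogpile.cache directly and is not keystone's actual code, but keystone's oslo.cache layer works on the same principle):

import time

from dogpile.cache import make_region

region = make_region().configure('dogpile.cache.memory')

@region.cache_on_arguments()
def get_user(user_id):
    time.sleep(0.4)  # stand-in for the real backend lookup
    return {'id': user_id}

get_user('abc123')  # slow: computes and stores the value keyed on 'abc123'
get_user('abc123')  # fast: served from the cache

A list call with no distinguishing arguments doesn't get memoized this way, which is why GET /v3/users and GET /v3/projects look the same with and without caching.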

While we might not be able to get caching implemented on those list calls, I think we can certainly look at how to optimize GET /v3/users and see if we can improve its timing on large sets of data.

[0] https://gist.github.com/lbragstad/a4592f5fd52af1b0ad2b5f1fa57fb9ca#file-populate-py
[1] https://gist.github.com/lbragstad/a4592f5fd52af1b0ad2b5f1fa57fb9ca#file-time_list_projects_and_users-py
[2] https://gist.github.com/lbragstad/a4592f5fd52af1b0ad2b5f1fa57fb9ca#file-time_get_user-py
[3] https://gist.github.com/lbragstad/a4592f5fd52af1b0ad2b5f1fa57fb9ca#file-results-without-caching
[4] https://gist.github.com/lbragstad/a4592f5fd52af1b0ad2b5f1fa57fb9ca#file-results-with-caching

Changed in keystone:
status: New → Confirmed
importance: Undecided → Medium
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to keystone (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/553880

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on keystone (master)

Change abandoned by Gage Hugo (<email address hidden>) on branch: master
Review: https://review.openstack.org/553880
Reason: Gonna split this up

Changed in keystone:
assignee: nobody → Kristi Nikolla (knikolla)
milestone: none → victoria-2