2015-08-12 07:19:20 |
wangxiyuan |
description |
Currently, when volumes are listed without a limit, Cinder first fetches all volume information from the DB, then reads the max limit from CONF and filters the result. This wastes a great deal of time.
For example, if Cinder has more than ten thousand volumes and the max limit is one thousand, Cinder fetches all ten thousand and then filters them. The remaining nine thousand volumes are useless, yet they have already consumed memory and time.
So I think a better way is to fetch just the one thousand volumes directly from the DB.
The offset could be passed to the db layer as well. Doing the indexing only once in the DB is enough and efficient. |
Currently, when volumes are listed without a limit, Cinder first fetches all volume information from the DB, then reads the max limit from CONF and filters the result. This wastes a great deal of time.
For example, if Cinder has more than ten thousand volumes and the max limit is one thousand, Cinder fetches all ten thousand and then filters them. The remaining nine thousand volumes are useless, yet they have already consumed memory and time.
So I think a better way is to fetch just the one thousand volumes directly from the DB.
The offset could be passed to the db layer as well. Doing the indexing only once in the DB is enough and efficient. |
|
2015-08-12 07:23:16 |
wangxiyuan |
description |
Currently, when volumes are listed without a limit, Cinder first fetches all volume information from the DB, then reads the max limit from CONF and filters the result. This wastes a great deal of time.
For example, if Cinder has more than ten thousand volumes and the max limit is one thousand, Cinder fetches all ten thousand and then filters them. The remaining nine thousand volumes are useless, yet they have already consumed memory and time.
So I think a better way is to fetch just the one thousand volumes directly from the DB.
The offset could be passed to the db layer as well. Doing the indexing only once in the DB is enough and efficient. |
Currently, when volumes are listed without a limit, Cinder first fetches all volume information from the DB, then reads the max limit from CONF and filters the result. This wastes a great deal of time.
For example, if Cinder has more than ten thousand volumes and the max limit is one thousand, Cinder fetches all ten thousand and then filters them. The remaining nine thousand volumes are useless, yet they have already consumed memory and time.
So I think a better way is to fetch just the one thousand volumes directly from the DB.
The offset could be passed to the db layer as well. Doing the indexing only once in the DB is enough and efficient.
Here are some test data:
Env: there are 60,000 volume items, 370,000 volume_metadata items and
240,000 volume_glance_metadata items in the cinder db.
The old volume index uses nearly 10 GB of memory.
The new one uses only about 500 MB. |
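
The change described above can be sketched in miniature. This is a hypothetical illustration (not Cinder's actual DB API code) using an in-memory sqlite3 table to stand in for the volumes table and a `MAX_LIMIT` constant to stand in for the CONF limit: the old path fetches every row and slices in Python, while the new path pushes `LIMIT` and `OFFSET` into the SQL query so the DB does the indexing once.

```python
import sqlite3

# Hypothetical stand-in for Cinder's volumes table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO volumes (name) VALUES (?)",
                 [("vol-%d" % i,) for i in range(10000)])

MAX_LIMIT = 1000  # stand-in for the max limit read from CONF

# Old approach: fetch all 10,000 rows, then filter to the limit in Python.
all_rows = conn.execute("SELECT id, name FROM volumes ORDER BY id").fetchall()
page_old = all_rows[:MAX_LIMIT]

# New approach: push limit (and offset) into the query itself,
# so only 1,000 rows ever leave the DB.
page_new = conn.execute(
    "SELECT id, name FROM volumes ORDER BY id LIMIT ? OFFSET ?",
    (MAX_LIMIT, 0)).fetchall()

assert page_old == page_new
print(len(page_new))  # 1000
```

Both paths return the same page, but the old one materializes all 10,000 rows in memory first, which is the overhead the reported 10 GB vs 500 MB numbers reflect.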
|