Serious memory consumption by neutron-server with DVR at scale
Bug #1497219 reported by
Ilya Shakhat
This bug affects 3 people
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Mirantis OpenStack | Fix Released | High | Oleg Bondarev | |
| 7.0.x | Fix Released | Critical | Oleg Bondarev | |
| 8.0.x | Fix Released | High | Oleg Bondarev | |
Bug Description
Upstream bug: https:/
Related upstream bug: https:/
Steps to reproduce:
0. The issue is noticeable at scale (100+ nodes); DVR should be turned on
1. Run rally scenario NeutronNetworks
Initially, neutron-server processes consume 100-150M, but at some point their size rapidly increases several-fold. (At 200 nodes, the rise was from 150M to 2G, and up to 14G in the end.)
The issue may lead to an OOM situation, causing the kernel to kill the process with the highest consumption; the usual candidates are rabbit or mysql.
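To catch the growth described above before the OOM killer fires, it helps to sample the resident set size of the neutron-server workers during the rally run. A minimal sketch (not part of the bug report; assumes a Linux `/proc` filesystem):

```python
# Hypothetical helper to watch a process's memory during the rally run.
# rss_kib() reads the VmRSS field from /proc/<pid>/status (Linux only).
import os


def rss_kib(pid):
    """Return the resident set size of *pid* in KiB, or 0 if not found."""
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # /proc reports the value in kB
    return 0


if __name__ == "__main__":
    # Example: sample the current process; in practice you would loop over
    # the neutron-server worker PIDs and log the values periodically.
    print("RSS of pid %d: %d KiB" % (os.getpid(), rss_kib(os.getpid())))
```

Comparing samples taken every few seconds makes the jump from ~150M to multiple gigabytes easy to correlate with the rally scenario's progress.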
tags: added: neutron scale
Changed in mos:
assignee: nobody → Oleg Bondarev (obondarev)
milestone: none → 7.0-updates
description: updated
description: updated
tags: added: 70mu1-confirmed
tags: removed: 70mu1-confirmed
tags: added: area-neutron removed: neutron
strace analysis shows that the issue is caused by a single SQL query:
SELECT routerports.router_id AS routerports_router_id, routerports.port_id AS routerports_port_id, routerports.port_type AS routerports_port_type, ipallocations_1.port_id AS ipallocations_1_port_id, ipallocations_1.ip_address AS ipallocations_1_ip_address, ipallocations_1.subnet_id AS ipallocations_1_subnet_id, ipallocations_1.network_id AS ipallocations_1_network_id, ports_1.tenant_id AS por....
which has a parameter list of ~1000 items.
The response size is about 2G; after reading the response, the process starts calling mmap() to retrieve more memory.
Overall, the processing takes several minutes.
The Neutron function that matches the query pattern is l3_db.get_sync_interfaces()
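One common way to bound the memory cost of a query with a ~1000-item IN (...) list is to execute it in fixed-size chunks, so each result set stays small. This is only an illustrative sketch of that technique using the Python DB-API, not the actual fix applied in Neutron; the names `fetch_router_ports` and `conn` are hypothetical:

```python
# Illustrative only: chunk a large IN (...) parameter list so that each
# individual query, and its result set, stays bounded in size.


def chunked(seq, size):
    """Yield successive slices of *seq* of at most *size* items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]


def fetch_router_ports(conn, router_ids, chunk_size=100):
    """Fetch routerports rows for *router_ids*, chunk_size ids per query.

    *conn* is any DB-API connection using '?' as the parameter marker
    (e.g. sqlite3); the table/column names mirror the query in this bug.
    """
    rows = []
    for chunk in chunked(list(router_ids), chunk_size):
        placeholders = ", ".join("?" for _ in chunk)
        sql = ("SELECT router_id, port_id, port_type FROM routerports "
               "WHERE router_id IN (%s)" % placeholders)
        rows.extend(conn.execute(sql, chunk).fetchall())
    return rows
```

Chunking trades a few extra round trips for a hard cap on per-query result size, which avoids the single multi-gigabyte response seen in the strace output above.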