Neutron Metadata Agents backlog value too low and differs from Neutron default
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack-Ansible | Fix Released | Undecided | Ian Cordasco |
Kilo | Fix Committed | Undecided | Ian Cordasco |
Liberty | Fix Committed | Undecided | Ian Cordasco |
Mitaka | Fix Committed | Undecided | Ian Cordasco |
Trunk | Fix Released | Undecided | Ian Cordasco |
Bug Description
The Neutron Metadata Agent works roughly like this: the service listens on a Unix domain socket and runs as many worker threads as cpu_count/2. Each thread has a maximum number of connections (the backlog) that it can accept. The service talks to the Neutron API via RPC, but (on Kilo and Liberty only) it will fall back to HTTP if necessary.
By default (since Kilo), Neutron sets that number to 4096. Since Kilo, OpenStack-Ansible has set it to 128.
At high throughput on a large OpenStack-Ansible cloud, keeping this value arbitrarily low (1/32nd of its default) causes RPC connections from the Neutron Metadata Agent to fail and fall back to HTTP. Setting this value to even half of the upstream default (on only 2 of the 5 metadata agent containers) stops this fallback from happening.
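For illustration, here is a minimal sketch in plain Python (not the actual Neutron implementation) of how a service on a Unix domain socket applies a listen backlog; the socket path, handler, and worker loop are hypothetical. The backlog passed to listen() caps how many not-yet-accepted connections the kernel will queue, which is why a value as small as 128 can overflow under a burst of metadata requests:

```python
# Minimal sketch, assuming a hypothetical socket path and handler; it is
# not Neutron's code, only an illustration of where the backlog applies.
import multiprocessing
import os
import socket
import threading

SOCKET_PATH = "/tmp/metadata-proxy-example.sock"  # hypothetical path
BACKLOG = 4096  # the Neutron default since Kilo, per the description above
WORKERS = max(multiprocessing.cpu_count() // 2, 1)  # "cpu_count/2" threads


def handle(conn: socket.socket) -> None:
    """Hypothetical per-connection handler: echo the request and close."""
    with conn:
        data = conn.recv(4096)
        conn.sendall(data)


def worker(listener: socket.socket) -> None:
    """Each worker thread accepts connections from the shared listener."""
    while True:
        conn, _addr = listener.accept()
        handle(conn)


def main() -> None:
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    listener.bind(SOCKET_PATH)
    # This is the value the bug is about: the kernel will queue at most
    # BACKLOG pending connections; a small backlog overflows under load.
    listener.listen(BACKLOG)
    threads = [
        threading.Thread(target=worker, args=(listener,), daemon=True)
        for _ in range(WORKERS)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()


if __name__ == "__main__":
    main()
```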
OpenStack Ansible Master: https:/
Neutron Master: https:/
OpenStack Ansible Mitaka: https:/
Neutron Mitaka: https:/
OpenStack Ansible Liberty: https:/
Neutron Liberty: https:/
OpenStack Ansible Kilo: https:/
Neutron Kilo: https:/
Fix proposed to branch: master
Review: https://review.openstack.org/328441