After kernel upgrade, nf_conntrack_ipv4 module unloaded, no IP traffic to instances
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Neutron Open vSwitch Charm | Fix Released | Low | Tiago Pasqualini da Silva | |
| neutron | Fix Released | Undecided | Brian Haley | |
| linux (Ubuntu) | Confirmed | Undecided | Unassigned | |
Bug Description
With an environment running Xenial-Queens, and having just upgraded the linux-image-generic kernel for MDS patching, a few of our hypervisor hosts that were rebooted (3 out of 100) ended up dropping IP (tcp/udp) ingress traffic.
It turns out that the nf_conntrack module was loaded, but nf_conntrack_ipv4 was not, and the traffic was being dropped by this rule:
`table=72, n_packets=214989, priority=`
The ct_state "inv" flag means an invalid conntrack state: the connection tracker could not identify the connection the packet belongs to.
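On an affected host, the missing module can be confirmed and reloaded by hand. A minimal diagnostic sketch (the bridge name br-int and the commented-out remediation steps are assumptions, not from the report):

```shell
# Diagnostic sketch for an affected hypervisor. Module names come from the
# report; the remediation steps need root, so they are shown commented out.
if grep -q '^nf_conntrack_ipv4 ' /proc/modules 2>/dev/null; then
    echo "nf_conntrack_ipv4 loaded"
else
    echo "nf_conntrack_ipv4 missing"
    # sudo modprobe nf_conntrack_ipv4            # load it by hand to restore traffic
    # sudo ovs-ofctl dump-flows br-int table=72  # drop counters should stop rising
fi
```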
It appears that there may be an issue where patching the OS of a hypervisor that is not running instances fails to update the initrd so that nf_conntrack_ipv4 (and/or _ipv6) is loaded.
I couldn't find anywhere in the charm code that this module would be loaded, unless the charm's "harden" option is set on the nova-compute charm (see the charmhelpers contrib/host templates). It is unset in our environment, so we are not doing any special module probing.
Did nf_conntrack_ipv4 get split out from nf_conntrack in a recent kernel upgrade, or should the charm define a modprobe file when the OVS firewall driver is configured?
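If the charm (or an operator) were to pin these modules, one option is a modules-load.d fragment. A hypothetical sketch; the filename and the _ipv6 entry are assumptions:

```
# /etc/modules-load.d/openvswitch-conntrack.conf (hypothetical path)
nf_conntrack
nf_conntrack_ipv4
nf_conntrack_ipv6
```

On a systemd host, files in /etc/modules-load.d/ are read at boot by systemd-modules-load, so the modules would be present regardless of initrd contents.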
| Changed in charm-neutron-openvswitch: | |
| status: | Incomplete → Confirmed |
| Changed in linux (Ubuntu): | |
| status: | Incomplete → Confirmed |
| tags: | added: sts |
| Changed in charm-neutron-openvswitch: | |
| milestone: | none → 19.10 |
| Changed in charm-neutron-openvswitch: | |
| status: | Fix Committed → Fix Released |
| Changed in neutron: | |
| assignee: | nobody → Brian Haley (brian-haley) |
| tags: | added: kernel-daily-bug |

You're correct that the charm does not do any module loading; that's handled by neutron.
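Given that nothing in the charm probes the module, one way to audit a host is to look for any persistent loader configuration. A sketch assuming Debian/Ubuntu paths:

```shell
# Hypothetical audit: is anything configured to load nf_conntrack at boot?
# Paths assume a Debian/Ubuntu host; prints a fallback message if nothing matches.
grep -rn 'nf_conntrack' /etc/modules /etc/modules-load.d/ 2>/dev/null \
    || echo "no persistent module config found"
```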