Thank you for your reply
> 1. I don't think we ever saw this happening, can you elaborate?
This happens when a Kubernetes node is Ready and schedulable, but the ovs-agent on that node is abnormal or unavailable. New pods can still be scheduled to the node, and the kuryr-controller creates a Neutron port and tries to bind it to the node, but the binding fails. If the kuryr-controller is restarted in this state, you will see an error message in the kuryr-controller's log.
> 2. We have this part of code that's being run periodically: https://github.com/openstack/kuryr-kubernetes/blob/master/kuryr_kubernetes/controller/drivers/vif_pool.py#L482. Do you think it doesn't work? Do you use nested or neutron VIF driver?
The version of kuryr deployed in my environment is older and does not yet have the `_cleanup_removed_nodes` interface. I will update to the newer kuryr code and try again.
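For context, the periodic cleanup referenced in point 2 releases ports that are still pooled for hosts that no longer exist as Kubernetes nodes. A minimal sketch of that idea, with hypothetical names and data structures (this is illustrative, not the actual kuryr-kubernetes code in vif_pool.py):

```python
# Illustrative sketch of a "_cleanup_removed_nodes"-style pass: compare the
# hosts that have pooled ports against the hosts currently registered as
# Kubernetes nodes, and collect the ports belonging to removed hosts.
# Function and variable names here are hypothetical.

def cleanup_removed_nodes(pooled_ports_by_host, current_node_hosts):
    """Return (removed_hosts, port_ids_to_delete).

    pooled_ports_by_host: dict mapping host name -> list of port IDs
    current_node_hosts:   set of host names currently known as k8s nodes
    """
    removed_hosts = set(pooled_ports_by_host) - set(current_node_hosts)
    ports_to_delete = []
    for host in removed_hosts:
        # In kuryr the controller would ask Neutron to delete each of these
        # ports; here we only collect their IDs.
        ports_to_delete.extend(pooled_ports_by_host[host])
    return removed_hosts, ports_to_delete

# Example: "node-b" was removed from the cluster, so its pooled ports are
# flagged for deletion while "node-a"'s ports are kept.
pools = {"node-a": ["port-1"], "node-b": ["port-2", "port-3"]}
removed, stale = cleanup_removed_nodes(pools, {"node-a", "node-c"})
```

Note that a pass like this only handles nodes that have disappeared entirely; it would not by itself catch the scenario in point 1, where the node is still registered but its ovs-agent is unhealthy.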