[1.29+] test_service_cidr_expansion leaves pods in CrashLoopBackOff
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Charmed Kubernetes Testing | New | Undecided | Unassigned |
Kubernetes Control Plane Charm | New | Undecided | Unassigned |
Bug Description
After running test_service_cidr_expansion, several kube-system pods are left in CrashLoopBackOff:
$ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
calico-
calico-node-4sl6s 1/1 Running 0 3h56m
calico-node-lkldc 1/1 Running 2 (3h52m ago) 3h55m
calico-node-wswdt 1/1 Running 3 (3h51m ago) 3h52m
calico-node-xtdnf 1/1 Running 3 (3h44m ago) 3h46m
calico-node-zh68f 1/1 Running 2 (3h52m ago) 3h53m
coredns-
kube-state-
metrics-
$ kubectl logs -n kube-system calico-
...
2023-12-05 21:20:56.222 [ERROR][1] main.go 297: Received bad status code from apiserver error=Get "https:/
2023-12-05 21:20:56.222 [INFO][1] main.go 313: Health check is not ready, retrying in 2 seconds with new timeout: 16s
The service CIDR was expanded. The kubernetes-
The old reactive code handled service CIDR expansions by deleting the kubernetes service [1] and restarting impacted services [2]. The new ops code does not do this.
[1]: https:/
[2]: https:/
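The missing reactive behaviour can be sketched roughly as follows. This is a minimal illustration, not the charm's actual code: the delete_service and restart_services callbacks are hypothetical stand-ins for the real Kubernetes API call and control-plane service restarts.

```python
import ipaddress

def cidr_expanded(old_cidr: str, new_cidr: str) -> bool:
    """True when new_cidr strictly contains old_cidr, i.e. the
    service CIDR was expanded rather than replaced outright."""
    old = ipaddress.ip_network(old_cidr)
    new = ipaddress.ip_network(new_cidr)
    return old != new and old.subnet_of(new)

def handle_service_cidr_change(old_cidr, new_cidr,
                               delete_service, restart_services):
    """Sketch of the old reactive handling: on expansion, delete the
    'kubernetes' Service so the apiserver recreates it with a ClusterIP
    from the new range, then restart the impacted services."""
    if cidr_expanded(old_cidr, new_cidr):
        delete_service("kubernetes", namespace="default")
        restart_services()
        return True
    return False

# Example with stand-in callbacks; real code would call the Kubernetes
# API and restart the control-plane daemons instead.
calls = []
handle_service_cidr_change(
    "10.152.183.0/24", "10.152.0.0/16",
    delete_service=lambda name, namespace: calls.append(("delete", name)),
    restart_services=lambda: calls.append(("restart",)),
)
print(calls)  # [('delete', 'kubernetes'), ('restart',)]
```

Until the ops code grows an equivalent handler, pods that cached the old kubernetes Service endpoint (such as the calico-kube-controllers pod in the logs above) keep failing their apiserver health checks.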