AWS: Can only open a single nodeport on the cluster

Bug #1842290 reported by Pedro Guimarães
Affects                   Status       Importance  Assigned to  Milestone
AWS Integrator Charm      Incomplete   Undecided   Unassigned
Kubernetes Worker Charm   Incomplete   Undecided   Unassigned

Bug Description

Deploying on top of AWS using aws-integrator.

I am seeing behavior similar to what is described in https://github.com/kubernetes/kubernetes/issues/39214, where only one NodePort can exist per cluster. This seems to be because k8s does not know whether the ELBs still exist.
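For reference, the NodePort k8s allocates for a service can be checked with something like this (service names here match my deployment):

kubectl get svc --all-namespaces -o wide
# or print just the allocated NodePort for a single service:
kubectl get svc kubernetes-dashboard -n kube-system -o jsonpath='{.spec.ports[0].nodePort}'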

On my deployment, I can expose kubernetes-dashboard with:
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: LoadBalancer
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - protocol: TCP
    port: 443
    targetPort: 8443
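
(Applied with, e.g., kubectl apply -f dashboard-svc.yaml; the filename is just what I saved the manifest as.)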

That boots an ELB and configures the NodePort correctly.
I need to run open-port on the kubernetes-worker units as per https://bugs.launchpad.net/charm-kubernetes-worker/+bug/1842104 (see the sketch below).
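
A rough sketch of that workaround, assuming Juju 2.x (where juju run provides hook tools) and with 30443 standing in for whatever NodePort k8s actually allocated:

# run the open-port hook tool on every kubernetes-worker unit
juju run --application kubernetes-worker 'open-port 30443/tcp'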

Then, I try to create a second service + ELB:
apiVersion: v1
kind: Service
metadata:
  name: tensorboard
  namespace: kubeflow
  labels:
    k8s-app: tensorboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: LoadBalancer
  selector:
    k8s-app: tensorboard
  ports:
  - protocol: TCP
    port: 6006
    targetPort: 6006

I see the ELB booting, but this time k8s does not allocate a NodePort.
netstat -tnlp on the kubernetes-worker units shows nothing listening on the NodePort that should have been allocated (the first service's NodePort is still there, though).
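
This is roughly how I am verifying it (the placeholder is whatever port the first command prints):

# what NodePort does k8s claim to have allocated?
kubectl get svc tensorboard -n kubeflow -o jsonpath='{.spec.ports[0].nodePort}'
# then, on a kubernetes-worker unit, is anything listening on it?
sudo netstat -tnlp | grep <that-nodeport>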

What intrigues me is that the solution described in https://github.com/kubernetes/kubernetes/issues/39214#issuecomment-269236544 does not seem to work for me. I am using aws-integrator and have trusted it (juju trust) with the same credentials Juju uses, which should be more than enough to operate ELBs.
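
That is, roughly (relation endpoints as in the aws-integrator README, if I recall correctly):

juju deploy cs:~containers/aws-integrator
juju trust aws-integrator   # grant it the model's cloud credential
juju add-relation aws-integrator kubernetes-master
juju add-relation aws-integrator kubernetes-worker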

Eventually the ELB dies and disappears from my console.

George Kraft (cynerva) wrote:

I can't reproduce this. We need more info.

What charm revisions do you see this on?

Can you confirm that the tensorboard pod(s) have the `k8s-app: tensorboard` label?

Can you attach log output from kube-controller-manager? Specifically, we need output of `journalctl -o cat -u snap.kube-controller-manager.daemon` from each kubernetes-master unit.

Changed in charm-aws-integrator:
status: New → Incomplete
Changed in charm-kubernetes-worker:
status: New → Incomplete
George Kraft (cynerva) wrote:

Also log output from kube-proxy: output of `journalctl -o cat -u snap.kube-proxy.daemon` on each kubernetes-worker unit.
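
For example, something like this should collect them (unit numbers are illustrative; repeat per unit):

juju run --unit kubernetes-master/0 'journalctl -o cat -u snap.kube-controller-manager.daemon' > kcm-0.log
juju run --unit kubernetes-worker/0 'journalctl -o cat -u snap.kube-proxy.daemon' > kube-proxy-0.log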

George Kraft (cynerva) wrote:

Also output of `kubectl describe svc` for the service that is not working as expected.
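
For the service above, that would be:

kubectl describe svc tensorboard -n kubeflow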

summary: - Can only open a single nodeport on the cluster
+ AWS: Can only open a single nodeport on the cluster