AWS: Can only open a single nodeport on the cluster
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| AWS Integrator Charm | Incomplete | Undecided | Unassigned | |
| Kubernetes Worker Charm | Incomplete | Undecided | Unassigned | |
Bug Description
I am deploying on top of AWS using aws-integrator.
I am seeing behavior similar to what is described at: https:/
where only one NodePort can exist per cluster. This seems to be because k8s does not know whether the ELBs continue to exist.
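To confirm which NodePorts actually get allocated, I check the services with standard kubectl (nothing here is specific to my deployment):

    # Each LoadBalancer service should show an allocated NodePort (30000-32767)
    # in the PORT(S) column, e.g. 443:31234/TCP.
    kubectl get svc --all-namespaces -o wide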
On my deployment, I can expose kubernetes- with this Service:
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-
  namespace: kube-system
  labels:
    k8s-app: kubernetes-
    kubernetes.
    addonmanage
spec:
  type: LoadBalancer
  selector:
    k8s-app: kubernetes-
  ports:
  - protocol: TCP
    port: 443
    targetPort: 8443
That boots an ELB and configures a NodePort correctly.
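Roughly, the steps I use to apply and verify it (the filename is just whatever I saved the manifest above as):

    kubectl apply -f dashboard-svc.yaml      # hypothetical filename for the manifest above
    kubectl -n kube-system get svc -o wide   # PORT(S) column shows 443:<nodeport>/TCP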
I also need to run open-port on the kubernetes-worker units, as per: https:/
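For completeness, that step looks roughly like this (the port number is a placeholder for the allocated NodePort; open-port is a Juju hook tool, reachable through juju run's hook context):

    # Open the allocated NodePort on each worker unit, then expose the application
    juju run --unit kubernetes-worker/0 'open-port 31234/tcp'   # 31234 is a placeholder
    juju expose kubernetes-worker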
Then, I try to create a second service + ELB:
apiVersion: v1
kind: Service
metadata:
  name: tensorboard
  namespace: kubeflow
  labels:
    k8s-app: tensorboard
    kubernetes.
    addonmanage
spec:
  type: LoadBalancer
  selector:
    k8s-app: tensorboard
  ports:
  - protocol: TCP
    port: 6006
    targetPort: 6006
I see the ELB booting; however, k8s does not allocate a NodePort this time.
netstat -tnlp on the kubernetes-worker units returns nothing for the second service's NodePort (the first NodePort is still there, though).
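On a worker unit, that looks roughly like this (both ports are placeholders for the two allocated NodePorts):

    sudo netstat -tnlp | grep 31234   # first service's NodePort: kube-proxy is listening
    sudo netstat -tnlp | grep 31235   # second service's NodePort: no output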
What intrigues me is that the solution described at: https:/
does not seem to work for me. I am using aws-integrator and trusting it (juju trust) with the same credentials as Juju's, which should be more than enough to operate ELBs.
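The trust step is the standard one, included for completeness:

    juju trust aws-integrator    # grants the integrator the model's cloud credential
    juju status aws-integrator   # the unit should report that it is ready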
Eventually the ELB dies and disappears from my console.
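I can watch the ELB appear and then vanish from the AWS CLI as well (assuming the CLI is configured with the same credentials):

    # Classic ELBs created for LoadBalancer services show up here briefly,
    # then disappear once torn down.
    aws elb describe-load-balancers --query 'LoadBalancerDescriptions[].LoadBalancerName'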
Bundle: https://pastebin.canonical.com/p/3Ppq8JXvC6/