On GCE (Juju 2.3.5) when requesting the private IP address of a unit, I always get the FAN IP address.

Bug #1761838 reported by Gregory Van Seghbroeck
This bug affects 5 people

Affects: Canonical Juju
Status: Triaged
Importance: High
Assigned to: Unassigned

Bug Description

First of all, some context.
I'm using Juju 2.3.5 on GCE and discovered this behavior while deploying the Kibana and ElasticSearch charms from omnivector, although I don't think the charms matter. Both charms are deployed on separate Google VMs, so no containers are involved. And to be 100% clear, I haven't tried it with an AWS controller.

I noticed from the Kibana UI that it couldn't actually reach the ElasticSearch unit. While debugging, I noticed the Fan was set up on both VMs, although I'm not using any LXD containers, and both charms used the Fan IP address as their private address. That's why, during the relation conversation, the ElasticSearch charm passed a wrong IP address to Kibana.

Some commands I tried during my quest:

- on my Juju client:
```
juju run --unit elasticsearch/0 'network-get public --ingress-address --format yaml'
252.1.240.1
```
- in debug-hooks on my Kibana unit, in a python3 shell:
```
>>> unit_get('private-address')
'252.1.224.1'
>>> network_get('public')
{'ingress-addresses': ['252.1.224.1'], 'bind-addresses': [{'macaddress': '62:b6:9e:a6:c7:8a', 'interfacename': 'fan-252', 'addresses': [{'cidr': '252.0.0.0/8', 'address': '252.1.224.1'}]}]}
```

If you need more information, I'm always happy to provide it.

Thanks,
Gregory

Revision history for this message
John A Meinel (jameinel) wrote : Re: [Bug 1761838] [NEW] On GCE (Juju 2.3.5) when requesting the private IP address of a unit, I always get the FAN IP address.

I don't believe we automatically configure the Fan on GCE. However,
due to the way the fan works, if you run containers then you need to
advertise the fan addresses even for processes that are not in
containers. (That is how the fan determines whether it can use direct
addressing or needs to NAT packets.)

How is your networking configured such that the host machines can reach
each other directly but not on FAN addresses? (Are they on different
networks/subnetworks/etc.?)

I'm pretty sure this means you configured fan-config. Are you planning
on using it, just experimenting with it, or using containers but not
for these applications?
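
For example, you can check what is actually set on the model with something like this (a sketch; fan-config and container-networking-method are the standard model-config keys):

```
juju model-config fan-config
juju model-config container-networking-method
```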

John
=:->

Revision history for this message
Gregory Van Seghbroeck (gvseghbr) wrote :

Hi John,

I didn't specifically take any action w.r.t. setting up the Fan; it came for free. What's more, when bootstrapping a new GCE controller, I set model-defaults of fan-config="" and container-networking-method=local. However, to my surprise, these defaults always got ignored when adding a new model. Setting the configs explicitly during model creation did the trick, but there we have another problem: I need to be able to do this from libjuju, and as far as I can tell, I cannot set empty config values during model creation. So back to square one, but that's something for another feature request. Long story short, it looks like the Fan is set up automatically.

For now I'd rather not have it set on my system, since I'm not planning to use LXD containers in these deployments. But I guess this behavior is not specific to the charms I use, and at some point people (me included) will want to try the Fan and still get the correct info when requesting the private IP address.
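
For reference, this is roughly what I tried; the controller and model names are placeholders:

```
# At bootstrap -- these defaults got ignored by new models:
juju bootstrap google gce-controller \
    --model-default fan-config="" \
    --model-default container-networking-method=local

# Explicitly at model creation -- this worked:
juju add-model my-model \
    --config fan-config="" \
    --config container-networking-method=local
```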

Here's some output from ip commands:
```
ubuntu@juju-ae73d8-0:~$ ip -all address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP group default qlen 1000
    link/ether 42:01:0a:84:00:02 brd ff:ff:ff:ff:ff:ff
    inet 10.132.0.2/32 brd 10.132.0.2 scope global ens4
       valid_lft forever preferred_lft forever
    inet6 fe80::4001:aff:fe84:2/64 scope link
       valid_lft forever preferred_lft forever
3: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fe:01:27:c3:1d:33 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc01:27ff:fec3:1d33/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::1/64 scope link
       valid_lft forever preferred_lft forever
4: fan-252: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1410 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:f2:97:86:25:aa brd ff:ff:ff:ff:ff:ff
    inet 252.0.32.1/8 scope global fan-252
       valid_lft forever preferred_lft forever
    inet6 fe80::cf2:97ff:fe86:25aa/64 scope link
       valid_lft forever preferred_lft forever
5: ftun0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1410 qdisc noqueue master fan-252 state UNKNOWN group default qlen 1000
    link/ether 0e:f2:97:86:25:aa brd ff:ff:ff:ff:ff:ff
    inet6 fe80::cf2:97ff:fe86:25aa/64 scope link
       valid_lft forever preferred_lft forever
ubuntu@juju-ae73d8-0:~$ ip -all route
default via 10.132.0.1 dev ens4
10.132.0.1 dev ens4 scope link
252.0.0.0/8 dev fan-252 proto kernel scope link src 252.0.32.1
```

And from ifconfig, for completeness:
```
ubuntu@juju-ae73d8-0:~$ ifconfig
ens4 Link encap:Ethernet HWaddr 42:01:0a:84:00:02
          inet addr:10.132.0.2 Bcast:10.132.0.2 Mask:255.255.255.255
          inet6 addr: fe80::4001:aff:fe84:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1460 Metric:1
          RX packets:322544 errors:0 dropped:0 overruns:0 frame:...

```

Revision history for this message
John A Meinel (jameinel) wrote : Re: [Bug 1761838] Re: On GCE (Juju 2.3.5) when requesting the private IP address of a unit, I always get the FAN IP address.

You can set it with "juju model-defaults" instead of "juju model-config".
That should set the new configuration for the next model being created.
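
For example, something like this (a sketch; untested here):

```
juju model-defaults fan-config="" container-networking-method=local
juju add-model test    # new models should now pick up those defaults
juju model-config -m test fan-config
```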

I'll dig into the rest when I get a chance.

Revision history for this message
John A Meinel (jameinel) wrote :

If model-defaults aren't being respected on new models, that is certainly an issue.

Changed in juju:
importance: Undecided → Medium
status: New → Triaged
importance: Medium → High
tags: added: fan gce-provider network
Ian Booth (wallyworld)
tags: added: network-get
Revision history for this message
Taihsiang Ho (taihsiangho) wrote :

This issue happens for me as well.

$ juju --version
2.5.0-xenial-amd64

Revision history for this message
Taihsiang Ho (taihsiangho) wrote :

```
juju model-config fan-config="" container-networking-method=""
```

works for me as a workaround.

---

$ juju --version
2.5.0-xenial-amd64

Revision history for this message
Erik Lönroth (erik-lonroth) wrote :

I get the problem too. Here is a discussion I opened on Discourse:

https://discourse.juju.is/t/network-binding-service-with-google-cloud/3542/2

This obviously breaks deployments for a lot of charms that depend on this working, and it is really messy.

I'm using juju 2.8.1-focal-amd64 (snap)

Revision history for this message
Erik Lönroth (erik-lonroth) wrote :

Why not just add a new hook tool, "unit-get fan-network"?
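
In the meantime, a charm could work around this by skipping the fan overlay itself and picking a host address directly. A rough sketch (a hypothetical helper, not an existing hook tool; assumes an iproute2 recent enough for JSON output):

```python
#!/usr/bin/env python3
# Hypothetical workaround: return the machine's first IPv4 address that
# is NOT on the fan overlay, instead of trusting unit-get/network-get
# (which hand back the 252/8 fan address in this bug).
import json
import subprocess

SKIP_PREFIXES = ('lo', 'fan-', 'ftun', 'lxdbr')  # loopback + fan/LXD devices

def non_fan_ipv4():
    # `ip -j addr` prints interface data as JSON on modern iproute2.
    links = json.loads(subprocess.check_output(['ip', '-j', 'addr']))
    for link in links:
        if link['ifname'].startswith(SKIP_PREFIXES):
            continue
        for addr in link.get('addr_info', []):
            if addr.get('family') == 'inet':
                return addr['local']
    return None

if __name__ == '__main__':
    print(non_fan_ipv4())  # e.g. 10.132.0.2 on the machine shown above
```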

Revision history for this message
Kevin W Monroe (kwmonroe) wrote :

Also affects juju 2.8.7-focal-amd64. A couple of notes and a simplified use case; it seems the fan gets in the way of deployments that don't even use network-get:

```
$ juju add-model testu
Added 'testu' model on google/us-west1 with credential 'foo' for user 'admin'

$ juju deploy ubuntu -n 5
Located charm "cs:ubuntu-17".
Deploying charm "cs:ubuntu-17".

$ sleep 1200  # shouldn't take 20m to get 5 ubuntus deployed, but let's be generous

$ juju status
Model  Controller       Cloud/Region     Version  SLA          Timestamp
testu  google-us-west1  google/us-west1  2.8.7    unsupported  17:26:57-06:00

App     Version  Status   Scale  Charm   Store       Rev  OS      Notes
ubuntu  18.04    waiting    2/5  ubuntu  jujucharms  17   ubuntu

Unit       Workload  Agent       Machine  Public address  Ports  Message
ubuntu/0*  active    idle        0        35.230.121.28          ready
ubuntu/1   waiting   allocating  1        35.233.238.175         agent initializing
ubuntu/2   waiting   allocating  2        35.197.52.126          agent initializing
ubuntu/3   waiting   allocating  3        35.233.131.212         agent initializing
ubuntu/4   active    idle        4        34.82.186.17           ready
```

Debug-log on a permanently-allocating unit looks like:

```
2021-02-04 23:10:30 INFO juju.agent.tools symlinks.go:20 ensure jujuc symlinks in /var/lib/juju/tools/unit-ubuntu-1
2021-02-04 23:10:30 INFO juju.agent.tools symlinks.go:40 was a symlink, now looking at /var/lib/juju/tools/2.8.7-bionic-amd64
2021-02-04 23:10:30 INFO juju.worker.uniter uniter.go:302 unit "ubuntu/1" started
2021-02-04 23:10:30 INFO juju.worker.uniter uniter.go:581 resuming charm install
2021-02-04 23:10:30 INFO juju.worker.uniter.charm bundles.go:79 downloading cs:ubuntu-17 from API server
2021-02-04 23:10:30 INFO juju.downloader download.go:111 downloading from cs:ubuntu-17
2021-02-04 23:26:23 INFO juju.worker.uniter uniter.go:286 unit "ubuntu/1" shutting down: preparing operation "install cs:ubuntu-17" for ubuntu/1: failed to download charm "cs:ubuntu-17" from API server: read tcp 252.0.135.1:44996->252.0.54.1:17070: read: connection reset by peer
2021-02-04 23:26:23 ERROR juju.worker.dependency engine.go:671 "uniter" manifold worker returned unexpected error: preparing operation "install cs:ubuntu-17" for ubuntu/1: failed to download charm "cs:ubuntu-17" from API server: read tcp 252.0.135.1:44996->252.0.54.1:17070: read: connection reset by peer
```

When re-deployed on a model with the workaround from comment #6 (fan-config="" container-networking-method=""), things are fine.
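
For completeness, the redeploy looked roughly like this (the model name is hypothetical):

```
juju add-model testu-nofan --config fan-config="" --config container-networking-method=""
juju deploy ubuntu -n 5 -m testu-nofan
```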

Revision history for this message
Erik Lönroth (erik-lonroth) wrote :

Any news on this?

Revision history for this message
Ian Booth (wallyworld) wrote :

Is this still an issue in Juju 2.9.22?
We don't plan on doing any more 2.8 or earlier releases. If it's an issue in 2.9 we can fix it.

Revision history for this message
Erik Lönroth (erik-lonroth) wrote :

Oh, I'll test this later then and come back.

