Gateway_less_Fwd: When IP-Fabric VN is configured as provider network over vn1 and vn2, routes of VMs on other compute nodes get leaked between vn1 and vn2 without any policy
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Juniper Openstack | Status tracked in Trunk | | | |
| Trunk | Fix Committed | High | Hari Prasad Killi | |
Bug Description
When the IP-Fabric VN is configured as a provider network over vn1 and vn2, routes of VMs on other compute nodes get leaked between the VNs without any policy.
As a result, VM1 (vn1) on one compute node can ping VM2 (vn2) on another compute node.
Build
------
R4.1.0.0 Build 23 Ubuntu 14.04 Mitaka
Topology
--------
Control/Compute nodes: nodek11, nodec23 and nodeb3
Steps
-----
1. Create vn1 (10.10.10.0/24) and vn2 (20.20.20.0/24), and configure the IP Fabric network as the provider network over both vn1 and vn2.
2. Launch a couple of VMs on both VNs across compute nodes, e.g. VM1 (compute1): 10.10.10.3/24, VM2 (compute1): 20.20.20.3/24, VM3 (compute2): 10.10.10.4/24, VM4 (compute2): 20.20.20.4/24.
3. A ping from VM1 to VM4 now succeeds without any policy between the VNs. The same behavior is seen on the other compute node.
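The reproduction steps above can be sketched with the standard OpenStack CLI. Note this is only a sketch: the names, image, and flavor placeholders are assumptions, and associating the IP-Fabric VN as the provider network over vn1/vn2 is a Contrail-specific step (done through the Contrail Web UI or config API), so it appears only as a comment.

```shell
# Step 1: create the two virtual networks and their subnets
openstack network create vn1
openstack subnet create --network vn1 --subnet-range 10.10.10.0/24 vn1-subnet
openstack network create vn2
openstack subnet create --network vn2 --subnet-range 20.20.20.0/24 vn2-subnet
# (Contrail-specific: configure ip-fabric as the provider network
#  for both vn1 and vn2 via the Contrail Web UI or config API)

# Step 2: launch VMs on both VNs across the two compute nodes
# (<image>/<flavor> are placeholders; host pinning shown via availability zone)
openstack server create --image <image> --flavor <flavor> --network vn1 \
    --availability-zone nova:compute1 VM1
openstack server create --image <image> --flavor <flavor> --network vn2 \
    --availability-zone nova:compute1 VM2
openstack server create --image <image> --flavor <flavor> --network vn1 \
    --availability-zone nova:compute2 VM3
openstack server create --image <image> --flavor <flavor> --network vn2 \
    --availability-zone nova:compute2 VM4

# Step 3: from inside VM1, ping VM4; with this bug, the ping succeeds
# even though no policy links vn1 and vn2
ping -c 3 20.20.20.4
```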
Below is the flow:
root@nodek11:~# flow --match 20.20.20.4
Flow table(size 80609280, entries 629760)
Entries: Created 59 Added 59 Deleted 72 Changed 76 Processed 59 Used Overflow entries 0
(Created Flows/CPU: 3 3 3 3 3 3 2 9 0 1 0 0 0 0 0 0 4 6 5 3 4 2 5 0 0 0 0 0 0 0 0 0)(oflows 0)
Action:F=Forward, D=Drop N=NAT(S=SNAT, D=DNAT, Ps=SPAT, Pd=DPAT, L=Link Local Port)
Other:
Flags:E=Evicted, Ec=Evict Candidate, N=New Flow, M=Modified Dm=Delete Marked
TCP(r=reverse)
Listing flows matching ([20.20.20.4]:*)
Index Source:
-------
126504<=>316740 10.10.10.3:28929 1 (0)
(Gen: 1, K(nh):38, Action:F, Flags:, QOS:-1, S(nh):38, Stats:68/6664,
SPort 65478, TTL 0, Sinfo 8.0.0.0)
316740<=>126504 20.20.20.4:28929 1 (0)
(Gen: 1, K(nh):38, Action:F, Flags:, QOS:-1, S(nh):14, Stats:68/5712,
SPort 55678, TTL 0, Sinfo 0.0.0.0)
root@nodek11:~#
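To trace where the leaked route comes from, the vRouter objects referenced by the flow entries above can be inspected on the compute node. A possible sequence with the contrail-vrouter CLI utilities (the nexthop ids 38 and 14 come from the output above; the VRF index here is an assumption and would be read from `vif --list`):

```shell
# Inspect the nexthops referenced by the flow entries
nh --get 38        # K(nh)/S(nh) of the forward flow
nh --get 14        # S(nh) of the reverse flow

# Dump the routing table of vn1's VRF (index assumed here as 1; check
# vif --list for the actual VRF of the VM's interface) and look for the
# leaked /32 route to the VM in vn2
rt --dump 1 | grep 20.20.20.4
```

If the 20.20.20.4/32 route appears in vn1's VRF with no network policy attached, that confirms the cross-VN leak described in this bug.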
summary:
- Gateway_less_Fwd: When IP-Fabric VN is configured as provider network over vn1 and vn2, ping from VM in vn1 to VM in vn2 fails when both the VMs are in same compute
+ Gateway_less_Fwd: When IP-Fabric VN is configured as provider network over vn1 and vn2, with out policy routes of other compute node VMs getting leaked between vn1 and vn2
description: updated
information type: Proprietary → Public
Review in progress for https://review.opencontrail.org/35262
Submitter: Naveen N (<email address hidden>)