29th December 2015 Running VLAN, VXLAN and GRE Together Using Neutron & OpenStack
[Figure: Private Cloud Networking architecture diagram]
What you see above are three servers - one network node and two compute
nodes.
First we'll go through the design outlined here; other possibilities can be discussed later on a case-by-case basis.
Functionality:
The data interface carrying VLAN traffic needs to be trunked all the way
between the servers and the switches that form this cloud network. This
allows virtual machines on different VLANs to communicate over the same
interface of the hypervisor. VLAN tagging and un-tagging is done by the
integration-switch. The switches are connected to each other by virtual
patch cables (e.g. between the integration-switch and the data-switch).
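To make this concrete, here is a minimal sketch of what the VLAN side of an ML2/Open vSwitch configuration might look like. The physical network label physnet1, the VLAN range 100:200 and the bridge name br-data are assumptions for illustration; use whatever names and ranges your deployment defines.

    # /etc/neutron/plugins/ml2/ml2_conf.ini (sketch, names assumed)
    [ml2]
    type_drivers = flat,vlan,gre,vxlan
    tenant_network_types = vlan,vxlan,gre

    [ml2_type_vlan]
    # VLAN IDs Neutron may allocate on the trunked data interface
    network_vlan_ranges = physnet1:100:200

    [ovs]
    # map the physical network label to the data-switch (an OVS bridge)
    bridge_mappings = physnet1:br-data

On each node the data-switch is created by hand and the trunked NIC is plugged into it; the Neutron OVS agent then creates the integration-switch (br-int) and the patch cables between the two:

    ovs-vsctl add-br br-data
    ovs-vsctl add-port br-data eth1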
The tunnel interface carrying tunnel traffic, i.e. GRE/VXLAN, can either sit on
a switch or just remain a plain interface. In my case I have put it on the
tunnel-switch. You can definitely have the tunnel traffic and the VLAN traffic
use the same interface: simply use the data-switch and do not create a
separate interface/bridge for tunnel traffic. This is possible and I have
seen people do it. Using a single interface for both tunneling (overlays)
and VLANs reduces the number of NICs required per server.
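On the ML2 side, the tunnel piece boils down to the tunnel ID/VNI ranges plus the local endpoint address, which is exactly the IP you put on the tunnel interface or tunnel-switch. A sketch, with assumed ranges and an assumed 192.168.200.x endpoint (exact file names vary a little between OpenStack releases):

    # /etc/neutron/plugins/ml2/ml2_conf.ini (continued sketch)
    [ml2_type_gre]
    tunnel_id_ranges = 1:1000

    [ml2_type_vxlan]
    vni_ranges = 1001:2000

    # OVS agent configuration on each node
    [ovs]
    # IP assigned to this node's tunnel interface / tunnel-switch
    local_ip = 192.168.200.11

    [agent]
    tunnel_types = gre,vxlan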
Note: Big enterprises use bonded interfaces for high availability and link
aggregation. In that case more than one Ethernet interface is "bonded"
together into a Linux bond interface, and the architecture diagram above
still holds good, with a bond0 or bond1 interface added to the bridge
instead of the physical Ethernet interfaces eth0 and eth1 shown above.
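As a rough sketch of the bonded case (interface names, bond mode and the on-disk file are assumptions, shown Debian/Ubuntu style), the bond device rather than the individual NICs is what gets plugged into the data-switch:

    # /etc/network/interfaces (sketch)
    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100

    # plug the bond, not eth0/eth1, into the data-switch
    ovs-vsctl add-port br-data bond0

Open vSwitch can also do the bonding itself (ovs-vsctl add-bond br-data bond0 eth0 eth1), which avoids the Linux bond altogether; either approach fits the diagram.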
IP Addresses (a minimal addressing sketch follows this list):
- 1 IP for the management interface.
- 1 IP for the external traffic on the external-switch (only on the network node).
- 1 IP for the VLAN data traffic on the data-switch (not required; optional).
- 1 IP for the tunnel traffic on the tunnel interface or the tunnel-switch.
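Here is a per-node addressing sketch under assumed subnets (every address below is made up for illustration, and the tunnel IPs line up with the local_ip shown earlier):

    # network node
    ip addr add 10.0.0.10/24 dev eth0            # management interface
    ip addr add 192.168.200.10/24 dev eth2       # tunnel traffic
    ip addr add 203.0.113.10/24 dev br-ex        # external traffic on the external-switch

    # compute node (no external IP, no IP on the data-switch)
    ip addr add 10.0.0.11/24 dev eth0
    ip addr add 192.168.200.11/24 dev eth2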
Finally, do keep in mind that the network node is definitely a single point of
failure in this design, but this can be mitigated by using an active-standby
setup (having multiple network nodes) or by going one step further and
moving the network node's functionality out to the compute nodes. I'll
talk about how to set these interfaces up in a separate article, and
describe the network node internals and how to debug them in another.
Posted 29th December 2015 by Arun
Labels: network design, neutron, openstack, overlays, SDN, server requirements, vlan