VXLAN is not a Data Center Interconnect technology

In a comment to the Firewalls in a Small Private Cloud blog post I wrote “VXLAN is _NOT_ a viable inter-DC solution” and Jason wasn’t exactly happy with my blanket response. I hope Jason got a detailed answer in the VXLAN Technical Deep Dive webinar; here’s a somewhat shorter explanation.

VXLAN is a layer-2 technology. If you plan to use VXLAN to implement a data center interconnect, you’ll be stretching a single L2 segment across two data centers.
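
To make that concrete, here’s a minimal Python sketch (purely illustrative – the helper name and test frame are made up) of what a VXLAN tunnel endpoint does: it wraps the original Ethernet frame in an 8-byte VXLAN header carrying a 24-bit segment ID (VNI) and ships it in a UDP datagram (the IANA-assigned port is 4789; some early implementations used other ports). The inner payload stays a raw L2 frame, so the VTEPs at both ends share one Ethernet segment no matter how many IP hops sit between them.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned port; some early implementations differ

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to a raw Ethernet frame.

    Layout: flags byte with the I bit set (0x08), 24 reserved bits,
    24-bit VNI, 8 reserved bits. The result rides as the payload of
    a UDP datagram between the two tunnel endpoints (VTEPs).
    """
    assert 0 <= vni < 2 ** 24        # 24-bit VNI -> ~16M virtual segments
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame

# A broadcast ARP request is still a broadcast after encapsulation;
# the IP transport just delivers it to every VTEP in the segment.
frame = bytes.fromhex("ffffffffffff") + b"\x00" * 54   # dst MAC = broadcast
vxlan_payload = vxlan_encap(frame, vni=5001)
```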

You probably know my opinion about the usability of L2 DCI, but even ignoring the obvious problems, current VXLAN implementations don’t have the features one would want to see in a L2 DCI solution.

What should a L2 DCI solution have?

Assuming someone forced you to implement a L2 DCI, the technology you plan to use SHOULD have these features:

  • Per-VLAN flooding control at the data center edge. Broadcasts/multicasts are usually not rate-limited within the data center, but should be tightly controlled at the data center edge (bandwidth between data centers is usually orders of magnitude lower than bandwidth within a data center). Ideally, you’d be able to control them per VLAN to reduce noisy-neighbor problems.
  • Broadcast reduction at the data center edge. Devices linking the DC fabric to the WAN core should implement features like ARP proxy.
  • Controlled unicast flooding. It should be possible to disable flooding of unknown unicasts at the DC-WAN boundary (see the sketch after this list).
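
To make the requirements concrete, here’s a rough Python sketch of the per-frame decision logic a DC edge device would need to implement all three controls from the list above. Everything in it (class names, the token-bucket numbers, the dict-based tables) is illustrative, not a description of any shipping product.

```python
import time
from collections import defaultdict
from typing import Optional

def is_flood_mac(mac: bytes) -> bool:
    """Group bit set in the first octet = broadcast/multicast frame."""
    return bool(mac[0] & 1)

class TokenBucket:
    """Per-VLAN rate limiter for flooded traffic crossing the DCI link."""
    def __init__(self, rate_pps: float, burst: float):
        self.rate, self.burst = rate_pps, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

class DciEdgePolicy:
    """The three controls from the list above, applied to a frame
    before it is allowed to leave the data center."""
    def __init__(self, flood_pps_per_vlan: float = 100.0):
        self.limiters = defaultdict(
            lambda: TokenBucket(flood_pps_per_vlan, burst=flood_pps_per_vlan))
        self.arp_cache = {}       # target IP -> MAC, learned from local traffic
        self.remote_macs = set()  # MACs known to live in the other data center

    def decide(self, vlan: int, dst_mac: bytes,
               arp_target_ip: Optional[str] = None) -> str:
        # Broadcast reduction: answer known ARP requests locally
        # (ARP proxy) instead of flooding them across the WAN.
        if arp_target_ip is not None and arp_target_ip in self.arp_cache:
            return "proxy-arp-reply"
        # Controlled unicast flooding: unknown unicast stops here.
        if not is_flood_mac(dst_mac) and dst_mac not in self.remote_macs:
            return "drop"
        # Per-VLAN flooding control: rate-limit the rest of the flood.
        if is_flood_mac(dst_mac) and not self.limiters[vlan].allow():
            return "drop"
        return "forward"
```

The point of the sketch: all three decisions require state (an ARP cache, a MAC table) and policy enforced at the DC boundary – exactly what hypervisor-based VTEPs don’t give you today.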

It’s also nice to have the following features to reduce the traffic trombones going across the DCI link:

  • First-hop router localization. Inter-subnet traffic should not traverse the DCI link to reach the first-hop router (see the sketch after this list).
  • Ingress traffic optimization. Traffic sent to a server in one data center should not arrive at the other data center first.
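
First-hop localization is typically implemented by filtering FHRP packets at the DCI boundary, so each data center elects its own active gateway for the same virtual IP and MAC. Here’s a sketch of such a filter; the multicast addresses and port/protocol numbers are the standard HSRP/VRRP values, while the function and packet representation are made up for illustration.

```python
from typing import Optional

HSRP_GROUPS = {"224.0.0.2", "224.0.0.102"}  # HSRPv1 / HSRPv2 hello destinations
HSRP_UDP_PORT = 1985
VRRP_GROUP = "224.0.0.18"
VRRP_PROTO = 112   # VRRP runs directly over IP, not UDP/TCP

def dci_edge_permits(ip_dst: str, ip_proto: int,
                     udp_dst: Optional[int] = None) -> bool:
    """Return False for FHRP hellos so they never cross the DCI link;
    each data center then elects its own local active gateway."""
    if ip_dst in HSRP_GROUPS and udp_dst == HSRP_UDP_PORT:
        return False   # HSRP hello stays inside the data center
    if ip_dst == VRRP_GROUP and ip_proto == VRRP_PROTO:
        return False   # VRRP advertisement stays inside the data center
    return True
```

Ingress traffic optimization is the harder half: it needs LISP or host routes/Route Health Injection advertised from the data center where the server actually lives, so inbound traffic doesn’t land in the wrong DC and trombone across the DCI link.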

OTV in combination with FHRP localization and LISP (or load balancers with Route Health Injection) gives you an almost ideal (OK, make it the least dreadful) solution. VXLAN with hypervisor VTEPs has none of the above-mentioned features.

VXLAN gateway on Arista’s 7150 is somewhat better, so you might be tempted to use it as a solution that connects two VLANs across an IP network, but don’t forget that the redundancy issues haven’t been solved yet – only a single switch can act as the VXLAN gateway for a particular VLAN.

Conclusion: The current VXLAN implementations (as of November 2012) are a far cry from what I would like to see when forced to implement a L2 DCI solution. Stick with OTV (it’s now available on ASR 1K).

More information

VXLAN is mentioned in the Introduction to Virtual Networking webinar and described in detail in the VXLAN Technical Deep Dive webinar. You’ll find some VXLAN use cases in the Cloud Computing Networking webinar. All three webinars are available with the yearly subscription ... and if you need design help/review or a second opinion, check out the ExpertExpress service.

11 comments:

  1. Hi Ivan :)

    I don't deny OTV/LISP is a nice solution. It's a great solution.

    But it comes down to many factors such as time to deploy, cost, size of L2 domain, does this L2 domain even need an L3 gateway, traffic types, connectivity between sites, and the list can keep going.

    My hope is that OTV becomes available on the Cloud Services Router, since it’s based on the same IOS XE code as the ASR 1K. Doubt it, but one can hope.

    I look forward to hearing some case studies soon regarding VXLAN intra or inter-DC ;)

    -Jason

    Replies
    1. Jason, regardless of the factors you mention – when the L2 DCI link fails, you have a split subnet on your hands.

      As for OTV on CSR: http://blog.ioshints.info/2012/03/stretched-layer-2-subnets-server.html
  2. Ivan, thank you for the informative article.

    I would point out three advantages of VXLAN compared with OTV:

    1. Multi-vendor support. OTV is a single-vendor solution, and while an internet-draft has been published, it is actually incompatible with the N7k OTV implementation (a completely different encap).

    2. Virtualize at the edge. OTV pushes the 2-over-3 encap to the data center core (N7k), meaning there's a big layer-2 network within the DC, with all of the problems that brings (4K VLAN limit, MAC address table limitations, spanning tree, etc.). With VXLAN, there are cost-effective edge implementations that free you from all of these constraints (16M layer-2 domains, no worries about overflowing the MAC table in your core switches, taking advantage of L3 ECMP end-to-end, etc.).

    3. VMware interoperability. vSphere 5.1 ships with VXLAN support in the hypervisor. VXLAN is coming to Xen/KVM as well.

    Most of the limitations you describe will be addressed in the next year or so. Due to its multi-vendor nature, VXLAN is the clear long-term solution (think ISL versus 802.1q). So even if I were deploying today, I should think I would go VXLAN and bet on the roadmap rather than tie my network architecture to a single-vendor solution despite some short-term advantages.

    Kenneth Duda
    CTO & SVP Software
    Arista Networks, Inc.
    Replies
    1. Ken, thanks for excellent feedback.

      As I wrote, _current_ VXLAN implementations are not what I'd expect to see in robust L2 DCI designs (ignoring for the moment my personal opinion that L2 DCI is usually based not on actual business requirements but on the inability of the various IT teams to talk to each other).

      I know there are numerous vendors working on VXLAN extensions and additions, and I will be more than glad to report on the progress as actual products or software ship. Please keep me in the loop.

      Kind regards,
      Ivan
    2. Hi Ken,

      You mention VXLAN is the clear long-term solution and that many of the issues will be addressed in the next year or so. Can you give some insight into the Arista roadmap for VXLAN – will we see a control plane? :)

      Thanks,
      Jason
    3. No disagreement Ivan, and I look forward to connecting on this topic at the right time.

      Jason, all I can say right now is that we are well aware of the limitations and are hard at work on solutions. Sorry to be a bit vague but there are a lot of moving parts.

      -Ken
    4. WRT multi-vendor support, why isn't EVPN even being mentioned? It runs on what you probably have deployed today in the WAN, has multi-vendor support, and should be out at the same time as (if not sooner than) reasonable VXLAN DCI options.
    5. This blog post isn't an overview of L2 DCI solutions, but my opinion on why VXLAN isn't one of them ;)
    6. Fair. It was more for the other commenters than the blog ;) Great stuff as always – thanks!
  3. I think Ivan's seminal premise prevails: VXLAN is not a DCI technology (not in its current form).

    VXLAN was originally designed to enable the creation of a large number of virtual segments, in the vein of the virtual app that had virtual nodes but no good virtual segment service. With that goal in mind, the group opted for an overly simplistic approach to the creation of the VXLAN overlay: one that does not satisfy the DCI requirements, but has the potential to provide a large number of virtual segments to the orchestrator creating virtual apps. I say has the potential to deliver this because none of the implementations out there today actually delivers this large number of segments. Standard or proprietary, it is what it is, no more, no less – definitely not a DCI tool.

    Enhancing VXLAN to become a full-fledged L2 VPN service is a potential future, and once we are done adding the necessary enhancements, we will indeed have completed an engineering cycle similar to the one we went through in 2007 for OTV. In the meantime (while we reinvent the wheel), there is existing, mature, widely deployed and hardened technology that was designed with DCI requirements in mind and can be deployed today.

    Regarding the points Ken brings:

    Multi-vendor: The encapsulation currently in use is in place to satisfy a fast-moving market. Cisco has consistently volunteered its IP in the interest of fostering interoperability. A quick scan of the drafts will show that the encapsulation used in VXLAN is borrowed from earlier OTV and LISP drafts and is indeed what we are standardizing on for IP overlays: LISP, OTV and VXLAN. This is the vision we've had for a long time, and we are glad the industry is following. All that said, the encapsulation is the least interesting element in all of this. The meat is in the control plane; that is what is not defined in VXLAN as of today, and I have not witnessed any open standards discussion on that front.

    Virtualize at the edge: The important item is not so much where the tunnel endpoint resides; it is how the overlay is realized. A flood-based overlay like VXLAN pushed across data centers means any flood activity in any DC is seen everywhere (think ubiquitous ARP storms). This negates to a large extent the resiliency benefits of having more than one facility. So creating an L3 network just to overlay a flat virtual flood domain onto it is not a healthy practice for the interconnection of DCs. I think this was clearly articulated by Ivan. If desired, the OTV edge functionality could be pushed to the access ports, but there are much better ways to design a data center fabric than to simply flatten it out.

    VMware interoperability: That is where we use VXLAN, and why Cisco supports VXLAN extensively. In fact, Cisco was the first vendor to release VXLAN in a shipping product, and we have a solid committed roadmap for it. But that doesn't mean that the DCI functionality needs to be bundled into this portion of the orchestration, nor does it mean that everything you did in networking until now is obsoleted by the introduction of VXLAN. Each task has its right tool: VXLAN may be a nice hammer, but I am still not going to hammer in my screws.

    BTW, as of today both VXLAN and OTV are simply individual contributions, not adopted by any WG at the IETF, so neither one is even remotely close to being a standard.
  4. Lots of time has passed (in the context of IT), and I reckon it would be really interesting to read current views on this topic. I am in the process of forming my personal view and am looking for broader discussions.

    I like this conversation as it has credible contributors and seems to be a balanced and mature conversation.

    Anyone interested in posting some current views?