Edge Protocol Independence: Another Benefit of Edge-and-Core Layering

I asked Martin Casado to check whether I correctly described his HotSDN’12 paper in my Edge and Core OpenFlow post, and he replied with another interesting observation:

The (somewhat nuanced) issue I would raise is that [...] decoupling [also] allows evolving the edge and core separately. Today, changing the edge addressing scheme requires a wholesale upgrade to the core.

The 6PE architecture (IPv6 on the edge, MPLS in the core) is a perfect example of this concept.

Why does it matter?

Traditional scalable network designs always have at least two layers: an access or aggregation layer, where most of the network services are implemented, and a core layer that provides high-speed transport across a stable backbone.

In IP-only networks, the core and access routers (aka layer-3 switches) share the same forwarding mechanism (ignoring the option of running default routing in the access layer); if you want to introduce a new protocol (for example, IPv6), you have to deploy it on every single router throughout the network, including all core routers.

On the other hand, you can introduce IPv6, IPX or AppleTalk (not really), or anything else in an MPLS network without upgrading the core routers. The core routers continue to provide a single function: optimal transport along MPLS paths signaled by the edge routers (through LDP, MPLS-TE, MPLS-TP, or more creative approaches, including NETCONF-configured static MPLS labels).
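To make the split more tangible, here's a minimal Python sketch of the idea (purely conceptual, with made-up names and label values, not any vendor's implementation): the edge router maps protocol-specific destinations to labels, the core router forwards on labels alone, so introducing a new edge protocol (6PE-style IPv6, for example) never touches the core tables.

```python
# Minimal sketch (not a real router implementation): the core forwards on
# labels only, so adding a new edge protocol never changes the core tables.

from dataclasses import dataclass

@dataclass
class Packet:
    payload: str
    protocol: str              # e.g. "ipv6" -- something the core never inspects
    dst: str                   # edge-protocol destination (IPv6 prefix, MAC, ...)
    labels: list = None        # label stack pushed by the ingress edge router

class EdgeRouter:
    """Protocol-aware: maps edge destinations to a label toward the egress edge."""
    def __init__(self, label_map):
        self.label_map = label_map        # {(protocol, dst): label}

    def ingress(self, pkt):
        pkt.labels = [self.label_map[(pkt.protocol, pkt.dst)]]
        return pkt

class CoreRouter:
    """Protocol-ignorant: swaps the top label and picks the next hop."""
    def __init__(self, lfib):
        self.lfib = lfib                  # {in_label: (out_label, next_hop)}

    def forward(self, pkt):
        out_label, next_hop = self.lfib[pkt.labels[0]]
        pkt.labels[0] = out_label
        return next_hop, pkt

# Deploying IPv6 (6PE-style) only adds entries on the edge router ...
edge = EdgeRouter({("ipv6", "2001:db8::/32"): 100, ("ipv4", "192.0.2.0/24"): 200})
# ... while the core label table stays exactly the same.
core = CoreRouter({100: (101, "PE-2"), 200: (201, "PE-3")})

next_hop, labeled = core.forward(edge.ingress(Packet("data", "ipv6", "2001:db8::/32")))
print(next_hop, labeled.labels)        # PE-2 [101]
```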

The same ideas apply to OpenFlow-configured networks. The edge devices have to be smart and support a rich set of flow matching and manipulation functionality; the core (fabric) devices only have to match on simple packet tags (VLAN tags, MAC addresses with PBB encapsulation, MPLS labels ...) and provide fast packet forwarding.
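Purely as an illustration (the match fields and action strings below are generic placeholders, not a particular controller's or switch's API), the difference between an edge flow entry and a core flow entry could look like this:

```python
# Conceptual sketch of the edge/core split in an OpenFlow fabric.

# Edge switch: rich matching on whatever the service needs ...
edge_flow = {
    "match":   {"in_port": 3, "eth_type": 0x0800, "ipv4_src": "10.1.1.10",
                "ipv4_dst": "10.2.2.20", "tcp_dst": 80},
    "actions": ["push_vlan:200",          # the tag selects the path across the fabric
                "output:uplink"],
}

# Core (fabric) switch: matches nothing but the tag -- a small, stable table.
core_flow = {
    "match":   {"vlan_vid": 200},
    "actions": ["output:port_7"],
}
```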

Is this an Ivory Tower dream?

Apart from MPLS, there are several real-life SDN implementations of this concept:

  • Nicira’s NVP provides virtual networking functionality in OpenFlow-controlled hypervisor switches that use simple IP transport (with STT or GRE encapsulation) across the network core;
  • Microsoft’s Hyper-V Network Virtualization uses a similar architecture with PowerShell instead of OpenFlow/OVSDB as the hypervisor configuration API;
  • NEC’s ProgrammableFlow solution uses PF5420 (with 160K OpenFlow entries) at the edge and PF5820 (with 750 full OpenFlow entries and 80K MAC entries) at the core.

Before you mention VXLAN in the comments: I fail to see anything software-defined in a technology that uses flooding to learn dynamic VM-MAC-to-VTEP-IP mappings.
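For contrast, here's a rough sketch of the controller-driven alternative used by the overlay solutions listed above (class and field names are mine, and the STT/GRE header details are omitted): the controller pushes the VM-MAC-to-transport-IP mapping into the hypervisor edge, so nothing has to be flooded across the core to learn it.

```python
# Conceptual sketch of an overlay edge switch whose mapping table is pushed
# by a controller (NVP/HNV-style) instead of being learned through flooding.
# All names are illustrative; real encapsulation headers are omitted.

class HypervisorEdge:
    def __init__(self, controller_pushed_map):
        # {vm_mac: remote_transport_ip} -- installed by the controller
        self.mac_to_transport = controller_pushed_map

    def encapsulate(self, frame):
        # The edge does the smart lookup; the IP core only routes the outer header.
        outer_dst = self.mac_to_transport[frame["dst_mac"]]
        return {"outer_dst_ip": outer_dst, "encap": "GRE", "inner": frame}

edge = HypervisorEdge({"00:50:56:aa:bb:cc": "192.0.2.11"})
print(edge.encapsulate({"dst_mac": "00:50:56:aa:bb:cc", "payload": "..."}))
# prints the encapsulated packet headed for transport address 192.0.2.11
```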

3 comments:

  1. Just curious - I've been following SDN blogs for some time. If I understand things correctly, it seems that Plexxi and Big Switch decided to drop OpenFlow in favor of their own protocols, either because the hardware just can't support OpenFlow well or because OpenFlow doesn't have the bells and whistles to do the network abstraction correctly.

    I also thought for some reason that Nicira used an OpenFlow-like technology.

    So is OpenFlow dead, having lost the war as the next-gen virtual networking solution, or am I completely wrong?
    Replies
    1. Plexxi never went anywhere near OpenFlow (they don't need it for what they do). Big Switch is still using it (AFAIK), but relies heavily on OF 1.0 vendor extensions to get the job done.
  2. Not sure how I missed this one. I like this line, "The edge devices have to be smart..."