Category: design

Repost: Think About the 99% of the Users

Daniel left a very relevant comment on my Data Center Fabric Designs: Size Matters blog post, describing how everyone rushes to sell the newest gizmos and technologies to the unsuspecting (and sometimes too-awed) users:


Absolutely right. I’m working at an MSP, and we do a lot of project work for enterprises with between 500 and 2000 people. That means the IT department is not that big; it’s usually just a cost center for them.


EVPN Designs: VXLAN Leaf-and-Spine Fabric

In this series of blog posts, we’ll explore numerous routing protocol designs that can be used to implement EVPN-with-VXLAN L2VPNs in a leaf-and-spine data center fabric. Every design will come with a companion netlab topology you can use to create a lab and explore the behavior of leaf- and spine switches.

Our leaf-and-spine fabric will have four leaves and two spines (but feel free to adjust the lab topology fabric parameters to build larger fabrics). The fabric will provide layer-2 connectivity to orange and blue VLANs. Two hosts will be connected to each VLAN to check end-to-end connectivity.
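To make the design more tangible, here’s a rough sketch of what the EVPN/VXLAN configuration for one of the VLANs might look like on a single leaf switch. This is not the companion netlab topology, just a hypothetical NX-OS-style snippet with made-up VLAN, VNI, AS, and neighbor values:

    nv overlay evpn
    feature bgp
    feature nv overlay
    feature vn-segment-vlan-based
    !
    vlan 100
      name orange
      vn-segment 10100
    !
    interface nve1
      no shutdown
      host-reachability protocol bgp
      source-interface loopback0
      member vni 10100
        ingress-replication protocol bgp
    !
    evpn
      vni 10100 l2
        rd auto
        route-target import auto
        route-target export auto
    !
    router bgp 65000
      neighbor 10.0.0.1
        remote-as 65000
        update-source loopback0
        address-family l2vpn evpn
          send-community extended

The routing protocol designs explored in the series differ mainly in how underlay reachability and the BGP sessions carrying the EVPN address family are built; the VLAN-to-VNI mapping stays the same.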


Repost: EBGP-Mostly Service Provider Network

Daryll Swer left a long comment describing how he designed a Service Provider network running in numerous private autonomous systems. While I might not agree with everything he wrote, it’s an interesting idea and conceptually pretty similar to what we did 25 years ago (IBGP without IGP, running across physical interfaces, with every router being a route-reflector client of every other router), or how some very large networks were using BGP confederations.

Just remember (as someone from Cisco TAC told me in those days) that “you might be the only one in the world doing it and might hit bugs no one has seen before.”
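For readers who weren’t around in those days, here’s a hypothetical IOS-style sketch of that old-school approach (IBGP sessions over directly connected interface addresses, no IGP, and every router configured as a route-reflector client of its peers); the AS number and addresses are made up:

    router bgp 65000
     ! IBGP sessions terminate on directly connected interface addresses,
     ! so no IGP is needed to resolve BGP next hops
     neighbor 192.0.2.1 remote-as 65000
     neighbor 192.0.2.1 route-reflector-client
     neighbor 192.0.2.5 remote-as 65000
     neighbor 192.0.2.5 route-reflector-client
     ! ...repeated for every directly connected router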


Data Center Fabric Designs: Size Matters

The “should we use the same vendor for fabric spines and leaves?” discussion triggered the expected counterexamples. Here’s one:

I actually have worked with a few orgs that mix vendors at both spine and leaf layer. Can’t take names but they run fairly large streaming services. To me it seems like a play to avoid vendor lock-in, drive price points down and be in front of supply chain issues.

As always, one has to keep two things in mind:


BGP AS Numbers for a Private MPLS/VPN Backbone

One of my readers was building a private MPLS/VPN backbone and wondered whether they should use their public AS number or a private AS number for the backbone. Usually it doesn’t matter; in their case, the deciding factor was the way they wanted to connect to the public Internet:

We also plan to peer with multiple external ISPs to advertise our public IP space not directly from our PE routers but from dedicated Internet Routers, adding a firewall between our PEs and external Internet routers.

They could either run BGP between the PE routers, firewall, and WAN routers (see BGP as High-Availability Protocol for more details) or run BGP across a bump-in-the-wire firewall:
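In the second case, the firewall is transparent at layer 3, so the PE router and the Internet edge router see each other as directly connected EBGP neighbors. A hypothetical IOS-style sketch from the PE router’s perspective (assuming a private AS on the backbone; all AS numbers and addresses are invented) could look like this:

    router bgp 64512
     ! EBGP session with the dedicated Internet router; the bump-in-the-wire
     ! firewall sits in the forwarding path but is invisible to BGP
     neighbor 198.51.100.2 remote-as 65010
     neighbor 198.51.100.2 description Internet edge router (behind transparent firewall)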


Small Site EBGP-Only Design

One of my subscribers found an unusual BGP specimen in the wild:

  • It was a small site with two core switches and a WAN edge router
  • The site had VPN concentrators running in virtual machines
  • The WAN edge router was running BGP across WAN IPsec tunnels
  • The VPN concentrators were running BGP with the core switches

So far so good, and kudos to whoever realized BGP is the only sane protocol to run between virtual machines and the network core. However, the routing in the network core was implemented with EBGP sessions between the three core devices, and my subscriber thought the correct way to do it would be to use IBGP and OSPF.
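For illustration, a hypothetical IOS-style sketch of one of the core devices in that EBGP-only design might look like this, with each of the three devices sitting in its own private AS (all AS numbers and addresses are invented):

    router bgp 65001
     ! core switch 1: EBGP sessions with the other core switch and the WAN edge router
     neighbor 10.1.12.2 remote-as 65002
     neighbor 10.1.13.2 remote-as 65003
     ! advertise the locally attached prefixes into BGP
     network 10.1.0.0 mask 255.255.255.0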


Building a Small Network with ChatGPT

I must be a good prompt engineer – every time I ask ChatGPT something really simple it spews out nonsense. This time I asked it to build a small network with four routers:

I have a network with four Cisco routers (A,B,C,D). They are connected as follow: A-B, B-C, A-D, D-C. Each router has a loopback interface. Create router configurations that will result in A being able to reach loopback interfaces of all other routers.

Here’s what I got back:

Here’s an example configuration for the four routers that should allow Router A to reach the loopback interfaces of all other routers:


What Happened to Leaf Switches with Four Uplinks?

The last time I spent days poring over vendor datasheets to collect information for the overview part of the Data Center Fabrics webinar, a lot of 1RU data center leaf switches came in two form factors:

  • 48 low-speed server-facing ports and 4 high-speed uplinks
  • 32 high-speed ports that you could break out into four times as many low-speed ports (but not all of them)

I expected the ratios to stay the same when the industry moved from 10/40 GE to 25/100 GE switches. I was wrong – most 1RU leaf data center switches based on recent Broadcom silicon (Trident-3 or Trident-4) have between eight and twelve uplinks.
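A quick back-of-the-envelope calculation (assuming 48 server-facing ports, with all ports running at nominal speed) shows how that changes the oversubscription ratio:

    48 x 10 GE =  480 Gbps down,   4 x 40 GE  =  160 Gbps up  ->  3:1 oversubscription
    48 x 25 GE = 1200 Gbps down,   8 x 100 GE =  800 Gbps up  ->  1.5:1 oversubscription
    48 x 25 GE = 1200 Gbps down,  12 x 100 GE = 1200 Gbps up  ->  non-blocking (1:1)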


External Links on Spine Switches

A networking engineer attending the Building Next-Generation Data Center online course asked this question:

What is the best practice to connect a DC fabric to the outside world, assuming there are two spine switches in the fabric and EVPN/VXLAN is used as the overlay? Is it a good idea to introduce edge (border) switches, or is it better to connect the outside world directly to the spines?

As always, the answer is “it depends,” this time based on:


First Steps in IPv6 Deployments

Even though IPv6 could buy its own beer (in the US, let alone the rest of the world), networking engineers still struggle with its deployment – one of the first questions I got in the ipSpace.net Design Clinic was:

We have been tasked to start IPv6 planning. Can we discuss (for enterprises like us who all of the sudden want IPv6) which design paths to take?

I did my best to answer this question and describe the basics of creating an IPv6 addressing plan. For even more details, watch the IPv6 webinars (most of them at least a few years old, but nothing changed in the IPv6 world in the meantime apart from the SRv6 madness).
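As a purely illustrative example of the structure such a plan usually takes (using the IPv6 documentation prefix; real allocations and split points will differ), assume one /48 per site and one /64 per subnet:

    2001:db8::/32         organization-wide allocation (documentation prefix used here)
    2001:db8:0::/48       site 1
    2001:db8:1::/48       site 2
    2001:db8:0:1::/64     first subnet (VLAN) at site 1
    2001:db8:0:2::/64     second subnet at site 1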


Leaf-and-Spine Fabrics Between Theory and Reality

I’m always envious of how easy networking challenges seem when you’re solving them in PowerPoint, for example, when an innovation specialist explains how scalability works in leaf-and-spine fabrics in a LinkedIn comment:

One of the main benefits of a CLOS folded spine topology is the scale out spine where you can scale out the number of spine nodes increasing your leaf-spine n-way ECMP as well as minimizing the blast radius with the more spine nodes the more redundancy and resiliency.

Isn’t that wonderful? If you need more bandwidth, sprinkle the magic spine powder on your fabric, add water, and voila! Problem solved. Also, it looks like adding spine switches reduces the blast radius. Who would have known?


Alternatives to IBGP within Multihomed Sites

Two weeks ago I explained why you might want to run IBGP between CE-routers on a multihomed site. One of the blog readers didn’t like my ideas:

In such a small deployment I assume that both ISPs offer transit, so that both CEs would get a default route from their upstream.

In this case I would not iBGP the CEs together but have HSRP running on the two CEs and track the uplink (interface and/or BGP session) to determine the active gateway.
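In concrete terms, the commenter’s suggestion would look something like this hypothetical IOS-style sketch on one of the CE routers (interface names, addresses, and priority values are made up):

    track 1 interface GigabitEthernet0/0 line-protocol
    !
    interface GigabitEthernet0/1
     description LAN-facing interface
     ip address 192.168.1.2 255.255.255.0
     standby 1 ip 192.168.1.1
     standby 1 priority 110
     standby 1 preempt
     standby 1 track 1 decrement 20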

Let’s see what could possibly go wrong with that design.
