New Data Center switches from Force10

Force10 has just launched a new series of data center switches. The ZettaScale switches are, as one would expect from Force10, down-to-earth, high-performance, low-footprint products – a good option for those network engineers who like building high-density, high-performance data centers with minimal feature overload.

All the information in this post is based on the briefing I received from Force10 last week, the draft materials they sent me, and the subsequent answers to my questions. I haven’t been able to touch the boxes or read the product documentation yet.

The fixed-configuration Z9000 switch packs a 2.5 Tbps non-blocking architecture with 128 10GE/32 40GE ports in a 2RU footprint. As you might know, 40GE uses four multiplexed 10GE channels, and Force10 decided to make good use of that fact: the physical switch has 40GE ports; with a breakout cable you get four independent 10GE ports out of each 40GE port.
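
A quick back-of-the-envelope check of the port and bandwidth math (my own arithmetic, assuming the usual vendor practice of counting full-duplex capacity, not anything from the Force10 datasheet):

```python
# Z9000 breakout and capacity math (my arithmetic, not Force10's datasheet)
QSFP_PORTS = 32        # physical 40GE ports on the Z9000
LANES_PER_PORT = 4     # each 40GE port carries four multiplexed 10GE channels

ten_ge_ports = QSFP_PORTS * LANES_PER_PORT     # 128 10GE ports via breakout cables
capacity_tbps = ten_ge_ports * 10 * 2 / 1000   # full duplex, counted in both directions
print(ten_ge_ports, capacity_tbps)             # 128 ports, ~2.56 Tbps
```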

The modular Z9512 switch (available later this year) has 480 10GE/96 40GE/48 100GE ports and 9.6 Tbps of switching capacity in a 19RU footprint.

On the L2 side, the chipsets used in both switches are capable of handling the TRILL and DCB standards. TRILL functionality will be available in a follow-up software upgrade (no dates yet). The switches also support EVB (Edge Virtual Bridging – 802.1Qbg); unfortunately, I haven’t seen any host/hypervisor-side EVB implementations yet.

FCoE hasn’t been mentioned anywhere, but if you really want to use it, you can probably get FCoE running across the Z-series switches once their software supports full-blown DCB (the SAN people might be nervous wrecks because the switches between the FCoE clients and the FCF won’t support FIP snooping; the LAN people will probably say “a separate VLAN is good enough”).

Both products also have full-blown L3 functionality. As with most other Force10 products, the preferred routing protocol is OSPF.

As you can see, Force10 hasn’t been infected with the fabric craze yet, but with the port/bandwidth density they offer, most data centers won’t need more than a few switches in the near future anyway. Imagine buying four Z9000 switches and using half of the ports for inter-switch connectivity. You get a total of 256 10GE ports connected to an almost non-blocking switching matrix. Alternatively, you could decide to go for 1:3 oversubscription and get 384 10GE ports.
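
Here’s the same math spelled out (a minimal sketch of my own, not a Force10 design guide):

```python
# Four-switch "poor man's fabric" port math (my sketch)
switches = 4
ports_per_switch = 128    # 10GE ports per Z9000 (32 x 40GE, broken out)
total_ports = switches * ports_per_switch      # 512 10GE ports

# Scenario 1: half the ports used for inter-switch links, almost non-blocking
server_ports_nonblocking = total_ports // 2    # 256 server-facing ports

# Scenario 2: 1:3 oversubscription, three server ports per inter-switch port
server_ports_oversub = total_ports * 3 // 4    # 384 server-facing ports
print(server_ports_nonblocking, server_ports_oversub)
```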

Now imagine connecting Cisco’s rack servers to these four switches. With four 10GE ports per C260 M2 rack server (which is probably overkill) and 1:3 oversubscription between the switches, you can connect up to 96 rack servers to the four switches, giving you a maximum of 96 TB of RAM, 960 Xeon cores and almost a petabyte of disk capacity. Probably more than enough for a small data center.
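
The per-server figures behind that claim (my estimates of a maxed-out C260 M2, not anything Force10 or Cisco told me):

```python
# Rack-server capacity math (assumes 1 TB RAM, ten Xeon cores and
# roughly 10 TB of local disk per C260 M2; my estimates)
server_ports = 384                 # 10GE ports available at 1:3 oversubscription
servers = server_ports // 4        # four 10GE ports per server, so 96 servers

ram_tb = servers * 1               # 96 TB of RAM
xeon_cores = servers * 10          # 960 Xeon cores
disk_tb = servers * 10             # 960 TB, almost a petabyte of disk
print(servers, ram_tb, xeon_cores, disk_tb)
```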

The only real gripe I have (so far) is with the Force10 marketing department: read the definition of zetta and try to figure out how many zeroes there are between the (almost) 10 terabit switching capacity of a Z9512 and a zettabit.
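
In case you don’t want to count the zeroes yourself, here’s the arithmetic (zetta = 10^21, tera = 10^12):

```python
import math

# Distance between the Z9512 and a true "zetta" scale switch
zettabit = 1e21        # zetta = 10^21
z9512_bps = 9.6e12     # 9.6 Tbps

print(math.log10(zettabit / z9512_bps))   # ~8 orders of magnitude short
```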

5 comments:

  1. So hopefully they've improved the code as well.. it's been about a year since I worked on F10 boxes.. they're great little L2 devices (E1200s and C300s were the ones I've mostly worked with) and some S50s with the old SFTOS = eww..

    Either way, they did ok, but getting features implemented and bugs fixed was a bit of a hassle..

    For those looking at price/performance, though, you probably won't go wrong.
  2. A lot has changed at F10 from a year ago. Hardware and software have both been updated quite a bit, and it's very cool to see them push and support open standards.
  3. Do they support MPLS and MPLS TE on these boxes?
  4. Good question. I would guess the answer is NO.
  5. The ExaScale platform supports MPLS and MPLS TE.

    Z9512 (NGP) will support MPLS and MPLS TE in its first release

    No current plans to support MPLS and MPLS TE on Z9000, S7000 and S4810.