History of Ethernet Encapsulations

Henk Smit conscientiously pointed out a major omission I made when summarizing Peter Paluch’s excellent description of how bits get parsed in network headers:

EtherType? What do you mean EtherType? There are/were 4 types of Ethernet encapsulation. Only one of them (ARPA encapsulation) has an EtherType. The other 3 encapsulations do not have an EtherType field.

What is he talking about? Time for another history lesson[1].

Ethernet started as a bit of a science project[2] at the center of the “just good enough” mentality sometimes known as Silicon Valley. To keep things simple, it used:

  • A preamble (to synchronize the sender’s and receiver’s clocks)
  • A start frame delimiter (to tell everyone to really start listening)
  • Source and destination MAC addresses
  • A 2-byte field identifying the higher-layer[3] protocol
  • Payload
  • Frame Check Sequence

The 2-byte field I mentioned above is called EtherType. ARPA encapsulation is Cisco-speak for the original Ethernet encapsulation, which could also be called DIX[4] Ethernet, or (probably most correctly) Ethernet II framing, because the latest version of the DIX Ethernet specification was Version 2, published in November 1982. After that, IEEE finally got their act together – it took them almost four years to “standardize” a shipping technology.

Anyway, you might have noticed there’s something missing in the above list of Ethernet frame parts – there’s no end-of-frame delimiter. How do we know we got a valid frame? Remember the “just good enough” approach? Here’s how it works:

  • Original shared-medium Ethernet (10BASE5 and 10BASE2) uses Manchester coding with a transition in the middle of each bit. If there’s no transition, the sender obviously stopped sending.
  • As the receiver is collecting bits into an incoming frame, it’s continuously calculating the Frame Check Sequence.
  • If the receiver sees the sender stop at a byte boundary, and the calculated FCS matches the value received in the FCS field, the receiver believes it got a valid frame.
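
In code, that heuristic looks roughly like this – a minimal sketch, assuming the hardware tells us whether the carrier dropped on a byte boundary (helper names are mine, and the on-the-wire FCS bit ordering is glossed over):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Bitwise IEEE CRC-32 -- the polynomial Ethernet uses for the FCS */
    static uint32_t crc32_ieee(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0);
        }
        return ~crc;
    }

    /* "Good enough" validation: the carrier dropped on a byte boundary,
       the frame is at least the minimum length, and the CRC-32 computed
       over everything before the FCS matches the received FCS. */
    bool frame_looks_valid(const uint8_t *frame, size_t len,
                           bool ended_on_byte_boundary)
    {
        if (!ended_on_byte_boundary || len < 64)
            return false;
        uint32_t received =  (uint32_t)frame[len - 4]
                          | ((uint32_t)frame[len - 3] << 8)
                          | ((uint32_t)frame[len - 2] << 16)
                          | ((uint32_t)frame[len - 1] << 24);
        return crc32_ieee(frame, len - 4) == received;
    }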

That was obviously not good enough for IEEE purists[5], but it’s hard to argue with shipping products when a competing standards body starts to standardize what you’ve been pondering for years, so they reached a “compromise”[6]:

  • Ethernet will retain its good-enough physical layer.
  • IEEE version of the Ethernet will have a belt-and-braces length field after the MAC addresses to verify that the frame truly has the right length.
  • EtherTypes will use values above 1500, so it will be evident whether we’re dealing with IEEE encapsulation or Ethernet II encapsulation[7].

To be fair, there’s a better reason for the length field. Ethernet frames have a minimum length of 64 bytes[8]. Not a big deal – add padding to your protocol. Well, IBM didn’t like that argument; they wanted to keep sending SDLC/HDLC frames over LAN networks[9], and those frames carry no length information of their own, so the receiver couldn’t tell the padding from the payload.

Anyway, at that point we still had a simple decision to make (see the sketch after this list):

  • Is the 16-bit value after the MAC addresses 1536[10] or higher? Must be an EtherType.
  • Otherwise, it’s an IEEE 802.2 packet.
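
A minimal sketch of that decision (the helper name is mine):

    #include <stdbool.h>
    #include <stdint.h>

    /* The 16-bit field following the MAC addresses: 1536 (0x600) and
       above is an EtherType, 1500 and below is a payload length;
       1501-1535 should never appear on the wire. */
    bool is_ethertype(uint16_t type_or_length)
    {
        return type_or_length >= 0x600;
    }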

OK, but what’s riding on top of an IEEE 802.2 packet? IEEE’s first idea was to take the OSI stack and put it on top of LAN networks – every packet would have a Source Service Access Point (SSAP) and a Destination Service Access Point (DSAP). Why would you need two? Would the IP stack ever send a packet to the OSI stack? Of course not, but that’s what you get with an academic layered approach.

There was just a bit of a hurdle: tons of companies were interested in Ethernet connectivity in those days, and all of them had proprietary protocols (just look at the reservation ranges for IANA IEEE 802 numbers), but there were only 128 available SAP numbers[11]. In a wonderful application of RFC 1925 Rule 6a, IEEE reserved SAP value 0xAA to mean SNAP Extension, which really means we just wasted 6 bytes to tell you to look at the EtherType that follows.[12]
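
For reference, here’s roughly what sits between the 802.3 length field and the payload – a sketch with struct and field names of my own making:

    #include <stdint.h>

    /* IEEE 802.2 LLC header, immediately after the 802.3 length field */
    struct llc_header {
        uint8_t dsap;      /* Destination Service Access Point; 0xAA = SNAP */
        uint8_t ssap;      /* Source Service Access Point; 0xAA = SNAP */
        uint8_t control;   /* 0x03 = unnumbered information (UI) frame */
    };

    /* SNAP extension, present when DSAP and SSAP are both 0xAA */
    struct snap_header {
        uint8_t oui[3];        /* 00-00-00 when an EtherType follows */
        uint8_t ethertype[2];  /* the EtherType we tried to get rid of */
    };

Three bytes of LLC plus five bytes of SNAP: eight bytes where Ethernet II needed two.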

So far we have three different Ethernet encapsulations. Henk mentioned four. Welcome to the Novell IPX Raw Encapsulation SNAFU. They believed in the magic powers of IEEE (or needed a length field) but couldn’t be bothered waiting for IEEE to agree on the 802.2 LLC frame format – the IPX payload directly follows the length field without any indication of what higher-layer protocol is riding in the Ethernet packet[13].

Fortunately (for everyone else who had to parse their stupidity), the IPX packet header format included a checksum as the first two bytes of the packet, and Novell never implemented a proper checksum for IPX, so every IPX packet always started with 0xFFFF[14], which would be understood as a global broadcast by anyone complying with 802.2 standards, but who’s counting.
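
Putting it all together, a receiver can tell all four encapsulations apart by peeking at a handful of bytes after the source MAC address. Here’s a sketch of the classic dispatch logic (type and function names are mine):

    #include <stdint.h>

    enum ether_encap { ETHERNET_II, NOVELL_RAW, IEEE_8022_SNAP, IEEE_8022_LLC };

    /* "frame" points at the 16-bit type/length field that follows the
       source MAC address */
    enum ether_encap classify(const uint8_t *frame)
    {
        uint16_t type_or_length = ((uint16_t)frame[0] << 8) | frame[1];

        if (type_or_length >= 0x600)                /* EtherType */
            return ETHERNET_II;
        if (frame[2] == 0xFF && frame[3] == 0xFF)   /* IPX "checksum" */
            return NOVELL_RAW;
        if (frame[2] == 0xAA && frame[3] == 0xAA)   /* DSAP/SSAP = SNAP */
            return IEEE_8022_SNAP;
        return IEEE_8022_LLC;                       /* plain 802.2 */
    }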

Does all of this matter? Not really unless you’re an Ethernet history addict, but it resulted in a nice consulting project for my company in the 1990s[15].

Local experts in IBM connectivity couldn’t make an IBM AIX server talk to a Cisco router. They were also too thrifty to invest in a protocol analyzer (that thing could cost around $10K in those days) and wanted to borrow ours. In the end we agreed we’d charge them a day of consulting services, come over with the Sniffer and figure out what’s going on. It took me a few moments to realize AIX used SNAP encapsulation, and a few minutes to figure out how to configure that on Cisco routers[16]. Problem solved.

Just in case you’ll ever have to work with a 30-year-old IBM Unix server: the command to use is arp snap. Cisco IOS constructs layer-2 headers for individual IP next hops from ARP entries, and the encapsulation used toward a next hop depends on the encapsulation that next hop used in its ARP request/reply. You cannot specify what encapsulation you want to use for IP, but you can specify which encapsulation(s) ARP requests will use. For some other gory details, see RFC 1042.
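
Something along these lines on the LAN interface facing the server should do the trick (the interface name is hypothetical, and the exact syntax varied across IOS releases):

    interface Ethernet0
     arp snap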


  1. Before someone asks: I started working with Ethernet in the late 1980s, so I was a bit late to the party, but nevertheless had the “privilege” of working with thick yellow cable and vampire taps, and lived through the nightmares of thin coax cables being charred by under-the-desk heaters.

  2. According to Wikipedia, Robert Metcalfe named it after the luminiferous aether once postulated to exist as an “omnipresent, completely-passive medium for the propagation of electromagnetic waves.”

  3. A 7-layer purist would say oh, you mean a layer-3 protocol. Unfortunately, we’ve seen applications riding directly on top of Ethernet, many of them coming from Digital (DEC), the infamous inventor of the transparent bridge – the kludge needed to support them.

  4. DIX = Digital, Intel, Xerox – the initials of the three major companies pushing early Ethernet in alphabetical order. Even though Ethernet was invented within Xerox PARC, they still got the last place in the acronym.

  5. A hint: Token Ring frames have an ending delimiter, as do FDDI frames.

  6. See also: a diplomatic explanation of what made ATM cells 53 bytes long.

  7. Don’t even think about asking what happens when jumbo frames use IEEE encapsulation.

  8. We need a minimum frame length to make sure the sender detects a collision (and starts sending gobbledygook that will bork the FCS value) before it finishes sending the frame. The minimum frame length is thus a function of transmission speed and collision domain size, which depended on cable lengths and the number of repeaters in an Ethernet segment.

  9. What could be better than taking a protocol that was designed for 1200-baud noisy modem connections and putting it onto a 4 Mbps pretty reliable Token Ring LAN? Chalk it up to the doing more with less mentality, aka it costs too much to redesign our broken stuff.

  10. Where did they get such a weird number? 1536 = 0x600, a value reserved for an old Xerox protocol.

  11. That’s what you get when you’re approaching a megabit transport technology with a 1200-baud design mindset and trying to squeeze everything into one byte where you could easily have two. Oh, and of course every header field must have at least one reserved bit ;)

  12. In case you haven’t noticed: they wasted six bytes (plus the length field) because they tried to save two. Good job.

  13. Who would want to run anything else but Novell Netware on a LAN anyway?

  14. Which is how everyone else identified the idiots on the wire.

  15. I also had to run through this explanation way too many times while delivering the Router Software Configuration (and later ICND) course.

  16. Cisco software had no ?-triggered help until Terry Slattery got a contract to rewrite its CLI parsers, so I had to go through the documentation.

4 comments:

  1. Fantastic post, thanks for the info. I was curious about the end-of-frame delimitation in the original Ethernet, but never found how (rudimentarily) it was implemented.

    Faster Ethernet implementations do not employ Manchester, so the trick of "no symbol transition and CRC passes" is no longer valid for the (most widely used) DIX encapsulation. As far as I understand, they always rely on a special control code of the 4B5B (or 8B10B, 64B66B, ...) line code.

  2. I remember classic Mac OS had a setting for Ethernet II vs. 802.3 mode. Sometimes people would choose 802.3 thinking it must be "more standard" but it would fail to interoperate with their LAN.

  3. Yep, the Novell IPX Raw encapsulation was the fourth one. I do not expect many people to have heard of that one. But of course I am not surprised you remember it.

    Glad to see that my dumb remarks actually result in smart posts. :)

  4. I'd love to see the original 3 Mb ethernet included in your ethernet history, for reference -- https://bitsavers.computerhistory.org/pdf/xerox/ethernet_3mb/Practical_Considerations_in_Ethernet_Local_Network_Design_Feb1980.pdf -- the packet format was

    [sync bit][8 bit dest][8b src][16b ethertype][0-277 16b words data][16b CRC]
