It’s been here for a while, but it just hit me today: I’ve got over a terabit of networking bandwidth in 1U on my hands. 48 10/1 Gbit ports (SFP+) and 4 40 Gbit ports (QSFP), all full duplex and layer 2/3 non-blocking.
That’s an astonishing 1.28 Tbit/s of switching capacity, thanks to the Fulcrum Micro chip nestled inside. At 650 ns port-to-port latency, it’s also really fast. Not InfiniBand fast, but good for the Ethernet world.
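The arithmetic behind that number is straightforward. Here’s a quick back-of-the-envelope sketch (my own illustration, just the port counts quoted above):

```python
# Back-of-the-envelope check of the switching capacity.
SFP_PLUS_PORTS = 48   # 10 Gbit/s each
QSFP_PORTS = 4        # 40 Gbit/s each

one_way_gbps = SFP_PLUS_PORTS * 10 + QSFP_PORTS * 40  # 640 Gbit/s
full_duplex_gbps = one_way_gbps * 2                   # count both directions

print(f"{full_duplex_gbps / 1000:.2f} Tbit/s")        # 1.28 Tbit/s
```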
All of the 1U 10G rack switches these days seem to use cut-through switching to keep latency ultra-low, and that recently caused us a problem. With cut-through, errors arriving on an input port that aren’t caught immediately can be propagated to output ports, and that’s exactly what happened to us: roughly 0.1% errors on one particular port turned into output errors on other ports in the same VLAN and brought our network to a crawl. Cut-through also means the vendors don’t have to spend a lot of money on big buffers, which keeps costs down for you.
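To make the trade-off concrete, here’s a rough sketch (my own illustration, not anything from the switch docs) of the serialization delay alone. A store-and-forward switch has to receive the whole frame, including the FCS at the very end, before forwarding it; a cut-through switch starts forwarding once it has read the header, so a corrupted frame is already leaving the output port before the bad FCS can be detected. That’s exactly why our input errors became everyone else’s output errors.

```python
# Why cut-through wins on latency -- and why it can't drop bad frames.
# This only models serialization delay at line rate; real port-to-port
# latency (the 650 ns figure) also includes time through the switch fabric.
LINE_RATE_BPS = 10e9   # 10 Gbit/s port
FRAME_BYTES = 1500     # full-size Ethernet frame; FCS arrives last
HEADER_BYTES = 14      # dst MAC + src MAC + EtherType, enough to forward on

store_and_forward = FRAME_BYTES * 8 / LINE_RATE_BPS   # wait for the whole frame
cut_through = HEADER_BYTES * 8 / LINE_RATE_BPS        # forward after the header

print(f"store-and-forward wait: {store_and_forward * 1e9:.0f} ns")  # ~1200 ns
print(f"cut-through wait:       {cut_through * 1e9:.0f} ns")        # ~11 ns
```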
Among the big vendors in this niche are Arista (first to market with 48 ports at 10G, front-to-back or back-to-front cooling, and Layer 3; the market leader), Blade, recently purchased by IBM, and Force10. There are plenty more in the chassis space, but these three are the low-cost, high bang-for-density do-ers. We happen to use Force10 since it’s compatible with our other gear and we like features like FRRP. Even though it’s proprietary, it beats the veritable socks off of spanning tree.
The recent addition of front-to-back or back-to-front cooling (just swap the power supplies and fans) makes these a great top-of-rack switch for datacenters.
Oh, and it also supports BGP, a feature you don’t often see in this type of product.