What's In A Number? Er, Lots Of Packets Per Second...

I remember first testing Gigabit Ethernet (Packet Engines and Foundry Networks, for the record) and thinking: “How do we harness this much bandwidth? Maybe take a TDM approach and slice it up? After all, who needs a gig of bandwidth direct to the desktop?”

Well, that was still a relevant question: only servers were really being fitted with Gigabit Ethernet NICs in ’98/’99, and even then they could rarely handle that level of traffic. Especially the OS/2-based ones… As a backbone technology, though, it was top notch. In a related metric, testing switches and appliances such as load balancers and seeing throughput in excess of 100,000 packets per second (pps) felt mad, crazy – the future!

So, Mellanox (newly in the hands of Nvidia – trust me, the latter does more than make graphics cards!) has just released the latest of its Ethernet Cloud Fabric products with lots of “Gigabit Ethernet” in it – i.e. 100/200/400 GbE switches. I’ll match that and raise you 400…

And forget six-figure pps throughput figures. How about 8.3bn pps instead? You can’t test that with 30 NT servers :-)
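
For a sense of scale, here’s a rough back-of-envelope sketch (my own arithmetic, assuming worst-case 64-byte frames plus the standard preamble and inter-frame gap, not vendor test data) of what packet rates like that imply:

    # Back-of-envelope packet-rate arithmetic (not vendor figures).
    # A minimum-size Ethernet frame occupies 84 bytes on the wire:
    # 64-byte frame + 8-byte preamble/SFD + 12-byte inter-frame gap.

    WIRE_BYTES_PER_MIN_FRAME = 64 + 8 + 12   # 84 bytes

    def line_rate_pps(link_gbps: float) -> float:
        """Theoretical max packets per second at minimum frame size."""
        bits_per_frame = WIRE_BYTES_PER_MIN_FRAME * 8
        return link_gbps * 1e9 / bits_per_frame

    for speed in (1, 10, 100, 400):
        print(f"{speed:>4} GbE ~ {line_rate_pps(speed) / 1e6:,.1f} Mpps")

    # How many line-rate 400 GbE ports does an 8.3bn pps fabric keep busy
    # with worst-case 64-byte frames?
    print(f"8.3 Bpps / 400 GbE ~ {8.3e9 / line_rate_pps(400):.0f} ports")

Put differently, a single 400 GbE port tops out around 595 million minimum-size packets per second, so 8.3bn pps is roughly fourteen such ports saturated at once – a workload those 30 NT boxes were never going to generate.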

If you’re wondering who needs that level of performance in the DC, think no further than the millions of customers being serviced by AWS, Azure and all the other CSPs – that’s a lot of traffic being handled by the cloudy types on your behalf. And then there’s research, oil and gas exploration, FinTech, the M&E guys and, not least – given the Nvidia ownership – extreme virtual reality. Bring on Terabit Ethernet…

Meantime, I think we’ll need a lot of 5G deployment at the other end of the data chain! But that’s another discussion for another day. And whatever did happen to WiMAX?
