Big Data = Big Transfer Speeds?

So - there's been all this talk about Big Data and how it has replaced classic transactional processing in many applications.

This much is true - what hasn't been discussed much, however, is the impact on performance. Big data - say, digital video - has hugely different transfer characteristics to transactional processing, and it simply doesn't follow that supplying "big bandwidth" means "big performance" for "big data" transfers.

For example - I'm currently consulting on a project in the world of Hollywood media and entertainment, where the name of the game is transferring digital video files as quickly (and accurately - everything must arrive in sequence) as possible. The problem is that simply providing a 10Gig pipe doesn't mean you can actually fill it!

We've proved in past tests that latency, packet loss and jitter all have a very significant impact on bandwidth utilisation as the size of the network connection increases.


For example, when we set up tests with a 10Gbps WAN link and round trip latencies varying from 50ms to 250ms to simulate national and international (e.g. LA to Bangalore for the latter) connections, we struggled to use even 20% of the available bandwidth in some cases with a "vanilla" - i.e. non-optimised - setup.
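
One way to see why latency alone hammers utilisation is the bandwidth-delay product: a single TCP flow can only have one window's worth of data in flight per round trip, so its throughput is capped at roughly window size divided by RTT, however fat the pipe. The sketch below is illustrative arithmetic only - the window sizes and RTTs are assumptions, not figures from our tests:

```python
# Back-of-envelope illustration: a single TCP flow cannot move more than one
# window of data per round trip, so throughput <= window / RTT regardless of
# link capacity.  All figures here are illustrative assumptions.

LINK_GBPS = 10.0

def max_throughput_gbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound for one TCP flow: window size divided by round-trip time."""
    rtt_s = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_s / 1e9

for window in (64 * 1024, 16 * 1024 * 1024):   # untuned ~64KB vs. a tuned 16MB window
    for rtt_ms in (50, 150, 250):               # national up to LA-Bangalore style RTTs
        gbps = max_throughput_gbps(window, rtt_ms)
        util = min(gbps / LINK_GBPS, 1.0) * 100
        print(f"window={window // 1024:>6}KB  rtt={rtt_ms:>3}ms  "
              f"-> {gbps:7.3f} Gbps  (~{util:4.1f}% of a 10Gbps link)")
```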


Current "behind closed doors" testing is showing performance of between 800KBps- 1GBps (that's gigabYTEs) on a 10Gbps connection but we're looking to  improve upon that.


We're also asking the question: can you even fill a pipe when the operational environment is ideal - i.e. low latency and minimal jitter and packet loss for TCP traffic? The answer, emphatically, is not necessarily - not without some form of optimisation, that is.
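
Part of the answer is that even tiny amounts of packet loss throttle a standard TCP flow. The well-known Mathis et al. approximation puts steady-state, Reno-style throughput at roughly MSS / (RTT x sqrt(loss)); the sketch below plugs in assumed numbers (a standard 1460-byte MSS and a modest 10ms RTT) purely to show the shape of the problem:

```python
import math

# Mathis et al. approximation: steady-state throughput of a Reno-style TCP flow
# is roughly MSS / (RTT * sqrt(loss_rate)).  The MSS, RTT and loss rates below
# are assumptions for illustration, not measured values.

def mathis_throughput_gbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate)) / 1e9

# Even an apparently "clean" link with a 10ms RTT caps a single standard-MSS
# flow well short of 10Gbps once there is any measurable loss at all.
for loss in (1e-5, 1e-4, 1e-3):
    gbps = mathis_throughput_gbps(1460, 10, loss)
    print(f"loss={loss:.3%}: ~{gbps:.2f} Gbps for one flow")
```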


Obviously, some tweaking of server hardware will provide "some" improvement, but nothing significant in the testing we've done in the past. Adam Hill, CTO of our client Voipex, offered some advice here:


"The bottom line is that, in this scenario, we are almost certainly facing several issues. The ones which ViBE (Voipex's technology) would solve ( and probably the most likely of their problems ) are:


1) It decouples the TCP throughput from the latency and jitter component by controlling the TCP congestion algorithm itself rather than allowing the end user devices to do that.


2) It decouples the MTU from that of the underlying network, so that MTU sizes can be set very large on the end user devices regardless of whether the underlying network supports such large MTUs."
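
ViBE's mechanism is Voipex's own technology, so I can't show it here, but as a loosely related illustration of point 1: on Linux the congestion algorithm is a per-socket choice rather than a fixed property of the network, and an application can ask for one better suited to long fat networks (which algorithms are available depends on the kernel modules loaded). A minimal sketch, assuming a Linux host:

```python
import socket

# Sketch only - this is not how ViBE works.  It simply shows that the TCP
# congestion algorithm is a tunable, per-socket setting on Linux.  Whether
# "htcp" (or "cubic", or "bbr" on newer kernels) is available depends on the
# kernel modules loaded on the host.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

try:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"htcp")
except OSError:
    pass  # fall back to the kernel default (commonly "cubic") if htcp is absent

current = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print("congestion control in use:", current.rstrip(b"\x00").decode())
```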


Other things to consider are frame, window and buffer sizes, relating to whichever specific server OS is being used (this is a fundamental of TCP optimisation), but thereafter we really are treading on new ground. Which is fun; after all, the generation of WanOp products that has dominated for the past decade was not designed with 10Gbps+ links in mind.
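
To make the window/buffer point concrete: socket buffers have to cover the bandwidth-delay product before a long fat pipe can be kept full, and the OS will quietly clamp whatever an application asks for to its own configured maximums (on Linux, the net.core.rmem_max/wmem_max and net.ipv4.tcp_rmem/tcp_wmem settings). A rough sketch with an assumed link speed and RTT:

```python
import socket

LINK_BPS = 10_000_000_000   # assumed 10Gbps link
RTT_S = 0.150               # assumed 150ms round trip

# The bandwidth-delay product is how much data must be in flight to fill the pipe.
bdp_bytes = int(LINK_BPS / 8 * RTT_S)
print(f"bandwidth-delay product: {bdp_bytes / 1e6:.0f} MB")   # ~188MB here

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request buffers large enough to cover the BDP; the kernel clamps the request
# to its configured maximums, so what is actually granted may be far smaller.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
print("granted send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("granted recv buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```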


Anyway - this is purely a "set the ball rolling" entry and I welcome all responses, suggestions etc., as we look to the future and to filling 40Gbps and 100Gbps pipes - yes, they will arrive in the mainstream at some point in the not massively distant future!

