Within the general misty definition of "Cloud", sometimes something pokes through the veil of ether-precipitation that says "I'm new and I make sense".
And typically, it's not a variation on that other "Somehow Defines Nothing" hype-TLA that is SDN, but more akin to Monty Python's "And now for something completely different". In this case it comes from a UK start-up, Fedr8. OK, so the name sounds more like a courier company, but stick with me...
Rather than focusing on cloud storage or performance, Fedr8 is focusing on making sure your existing applications will actually work in that environment in the first place - kind of like avoiding the scenario where you buy a large American car before you measure the size of your garage. The product itself, Argentum, provides compatibility analysis and optimisation for in-house applications prior to cloud delivery. It gives organisations a suite of tools to assess, analyse and optimise existing applications, enabling them to design successful cloud projects and migrate applications without the pain, effort and time of attempting to do it manually. Or simply guessing...
To date Argentum has been piloted on open source applications developed by companies including Netflix, Twitter and IBM - so no big names there then! How, then, does it work? In layman's terms, it analyses the source code of any application, in any programming language, and then provides actionable intelligence to help a company move those existing apps into the cloud - hence "federating" the services. So, what's in a name? Lots, it seems :-) At a slightly more technical level, code is uploaded to the Argentum platform where it undergoes a complex analysis and is split into objectified tokens. These tokens populate a meta database against which queries are run. From this out pops a visualisation of the application, plus actionable business intelligence to enable successful cloud adoption.
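Argentum's internals aren't public, of course, so purely to illustrate the token-and-query idea, here's a toy analogue using Python's own tokeniser - split source into tokens, store them, then run queries against the store. Not Fedr8's implementation, just a sketch of the general shape:

```python
# Toy analogue of tokenising source code into a queryable store.
import io
import tokenize
from collections import Counter

def tokenise(source: str) -> list[tuple[str, str]]:
    """Split Python source into (token_type, token_text) pairs."""
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        tokens.append((tokenize.tok_name[tok.type], tok.string))
    return tokens

def query(tokens: list[tuple[str, str]], token_type: str) -> Counter:
    """Example query against the token store: count tokens of one type."""
    return Counter(text for ttype, text in tokens if ttype == token_type)

sample = "import os\nprint(os.getcwd())\n"
toks = tokenise(sample)
print(query(toks, "NAME"))  # which identifiers appear, and how often
```

A real product would obviously query far richer properties (external dependencies, OS calls, hard-coded paths), but the pipeline - tokenise, store, query, report - is the same shape.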
Sounds great in theory, and looks a must for Broadband-Testing to put through its paces; not least because there is a Groundhog Day moment here. Yes, the product is innovative, BUT there is an eerie resemblance to that of a former client, AppDNA, whose product analysed applications for migration between Microsoft OSs and browser versions. So, same concept, different application (in every sense) and, indeed, why not? Especially since AppDNA ultimately got acquired by Citrix for more than a few quid. Now that's a precedent I suspect the Fedr8 board will be quite sweet on...
- Manage entire Wi-Fi network from a single dashboard
- Control multi-tenant Wi-Fi networks, applications and devices
- Application-based virtual networks
- Cost effective 3rd party hardware
- OpenFlow enabled API
Interesting how some elements of IT seem to be around forever without being cracked.
I remember working with a couple of UK start-ups in the 90s on network, server and application capacity planning, automation of resource allocation and the like - and the problem was that the rate of change always exceeded our ability to keep up. Moving into the virtualised world just seemed to make the trick even harder.
Now, I'm not sure if IT has slowed down (surely not!) or whether the developers are simply getting smarter, but there do seem to be solutions around now that can do the job. The latest example is from CiRBA, where the idea is to let a company see the true amount of server and storage resource required versus the amount currently allocated - by application, department or operating division - in virtualised and cloud infrastructures, not simply in static environments. The result? Better allocation and infrastructure decisions, reducing risk and eliminating over-provisioning - at least if they use it correctly!
If it resolves the everlasting issue of over-provisioning and the $$$$ that go with it, then praise be to the god of virtualisation... who's called what, exactly? The idea with CiRBA's new, and snappily titled, Automated Capacity Control software is to actively balance capacity supply with application demand by providing complete visibility into server, storage and network capacity, based on both existing and future workload requirements. The software is designed to determine true capacity requirements from technical, operational and business policies as well as historical workload analysis - all of which is needed to get the correct answer pumped out at the end.
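The "allocated versus actually required" arithmetic is worth making concrete. This is a hedged sketch, not CiRBA's actual algorithm: take the historical peak demand, apply a policy headroom margin (the 25% figure here is an assumption of mine), and anything allocated beyond that is reclaimable:

```python
# Sketch: flag over-provisioned capacity as allocated minus (peak + headroom).
HEADROOM = 1.25  # assumed 25% safety-margin policy, purely illustrative

def required_capacity(peak_demand_ghz: float) -> float:
    """True requirement = historical peak workload plus policy headroom."""
    return peak_demand_ghz * HEADROOM

def overprovisioned(allocated_ghz: float, peak_demand_ghz: float) -> float:
    """Capacity that could be reclaimed without breaching the headroom policy."""
    return max(0.0, allocated_ghz - required_capacity(peak_demand_ghz))

# e.g. a VM allocated 8 GHz whose workload peaks at 4 GHz:
print(overprovisioned(8.0, 4.0))  # 3.0 GHz reclaimable
```

Multiply that by a few thousand VMs and you can see where the $$$$ go.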
So, bit by bit, it looks like we're cracking the 2014 virtualised network management problem. Look out for an article by me on deploying and managing distributed virtualised networks in CW in the near future...
Just finished some testing at test equipment partner Spirent's offices in glamorous Crawley with client Voipex - some fab results on VoIP optimisation, so watch this space for the forthcoming report - and it made me think just how different testing is now.
In the old days of Ethernet switch testing and the like, it was all very straightforward. Now, however, we're in the realms of multiple layers of software all delivering optimisation of one form or another, such as the aforementioned Voipex, but equally with less obviously benchmarked elements such as business flow processes. Yet we really do need to measure the impact of software in these areas in order to validate the vendor claims.
One example is with TIBCO - essentially automating data processing and business flows across all parts of the networks (so we're talking out to mobile devices etc) in real-time. Data integration has always been a fundamental problem - and requirement - for companies, both in terms of feeding data to applications and to physical devices, but now that issue is clearly both more fundamental, and more difficult, than ever in our virtual world of big data = unorganised chaos in its basic form.
TIBCO has just launched the latest version of its snappily named ActiveMatrix BusinessWorks product, and the company claims that it massively increases the speed with which new solutions can be configured and deployed: a single platform to transform masses of raw data into efficiently delivered data, plus lots of other good stuff. In an Etherworld that is now made up of thousands of virtual elements, and that is constantly changing in topology, this is important stuff.
As TIBCO put it themselves, "Organisations are no longer just integrating internal ERP, SaaS, custom or legacy applications; today they're exposing the data that fuels their mobile applications, web channels and open APIs." Without serious management and optimisation that's a disaster waiting to happen.
Just one more performance test scenario for me to get my head around then....
We've proved with tests in the past that latency, packet loss and jitter all have very significant impacts on bandwidth utilisation as the size of the network connection increases.
For example, when we set up tests with a 10Gbps WAN link and round trip latencies varying from 50ms to 250ms to simulate national and international (e.g. LA to Bangalore for the latter) connections, we struggled to use even 20% of the available bandwidth in some cases with a "vanilla" - i.e. non-optimised - setup.
Current "behind closed doors" testing is showing performance of between 800MBps and 1GBps (that's gigabytes) on a 10Gbps connection, but we're looking to improve upon that.
We're also asking the question - can you even fill a pipe when the operational environment is ideal - i.e. low latency and minimal jitter and packet loss for TCP traffic? - and the answer is absolutely not necessarily; not without some form of optimisation, that is.
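The reason a vanilla setup can't fill the pipe comes straight out of TCP's arithmetic: a single flow's throughput is capped at window size divided by round-trip time, regardless of link speed. A quick illustration with representative (not measured) numbers:

```python
# Single-flow TCP throughput cap: window_bytes / RTT, independent of link speed.
def tcp_throughput_cap_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Maximum single-flow TCP throughput in Mbit/s for a given window and RTT."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

# A default-ish 64 KB window at 250 ms RTT (the LA-to-Bangalore case):
print(tcp_throughput_cap_mbps(65536, 250))      # ~2.1 Mbit/s on a 10Gbps link!
# Even a generous 4 MB window at the same RTT:
print(tcp_throughput_cap_mbps(4 * 2**20, 250))  # ~134 Mbit/s - still ~1.3% of the pipe
```

Packet loss makes it worse still, since standard congestion control halves the effective window on each loss event - hence the appeal of taking the congestion algorithm away from the endpoints altogether.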
Obviously, some tweaking of server hardware will provide "some" improvement, but not significant in testing we've done in the past. Adam Hill, CTO of our client Voipex, offered some advice here:
"The bottom line is that, in this scenario, we are almost certainly facing several issues. The ones which ViBE (Voipex's technology) would solve (and probably the most likely of their problems) are:
1) It decouples the TCP throughput from the latency and jitter component by controlling the TCP congestion algorithm itself rather than allowing the end user devices to do that.
2) It decouples the MTU from that of the underlying network, so that MTU sizes can be set very large on the end user devices regardless of whether the underlying network supports such large MTUs."
Other things to consider are frame, window and buffer sizes, relating to whichever specific server OS is being used (this is a fundamental of TCP optimisation), but thereafter we really are treading on new ground. Which is fun. After all, the generation of WAN optimisation products that have dominated for the past decade were not designed with 10Gbps+ links in mind.
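For the buffer-size point above, here's what that tuning looks like in practice - a minimal sketch requesting larger socket buffers so TCP can keep more data in flight on a long fat pipe. Note the OS may silently clamp the request to its own limits (e.g. net.core.wmem_max on Linux), which is exactly why OS-level tuning matters too:

```python
# Request bigger TCP send/receive buffers on a socket; the OS has the last word.
import socket

def make_tuned_socket(buf_bytes: int = 4 * 2**20) -> socket.socket:
    """TCP socket with 4 MB send/receive buffers requested (not guaranteed)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buf_bytes)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, buf_bytes)
    return s

s = make_tuned_socket()
# What the OS actually granted (Linux, for one, doubles and/or clamps the value):
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
s.close()
```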
Anyway - this is a purely "to set the ball rolling" entry and I welcome all responses, suggestions etc, as we look to the future and filling 40Gbps and 100Gbps pipes - yes they will arrive in the mainstream at some point in the not massively distant future!
"In its day-to-day business, HP revealed it had had another predictably awful quarter at Personal Systems, with revenue down 14% as the unit fought for its piece of the ever-shrinking PC market. Printing sales were down 5%, Services declined 6% and ESSN declined 9%, with growth in Networking offset by shrinkage in Industry Standard Servers and Storage, while Business Critical Servers dropped 25%."
Some of you may have seen earlier blogs, and even the Broadband-Testing report, on our recently acquired US client Talari Networks, whose technology basically lets you combine multiple broadband Internet connections (and operators) to give you the five-nines levels of reliability (and performance) associated with those damned expensive MPLS-based networks, for a lot less dosh.
You can actually connect up to eight different operators, though according to Talari, this was not enough for one potential customer who said "but what if all eight networks go down at the same time?" Would dread having to provide the budget for that bloke's dinner parties - "yes I know we've only got four guests, but I thought we should do 24 of each course, just in case there's a failure or two..."
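A back-of-envelope sum shows why eight links is already paranoia territory. If each broadband link is independently up, say, 99% of the time (my illustrative figure, not Talari's), the chance of all n links being down at once shrinks geometrically:

```python
# Availability of n independent parallel links: 1 - P(all down simultaneously).
def combined_availability(per_link: float, n_links: int) -> float:
    """Fraction of time at least one of n independent links is up."""
    return 1 - (1 - per_link) ** n_links

print(combined_availability(0.99, 2))  # two 99% links already give ~four nines
print(combined_availability(0.99, 8))  # eight links: effectively never all down
```

Independence is the catch, of course - links sharing a duct or an exchange can fail together, which is why you combine different operators, not just different circuits.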
Anyway - one potential issue (other than paranoia) for some was the entry cost; not crazy money, but not pennies either. So it makes sense for Talari to move "up" in the world, so that the relative entry cost is less significant, and that's exactly what they've done with the launch of the higher-end Talari Mercury T5000 - a product designed for applications such as call centres that have the utmost requirements for reliability and performance, and where that entry cost becomes hugely insignificant once it saves a few outages; or even just the one.
If you still haven't got wot they do, in Talari-ese it provides "end-to-end QoS across multiple, simultaneous, disparate WAN networks, combining them into a seamless constantly monitored secure virtual WAN". Or, put another way, it gives you more resilience (and typically more performance) than an MPLS-based network for a lot lower OpEx.
So where exactly does it play? The T5000 supports bandwidth aggregation up to 3.0Gbps upstream/3.0 Gbps downstream across, of course, up to eight WAN connections. It also acts as a control unit for all other Talari appliances, including the T510 for SOHO and small branch offices, and the T730, T750 and T3000 for large branch offices and corporate/main headquarters, for up to 128 branch connections.
It's pretty flexible then, and just to double-check, we're going to be let loose on the new product in the new year, so watcheth this space...
Following on from last week's OD of SDN at Netevents, we have some proper, physical (ironically) SDN presence in the launch of an SDN controller from HP.
This completes the story I covered this summer of HP's SDN solution - the Virtual Application Network - which we're still hoping to test ASAP. Basically, the controller gives you an option of proprietary or open (OpenFlow), or both.
The controller, according to the HP blurb, moves network intelligence from the hardware to the software layer, giving businesses a centralised view of their network and a way to automate the configuration of devices in the infrastructure. In addition, APIs will be available so that third-party developers can create enterprise applications for these networks. HP's own examples include Sentinel Security - a product for network access control and intrusion prevention - and Virtual Cloud Networks software, which will enable cloud providers to bring more automated and scalable public-cloud services to market.
Now it's a case of seeing is believing - bring it on HP!
And here's my tip for next buzz-phrase mania - "Data Centre In A Box"; you heard it here (if not) first...
- Big Data Trailblazers
- Cloud Trailblazers
- Emerging Markets Trailblazers
- Mobile Technology Trailblazers
- Networking Trailblazers
- Security Trailblazers
- Storage Trailblazers
- Sustainable IT Trailblazers
- Virtualization Trailblazers
One of the problems we've faced in trying to maximise throughput in the past has not been at the network - say WAN - level, but what happens once you get that (big) data off the network and try to store at the same speed directly onto the storage.
We saw this limitation last year, for example, when testing with Isilon and Talon Data using traditional storage technology - the 10Gbit line speeds we were achieving with Talon Data just couldn't be sustained when transferring all that data onto the storage cluster. While we believe that regular SSD (solid state disk) technology would have provided a slight improvement, we still wouldn't have been talking consistent, top-level performance end-to-end.
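The underlying problem is simple pipeline arithmetic: the end-to-end rate is capped by the slowest stage, so a 10Gbps WAN feed buys you nothing if the storage tier can't drain it. A sketch with made-up stage figures (not our actual Isilon/Talon measurements):

```python
# End-to-end throughput = the minimum across all stages in the transfer path.
def bottleneck(stage_rates_gbps: dict[str, float]) -> tuple[str, float]:
    """Return the slowest stage and its rate - the end-to-end cap."""
    stage = min(stage_rates_gbps, key=stage_rates_gbps.get)
    return stage, stage_rates_gbps[stage]

stages = {"disk read": 9.0, "WAN transfer": 10.0, "storage write": 4.0}
print(bottleneck(stages))  # ('storage write', 4.0) - storage caps the pipeline
```

Which is exactly why the disk-to-disk target, rather than just line-rate on the wire, is the test that matters.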
So it's with some interest - to say the least - that I've started working with a US start-up, Constant Velocity Technology, that reckons it has the capability to solve exactly this problem. We're currently looking to put together a test with them: http://johnpaulmatlick.wix.com/cvt-web-site-iii - and another "big data" high-speed transfer technology client of mine, Bitspeed, with a view to proving we can do 10Gbps, end-to-end, from disk to disk.
Even more interesting, this is happening in "Hollywood" in one of the big-name M&E companies there. However, if any of you reading this are server vendors, then please get in touch as we need a pair of serious servers (without storage) to assist with the project!
Life beyond networking...