Finally Solving Network and Storage Capacity Planning?


Interesting how some elements of IT seem to be around forever without being cracked.

I remember working with a couple of UK start-ups in the 90s on network, server and application capacity planning, automation of resource allocation and the like - and the problem was that the rate of change always exceeded our ability to keep up. Moving into the virtualised world just seemed to make the trick even harder.

Now, I'm not sure whether IT has slowed down (surely not!) or whether the developers are simply getting smarter, but there do seem to be solutions around now that do the job. The latest example is from CiRBA - the idea being to let a company see the true amount of server and storage resources required versus the amount currently allocated, by application, department or operating division, in virtualised and cloud infrastructures rather than simply in static environments. The result? Better allocation and infrastructure decisions, reducing risk and eliminating over-provisioning - at least if they use it correctly!

If it resolves the everlasting issue of over-provisioning and the $$$$ that goes with it, then praise be to the god of virtualisation... Who's called what exactly? So the idea with CiRBA's new, and snappily titled, Automated Capacity Control software is to actively balance capacity supply with application demand by providing complete visibility into server, storage and network capacity, based on both existing and future workload requirements. The software is designed to determine true capacity requirements from technical, operational and business policies as well as historical workload analysis - all of which is required to get the correct answer pumped out at the end.
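To make the over-provisioning sums concrete, here's a minimal sketch of the allocated-versus-required comparison - entirely invented numbers and my own illustration, nothing to do with CiRBA's actual engine or API:

    # Toy allocated-vs-required comparison per department - hypothetical
    # figures; the real analysis also folds in policies and historical
    # workload data.

    allocated_gb = {"Finance": 2048, "Engineering": 4096, "Marketing": 1024}
    required_gb = {"Finance": 600, "Engineering": 3500, "Marketing": 250}

    for dept, alloc in allocated_gb.items():
        need = required_gb[dept]
        waste = alloc - need
        print(f"{dept}: allocated {alloc} GB, required {need} GB, "
              f"over-provisioned by {waste} GB ({waste / alloc:.0%})")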

So, bit by bit, it looks like we're cracking the 2014 virtualised network management problem. Look out for an article by me on deploying and managing distributed virtualised networks in CW in the near future...


Benchmarking Integration...


Just finished some testing at test equipment partner Spirent's offices in glamorous Crawley with client Voipex - some fab results on VoIP optimisation, so watch this space for the forthcoming report - and it made me think just how different testing is now.

In the old days of Ethernet switch testing and the like, it was all very straightforward. Now, however, we're in the realms of multiple layers of software all delivering optimisation of one form or another, such as the aforementioned Voipex, but equally with less obviously benchmarked elements such as business flow processes. Yet we really do need to measure the impact of software in these areas in order to validate the vendor claims. 
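As a flavour of what benchmarking one of these software layers involves, here's a minimal sketch - my own illustration, nothing to do with Spirent's or Voipex's actual tooling - of the basic discipline: time a batch of messages through the optimisation layer and through a pass-through baseline, and compare:

    import time

    def baseline(msg):
        return msg  # pass-through: no optimisation layer at all

    def optimised(msg):
        return msg.strip().lower()  # stand-in for whatever the layer does

    def throughput(fn, messages):
        start = time.perf_counter()
        for m in messages:
            fn(m)
        return len(messages) / (time.perf_counter() - start)  # msg/s

    msgs = ["  SIP INVITE  "] * 100_000
    print(f"baseline:  {throughput(baseline, msgs):,.0f} msg/s")
    print(f"optimised: {throughput(optimised, msgs):,.0f} msg/s")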

One example is TIBCO - essentially automating data processing and business flows across all parts of the network (so we're talking out to mobile devices etc) in real-time. Data integration has always been a fundamental problem - and requirement - for companies, both in terms of feeding data to applications and to physical devices, but that issue is now clearly both more fundamental, and more difficult, than ever in our virtual world, where big data = unorganised chaos in its basic form.

TIBCO has just launched the latest version of its snappily-named ActiveMatrix BusinessWorks product, and the company claims that it massively increases the speed with which new solutions can be configured and deployed - a single platform to transform lots of raw data into efficiently delivered data, plus lots of other good stuff. In an Etherworld that is now made up of thousands of virtual elements, and that is constantly changing in topology, this is important stuff.

As TIBCO put it themselves, "Organisations are no longer just integrating internal ERP, SaaS, custom or legacy applications; today they're exposing the data that fuels their mobile applications, web channels and open APIs." Without serious management and optimisation that's a disaster waiting to happen.

Just one more performance test scenario for me to get my head around then...


Reinventing Network Management

Network management is the proverbial bus syndrome - nothing shows up for ages and then a whole queue of them arrives at once - in this case interesting technologies. And here's the really interesting one I'm about to start testing with - Moogsoft.

Think radical reinvention of network management and you're getting there - the name and website give some clues, I guess, that this isn't mainstream, me-too stuff...

So here's the problem - network management, even in its modern incarnation of application performance monitoring and all the variations on that theme, is built on some concept of a network configuration being stable and predictable. The idea is that, over time, you build up a "rich database" of information regarding all elements of the network - hardware and software - so that there's a level of intelligence to access when identifying problems (and potential problems). OK, except that, if you have a network deployment that is part cloud (or managed service of some description), part-virtualised and otherwise outsourced to some extent, how can you possibly know what the shape of that network is? Even as the service provider you cannot...

It therefore doesn't matter how much networking data you collect - effectively you have to start from scratch every time, because the network is dynamic, not static, so any historical data is not necessarily correct. And we all know what happens if you make decisions based on inaccurate data... Moogsoft therefore says, forget about the existing methodologies - they don't work any longer. Instead it uses algorithm-based techniques to establish concurrent relationships between all aspects of the network when an anomalous situation is identified - looking at every possible cause-effect possibility. And it works in a true, collaborative environment - after all, network management is not detached from other aspects of the network, in the same way that user A in department X is not detached from user B in department Y. So every "element" of the "network" is relevant and involved.
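By way of illustration only - a crude sketch of the general idea, most definitely not Moogsoft's actual algorithms - correlating alerts by what fires together in time, rather than replaying a static topology database, might start life something like this:

    # Toy alert correlation: cluster alerts that fire close together in
    # time into one candidate incident, with no prior topology model.

    alerts = [  # (seconds, source, message) - invented sample data
        (0.0,   "router-3",  "BGP peer down"),
        (1.2,   "switch-17", "port flap"),
        (1.9,   "app-42",    "latency SLA breach"),
        (300.5, "san-2",     "disk rebuild started"),
    ]

    WINDOW = 30.0  # alerts within 30s of the previous one get grouped
    incidents = []
    for alert in sorted(alerts):
        if incidents and alert[0] - incidents[-1][-1][0] <= WINDOW:
            incidents[-1].append(alert)
        else:
            incidents.append([alert])

    for n, incident in enumerate(incidents, 1):
        print(f"incident {n}: {[src for _, src, _ in incident]}")

The real thing obviously has to be a great deal cleverer than a fixed time window, but it shows the shift in mindset: relationships are established from the live event stream, not from a stored model of the network.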

Sounds like an impossible-to-scale scenario? Well, how about handling 170m incidents a day? And taking resolution time down from hours and days to minutes? Sounds too good to be true? Maybe so, but these are recorded results involving a famous, global Internet player.

Watch out for the Broadband-Testing report on the Moogsoft solution - should be somewhat interesting!!!!


Trouble-Ticketing - The Future Is Here!

IT doesn't so much go round in circles as overlapping rings - think the Olympic rings. That's to say, it does repeat itself, but with additions and a slightly changed working environment each time.

So, for the past few decades we've had good old Helpdesk, Trouble-Ticketing and related applications in every half-decent-sized network across the globe; incredibly conservative, does-what-it-says-on-the-tin applications, typically created by incredibly conservative, does-what-it-says-on-the-tin ISVs. Nowt wrong with that, but nothing to get excited about either.

However, a chat last week with the guys from Autotask at a gig in Barca told a different story - these guys have been around for over a decade, building business steadily... until recently, that is, since when they've been expanding faster than the average Brit's waistline (and that's some expansion rate!).

So why? How can a humdrum, take-it-for-granted network app suddenly become "sexy"? Speaking with Mark Cattini, CEO of Autotask, a couple of points immediately make things clearer. One is that we have that rare example of a Brit in charge of a US company (for the past three years), and a Brit who's seen it all from both sides of the fence, pond and universe. So he understands the concept of "International". Secondly, we have an instance of a product having been written from day one as a SaaS application, long before the term SaaS was coined - think about the biz flow product I've spoken about (and tested) many times here, Thingamy, and it's the same story: a complementary app that is all part of the "bigger picture".

The cloud, being forced on the IT world, is perfect for the likes of Autotask. It gives them the deployment and management flexibility that enables a so-called deluxe trouble-ticketing and workflow app to become a fundamental tool for the day-to-day running of a network (and a business) on a global scale. I was talking the other day with another ITSM client of mine, Laurence Coady of Richmond Systems, and he was saying how the cloud has enabled the company's web-enabled version of its ITSM suite to go global from an office in Hampshire, with virtually no sales and marketing costs involved, thanks to the likes of Amazon's cloud.

Mark Cattini spoke about his pre-Autotask days, including a long stint in International sales with Lotus Notes. I made the point that Notes created an entire sub-industry, with literally thousands of apps designed specifically to work with and support Notes - almost a pre-Internet Internet. While it seems absurd to say that something as "long-winded" as ITSM-related products can become the next Notes, think about it from a business/workflow perspective within a cloud infrastructure: there are open APIs to all this software (so anyone can join in), no one really knows what "big data" is, and we have a genuine infrastructure for building the next generation of networks on - real software defined networking, in other words!



Big Data = Big Transfer Speeds?

So - there's been all this talk about Big Data and how it's replaced classic transactional processing in many application instances.

This much is true - what hasn't been discussed much, however, is how this impacts on performance; big data - say digital video - has hugely different transfer characteristics to transactional processing and it simply doesn't follow that supplying "big bandwidth" means "big performance" for "big data" transfers.

For example - I'm currently consulting on a project in the world of Hollywood media and entertainment, where the name of the game is transferring digital video files as quickly (and accurately - all must be in sequence) as possible. The problem is that simply providing a 10Gig pipe doesn't mean you can actually fill it!

We've proved with tests in the past that latency, packet loss and jitter all have very significant impacts on bandwidth utilisation as the size of the network connection increases.


For example, when we set up tests with a 10Gbps WAN link and round trip latencies varying from 50ms to 250ms to simulate national and international (e.g. LA to Bangalore for the latter) connections, we struggled to use even 20% of the available bandwidth in some cases with a "vanilla" - i.e. non-optimised - setup.
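The arithmetic behind that struggle is worth spelling out: a single TCP flow can never run faster than its window size divided by the round-trip time, however fat the pipe. A back-of-envelope sketch (standard TCP maths; the window sizes are mine, purely for illustration):

    # Single-flow TCP throughput is capped at window / RTT.
    LINK_BPS = 10e9  # the 10Gbps WAN link

    for rtt_ms in (50, 250):
        rtt = rtt_ms / 1000.0
        bdp_mb = LINK_BPS * rtt / 8 / 2**20  # data in flight to fill the pipe
        print(f"RTT {rtt_ms}ms: ~{bdp_mb:.0f} MB must be in flight to fill the link")
        for window_mb in (0.0625, 4, 16):  # classic 64KB, then tuned sizes
            bps = window_mb * 2**20 * 8 / rtt
            print(f"  {window_mb:7g} MB window -> {bps / LINK_BPS:7.2%} of the link")

Even a generously tuned 16MB window on a 250ms path yields barely 5% of a 10Gbps link - which is exactly why the "vanilla" setup fared so badly.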


Current "behind closed doors" testing is showing performance of between 800MBps and 1GBps (that's gigabytes, not gigabits) on a 10Gbps connection, but we're looking to improve upon that.


We're also asking the question - can you even fill a pipe when the operational environment is ideal, i.e. low latency and minimal jitter and packet loss for TCP traffic? The answer: absolutely not necessarily; not without some form of optimisation, that is.


Obviously, some tweaking of server hardware will provide "some" improvement, but nothing significant in testing we've done in the past. Adam Hill, CTO of our client Voipex, offered some advice here:


"The bottom line is that, in this scenario, we are almost certainly facing several issues. The ones which ViBE (Voipex's technology) would solve ( and probably the most likely of their problems ) are:


1) It decouples the TCP throughput from the latency and jitter component by controlling the TCP congestion algorithm itself rather than allowing the end user devices to do that.


2) It decouples the MTU from that of the underlying network, so that MTU sizes can be set very large on the end user devices regardless of whether the underlying network supports such large MTUs."


Other things to consider are frame, window and buffer sizes, relating to whichever specific server OS is being used (this is a fundamental of TCP optimisation), but thereafter we really are treading on new ground. Which is fun. After all, the generation of WanOp products that have dominated for the past decade were not designed with 10Gbps+ links in mind.
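On Linux, for instance, the usual first port of call is the kernel's TCP buffer limits. Here's a rough sketch of the sums and the sysctl knobs involved - illustrative values sized for a 10Gbps/250ms path; the right numbers depend entirely on your kernel, NIC and available memory:

    # Sizing TCP buffers for a long fat pipe - illustrative values only.
    LINK_BPS, RTT_S = 10e9, 0.250
    bdp = int(LINK_BPS * RTT_S / 8)  # bandwidth-delay product: ~312MB

    # The corresponding Linux sysctl knobs; window scaling must be on
    # for any window beyond the classic 64KB.
    print(f"net.core.rmem_max = {bdp}")
    print(f"net.core.wmem_max = {bdp}")
    print(f"net.ipv4.tcp_rmem = 4096 87380 {bdp}")
    print(f"net.ipv4.tcp_wmem = 4096 65536 {bdp}")
    print("net.ipv4.tcp_window_scaling = 1")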


Anyway - this is purely a "set the ball rolling" entry and I welcome all responses, suggestions etc, as we look to the future and to filling 40Gbps and 100Gbps pipes - yes, they will arrive in the mainstream at some point in the not massively distant future!


Anything New For 2013?

So here we are, already several weeks into 2013 and is there anything new to report on the networking front?

Not really - currently the same stories as we've been hearing for the past year or two - SDN, Cloud etc.... I am, at least, about to put some element of cloud to the test with Aryaka - WanOp as a cloud-based service. More details on this shortly, but we will be testing it as a regular customer; i.e. remote login via the Internet etc, so this will be a true user-style test case.

Also just finished some repeat testing with an old client, Voipex - the company has always had an excellent VoIP optimisation story but now it has added lots of data networking functionality that gives it a very different angle to the default WanOp players. The report will be appearing shortly on the Broadband-Testing website.

Meantime, back to the world of SDN etc - is anyone really buying into it properly at the moment, rather than just a bit of toe-dipping with OpenFlow etc? That question applies equally to end users and vendors... Or are we simply in another of those eras of solutions seeking problems?

Answers on the back of a hybrid real/virtual postcard in a dropbox at the end of your 'net connection!



Did You Know That HP Is A Networking Company?

Thus it has always been so - HP networking, AKA ProCurve in the "old days", has been a success in spite of its "parent" company. Today, amidst the doom and gloom financial results the company has posted, and all the Autonomy naming, blaming and shaming going on, I couldn't help but notice three little but significant words in one paragraph of the story in Microscope - one of the many HP/Autonomy stories around, focusing on the doom and gloom of HP losing money "everywhere". See if you can spot the three magic words in the snippet below...

"In its day-to-day business, HP revealed it had had another predictably awful quarter at Personal Systems, with revenue down 14% as the unit fought for its piece of the ever-shrinking PC market. Printing sales were down 5%, Services declined 6% and ESSN declined 9%, with growth in Networking offset by shrinkage in Industry Standard Servers and Storage, while Business Critical Servers dropped 25%."

Nothing changes...



Talari - It's Not Channel Bonding!


Some of you may have seen earlier blogs, and even the Broadband-Testing report, on our recently acquired US client Talari Networks, whose technology basically lets you combine multiple broadband Internet connections (and operators) to give you the five-nines levels of reliability (and performance) associated with them damnedly expensive MPLS-based networks, for a lot less dosh.

You can actually connect up to eight different operators, though according to Talari, this was not enough for one potential customer who asked "but what if all eight networks go down at the same time?" I'd dread having to provide the budget for that bloke's dinner parties - "yes I know we've only got four guests, but I thought we should do 24 of each course, just in case there's a failure or two..."

Anyway - one potential issue (other than paranoia) for some was the entry cost; not crazy money but not pennies either. So, it makes sense for Talari to move "up" in the world, so that the relative entry cost is less significant and that's exactly what they've done with the launch of the high(er)-end Talari Mercury T5000 - a product designed for applications such as call centres that have the utmost requirements for reliability and performance and where that entry cost is hugely insignificant once it saves a few outages; or even just the one.

If you still haven't got wot they do, in Talari-ese it provides "end-to-end QoS across multiple, simultaneous, disparate WAN networks, combining them into a seamless constantly monitored secure virtual WAN".  Or, put another way, it gives you more resilience (and typically more performance) than an MPLS-based network for a lot lower OpEx.

So where exactly does it play? The T5000 supports bandwidth aggregation up to 3.0Gbps upstream/3.0 Gbps downstream across, of course, up to eight WAN connections. It also acts as a control unit for all other Talari appliances, including the T510 for SOHO and small branch offices, and the T730, T750 and T3000 for large branch offices and corporate/main headquarters, for up to 128 branch connections.
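To see why this is adaptive path selection rather than dumb channel bonding, here's a toy sketch of my own - nothing to do with Talari's actual algorithms: each packet is steered onto whichever link currently measures best, so a degraded path gets abandoned the moment its numbers go south:

    # Toy per-packet path selection across disparate WAN links.
    # My own illustration, not Talari's algorithm. Lower score = better.

    links = {
        "dsl-ispA":   {"latency_ms": 18, "loss_pct": 0.1},
        "cable-ispB": {"latency_ms": 12, "loss_pct": 2.0},
        "fibre-ispC": {"latency_ms": 9,  "loss_pct": 0.0},
    }

    def score(stats):
        # Weight loss heavily: a lossy fast link is worse than a clean slow one.
        return stats["latency_ms"] + 50 * stats["loss_pct"]

    def pick_link():
        return min(links, key=lambda name: score(links[name]))

    print(pick_link())                     # fibre-ispC while it stays clean
    links["fibre-ispC"]["loss_pct"] = 5.0  # the path degrades...
    print(pick_link())                     # ...and traffic shifts to dsl-ispA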

It's pretty flexible then, and just to double-check, we're going to be let loose on the new product in the new year, so watcheth this space...



Physical Software Defined Networking...


Following on from last week's OD of SDN at Netevents, we have some proper, physical (ironically) SDN presence in the launch of an SDN controller from HP.


This completes the story I covered this summer of HP's SDN solution - the Virtual Application Network - which we're still hoping to test asap. Basically, the controller gives you an option of proprietary or open (OpenFlow), or both.


The controller, according to the HP blurb, moves network intelligence from the hardware to the software layer, giving businesses a centralised view of their network and a way to automate the configuration of devices in the infrastructure. In addition, APIs will be available so that third-party developers can create enterprise applications for these networks. HP's own examples include Sentinel Security - a product for network access control and intrusion prevention - and some Virtual Cloud Networks software, which will enable cloud providers to bring to market more automated and scalable public-cloud services.


Now it's a case of seeing is believing - bring it on HP!


And here's my tip for next buzz-phrase mania - "Data Centre In A Box"; you heard it here (if not) first...


Paradigm 13 Shift 2 SDN (lost count)

Such was the count at the end of Day 1 of Netevents Portugal. Thirteen "paradigms" and two "paradigm shifts". Surprisingly there were no "out of the boxes" and only one "granularity" reference. It should also be noted that the "p" word was used by at least four different nationalities, so it's not a single-country syndrome.

But the winner for "fully embraced buzz-phrase" has to be SDN, or Software Defined Networking, something we've spoken about in this blog on more than one occasion, including yesterday. The thing is, regardless of whether it is simply what network management should have been all along (or not), something really IS going on here - real products, services, open (OpenFlow) and proprietary (the rest), and pilot customers. It is, therefore, realistic to suggest that we are going to move into phase III of IT: mainframe, then networked, and now SDN - i.e. fully separating the control and management of network traffic (users and applications) from the physical components - switches, routers etc. For it to be truly worthwhile, SDN has to enable us to manage the network on a per-user, per-application, per-connection (end-to-end) basis.

Is this feasible? Certainly. Does anyone have a true, fully-working solution? Watch this space - at least until Wednesday, when I can reveal one client of Broadband-Testing who has all the components; now just let us loose on the testing of said solution...


IT Is Prawn Cocktail?

Bem Vindo from the Algarve, at the latest Netevents symposium.

One of my favourite topics in networking (and IT in general) is how often we revisit old "recipes". In the same way that prawn cocktail has become trendy again, so it is with networking and Netevents. Two panel debates in, we've already had seven "paradigms" (IT buzzword of the year, 1995) and several "visions" and a few "hype cycles".

Debate topics are pretty well predictable:

- BYOD
- SDN
- Mobile + Cloud = opportunity or risk?
etc...

The focus of the BYOD debate (and let's face it, people have been bringing their personal laptops into work and copying data onto them to work on from home, out of office hours, since the early '90s) was device management and security. But is the real issue here not the device, but the kind of applications that people are using on them, and adopting and managing those? In other words, at what point do applications such as Facebook become "enterprise" applications, and how do we then manage those, rather than simply block them (and the devices themselves)?

Now we're onto the subject of SDN - Software Defined Networking. The panel talk is about automation, removing the need for manual administration, control of mixed-vendor networks etc. Isn't this called vendor-independent network management - i.e. what all the net' management vendors of the early '90s set out to achieve? So, that didn't get there - will SDN?

The debate goes on...


Tech Trailblazers Update

Just a quickie update to all you vendors with mega technology out there re: the Tech Trailblazer awards wot I blogged about earlier this summer.

Entry levels have proved (as did Top Gear) that you can't have too many awards competitions, and entries are still open until the 12th September (for late birds; the early-bird round has closed) in the following categories, just to remind you all:

  • Big Data Trailblazers
  • Cloud Trailblazers
  • Emerging Markets Trailblazers
  • Mobile Technology Trailblazers
  • Networking Trailblazers
  • Security Trailblazers
  • Storage Trailblazers
  • Sustainable IT Trailblazers
  • Virtualization Trailblazers
There's over a million dollars up for grabs, so well worth the entry. To do so, just go to www.techtrailblazers.com.


Simple as...


At The End Of The Network


One of the problems we've faced in trying to maximise throughput in the past has not been at the network - say WAN - level, but in what happens once you get that (big) data off the network and try to store it, at the same speed, directly onto the storage.


We saw this limitation, for example, last year when testing with Isilon and Talon Data using traditional storage technology - the 10-gigabit line speeds we were achieving with the Talon Data technology just couldn't be sustained when transferring all that data onto the storage cluster. While we believe that regular SSD (Solid State Disk) technology would have provided a slight improvement, we still wouldn't have been talking end-to-end, consistent, top-level performance.


So it's with some interest - to say the least - that I've started working with a US start-up, Constant Velocity Technology, that reckons it has the capability to solve exactly this problem. We're currently looking to put together a test with them:  http://johnpaulmatlick.wix.com/cvt-web-site-iii - and another "big data" high-speed transfer technology client of mine, Bitspeed, with a view to proving we can do 10Gbps, end-to-end, from disk to disk.


Even more interesting, this is happening in "Hollywood" in one of the big-name M&E companies there. However, if any of you reading this are server vendors, then please get in touch as we need a pair of serious servers (without storage) to assist with the project!


Life beyond networking...


Technology At What Prize?

Just wanted to give everyone with a good tech idea up their bit of T-shirt that covers the upper arm - given that it is summer :-) - a heads-up about a new IT ideas competition called Tech Trailblazers - www.techtrailblazers.com - organised by my (and many others') old PR mate, Rose Ross. Well, when I say "old" I mean, er, long-standing...

So, the idea is - if you have a tech startup wot has got something truly interesting to offer in the current tick box fields such as clouds, emerging markets, virtualisation, sustainability and mobile, as well as "classics" such as networking, storage and security, then take a look at the website listed above and see if it makes sense to enter (go on, you know it does).

As one of the (many) judges, I will - of course - be open to casual bribes such as free lunches in Michelin-starred addresses while receiving The Full Monty as to why your tech is prize-worthy...  It's amazing how an excellent crab soufflé and a few glasses of Menetou Salon, or café gourmand and Tenareze Armagnac can heighten the understanding of new technologies. Someone should do a scientific investigation of the process. I'm happy to volunteer my services...

Anyway - I'll be updating on the competition as it develops - while continuing my focus on optimisation technologies that defeat the laws of physics - i.e. go beyond linespeed, starting with something called Constant Velocity Technology that I'm being given the low-down on this week. Watch this (virtual) space...


Hyperoptic 1 gigabit broadband, a user perspective


In this guest blog post Computer Weekly blogger Adrian Bridgwater tries out a new 1 Gbps broadband service.

In light of the government's push to extend "superfast" broadband to every part of the UK by 2015, UK councils have reportedly been given £530m to help establish connections in more rural regions as inner city connectivity continues to progress towards the Broadband Delivery UK targets.

Interestingly, telecoms regulatory body Ofcom has defined "superfast" broadband as connection speeds of greater than 24 Mbps. But making what might be a quantum leap in this space is Hyperoptic Ltd, a new ISP with an unashamedly biased initial focus on London's "multiple-occupancy dwellings" as the target market for its 1-gigabit-per-second fibre-based connectivity.

Hyperoptic's premium 1 gig service is charged at £50 per month, although more modest 100 Mbps connectivity is also offered at £25 per month. Lip service is also paid to a 20 Mbps contract at £12.50 per month for customers on a budget who are happy to sit just below the defined "superfast" broadband cloud base.

Hyperoptic's managing director Dana Pressman Tobak has said that there is a preconception that fibre optic is expensive and therefore cannot be made available to consumers. "At the same time, the UK is effectively lagging in our rate of fibre broadband adoption, holding us back in so many ways -- from an economic and social perspective. Our pricing shows that the power of tomorrow can be delivered at a competitive and affordable rate," she said.

Cheaper than both Virgin's and BT's comparable services, Hyperoptic's London-based service and support crew give the company an almost cottage-industry feel, making personal visits to properties to oversee installations as they do.

While this may be a far cry from Indian and South African based call centres, the service is not without its teething symptoms, and new physical cabling within residents' properties is a necessity for those who want to connect.

Upon installation users will need to decide on the location of their new router, which may be near their front door if cabling has only been extended just inside the property. This will then logically mean that the home connection will depend on WiFi which, at best, will offer no more than around 70 Mbps in practice - the typical real-world throughput of a single-stream 802.11n connection, well below the protocol's theoretical maximum.

Sharing the juice out

It is at this point that users might consider a gigabit powerline communications option to send the broadband juice around a home (or business, for that matter) premises using the electric power transmission lines already hard-wired into a home or apartment building.

Gigabit by name is not necessarily gigabit by nature in this instance, unfortunately, despite the word featuring in many of these products' names - it derives from the 10/100/1000 Mbps Ethernet port that they have inside.

If you buy a 1 gigabit powerline adapter today you'll probably notice the number 500 used somewhere in the product name - and this is the crucial number to be aware of here, as it is a total made up of both upload and download speeds added together, i.e. 250 Mbps in each direction is all you can realise from the total 1 gigabit you have installed at this stage via the powerline route.

Our tests showed uplink and downlink speeds of roughly 180 Mbps in both directions using a new iMac running Apple Mac OS X Lion. Similar results were replicated on a PC running the 64-bit version of Windows 7.

[Image 1: Hyperoptic speed test over a wireless connection]

The image above shows a wireless connection test, while the image below shows a hard-wired connection.

[Image 2: Hyperoptic speed test over a hard-wired connection]

Those criticisms duly levied, powerline manufacturers will no doubt expand their product lines to accommodate speeds and standards at the edge of this market's current delivery capabilities. Further to this, Hyperoptic's 180 Mbps via powerline is only a fraction of what you can experience if your cabling geography allows it -- and it is over seven times faster than Ofcom's "superfast" 24 Mbps target.

Hyperoptic's service also includes an option to port your existing phone line over to its lines, which takes two to three weeks. The company asserts that it is capable of transferring your old phone number over to its service or supplying you with a new one, the former option taking slightly longer but at no extra cost.

So in summary

It would appear that some of Hyperoptic's technology is almost before its time, in a good way. After all, future-proofing is no bad thing: house design architects looking to place new cable structures in "new build" properties - and indeed website owners themselves - are arguably not quite ready yet for 1 gigabit broadband.

As the landscape for broadband ancillary services and high performing transactions-based and/or HTML5-enriched websites now matures we may witness a "coming together" of these technologies. Hyperoptic says it will focus next on other cities outside of the London periphery and so the government's total programme may yet stay on track.


Optimisation Springs To Life


It's been a busy old Spring so far - I'm still trying to get my head around the recession - IT is going bonkers, spending like the world is about to end (does somebody know something we don't?), every flight I take from wherever to wherever is full and when I take a few days off on the Spanish and SoF coastlines the places are packed.

The result is a lot of tests and reports to update on, which can be found on the www.broadband-testing.co.uk website as normal, for free download. Gartner said it at the start of the year, IDC has supported the argument and I'm in the thick of it - network optimisation, that is, whether LAN, WAN, Cloud or inter-planetary. As a result, we've got two new reports up on L-B/ADC solution providers Kemp and jetNEXUS. Both are going for the "you don't need to spend stupid money to optimise app delivery" angle and both succeed; however, the focus of the tests is quite different. With Kemp we showed that you can move from IPv4 to IPv6 and not take a performance hit at all - very impressive. With jetNEXUS we showed that you can d**k around with data at L7 as much as you want and still get great throughput, manipulating data as you wish with no programming skills required whatsoever. Could put a few people out of a job... no problem - let them loose with sledgehammers to knock down my old home town of Wakefield so someone can rebuild it properly. What was it that John Betjeman said about Slough?

The same could be said of Vegas; since arriving back with what felt like pneumonia I've been in a "who's the most ill" competition with my HP mate Martin O'Brien, who contracted several unpleasant things while we were both out at Interop. Elton John had to cancel the rest of his Vegas shows because he contracted (the same?) respiratory problems. Well, if it's good enough for Elton...

One of the things to come out of Interop meetings wot I have spoken about is the proposed testing of HP's (along with F5's) Virtual Application Networking solution. What is interesting here is that the whole point of profiling network performance on a per-user, per-application basis is to get that profile as accurate as possible in the first place. While HP's IMC management system (inherited from the 3Com acquisition) does some app monitoring, it doesn't go "all the way". But we know men (and women) who can... If you check out the Broadband-Testing website, you'll also see a review of Centrix's WorkSpace products. With these you can take application monitoring down to the level of recording when a user logs into an app, how long they have it loaded for and even when they are actively using it or not. Now that IS the way to get accurate profiling; take note, HP. Let the spending continue...


Coughing Up In Vegas


Back from Interop and my 'beloved' Vegas, from which I escaped just in time before being air-con'd to death, as my ongoing cough continues to remind me. Is it possible to sue "air"?

I don't know - maybe there are people out there (mainly the people who were "out there") who enjoy the delicious contrast of walking in from 42c temperatures into 15c, time and again, then in reverse, and the joy of being able to hear at least three different sorts of piped music at any one time, the exhilaration for the nostrils of seven or more simultaneous smells, 24 hours a day? Must be me being picky. I like my sound in stereo at least, but all coming from the same source...

Anyway - reflections on the show itself; easy when there's less smoke and more mirrors, AKA taking away the hype. What I found was a trend - that others at the show also confirmed - towards making best-of-breed "components" again, rather than trying to create a complete gizmo. For example, we had Vineyard Networks creating a DPI engine that it then bolts on to someone's hardware, such as Netronome's dedicated packet processing architecture, which then sits - for example - on an HP or Dell blade server. I like this approach - it's what people were doing in the early '90s; pushing the boundaries, making networking more interesting - more fun even - and simply trying to do something better.

There are simply more companies doing more "stuff" at the moment. Take a recently acquired client of mine who I met out there for the first time, Talari Networks, enabling link aggregation across multiple different service providers - not your average WanOp approach. A full report on the technology has just been posted on the Broadband-Testing website: www.broadband-testing.co.uk - so please go check it out. Likewise, a report from Centrix Software on its WorkSpace applications. Reading between the lines on what HP is able to do with its latest and greatest reinvention of networking - Virtual Application Networking or VAN - as we described on this blog last week, along with buddy F5 Networks, I reckon there is just one piece of the proverbial jigsaw missing and that is something that Centrix can most definitely provide with WorkSpace. The whole of VAN is based around accurately profiling user and application behaviour, combining the two - in conjunction with available bandwidth and other resource - to create the ideal workplace on a per user, per application basis at all times, each and every time they log into the network, from wherever that may be.

Now this means that you want the user/application behaviour modelling to be as accurate as possible, so your starting point has to be, to use a technical term much loved by builders, "spot on". Indeed, there is no measurement in the world more accurate than "spot on". While HP's IMC is able to provide some level of user and application usage analysis, I for one know that it cannot get down to the detailed level that Centrix WorkSpace can - identifying when a user loads up an application, whether that application is "active" or not during the open session and when that application is closed down... and that's just for starters. I feel a marriage coming on...
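To show the level of detail I mean, here's a little sketch of my own - not Centrix's actual data model - turning raw open/focus/blur/close events into an active-versus-merely-loaded profile per application:

    # Toy per-application usage profiling from session events.
    # My own illustration of the idea, not Centrix WorkSpace's data model.

    events = [  # (minutes since logon, application, event) - invented data
        (0,  "CAD",   "open"),
        (0,  "Email", "open"),
        (5,  "CAD",   "focus"),
        (45, "CAD",   "blur"),   # still loaded, but idle from here on
        (60, "CAD",   "close"),
    ]

    active = {}
    focused_since = {}
    for t, app, ev in events:
        if ev == "focus":
            focused_since[app] = t
        elif ev in ("blur", "close") and app in focused_since:
            active[app] = active.get(app, 0) + t - focused_since.pop(app)

    print(active)  # {'CAD': 40} - 40 of 60 loaded minutes actually in use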


And Yet More SDN...

I don't think I can remember a time - and this is saying something - when there were SO many hyper buzz-phrases in IT circulation as there are currently. Every cloud variant, Big Data, SDN...

So it's good for the system, soul and sensibility to get behind the hype and see what vendors are actually offering between the lines. At Interop Vegas yesterday (where the food and wine quality sank to new depths c/o some alleged Mexican resto - and we all know Mexico produces superb wines...) I met up with IP Infusion, who have been around for a decade or so but are now attaching themselves to the SDN wave - but in a good way. Basically IP Infusion creates a software-based multi-service delivery platform - and always has done. It's just that it now has to call it SDN to be fashionable; all the better that the guys got there years ago. The technology decouples the control and data plane, and the network services from the network OS and hardware, protocol stack and applications - meaning it is very flexible; probably THE key word if we accept the whole cloud scenario. It also gave proof that OpenFlow is being deployed; IP Infusion showed a demo with two networks set up with redundant paths, one using (the hateful) Spanning Tree and one using OpenFlow - both with live video streaming (i.e. the classic demo!). Not only was the latter more robust, but recovery time was less than half that of the Spanning Tree setup when we induced a failure (by using the high-tech methodology of yanking a cable out).

What was interesting with all the vendors I saw yesterday at Interop is that they were all focused on providing one specific element, rather than a "box". Netronome - ultra-fast processing hardware; Vineyard Networks - a DPI engine to sit on, for example, Netronome's hardware; Anue - the glue that sits between the network monitoring/test tools and the stuff what's being tested, making sure it all gets optimised and automated. So there's definitely a trend going on here that takes us back to best-of-breed ingredients and the chance to pick 'n' mix.

More from Interop later...


More Of That Software Defined Networking...


Live from the home of tack - i.e. Vegas, the Blackpool of the desert but without the classiness...or piers - is the latest bombardment of SDN, er, ness, care of Interop 2012.

Starting with a direct follow-up to my last blog entry - HP's take on SDN, AKA VAN (OK - enough TLAs...) or Virtual Application Networks - the big question was, who was going to drive the VAN, since HP doesn't have the whole solution to deliver it? The answer is F5 Networks. So, the idea is to be able to deliver a completely optimised, end-to-end solution on a per-user/per-application basis by using templates to define every aspect of performance etc. Makes total sense; sounds too good to be true. So, what's the answer? Test it, of course; watch this space on that one.

Meantime, I'll be reporting in daily from the show - seeing lots of new (to me) vendors who, one way or t'other, are all ticking the SDN/Big Data/Cloud boxes.

It seems to me that we need to get back to basics with SDN so that people actually understand what it is. For example, there's a definite belief among some that it does away with hardware... Nice idea - so we have software that exists in a vacuum that somehow delivers traffic? There also seems to be confusion between different vendors' SDN solutions and OpenFlow. For those wot don't know, here's what OpenFlow is - in a classical router or switch, the fast packet forwarding (data path) and the high-level routing decisions (control path) occur on the same device.

An OpenFlow Switch separates these two functions. The data path portion still resides on the switch, while high-level routing decisions are moved to a separate controller, typically a standard server. The OpenFlow Switch and Controller communicate via the OpenFlow protocol, which defines messages, such as packet-received, send-packet-out, modify-forwarding-table, and get-stats.

The data path of an OpenFlow Switch presents a clean flow table abstraction; each flow table entry contains a set of packet fields to match, and an action (such as send-out-port, modify-field, or drop). When an OpenFlow Switch receives a packet it has never seen before, for which it has no matching flow entries, it sends this packet to the controller. The controller then makes a decision on how to handle this packet. It can drop the packet, or it can add a flow entry directing the switch on how to forward similar packets in the future.
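In code terms, the switch-side logic boils down to something like this skeleton - my own minimal sketch of the behaviour just described, not a real OpenFlow stack (see Open vSwitch for one of those):

    # Skeleton of the OpenFlow split described above: match against the
    # flow table on the switch; on a miss, punt to the controller, which
    # may install a new entry for similar packets in future.

    flow_table = {}  # match fields -> action

    def controller_decide(match):
        action = "send-out-port:2"   # stand-in for real routing logic
        flow_table[match] = action   # install a flow entry on the switch
        return action

    def handle_packet(src, dst, dst_port):
        match = (src, dst, dst_port)
        if match in flow_table:
            return flow_table[match]     # fast path, stays on the switch
        return controller_decide(match)  # table miss: ask the controller

    print(handle_packet("10.0.0.1", "10.0.0.9", 80))  # miss -> controller
    print(handle_packet("10.0.0.1", "10.0.0.9", 80))  # hit  -> flow table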

In other words it provides one, open-standard methodology of optimising traffic, end-to-end, but it is not a solution in its own right, just a potential part of the action.

Whatever - the interesting theme here is that no one talks about MPLS any longer (well, maybe apart from Cisco and Juniper, that is) despite it still being THE methodology used to move all our data around the 'net and beyond. There are factions that back the "WAN optimisation kills MPLS" idea - and for good reason - but there's no overnight change here, given the gazillions invested in MPLS networks. It'll be interesting to see what the vendors here make of the situation, at least from a timeline perspective...

Meantime it's showtime, meaning a walk past a beach, complete with wave machine and hundreds of Americans trying to get skin cancer, in order to get to the exhibition halls - this is Vegas, after all.



What's Next To Virtualise? The Network Of Course...


Wore my journalist hat yesterday to attend an HP update event on its ESSN division (don't worry about what the initials stand for, but N is for Networking...).

While not the key focus of yesterday's blurb, the key thing for me to take from the event was the company's very recent announcement that it is going into the VAN market; no - not competing with Transits, though you could say the network is "in transit" - but Virtual Application Networks, all part of the current SDN or Software Defined Networking movement. For many years HP (as ProCurve) and others have been trying to crack the whole "end to end" optimisation problem. I've been trying to personally crack it using any number of vendor parts since 1999...

So, VAN is the latest attempt. The aim is to use preconfigured templates to characterise the network resources required to deliver an application to users - i.e. to enable consistent, reliable and repeatable deployment of cloud applications in minutes. An end-to-end control plane virtualises the network and enables programming of the physical devices to create multi-tenant, on-demand, topology- and device-independent provisioning. The idea is to be completely open, so this isn't an HP closed-shop solution - albeit one that HP created.

Speaking with one of HP's customers, Mark Bramwell of the Wellcome Trust, at the event, we both agreed that it sounds like the latest and greatest "smoke and mirrors", "too good to be true" solution BUT - if it works, then great: every user has optimised applications, on a per-user, per-application basis. So the only sensible option is for me to test it. Watch this space on that one...

Speaking further on the subject, in a broader manner, with Lars Koelendorf, who heads up HP EMEA's mobile and wireless stuff, we agreed that the ideal way to rebuild a network is to start with IPv6; with so many addresses available, every user could have their own virtual IP address that IS their identity, so whatever client they are using and wherever they are, all the logic sits behind their VIP(v6) address and the HP VAN man is complete. They would, of course, drive applications faster across the network than any other user type...

