Automation Ultimatum...


IT - a job for life?

Possibly... I've just finished a meeting with an old IT mate, Mike Silvey of Moogsoft, and we were talking about how all the recent networking reinvention bollox has basically forced companies into investing in new technology - not least network management in its broadest sense - simply in order to make sense of the new PARADIGM -)

The reality is, regardless of whether the world needed Cloud, SDN, SD-WAN, FinTech, IoT etc, they've been landed with it, so someone/something has to manage it. Had a variation on said topic with Joel Dolisy of SolarWinds recently in London. We spoke about how everything and nothing changes simultaneously, from virtualisation to outsourcing and to, more critically, automation. Ah, the golden nugget - freeing up staff from fire-fighting to actually be pro-active in making their company better, whether it makes biscuits or sells petroleum. Way back in the 90s I was involved in network management automation projects and so it goes on in 2016. The question is, would true automation really lead to staff being freed up to be more productive, or would they simply be made redundant - in every sense? Well, what doesn't make sense is individuals spending hours a day on mundane admin, so automation has to happen and then we see the fallout... It is therefore important for the likes of SolarWinds to continue to pursue the automation quest - one day Rodney...

On the SolarWinds front (weather gag?) an interesting move from the guys is device-specific dashboards for the likes of F5, Cisco and others, with an SDK also coming out. This might seem like overkill, but it does make sense as, after all, network management software vendors are better at doing network management than the hardware vendors!

Back to the talk of reducing manual admin time: one new product I'm working with currently that takes networking back to its hexadecimal basics - and then gives it a two-fingered wave goodbye - is from a company called CapStar Forensics. The idea here is to take the "Wireshark" PCAP world into the 21st century for real - i.e. digging deep and dirty is still fundamental to many IT engineers, but why spend days and weeks doing manual searches to find what you're looking for? Tiny needles in Giant Haystacks is not an issue we should be wrestling with (!) in 2016. So, CapStar adds a DPI engine and a huge library of search profiles into the equation. Early testing suggests that days can indeed be taken down to seconds, based on some cybersecurity-related forensics.
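
CapStar hasn't published its internals, of course, but the basic concept - a library of named search profiles applied automatically across a capture, rather than a human grepping hex dumps - can be sketched in a few lines. The profile names and byte patterns below are my own inventions, purely illustrative:

```python
# Toy sketch of profile-driven packet searching: match raw packet
# payloads against a library of named byte-pattern "search profiles"
# instead of eyeballing hex by hand. Patterns are illustrative only.
import re

SEARCH_PROFILES = {
    "http-basic-auth": re.compile(rb"Authorization: Basic [A-Za-z0-9+/=]+"),
    "dns-query": re.compile(rb"\x00\x01\x00\x00\x00\x00\x00\x00"),  # 1 question, no answers
    "suspicious-exe": re.compile(rb"MZ\x90\x00"),  # PE header magic
}

def scan_packets(packets):
    """Return (packet_index, profile_name) for every match."""
    hits = []
    for i, payload in enumerate(packets):
        for name, pattern in SEARCH_PROFILES.items():
            if pattern.search(payload):
                hits.append((i, name))
    return hits

packets = [
    b"GET / HTTP/1.1\r\nAuthorization: Basic dXNlcjpwYXNz\r\n\r\n",
    b"\xab\xcd\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x07example",
    b"hello world",
]
print(scan_packets(packets))  # hits on packets 0 and 1, nothing on 2
```

In real life you would feed this from a PCAP parser and a far bigger profile library, but the days-to-seconds claim is really about exactly this kind of brute automation.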

Definitely a "watch this space" moment...

Testing The Testers...


This week I have to host a panel debate on "stress testing cloud applications and infrastructure" at Netevents in Rome (I know - it's tough, but someone's got to do it...).

One of the areas to cover is, well, how do you actually cover that kind of environment from a test perspective - e.g. do you engage thousands of what we used to call human beings to all use specific apps at certain times, or can we simulate that? Or... given that we live in a world of analytics - well, we always have done, it's just that now the data is actually being collected and - funnily enough - analysed - is a lot of the hard work actually being done for us? I mentioned in my last blog that I recently met up with John Rakowski of AppDynamics, the Application Intelligence company in the Enterprise space (that's the type of business, not the starship - well, not yet at least), and a couple of areas we talked about were application intelligence and unified monitoring. In other words, the ability to blanket monitor, so you are collecting all the data into a unified reporting mechanism, and the ability to understand the apps you're actually monitoring.
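
For the "simulate the humans" option, the core idea is simple enough to sketch: a pool of virtual users hammering a stand-in application call, with the response times aggregated into the percentiles everyone actually cares about. The app function here is a pure placeholder - not any real AppDynamics or test-tool API:

```python
# Minimal load-simulation sketch: many virtual users, one stand-in app
# call, latency percentiles out the other end. fake_app_call is a
# placeholder for a real request.
import random
import statistics
from concurrent.futures import ThreadPoolExecutor

def fake_app_call(user_id):
    # Stand-in for a real request; latency drawn from a skewed distribution.
    return random.lognormvariate(mu=4.0, sigma=0.5)  # milliseconds

def load_test(n_users=1000, workers=50):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(fake_app_call, range(n_users)))
    latencies.sort()
    return {
        "median_ms": round(statistics.median(latencies), 1),
        "p95_ms": round(latencies[int(0.95 * len(latencies))], 1),
    }

print(load_test())
```

The analytics-driven alternative the panel will chew over is, in effect, letting the real users generate those same percentiles for free.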

This is a gazillion miles away from the old methods of collecting and then sifting through Syslog files and other Data Centre-consuming information logs, requiring several of those human being things on hand to manually carry out this most exciting of tasks - finding the eNeedle in the data haystack. So, in one fell swoop you minimise costs, remove human error, and maximise visibility and the ability to pro-actively manage apps and services.

It's much the same kind of story in the world of network monitoring itself; I had a catch-up last year with Savvius (the artist formerly known as WildPackets) and gone are the days when we had to search manually through disks-worth of hex in order to find a particular packet identifier or character string, for example. In its recent update, you can now correlate and analyse network data directly on the capture engine, as it happens, and it gives you remediation advice too!

So, back to the original point - are these, let's call them "app and data visibility tools" actually doing the job of a specialist app/service product testing, er, product and, more worryingly, that of the product tester?

That'll be one to debate in Rome then! Bring on the pizza and Chianti (Classico Riserva, of course...).


What To Do With The Extra Monday?

NetMan/Security vendor, and now part of the Thoma Bravo empire (watch out China!), SolarWinds, AKA SW, has sent us a timely reminder that this leap year has resulted in Feb having an extra Monday - AKA today - so what to do with it, while improving the life of an IT pro at the same time?

Apparently the answer is NOT to go down the pub... How times have changed -) Well, Leon Adato, SW "Head Geek", doesn't actually state not going down the pub, simply that the extra Monday should be used for educating employees on cyber-security and how to take care of their IT equipment, so that doesn't exclude a pub session I guess? For those who think this is all passé, simply walk through a train and see how many people leave their laptop/smartphone unattended and open while going for a coffee or to the loo. Adato also talks about the issues around poor network configuration still being a problem, something that was emphasised in a meeting last week with John Rakowski of AppDynamics, along with the importance of automating configuration management and performance management - more on this tomorrow... Going back to SW's Adato, he makes the point - one that is wholly relevant to the "open laptop on train" scenario - that "the best way to mitigate the risk of human error is to make staff aware of the impact their actions can have and put security at the heart of their responsibilities" - true, but another way is also to ensure compliance with all security elements, PCI and otherwise, something I've been speaking with the guys at NewNetTechnologies about.

So it seems, as Spring of 2016 is close to arriving, that the education aspects of IT security are as relevant (and lacking) as ever. At the same time, with vendors trying to disrupt the market - cloud vs endpoint anybody? - the IT/security admin manager is understandably in an increasing state of confusion - something I spoke of in depth in a recent meeting with Sophos's Chet Wisniewski, highlighted in a recent blog.

SW's Adato is therefore largely hitting the right notes in his extra-curricular Monday security advice column. However, it's just something towards the end of the missive that makes me think maybe he has been down the pub after all -)

He talks, rightly, about how often the excuse for having a weak password is because a user can't remember a "difficult" one. So here's his suggestion to avoid the mixed-case (not wine) and numerals nightmare we are often confronted with - and that is, in his words: "to set a fun challenge of coming up with four random words and joining them up, and creating an image in your head to remember them all. For example, take these completely unrelated words 'flag, castle, dog and pizza.' On the surface these are four completely random words which would be difficult to remember but if you painted a picture in your mind of a dog eating a pizza in a castle with a flag, it will be much easier to remember."
Now we've all seen that one on Bognor beach haven't we... Sounds like one hell of a good drinking game though -)
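
Joking aside, the four-random-words trick is trivially scriptable too. A minimal sketch - the word list here is deliberately tiny and illustrative; a proper generator would draw from a diceware-style list of several thousand words, since four picks from a mere eight words gives only 12 bits of entropy:

```python
# Adato's four-random-words passphrase idea, sketched. WORDS is a toy
# list for illustration; use a list of thousands of words in practice.
import secrets

WORDS = ["flag", "castle", "dog", "pizza", "otter", "anvil", "comet", "tulip"]

def passphrase(n_words=4, sep="-"):
    # secrets.choice gives cryptographic-quality randomness, unlike random.choice
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "comet-dog-flag-anvil"
```

For the record: four words from a 7,776-word diceware list gets you roughly 51 bits, which is rather harder to brute-force than "Password1!".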

It's A Software Defined Analytical World We Live In...

Having recently finished some testing with Cirba on its SDI (Software Defined Infrastructure) approach to compute and storage resource management and app deployment, it's been interesting to be simultaneously judging an early stage tech vendor competition and see just how many Cirba wannabes there are out there!

It's almost as if IT shops have been forced to wander down the hybrid in-house/cloud path, and this naturally presents some management difficulties that the likes of Cirba are having to resolve. The two key words here are automation and analytics - and this is not purely a storage-related scenario. Having met up with Sophos research guru Chet Wisniewski last week, the security world is also analytics-mad; the "signature-based" approach is dead, long live analytics... Well, it is true that the signature-based "matching" approach hasn't been relevant for years, but the big argument here seems to be whether to analyse at the endpoint or at the edge, in the cloud... surely it's all of these?

Whatever, the state of IT currently is that yer proper IT manager is confused as hell. Understandable - pick 10 vendors as randomly as you can, go to their websites and try to work out exactly what it is they are offering. I think in many cases, the hype onslaught has caused existing vendors to panic and react with a rash of TLAs (official term!) with little or no substance to them. Meantime, new-wave vendors (mainly made up of guys who have been around the block a few times already) are simply jumping on the TLA bandwagon with SDE - Software Defined Everything...

But is IT really in a better place than it was 10, 20 or 30 years ago? "Automation", "Analytics", "Virtualisation", "Outsourcing" - whatever terminology you want to use (there are now more IT words for "outsourcing" than the Eskimos have for "snow"), it's capacity planning by another name (choose your term). DevOps = "Agile IT" = when we moved apps dev to the PC from the mainframe in the 80s, and took dev times down from nine months to a weekend!

And so forth... new terms, same old problems!

Acquisitions And Reinventions Continued...


Much of the talk on this blog recently has involved the world of acquisitions. And so it goes on.


A recent meeting with Phil and Jim, from the UK and US arms of Netscout respectively, focused partially on their completion of the acquisition of the comms business of Danaher Corp. Danaher is an enormous company; I can imagine a conversation where one employee comments "we've just sold our comms business" and another in a different division replies "did we have a comms business?" However - it's a very sizeable business in its own right and actually turns Netscout into kind of a "big" company.


This comes with obvious complications. I can remember a conversation with ex-F5 SVP of marketing and top geezer Erik Giesa years ago at F5's HQ in Seattle, where he talked about (while using his arms to emphasise the size of the offices) how F5 had never intended to become a "big company" - simply that no one had acquired them in time, before their market cap became too big to make them attractive any longer.


That said, Netscout seems to have all the angles covered, even the Arbor Networks security-oriented element of the acquisition - I mean, on the surface, how does cyber security fit into network monitoring? Surprisingly easily as it happens - a no-brainer if you think about it; incredibly valuable information being extracted from the network needs the most protection of all. SNMP v1 anybody? You can imagine all the IT guys who used it to death way back now thinking: "what possessed me to send intimate details about the corporate network across completely open connections in plain text?" Of course, the Arbor element runs way deeper than that, but you get the message. It's as I have talked about in recent blogs: pretty well all IT companies are having to reinvent themselves, whether in networking or not. Hell - some people still think Dell only makes laptops!
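
For the doubters, the plain-text point is easy to demonstrate: an SNMP v1 message carries its community string - effectively the password - in cleartext near the front of every packet. A simplified, hand-rolled sketch of just the message header (the PDU is omitted, and the BER encoding below only handles short lengths):

```python
# Why SNMP v1 makes security folk wince: the community string travels
# in cleartext at a fixed spot in every message. Simplified BER below,
# valid only for fields shorter than 128 bytes.
def snmpv1_header(community: bytes) -> bytes:
    # SEQUENCE { version INTEGER 0, community OCTET STRING, ...pdu }
    version = b"\x02\x01\x00"                       # INTEGER 0 = SNMPv1
    comm = b"\x04" + bytes([len(community)]) + community
    body = version + comm
    return b"\x30" + bytes([len(body)]) + body      # outer SEQUENCE (PDU omitted)

msg = snmpv1_header(b"public")
print(b"public" in msg)  # the "password" is right there on the wire
```

Anyone with a capture tool on an open network segment could read it - hence the retrospective wincing.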


It will, however, be interesting indeed to see how Netscout progresses as a company with a broad range of products and a massively increased customer base to keep happy. Good luck chaps!


Talking of mergers and acquisitions, I've just read that the green light has been given for BT and EE to get their act together - when I'm in the UK I'm with BT on broadband and EE on mobile - what did I do to deserve this? For the record, the BT HomeHub5 has surely the worst coverage of any WiFi router I've used in the past 15 years. It doesn't even extend between two bedrooms (don't ask why I'm using this as the metric)! Time for a signal booster acquisition, or a better router obviously, but that's the easy option. I don't do easy options...


Of Acquisitions And Integration


The IT world has gone acquisition bonkers. Dell is paying as much for EMC as any football club would do to secure the combined services of Messi and Ronaldo. Well almost...

Meantime, a couple of companies I keep tabs on have also been in acquisition mode, albeit on a lesser, but still significant, scale, as the need to reinvent to stay in - and play - the game is more critical than ever. The two companies I am speaking of are TIBCO and SolarWinds, and I caught up with both of them last week in that tourist theme park known as London.

TIBCO is on a world tour - we didn't get the T-shirt however - and clearly it was a sell-out; barely standing room in the (large) conference room of the Landmark Hotel. Interesting to see that t'Interweb, while initially reducing physical presence at live events, seems to be a less significant distractor these days. Good job, as it's just taken me over two weeks to get a BT broadband line activated, and that with the help of the press office and the "Exec Level Complaints Dept" (thank you Lisa) - otherwise I would still be waiting until the middle of next week, or next year or... Meantime, BT has now issued four accounts on my behalf and I have three BT HomeHubs already. I digress...

Cloud integration is a key driver for TIBCO right now, as evidenced by two releases last week - the snappily named TIBCO BusinessWorks Container Edition and the more direct TIBCO Cloud Integration (we likes "does what it says on the tin" descriptions, my precious). The former is designed to get companies scaling the heights of the cloud as rapidly as possible, while the second is all about APIs - an iPaaS, AKA integration Platform-as-a-Service, kind of a Platform as a Platform as a Service, if you like. Both are worthy missions from a company whose origins were on the trading floors of the world, where clouds were not even visible. And interesting how the old and new come together - APIs and iPaaS (only one letter different between the two, note; maybe we should introduce an IT version of Countdown, along the lines of the 8 Out of 10 Cats variant?) - I still remember when APIs "were the future". Mind, so was 8-bit computing once...

Integration was also a key theme in my conversation with SolarWinds' head of security, Mav Turner, who has featured in a previous CW article of mine on compliance. Switching to that subject briefly, I made the point to the TIBCO board that accelerating DevOps and integration might lead to some compliance issues, as dev gets too far ahead of compliance box-ticking? CTO Matt Quinn begged to differ, but Mav of SolarWinds was in my camp. Obviously both vendors have vested interests (if neither tour vests nor T-shirts, in TIBCO's case) but compliance really is a fundamental pain in the (word deleted here - Ed) process of implementing and delivering new services and applications these days. With Mav, we talked about how this is an even more spectacular problem when dealing with government departments - something I know only too well from chatting with MK Council earlier this year.

SolarWinds' focus was actually a combination of the two key themes here, acquisition and integration, in that they are currently bringing all their acquisitions together into a common interface and style (whatever happened to the days when you would test a "single" Cisco product and encounter three different management interfaces?) - we even used the dreaded phrase "single pane of glass management". Oh how we laughed... On a more serious note - and this applies to every vendor that has done well enough to make a name for itself in one sphere, but moves forward into new worlds - Mav made the point that many people's association with SolarWinds is simply network management, or even more simply, SNMP. People, the company HAS moved on...

This need for continual reinvention in the IT vendor world is frankly frustrating, and driven largely by the analyst groups and stock exchanges in equal measure. TIBCO CEO Murray Rode talked about how, in some ways, escaping the clutches of public ownership and moving back into private hands alleviates many of these pressures and allows a vendor to focus on all the important elements - bettering the product for the right reasons, customer focus etc - and he is absolutely right. Why do we have to put up with the pressures that, for example, forced the renaming of Mainframe Time-Sharing to Outsourcing, then Application Service Provision, then Outsourcing again and now Cloud? It just adds to marketing costs and confusion.

Talking of confusion - so what exactly is Dell boy going to do with VMware? Odds-on favourite is to offload it but who would be the buyer? Please, not HP, surely... (whichever bit of HP that might be) and could Brocade afford it? Microsoft, to clean up on the hypervisors? IBM as a slightly left-field proposition? Mega-management buy-out? As ever, it's time to watch this space...

The Network Is Dead - Long Live The Network!


I remember writing a column for a long-since-deceased IT publication, where I was discussing the rebirth of the mainframe as a network server. I remember it not simply for the content, but more specifically the context, and how the sub-editors, in the name of formatting and pure ignorance, changed my column title "The mainframe is dead, long live the mainframe" (quite a witty variation, I thought!) to simply "The mainframe is dead". Not quite the same meaning...

Fortunately, no one edits these blogs (normally!) so hopefully the title here stays as originally typed. The point is, the "mainframe" is being reinvented again, but this time it's not the mainframe itself that is regenerating, but the "network" - the exact inverse of what happened before.

This thought was reiterated in recent conversations with the excellent Danny Yeowell of Dimension Data (a man who can describe an incredibly complex company structure in simple layman's terms in the space of 90 seconds!) and in recent work I've started with Cirba, a company focusing on "software-defined infrastructure control solutions", AKA managing and optimising virtual storage through software. The point is, whatever a vendor means by SDN and NFV, what is happening is that network functionality is being distributed across the ether as one giant set of components, manageable from a single, remote entity, whose applications are largely accessed through a browser, regardless of the access device. AKA: the rebirth of the mainframe as an organic network, sitting as a software layer, controlling application and data access, wherever the apps and data reside.

As a fundamental consequence of this, storage and networking are becoming ever more integrated, hence the number of acquisitions in recent years of network technology by storage vendors and vice-versa. Looking at Cirba's solution, while focused on virtualised storage, networking elements such as workload routing, load-balancing and interfacing with NFV elements are all fundamental parts of the product.  Cirba's analytics automate VM routing decisions based on all the required constraints including workload utilisation, business, technical, software licensing, and complex storage requirements.
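
Cirba hasn't published its scoring model, naturally, but the flavour of constraint-driven placement can be sketched crudely: filter hosts on the hard constraints (capacity, licensing, storage tier), then route the VM to the tightest fit. All the field names below are invented for illustration, not Cirba's actual data model:

```python
# Toy analytics-driven VM placement: hard-constraint filter, then
# tightest-fit selection to reduce stranded capacity. Field names are
# invented for illustration.
def place_vm(vm, hosts):
    def fits(host):
        return (host["free_cpu"] >= vm["cpu"]
                and host["free_mem_gb"] >= vm["mem_gb"]
                and vm["licence"] in host["licences"]
                and vm["storage_tier"] in host["storage_tiers"])
    candidates = [h for h in hosts if fits(h)]
    if not candidates:
        return None  # a real system would flag a resource shortfall here
    return min(candidates, key=lambda h: h["free_cpu"] - vm["cpu"])["name"]

hosts = [
    {"name": "esx-01", "free_cpu": 16, "free_mem_gb": 64,
     "licences": {"windows"}, "storage_tiers": {"ssd", "sas"}},
    {"name": "esx-02", "free_cpu": 4, "free_mem_gb": 32,
     "licences": {"windows", "rhel"}, "storage_tiers": {"ssd"}},
]
vm = {"cpu": 4, "mem_gb": 16, "licence": "rhel", "storage_tier": "ssd"}
print(place_vm(vm, hosts))  # → esx-02 (the only host licensed for RHEL)
```

Scale that up across thousands of VMs and constraint types and you can see why the automation has to do it rather than a human with a spreadsheet.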

A side-effect of the likes of Cirba is that it means that the storage vendors don't have to do this stuff for themselves; they can simply integrate with a "Cirba", as exemplified by this week's announcement that Cirba is now integrated with NetApp's OnCommand Insight (OCI), thus providing the aforementioned optimisation to NetApp customers. So the storage companies increasingly become part of the SDN/NFV movement - there is no escape!

Lest we forget to bring "cloud" into this blog, Cirba also provides cloud infrastructure management teams with visibility into when resource shortfalls might adversely affect associated VMs and where excess resources exist for - in this case - NetApp and other storage infrastructure connected to NetApp OCI.

On a more generic level, all this is being put to the test currently by yours truly, so look out for a report on the topic in the near future.


Shock - WiFi Actually Generating Revenue Streams

I recently had the "pleasure" of visiting Milton Keynes; the railway station was packed with what were surely tourists - some mistake here? Admittedly, all looking in a hurry to get back to London... does "Stratford-Upon-Avon" translate aurally as "Milton Keynes" in some languages?

Anyway, point being, MKC has gone into the world of WiFi as a service, c/o Tallac Networks, a US-based start-up formed of ex-HP guys (and there are quite a few of those these days!); a revenue-generating service no less, with guaranteed fixed costs, an open API to build apps on and instant revenue stream options. And it works great; I went there and downloaded the app (left the T-shirt in the tourist information office though). Suffice to say, this is a long way from WiFi c. 2003, so check out the full study to get a new angle on how wireless technology now makes money, rather than just eating it.

WiFi that a) works over a large area and b) generates dosh... Was I dreaming perchance?

Who Said Virtualisation Was Supposed To Simplify Things...


So, we went from hardware to software, and then software to virtual, the idea being not only that everything is more efficient, but also easier to scale and manage. Kind of VLANs part two?


Or more like browsers Mk II? I remember visiting a company in Cambridge c.1837 (it feels that long ago anyway) and seeing Mosaic for the first time. I was impressed; so here is the future interface, lovely and simple, makes sense o't'Interweb. And then there was Netscape, which begat Firefox, and there was IE of course, then Chrome, Safari etc etc - and each iteration more complex than the last... So what happened to the simplicity of A browser?


And it's kind of become the same with virtualisation, re: the clutter of hypervisors out there now. For example, Cirba, a company wot I've mentioned before in this 'ere blog, which focuses on capacity planning and improved performance/reduced VM "wastage", has announced it has added support for Linux's native KVM-based environments in OpenStack-based private clouds. This, in itself, is not the point. It means that Cirba - and others - are now having to support the likes of KVM, VMware, Citrix Xen, MS Hyper-V, IBM PowerVM, Red Hat Enterprise... where does it end?
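
The standard way to survive that hypervisor sprawl is an adapter layer: one interface, one driver per hypervisor. A bare-bones sketch - the real drivers would wrap libvirt, the vSphere SDK, WMI and friends rather than returning canned lists:

```python
# Adapter-pattern sketch for multi-hypervisor support: each hypervisor's
# API hides behind one common interface, so the management layer above
# never cares which one it's talking to.
class HypervisorAdapter:
    name = "base"
    def list_vms(self):
        raise NotImplementedError

class KvmAdapter(HypervisorAdapter):
    name = "kvm"
    def list_vms(self):
        return ["kvm-vm-1", "kvm-vm-2"]   # would call libvirt here

class VmwareAdapter(HypervisorAdapter):
    name = "vmware"
    def list_vms(self):
        return ["esx-vm-1"]               # would call the vSphere SDK here

def inventory(adapters):
    """One unified view across every hypervisor we have to support."""
    return {a.name: a.list_vms() for a in adapters}

print(inventory([KvmAdapter(), VmwareAdapter()]))
```

Simple enough in a sketch; the pain is that every new hypervisor (or API version) means another adapter to build and maintain.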


I guess what it does mean, with yet another "simplification" turning into "complication", is that there is that much more requirement for products that optimise virtual environments. Andrew Hillier, CTO and co-founder of Cirba, explained that the company enables organisations to balance infrastructure supply and application demand for greater efficiency and control in single-hypervisor, multi-hypervisor, software-defined and hybrid cloud environments. What a lovely, simplistic IT world we now live in...


Not that this is putting companies off the move from physical to virtual. Nutanix, a company that goes from strength to strength, despite having the most baffling 'job description' - "a web-scale hyper-converged infrastructure company" - announced its most recent customer, and a very interesting one at that: Bravissimo, a lingerie retailer - high street and online presence - is taking the opportunity to end-of-life its physical servers and move to Nutanix's virtual computing platform - basically, integrated compute and storage management, which DOES make sense of course! Not so long ago women were burning their bras, and now they're being virtualised!


Back to the business angle from a Nutanix perspective... what it means is that what typically takes days and weeks to configure, and scales as well as an obese climber, is reduced to a trivial 30-60 minute exercise, AND additional functionality and apps such as disaster recovery and replication become exactly that - just add-ons. I saw the same concept, pre-virtualisation, work extremely well with Isilon, and they did just fine, being acquired by EMC a few years ago. But even Nutanix has to support several different hypervisor platforms...


Welcome to the world of IT!

Blasts From The Past


Been doing a few catch-ups with old vendor friends recently; one was Brocade - more of this next month - which has a REAL SDN/NFV story - and another was NetScout; network monitoring!!! Except that network monitoring now goes way beyond SNMP probes and Sniffers.

Speaking with NetScout's Phil Gray, who came into the company with the acquisition of Psytechnics, which had a voice/video monitoring technology, two things became abundantly clear in terms of where network monitoring/analysis has been going since the days of analysing millions of lines of packet captures:

1. A key requirement is getting rid of all the superfluous data automatically, so searches are focused purely on relevant information. Contrast this with the old days of spending hours looking for one hexadecimal string as the proverbial digital needle in an OSI haystack...

2. Gauging/measuring by end-user experience, not some theoretical mathematical improvement, is network monitoring for 2015. Importantly, the voice and video element of network data monitoring is, of course, more relevant than ever. Another point is that this traffic has to be captured at speeds and feeds from basic Internet level through to 40Gbps and above. This is not trivial stuff, just as identifying and preventing security attacks isn't. Yet the world of network monitoring doesn't get the mass media hype that security does but, at the same time, the key word that always comes up in IT conversations is "visibility". The bottom line is that traffic monitoring and analysis was important in 1988 and it's even more important - and relevant - here in 2015. Whether it's data trending, app performance management or simple data feed analysis, if you don't know what's actually on the network, how do you manage it?
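
Point one - junking the superfluous data before a human ever sees it - is conceptually just a keep-rule applied at capture time. A toy sketch, with invented keep-rules for a hypothetical VoIP investigation (real capture engines apply far richer filters, obviously):

```python
# Toy capture-time relevance filter: keep only packets that could
# matter to the investigation at hand, drop the noise automatically.
# The keep-rules here are invented for a hypothetical VoIP case.
def relevant(pkt, focus_ports=frozenset({5060, 5061}), min_len=60):
    """Keep SIP-sized traffic on the SIP signalling ports only."""
    return pkt["dst_port"] in focus_ports and pkt["length"] >= min_len

capture = [
    {"dst_port": 443, "length": 1500},   # web noise - drop
    {"dst_port": 5060, "length": 620},   # SIP signalling - keep
    {"dst_port": 5060, "length": 40},    # runt frame - drop
    {"dst_port": 53, "length": 80},      # DNS noise - drop
]
focused = [p for p in capture if relevant(p)]
print(len(focused))  # 1 packet left out of 4
```

Scale those four packets up to a 40Gbps feed and the value of doing this in the engine, rather than in a human's eyeballs, is obvious.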



Shock - HP Acts On Advice - Five Years Too Late...

So - HP has just announced it is acquiring Aruba Networks; basically the second or third stab at buying a wireless solution, after Colubris and effectively inheriting additional WLAN tech with the 3Com acquisition (that NOT being the raison d'être for that acquisition).

The daft thing here is that, after doing some internal test comparison work for HP towards the end of 2009, the recommendation was - "definitely cheaper and quicker to acquire Aruba than incorporate that capability into your own technology".

So, of course - twice over - they didn't, and went with Colubris instead, on the basis of wanting an Enterprise WLAN solution (AKA Aruba), except Colubris had been designed as a hospitality/hotel-type solution. The question is, how much would Aruba have cost in 2009 - $3bn? More to the point here, Aruba these days has many OEM deals in place (but not back then) with key HP rivals, most notably the Dell boys. So, what happens there? And we thought Putin was going to be at the heart of WW III...

On the plus side, this HAS to be a better acquisition than Autonomy -)

As a footnote - HP's (as ProCurve) original WLAN portfolio was based on licensing a subset of Symbol's WLAN product range - Symbol being a company that Motorola acquired a few years ago. So why not simply acquire Symbol before Motorola did, and then add Aruba back in 2009? But then, what do I know about comparing products and best acquisition practice?

Cloud Innovation and Groundhog Day Combined...


Within the general misty definition of "Cloud", sometimes something pokes through the veil of ether-precipitation that says "I'm new and I make sense".

And typically, it's not a variation on that other "Somehow Defines Nothing" Hype-TLA that is SDN, but more akin to the Monty Python style of "And now for something completely different". In this case it comes from a UK start-up, Fedr8. OK, so the name sounds more like a courier company, but stick with me...

Rather than focusing on Cloud storage or performance, Fedr8 is focusing on making sure your existing applications will actually work in that environment in the first place. Kind of akin to measuring the size of your garage before you buy a large American car. The product itself, Argentum, provides compatibility analysis and optimisation for in-house applications, prior to cloud delivery. It provides organisations with a suite of tools that can assess, analyse and optimise existing applications, enabling organisations to design successful cloud projects and migrate applications without even thinking about the pain, effort and time involved in attempting to do it manually. Or simply guessing...

To date Argentum has been piloted on Open Source applications developed by companies including Netflix, Twitter and IBM, so no big names there then! How, then, does it work? In layman's terms it analyses the source code of any application, in any programming language, and then provides actionable intelligence to help a company move those existing apps into the cloud - hence "federate" the services! So, what's in a name? Lots it seems -) At a slightly more technical level, code is uploaded to the Argentum platform where it undergoes a complex analysis and is split into objectified tokens. These tokens populate a meta database against which queries are run. From this, out pops a visualisation of the application and actionable business intelligence to enable successful cloud adoption.
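
As a crude illustration of the tokenise-then-query idea (my guess at the concept, note, not Fedr8's actual pipeline), here's Python's own tokeniser being used to ask one cloud-portability question: are there hardcoded absolute paths lurking in the source?

```python
# The "objectified tokens" idea in miniature: break source into tokens,
# then query them for cloud-portability red flags. A conceptual sketch,
# not Fedr8's pipeline; the red-flag rule is deliberately simplistic.
import io
import tokenize

def tokens_of(source: str):
    """Tokenise Python source into (token_type, token_string) pairs."""
    return [(tok.type, tok.string)
            for tok in tokenize.generate_tokens(io.StringIO(source).readline)]

def hardcoded_paths(source: str):
    """Flag string literals that look like absolute filesystem paths."""
    return [s for t, s in tokens_of(source)
            if t == tokenize.STRING and ("/var/" in s or "C:\\" in s)]

app = 'LOG = open("/var/log/app.log")\nGREETING = "hello"\n'
print(hardcoded_paths(app))  # → ['"/var/log/app.log"']
```

Doing that across any language, with visualisation and actionable intelligence on top, is of course the hard part - and presumably where Argentum earns its keep.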

Sounds great in theory, and looks a must for Broadband-Testing to put through its paces; not least because there is a Groundhog Day moment here. Yes, the product is innovative, BUT there is an eerie resemblance to that of a former client, AppDNA, whose product analysed applications for migration between Microsoft OSs and browser versions. So, same concept, different application (in every sense) and, indeed, why not? Especially since AppDNA ultimately got acquired by Citrix for more than a few quid. Now that's a precedent I suspect the Fedr8 board will be quite sweet on...

It's CA Jim But Not As We Know IT

I was at a Gartner event in Barcelona last week, where Computer Associates were playing host.

Not only was there the amusement of the excellent and straight-talking EMEA CTO Bjarne Rasmussen referencing Forrester several times at a Gartner event (especially amusing for my Forrester analyst mate Nikki Babatola when I told her) but what followed was even better.

Beamed in on live video from the simultaneous CA World event in Vegas - AND IT WORKED! - we had CA CEO Mike Gregoire do a keynote on the new CA - and how right he is, this is an unrecognisable company from a few years back (and in a GOOD way) - and then the maddest panel debate ever. Hosted by Wired's Jason Tanz, it featured Mike himself, Biz Stone (co-founder of Twitter) and Jennifer "Legs" Hyman (co-founder of Rent The Runway clothes rental online).
How could this work? Well, Biz and Jennifer both looked like they'd been on something in the Green Room beforehand, but... they were all brilliant. Mike was comfortable in this company in a way that no senior exec from "old" CA would ever have been; Biz was the master of (laid back) common sense and Jennifer represented the new face of IT - i.e. not tech talk but pure business.

So what was a mainframe software company is now an Apps company...

The CA strap-line throughout was that your business is software and it's hard to disagree with this, short of someone making pottery from their own home and selling direct from the doorstep!

With a few of the established IT and networking giants floundering currently (you know who you are, guys!), CA is a good example of how you can reinvent yourself for the new IT economy; business not tech - about time too! Even if it does put a few of us out of business...

Networking Innovation?

Been researching an article on networking innovations for CW's very own Cliff Saran while, by chance, also speaking with a number of IT investors; oh and, by another chance, judging the networking category of an awards event and visiting a Cambridge Wireless innovation awards event...

That's a lot of potential networking innovations to witness; except, networking ain't what it used to be - much of the new development relates to elements tangential to networking, rather than to the heart of it all. Does this mean that networking is essentially done and dusted? That it all just... works?

Obviously a lot of the focus is on "the cloud" (everyone looks up to the sky for some reason when you say that) and end-to-end optimisation, especially at the NAS/storage end, which is fine, but not much else to report on that's genuinely new and genuinely "networking".

That said, a few things are being properly reinvented:

1. Network Management - to cope with cloudy, hybrid networks.
2. User Interfaces - finally getting less "IT" and more "human being" - examples I've tested recently here include jetNEXUS Load-Balancing/ADC - once a mega-techie product area and now simplified to such an extent that we're going to ask my mum to configure the next test (you think I'm kidding?) - and also Sunrise Software's Sostenuto ITSM platform - now truly "admin person friendly" and with whacko new features (at least by "Helpdesk" software standards) such as gamification (both reports are on the Broadband-Testing website, so check 'em out).
3. WiFi - well, not so much re-invented but now with real scalability - this stuff really does work, even if hotels still try to prove the complete opposite...

And, of course, we have the latest set of "router" replacement technologies, but now we're talking optical tech, not exactly branch-office stuff...

And, finally, why ARE investors currently so obsessed with "Apps"? I mean, for every one that makes zillions...

Shopping In A WiFi Cloud...

"Cloud" and "WLAN" or "WiFi" are not, to date, IT terms typically seen in tandem, but Tallac Systems - a new venture for a number of ex-HP, er, yes - let's say it - veterans (I reckon I can out-run you guys if necessary) - are looking to create one from t'other.

Some of the basic target scenarios here - for example, a multi-tenant building, or a shopping mall equivalent - are not new; we tested this kind of application with the likes of Trapeze Networks a decade or so ago - but the way in which Tallac is approaching this kind of solution IS different. Have a look at the Tallac architecture, for example, to get an idea of where the boys are coming from, combining elements of SDN (OpenFlow) with Tallac's own virtualisation model and open APIs to boot:

Of course, every start-up has to have a new variant on a marketing spin; in this case it's SDM (not N, this was not a typo) or Software Defined Mobility. Get beyond the marketing BLX and the basics make sense:

  • Manage entire Wi-Fi network from a single dashboard
  • Control multi-tenant Wi-Fi networks, applications and devices
  • Application-based virtual networks
  • Cost effective 3rd party hardware
  • OpenFlow enabled API
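To make the multi-tenant idea concrete, here's a toy sketch - purely illustrative, and emphatically not Tallac's actual API or data model - of application-based virtual networks being policed per tenant from a single place:

```python
from dataclasses import dataclass, field

# Invented names throughout: a minimal model of "application-based virtual
# networks" in a multi-tenant building, one virtual network per tenant.
@dataclass
class VirtualNetwork:
    ssid: str
    vlan: int
    apps_allowed: set = field(default_factory=set)

tenants = {
    "coffee_shop": VirtualNetwork("MallGuest", vlan=101, apps_allowed={"web"}),
    "bookstore": VirtualNetwork("BookPOS", vlan=202, apps_allowed={"pos", "web"}),
}

def can_use(tenant, app):
    """Single-dashboard style check: is this app permitted on the tenant's network?"""
    vn = tenants[tenant]
    return app in vn.apps_allowed

print(can_use("coffee_shop", "pos"))  # False - the guest network is web-only
```

The point being that each tenant sees only its own slice, while whoever runs the mall manages the lot from one dashboard.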
So, the next step for me is to get some hands-on experience and the "seeing is believing" proof point. Watch this space...

Next Gen Network Management

IT and networking reinvents itself partially or wholly every few years, and here we are again now - distributed, virtualised, cloud (private and public) based, hybrid networks... So where does this leave traditional Network Management (NetMan) applications?

Every few years, the topic of "next generation" NetMan crops up once more and here we are again right now, thanks to the distributed, somewhat cloudy and virtualised nature of contemporary network deployments. I mean, just how do you manage a network "entity" if you don't know where it is?

I recently finished testing with a genuinely fascinating start-up called Moogsoft (the report is available from the website), who also featured in an article I did recently for Computer Weekly on managing external, virtualised networks (it's on the CW website somewhere!). The work really did bring home to me just HOW different it is attempting to manage a global network in 2014, compared to even 10 years ago. Way back in '99 (or so it now seems), with the emergence of network optimisation and security products, I tried to set up the NGNMF - the Next Generation Network Management Forum - with a view to getting the network management specialists - the BMCs, CAs, HPs, IBM Tivolis etc of this world - to outline how they would advance their software solutions to cope with the - then - new generation of networks being deployed.

15 years on and that task is almost infinitely more challenging. I was speaking last week with Dan Holmes, director of product management at CA, about this very topic. Dan has been through many of the same phases of network management development as myself, in his case starting with the then-Cabletron Spectrum manager, one of the first products to try and bring AI into the equation for resolving networking problems. Dan acknowledged that the change in networking infrastructure is requiring a change of approach by all the major NetMan vendors, and pointed to a lack of standards as just one of many issues to resolve. He described the fundamental difference as this: whereas NetMan was previously focused from the inside looking out, now it has to refocus from the outside looking in; in other words, the starting point is the bigger picture and you have to drill down to individual services, threads, user conversations, transactions... That's a hell of a lot of "data" to manage - application monitoring and similar tools can do certain tasks, but they are not the complete answer - check out the Moogsoft report to see just how radical a solution is seemingly required in order to be that latter-day solution.

And if that means the major NetMan vendors are all playing catch-up currently, it'll be interesting to see how fast each or all of them can adapt to the new generation of networks being rapidly built out there in the ether at the moment...

Finally Solving Network and Storage Capacity Planning?


Interesting how some elements of IT seem to be around forever without being cracked.

I remember working with a couple of UK start-ups in the 90s on network, server and application capacity planning, automation of resource allocation and the like - and the problem was that the rate of change always exceeded our ability to keep up. Moving into the virtualised world just seemed to make the trick even harder.

Now, I'm not sure if IT has slowed down (surely not!) or whether the developers are simply getting smarter, but there do seem to be solutions around now to do the job. Latest example is from CiRBA - where the idea is to enable a company to see the true amount of server and storage resources required versus the amount that is currently allocated by application, department or operating division in virtualised and cloud infrastructures, not simply in static environments. The result? Better allocation and infrastructure decisions, reducing risk and eliminating over-provisioning, at least if they use it correctly!

If it resolves the everlasting issue of over-provisioning and the $$$$ that go with it, then praise be to the god of virtualisation... who's called what, exactly? So the idea with CiRBA's new, and snappily titled, Automated Capacity Control software is to actively balance capacity supply with application demand by providing complete visibility into server, storage and network capacity, based on both existing and future workload requirements. The software is designed to accurately determine true capacity requirements based on technical, operational and business policies as well as historical workload analysis - all of which is required to get the correct answer pumped out at the end.
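As a back-of-an-envelope illustration of the over-provisioning point - and emphatically not CiRBA's actual algorithm, which folds in far more policy and analysis - even sizing each VM from its historical peak demand plus a headroom policy shows where the waste (and the risk) lives:

```python
# Toy right-sizing sketch: all figures and names invented for illustration.
HEADROOM = 1.2  # policy: provision 20% above observed peak demand

def required(cpu_history_ghz):
    """Capacity actually needed: historical peak plus the headroom policy."""
    return max(cpu_history_ghz) * HEADROOM

vms = {
    "web01": {"allocated": 8.0, "history": [1.1, 1.4, 2.0, 1.8]},
    "db01":  {"allocated": 4.0, "history": [3.0, 3.6, 3.2]},
}

for name, vm in vms.items():
    need = required(vm["history"])
    excess = vm["allocated"] - need
    print(f"{name}: need {need:.1f} GHz, allocated {vm['allocated']:.1f}, excess {excess:.1f}")
```

In this made-up example web01 turns out to be hugely over-provisioned, while db01 is actually running too hot - exactly the kind of answer you want pumped out automatically rather than guessed at.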

So, bit by bit, it looks like we're cracking the 2014 virtualised network management problem. Look out for an article by me on deploying and managing distributed virtualised networks in CW in the near future...



Benchmarking Integration...


Just finished some testing at test equipment partner Spirent's offices in glamorous Crawley with client Voipex - some fab results on VoIP optimization so watch this space for the forthcoming report - and it made me think just how different testing is now.

In the old days of Ethernet switch testing and the like, it was all very straightforward. Now, however, we're in the realms of multiple layers of software all delivering optimisation of one form or another, such as the aforementioned Voipex, but equally with less obviously benchmarked elements such as business flow processes. Yet we really do need to measure the impact of software in these areas in order to validate the vendor claims. 

One example is with TIBCO - essentially automating data processing and business flows across all parts of the networks (so we're talking out to mobile devices etc) in real-time. Data integration has always been a fundamental problem - and requirement - for companies, both in terms of feeding data to applications and to physical devices, but now that issue is clearly both more fundamental, and more difficult, than ever in our virtual world of big data = unorganised chaos in its basic form.

TIBCO has just launched the latest version of its snappily-named ActiveMatrix BusinessWorks product, and the company claims that it massively increases the speed with which new solutions can be configured and deployed - a single platform to transform lots of data into efficiently delivered data, and lots of other good stuff. In an Etherworld that is now made up of thousands of virtual elements, and that is constantly changing in topology, this is important stuff.

As TIBCO put it themselves, "Organisations are no longer just integrating internal ERP, SaaS, custom or legacy applications; today they're exposing the data that fuels their mobile applications, web channels and open APIs." Without serious management and optimisation that's a disaster waiting to happen.

 Just one more performance test scenario for me to get my head around then....



Reinventing Network Management

Network management is the proverbial bus syndrome - nothing shows up for ages and then a whole queue of them arrives at once - in this case, interesting technologies. And here's the really interesting one I'm about to start testing with - Moogsoft.

Think radical invention of network management and you're getting there - the name and website give some clues I guess that this isn't mainstream, me-too stuff... 

So here's the problem - network management, even in its modern incarnation of application performance monitoring and all the variations on a theme, is all about some concept of a network configuration being stable and predictable. So the idea is that you, over time, build up a "rich database" of information regarding all elements of the network - hardware and software - so that there's a level of intelligence to access when identifying problems (and potential problems). OK, except that, if you have a network deployment that is part cloud (or managed service of some description), part-virtualised and otherwise outsourced to some extent, how can you possibly know what the shape of that network is? Even as the service provider you cannot...

It therefore doesn't matter how much networking data you collect - effectively you have to start from scratch every time, because the network is dynamic, not static, so any historical data is not necessarily correct. And we all know what happens if you make decisions based on inaccurate data... Moogsoft therefore says, forget about the existing methodologies - they don't work any longer. Instead it uses algorithm-based techniques to establish concurrent relationships between all aspects of the network when an anomalous situation is identified - looking at every possible cause-effect possibility. And it works in a true, collaborative environment - after all, network management is not detached from other aspects of the network, in the same way that user A in department X is not detached from user B in department Y. So every "element" of the "network" is relevant and involved.
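To show the flavour of the thing - and this is a deliberately naive sketch of algorithmic alert clustering in general, not Moogsoft's actual method - even grouping alerts purely by temporal proximity turns a pile of raw events into a handful of "situations" to investigate:

```python
# Naive event correlation: cluster alerts that arrive close together in time,
# on the theory that a dynamic network's cause-effect chains show up as bursts.
def cluster_by_time(alerts, window=5):
    """alerts: list of (timestamp_seconds, source) tuples; window in seconds."""
    clusters, current = [], []
    for ts, src in sorted(alerts):
        # A gap bigger than the window starts a new "situation"
        if current and ts - current[-1][0] > window:
            clusters.append(current)
            current = []
        current.append((ts, src))
    if current:
        clusters.append(current)
    return clusters

alerts = [(0, "router1"), (2, "vm-app3"), (3, "lb2"), (60, "switch9"), (61, "vm-db1")]
print(len(cluster_by_time(alerts)))  # 2 "situations" instead of 5 raw alerts
```

The real product obviously goes far beyond a time window - the point is simply that relationships are computed on the fly rather than looked up in a static database.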

Sounds like an impossible-to-scale scenario? Well, how about handling 170m incidents a day? And taking resolution time down from hours and days to minutes? Sounds too good to be true? Maybe so, but these are recorded results involving a famous, global Internet player.

Watch out for the Broadband-Testing report on the Moogsoft solution - should be somewhat interesting!!!!


Trouble-Ticketing - The Future Is Here!

IT doesn't so much go round in circles as overlapping rings - think the Olympic sign. That's to say, it does repeat itself but with additions and a slightly changed working environment each time.

So, for the past few decades we've had good old Helpdesk, Trouble-Ticketing and related applications in every half-decent sized network across the globe; incredibly conservative, does what it says on the tin applications, typically created by incredibly conservative, does what it says on the tin ISVs. Nowt wrong with that, but nothing to get excited about either.

However, a chat last week with the guys from Autotask at a gig in Barca suggested otherwise - these guys have been around for over a decade, building business steadily... until recently, that is, since when they've been expanding faster than the average Brit's waistline (and that's some expansion rate!).

So why? How can a humdrum, take-it-for-granted network app suddenly become "sexy"? Speaking with Mark Cattini, CEO of Autotask, a couple of points immediately make things clearer. One is that we have that rare example of a Brit in charge of a US company (for the past three years now), and a Brit who's seen it all from both sides of the fence, pond and universe. So he understands the concept of "International". Secondly, we have an instance of a product having been written from day one as a SaaS application, long before SaaS was invented - think about the biz flow product I've spoken about (and tested) many times here - Thingamy - and it's the same story, just a complementary app that is all part of the "bigger picture".

The cloud, being forced on the IT world, is perfect for the likes of Autotask. It gives them the deployment and management flexibility that enables a so-called deluxe trouble-ticketing and workflow app to become a fundamental tool for the day-to-day running of a network (and a business) on a global scale. I was talking the other day with another ITSM client of mine, Laurence Coady of Richmond Systems, and he was saying how the cloud has enabled the company's web-enabled version of its ITSM suite to go global from an office in Hampshire, with virtually no sales and marketing costs involved, thanks to the likes of Amazon's cloud.

Mark Cattini spoke about his pre-Autotask days, including a long stint in International sales with Lotus Notes. I made the point that Notes created an entire sub-industry, with literally thousands of apps designed specifically to work with and support Notes - almost a pre-Internet Internet. While it seems absurd to say that something as "long-winded" as ITSM-related products can become the next Notes, think about it from a business/workflow perspective within a cloud infrastructure: there are open APIs to all this software (so anyone can join in), no one really knows what "big data" is yet, and we have a genuine infrastructure for building the next generation of networks on - real software defined networking, in other words!

