It's CA Jim But Not As We Know IT

| No Comments | No TrackBacks
| More
I was at a Gartner event in Barcelona last week, where Computer Associates were playing host.

Not only was there the amusement of the excellent and straight-talking EMEA CTO Bjarne Rasmussen referencing Forrester several times at a Gartner event (especially amusing for my Forrester analyst mate Nikki Babatola when I told her) but what followed was even better.

Beamed in on live video from the simultaneous CA World event in Vegas - AND IT WORKED! - we had CA CEO Mike Gregoire do a keynote on the new CA - and how right he is, this is an unrecognisable company from a few years back (and in a GOOD way) - and then the maddest panel debate ever. Hosted by Wired's Jason Tanz, it featured Mike himself, Biz Stone (co-founder of Twitter) and Jennifer "Legs" Hyman (co-founder of Rent The Runway clothes rental online).

http://www.ca.com/us/caworld14/live.aspx
How could this work? Well, Biz and Jennifer both looked like they'd been on something in the Green Room beforehand, but... they were all brilliant. Mike was comfortable in this company in a way that no senior exec from "old" CA would ever have been; Biz was the master of (laid back) common sense and Jennifer represented the new face of IT - i.e. not tech talk but pure business.

So what was a mainframe software company is now an Apps company...

The CA strap-line throughout was that your business is software and it's hard to disagree with this, short of someone making pottery from their own home and selling direct from the doorstep!

With a few of the established IT and networking giants floundering currently (you know who you are, guys!), CA is a good example of how you can reinvent yourself for the new IT economy; business not tech - about time too! Even if it does put a few of us out of business...



Networking Innovation?

Been researching an article for CW's very own Cliff Saran while, by chance, also speaking with a number of IT investors - the research being on networking innovations; oh and, by another chance, also judging the networking category of an awards event and visiting a Cambridge Wireless innovation awards event...

That's a lot of potential networking innovations to witness; except networking ain't what it used to be - much of the new development is in elements tangential to networking, rather than at the heart of it all. Does this mean that networking is essentially done and dusted? That it all just... works?

Obviously a lot of the focus is on "the cloud" (everyone looks up to the sky for some reason when you say that) and end-to-end optimisation, especially at the NAS/storage end, which is fine, but not much else to report on that's genuinely new and genuinely "networking".

That said, a few things are being properly reinvented:

1. Network Management - to cope with cloudy, hybrid networks.
2. User Interfaces - finally getting less "IT" and more "human being" - examples I've tested recently here include jetNEXUS Load-Balancing/ADC - once a mega-techie product area and now simplified to such an extent that we're going to ask my mum to configure the next test (you think I'm kidding?) - and Sunrise Software's Sostenuto ITSM platform - now truly "admin person friendly" and with whacko new features (at least by "Helpdesk" software standards) such as gamification (both reports are on the broadband-testing.co.uk website, so check 'em out).
3. WiFi - well, not so much re-invented but now with real scalability - this stuff really does work, even if hotels still try to prove the complete opposite...

And, of course, we have the latest set of "router" replacement technologies, but now we're talking optical tech, not exactly branch-office stuff...

And, finally, why ARE investors currently so obsessed with "Apps"? I mean, for every one that makes zillions...




Fear, Certainty and Less Doubt

Just back from the latest Netevents industry symposium in Portugal; I won't bore you with the details of the journey from hell to get there (31hrs for an alleged 11hr journey, including getting to know Lisbon airport and its indigenous, overnight mosquito population rather too well) but, suffice to say, it was more than worth the effort.

Yes, there was the usual tsunami of TLAs - SDN (of course), NFV (Network Function Virtualisation, if you really care!), OTT, MEF, IDC, ITV, BBC... - but this time around there was some meaning being attached to the acronyms - not just meaning, but - dare I say it - actual product. So, that no two vendors really agree on what SDN means is largely irrelevant; the important point is that they are building solutions that a) resolve the issues of going from a private network, to cloud (public or private) and back again (and all stops in between) and b) theoretically - and genuinely - enable us to dump those dreaded router-based networks (sorry Cisco, Juniper etc) at some point. There are two devices on a network that give us hell - routers and printers (sorry HP etc). Smartphones are replacing the need for printers (airport check-in etc) and so an "SDN" based solution negates the need for routing - kind of NGR really (you can work that one out for yourselves).

Had a great time onstage in a Security panel debate with my fine friend Jan Guldentops and the affable Jordi from CA, chaired by the equally affable Bernt Ostergaard (Danish, but granted the freedom of Yorkshire as he lived there as a kid) - Jan was keen to point out that, regardless of whether outsourcing ("clouding"?) security or keeping it in-house - or combining the two - the key element to said security strategy is still good old-fashioned common sense. We had already debated over breakfast whether SaaS should - in this case - stand for Security as a Service, when I hit on the NEXT BIG THING - CSaaS, or Common Sense as a Service - you heard it here first!

So that covers the certainty and less doubt, what about the fear? Well - was I the only one who found the MEF (Metro Ethernet Forum) presentation, Bob Metcalfe and all, of "The Third Network" just a little on the scary side? You sensed that, during the part-live/part real-time video launch, that they might be pulsing "thoughts" into your brainwaves. Or maybe they're just not very good actors...

And, finally, "Paradigm Shift" is still alive and well. Twenty-five years and still going strong!


Shopping In A WiFi Cloud...

"Cloud" and "WLAN" or "WiFi" are not, to date, IT terms that are typically seen in tandem, but Tallac Systems, a new venture for a number of ex-HP - er, yes, let's say it - veterans (I reckon I can out-run you guys if necessary), is looking to create one from t'other.

Some of the basic target scenarios here - for example, a multi-tenant building, or a shopping mall equivalent - are not new; we tested this kind of application with the likes of Trapeze Networks a decade or so ago - but the way in which Tallac is approaching this kind of solution IS different. Have a look at the Tallac architecture, for example, to get an idea of where the boys are coming from, combining elements of SDN (OpenFlow) with Tallac's own virtualisation model and open APIs to boot:

http://www.tallac.com/architecture

Of course, every start-up has to have a new variant on a marketing spin; in this case it's SDM (not N, this was not a typo) or Software Defined Mobility. Get beyond the marketing BLX and the basics make sense:

  • Manage entire Wi-Fi network from a single dashboard
  • Control multi-tenant Wi-Fi networks, applications and devices
  • Application-based virtual networks
  • Cost effective 3rd party hardware
  • OpenFlow enabled API
So, the next step for me is to get some hands-on experience and the "seeing is believing" proof point. Watch this space...




Next Gen Network Management

IT and networking reinvents itself partially or wholly every few years, and here we are again now - distributed, virtualised, cloud (private and public) based, hybrid networks... So where does this leave traditional Network Management (NetMan) applications?

Every few years, the topic of "next generation" NetMan crops up once more and here we are again right now, thanks to the distributed, somewhat cloudy and virtualised nature of contemporary network deployments. I mean, just how do you manage a network "entity" if you don't know where it is?

I recently finished testing with a genuinely fascinating start-up called Moogsoft (the report is available from the broadband-testing.co.uk website), who also featured in an article I did recently for Computer Weekly on managing external, virtualised networks (it's on the CW website somewhere!). The work really did bring home to me just HOW different it is attempting to manage a global network in 2014, compared to even 10 years ago. Way back in '99, with the emergence of network optimisation and security products, I tried to set up the NGNMF - Next Generation Network Management Forum - with a view to getting the network management specialists - the BMCs, CAs, HPs, IBM Tivolis etc of this world - to outline how they would advance their software solutions to cope with the - then - new generation of networks being deployed.

15 years on and that task is infinitely more challenging. I was speaking last week with Dan Holmes, director of product management at CA, about this very topic. Dan has been through as many of the phases of network management development as I have, in his case starting with the then-Cabletron Spectrum manager, one of the first products to try and bring AI into the equation for resolving networking problems. Dan acknowledged that the change in networking infrastructure is requiring a change of approach by all the major NetMan vendors, and pointed to a lack of standards as just one of many issues to resolve. He described the fundamental difference as this: whereas NetMan was previously focused from the inside looking out, now it has to refocus from the outside looking in; in other words, the starting point is the bigger picture and you have to drill down to individual services, threads, user conversations, transactions... That's a hell of a lot of "data" to manage - application monitoring and similar tools can do certain tasks, but they are not the complete answer - check out the Moogsoft report to see how radical a solution is seemingly required in order to be that latter-day answer.

And if that means the major NetMan vendors are all playing catch-up currently, it'll be interesting to see how fast each or all of them can adapt to the new generation of networks being rapidly built out there in the ether at the moment...







Finally Solving Network and Storage Capacity Planning?


Interesting how some elements of IT seem to be around forever without being cracked.

I remember working with a couple of UK start-ups in the 90s on network, server and application capacity planning, automation of resource allocation etc - and the problem was that the rate of change always exceeded our capabilities to keep up. Moving into the virtualised world just seemed to make the trick even harder.

Now, I'm not sure if IT has slowed down (surely not!) or whether the developers are simply getting smarter, but there do seem to be solutions around now to do the job. Latest example is from CiRBA - where the idea is to enable a company to see the true amount of server and storage resources required versus the amount that is currently allocated by application, department or operating division in virtualised and cloud infrastructures, not simply in static environments. The result? Better allocation and infrastructure decisions, reducing risk and eliminating over-provisioning, at least if they use it correctly!

If it resolves the everlasting issue of over-provisioning and the $$$$ that goes with it, then praise be to the god of virtualisation... who's called what, exactly? So the idea with CiRBA's new, and snappily titled, Automated Capacity Control software is to actively balance capacity supply with application demand by providing complete visibility into server, storage and network capacity, based on both existing and future workload requirements. The software is designed to accurately determine true capacity requirements based on technical, operational and business policies, as well as historical workload analysis - all of which is required to get the correct answer pumped out at the end.
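Not CiRBA's actual method, obviously (that's their IP), but the basic allocated-versus-required arithmetic can be sketched in a few lines - the figures and the 25% headroom policy below are purely illustrative:

```python
def required_capacity(samples, headroom=0.25):
    """Required capacity = historical peak workload plus a policy-driven
    headroom margin - a crude stand-in for the policy-based workload
    analysis described above."""
    return max(samples) * (1 + headroom)

allocated_gb = 512
usage_gb = [120, 180, 200, 160, 195]   # hypothetical daily peak usage, GB
needed = required_capacity(usage_gb)   # 200 GB peak + 25% headroom = 250 GB
print(f"over-provisioned by {allocated_gb - needed:.0f} GB")  # 262 GB reclaimable
```

The interesting (hard) part is everything this sketch leaves out: trending future workloads and applying per-department business policies rather than a single flat headroom figure.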

So, bit by bit, it looks like we're cracking the 2014 virtualised network management problem. Look out for an article by me on deploying and managing distributed virtualised networks in CW in the near future...

 

Enhanced by Zemanta

Benchmarking Integration...


Just finished some testing at test equipment partner Spirent's offices in glamorous Crawley with client Voipex - some fab results on VoIP optimization so watch this space for the forthcoming report - and it made me think just how different testing is now.

In the old days of Ethernet switch testing and the like, it was all very straightforward. Now, however, we're in the realms of multiple layers of software all delivering optimisation of one form or another, such as the aforementioned Voipex, but equally with less obviously benchmarked elements such as business flow processes. Yet we really do need to measure the impact of software in these areas in order to validate the vendor claims. 

One example is with TIBCO - essentially automating data processing and business flows across all parts of the networks (so we're talking out to mobile devices etc) in real-time. Data integration has always been a fundamental problem - and requirement - for companies, both in terms of feeding data to applications and to physical devices, but now that issue is clearly both more fundamental, and more difficult, than ever in our virtual world of big data = unorganised chaos in its basic form.

TIBCO has just launched the latest version of its snappily-named ActiveMatrix BusinessWorks product and the company claims that it massively increases the speed with which new solutions can be configured and deployed - a single platform to transform lots of data into efficiently delivered data, and lots of other good stuff. In an Etherworld that is now made up of thousands of virtual elements, and that is constantly changing in topology, this is important stuff.

As TIBCO put it themselves, "Organisations are no longer just integrating internal ERP, SaaS, custom or legacy applications; today they're exposing the data that fuels their mobile applications, web channels and open APIs." Without serious management and optimisation that's a disaster waiting to happen.

 Just one more performance test scenario for me to get my head around then....

 


Reinventing Network Management

Network management is the proverbial bus syndrome - nothing shows up for ages and then a whole queue of them arrives at once - in this case interesting technologies, but here's the really interesting one I'm about to start testing with - Moogsoft.

Think radical reinvention of network management and you're getting there - the name and website give some clues, I guess, that this isn't mainstream, me-too stuff...

So here's the problem - network management, even in its modern incarnation of application performance monitoring and all the variations on a theme, is all about some concept of a network configuration being stable and predictable. So the idea is that you, over time, build up a "rich database" of information regarding all elements of the network - hardware and software - so that there's a level of intelligence to access when identifying problems (and potential problems). OK, except that, if you have a network deployment that is part cloud (or managed service of some description), part-virtualised and otherwise outsourced to some extent, how can you possibly know what the shape of that network is? Even as the service provider you cannot...

It therefore doesn't matter how much networking data you collect - effectively you have to start from scratch every time, because the network is dynamic, not static, so any historical data is not necessarily correct. And we all know what happens if you make decisions based on inaccurate data... Moogsoft therefore says, forget about the existing methodologies - they don't work any longer. Instead it uses algorithm-based techniques to establish concurrent relationships between all aspects of the network when an anomalous situation is identified - looking at every possible cause-effect possibility. And it works in a true, collaborative environment - after all, network management is not detached from other aspects of the network in the same way that user A in department X is not detached from user B in department Y. So every "element" of the "network" is relevant and involved.
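To be clear, Moogsoft's actual algorithms are their secret sauce, but the general flavour of the approach - clustering a flood of raw alerts into a handful of candidate "situations" rather than treating each one in isolation - can be sketched crudely. Here, simple temporal proximity stands in for the real correlation logic, and all the alert data is invented:

```python
def correlate(alerts, window_s=30):
    """Group raw (timestamp, source, message) alerts into candidate
    'situations' when each arrives within window_s seconds of the
    previous one - a toy stand-in for algorithmic event clustering."""
    situations = []
    for ts, source, msg in sorted(alerts):
        if situations and ts - situations[-1][-1][0] <= window_s:
            situations[-1].append((ts, source, msg))
        else:
            situations.append([(ts, source, msg)])
    return situations

alerts = [(100, "router1", "link down"), (105, "switch3", "port flap"),
          (112, "app7", "latency spike"), (900, "san2", "disk failing")]
print(len(correlate(alerts)))  # 2 situations from 4 raw alerts
```

The real trick, of course, is correlating across far more dimensions than time - topology, service, user impact - and doing it at serious scale.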

Sounds like an impossible to scale scenario? Well how about handling 170m incidents a day? And taking resolution time down from hours and days to minutes? Sounds too good to be true? Maybe so, but these are recorded results involving a famous, global Internet player.

Watch out for the Broadband-Testing report on the Moogsoft solution - should be somewhat interesting!!!!


Trouble-Ticketing - The Future Is Here!

IT doesn't so much go round in circles as overlapping rings - think the Olympic sign. That's to say, it does repeat itself but with additions and a slightly changed working environment each time.

So, for the past few decades we've had good old Helpdesk, Trouble-Ticketing and related applications in every half-decent sized network across the globe; incredibly conservative, does what it says on the tin applications, typically created by incredibly conservative, does what it says on the tin ISVs. Nowt wrong with that, but nothing to get excited about either.

Then came a chat last week with the guys from Autotask at a gig in Barca - these guys have been around for over a decade, building business steadily... until recently, that is, since when they've been expanding faster than the average Brit's waistline (and that's some expansion rate!).

So why? How can a humdrum, take-it-for-granted network app suddenly become "sexy"? Speaking with Mark Cattini, CEO of Autotask, a couple of points immediately make things clearer. One is that we have that rare example of a Brit in charge of a US company (since three years ago), and a Brit who's seen it all from both sides of the fence, pond and universe. So he understands the concept of "International". Secondly, we have an instance of a product having been written from day one as a SaaS application, long before SaaS was invented - think about the biz flow product I've spoken about (and tested) many times here - Thingamy - and it's the same story, just a complementary app that is all part of the "bigger picture".

The cloud, being forced on the IT world, is perfect for the likes of Autotask. It gives them the deployment and management flexibility that enables a so-called deluxe trouble-ticketing and workflow app to become a fundamental tool for the day-to-day running of a network (and a business) on a global scale. I was talking the other day with another ITSM client of mine, Laurence Coady of Richmond Systems, and he was saying how the cloud has enabled the company's web-enabled version of its ITSM suite to go global from an office in Hampshire, with virtually no sales and marketing costs involved, thanks to the likes of Amazon's cloud.

Mark Cattini spoke about his pre-Autotask days, including a long stint in International sales with Lotus Notes. I made the point that Notes created an entire sub-industry, with literally thousands of apps designed specifically to work with and support Notes - almost a pre-Internet Internet. While it seems absurd to say that something as "long-winded" as ITSM-related products can become the next Notes, think about it from a business/workflow perspective within a cloud infrastructure: given that there are open APIs to all this software (so anyone can join in), and given that no one really knows what "big data" is, we have a genuine infrastructure for building the next generation of networks on - real software defined networking, in other words!



Big Data = Big Transfer Speeds?

So - there's been all this talk about Big Data and how it's replaced classic transactional processing in many application instances.

This much is true - what hasn't been discussed much, however, is how this impacts on performance; big data - say digital video - has hugely different transfer characteristics to transactional processing and it simply doesn't follow that supplying "big bandwidth" means "big performance" for "big data" transfers.

For example - I'm currently consulting on a project in the world of Hollywood media and entertainment, where the name of the game is transferring digital video files as quickly (and accurately - all must be in sequence) as possible. The problem is that simply providing a 10Gig pipe doesn't mean you can actually fill it!

We've proved with tests in the past that latency, packet loss and jitter all have very significant impacts on bandwidth utilisation as the size of the network connection increases.

 

For example, when we set up tests with a 10Gbps WAN link and round trip latencies varying from 50ms to 250ms to simulate national and international (e.g. LA to Bangalore for the latter) connections, we struggled to use even 20% of the available bandwidth in some cases with a "vanilla" - i.e. non-optimised - setup.
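The arithmetic behind that shortfall is the bandwidth-delay product: a TCP sender can only have one window of unacknowledged data in flight per round trip, so throughput is capped at window size divided by RTT, however fat the pipe. A quick back-of-envelope sketch (the figures are illustrative, not from our test rig):

```python
def max_tcp_throughput_bps(window_bytes, rtt_s):
    """TCP can have at most one window in flight per round trip,
    so throughput is bounded by window size / RTT."""
    return window_bytes * 8 / rtt_s

# A default-ish 64 KB window over a 250 ms round trip:
print(max_tcp_throughput_bps(64 * 1024, 0.250) / 1e6)  # ~2.1 Mbps - nowhere near 10 Gbps

# Window needed to actually fill a 10 Gbps pipe at 250 ms (the bandwidth-delay product):
bdp_bytes = 10e9 / 8 * 0.250
print(bdp_bytes / 1e6)  # ~312 MB of window/buffer required
```

Which is exactly why untuned, "vanilla" setups struggle to use even a fraction of a long-fat link, before packet loss and jitter make things worse still.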

 

Current "behind closed doors" testing is showing performance of between 800MBps and 1GBps (that's gigabytes, not gigabits) on a 10Gbps connection, but we're looking to improve upon that.

 

We're also asking the question - can you even fill a pipe when the operational environment is ideal - i.e. low latency and minimal jitter and packet loss for TCP traffic? - and the answer is absolutely not necessarily; not without some form of optimisation, that is.


Obviously, some tweaking of server hardware will provide "some" improvement, but not significant in testing we've done in the past. Adam Hill, CTO of our client Voipex, offered some advice here:


"The bottom line is that, in this scenario, we are almost certainly facing several issues. The ones which ViBE (Voipex's technology) would solve (and probably the most likely of their problems) are:

 

1) It decouples the TCP throughput from the latency and jitter component by controlling the TCP congestion algorithm itself rather than allowing the end user devices to do that.

 

2) It decouples the MTU from that of the underlying network, so that MTU sizes can be set very large on the end user devices regardless of whether the underlying network supports such large MTUs."


Other things to consider are frame, window and buffer sizes, relating to whichever specific server OS is being used (this is a fundamental of TCP optimisation), but thereafter we really are treading on new ground. Which is fun. After all, the generation of WanOp products that have dominated for the past decade were not designed with 10Gbps+ links in mind.


Anyway - this is a purely "to set the ball rolling" entry and I welcome all responses, suggestions etc, as we look to the future and filling 40Gbps and 100Gbps pipes - yes, they will arrive in the mainstream at some point in the not massively distant future!








Anything New For 2013?

So here we are, already several weeks into 2013 and is there anything new to report on the networking front?

Not really - currently the same stories as we've been hearing for the past year or two - SDN, Cloud etc.... I am, at least, about to put some element of cloud to the test with Aryaka - WanOp as a cloud-based service. More details on this shortly, but we will be testing it as a regular customer; i.e. remote login via the Internet etc, so this will be a true user-style test case.

Also just finished some repeat testing with an old client, Voipex - the company has always had an excellent VoIP optimisation story but now it has added lots of data networking functionality that gives it a very different angle to the default WanOp players. The report will be appearing shortly on the Broadband-Testing website.

Meantime, back to the world of SDN etc - is anyone really buying into it properly at the moment, rather than just a bit of toe-dipping with OpenFlow etc? That question applies equally to end users and vendors... Or are we simply in another of those eras of solutions seeking problems?

Answers on the back of a hybrid real/virtual postcard in a dropbox at the end of your 'net connection!



Did You Know That HP Is A Networking Company?

Thus it has always been so - HP networking, AKA ProCurve in the "old days", has been a success in spite of its "parent" company. And today, amidst the doom and gloom financial results the company has posted, and all the Autonomy naming, blaming and shaming going on, I couldn't help but notice three little but significant words in one paragraph of one of the many HP/Autonomy stories around - this one in Microscope, focusing on HP losing money "everywhere". See if you can spot the three magic words in the snippet below...

"In its day-to-day business, HP revealed it had had another predictably awful quarter at Personal Systems, with revenue down 14% as the unit fought for its piece of the ever-shrinking PC market. Printing sales were down 5%, Services declined 6% and ESSN declined 9%, with growth in Networking offset by shrinkage in Industry Standard Servers and Storage, while Business Critical Servers dropped 25%."

Nothing changes...



Talari - It's Not Channel Bonding!


Some of you may have seen earlier blogs, and even the Broadband-Testing report, on our recently acquired US client Talari Networks, whose technology basically lets you combine multiple broadband Internet connections (and operators) to give you the five-nines levels of reliability (and performance) associated with them damnedly expensive MPLS-based networks, for a lot less dosh.

 You can actually connect up to eight different operators, though according to Talari, this was not enough for one potential customer who said "but what if all eight networks go down at the same time?" Would dread having to provide the budget for that bloke's dinner parties - "yes I know we've only got four guests, but I thought we should do 24 of each course, just in case there's a failure or two..."

Anyway - one potential issue (other than paranoia) for some was the entry cost; not crazy money but not pennies either. So, it makes sense for Talari to move "up" in the world, so that the relative entry cost is less significant and that's exactly what they've done with the launch of the high(er)-end Talari Mercury T5000 - a product designed for applications such as call centres that have the utmost requirements for reliability and performance and where that entry cost is hugely insignificant once it saves a few outages; or even just the one.

If you still haven't got wot they do, in Talari-ese it provides "end-to-end QoS across multiple, simultaneous, disparate WAN networks, combining them into a seamless constantly monitored secure virtual WAN".  Or, put another way, it gives you more resilience (and typically more performance) than an MPLS-based network for a lot lower OpEx.

So where exactly does it play? The T5000 supports bandwidth aggregation up to 3.0Gbps upstream/3.0 Gbps downstream across, of course, up to eight WAN connections. It also acts as a control unit for all other Talari appliances, including the T510 for SOHO and small branch offices, and the T730, T750 and T3000 for large branch offices and corporate/main headquarters, for up to 128 branch connections.
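To give a feel for the general idea - and this is emphatically not Talari's actual algorithm, just a toy sketch with invented link stats - the trick with aggregating disparate WAN connections is continuously steering traffic onto whichever live link is currently behaving best:

```python
# Hypothetical per-link quality stats, of the kind a multi-WAN appliance
# might gather by constant monitoring (all figures invented).
links = {"mpls": {"loss": 0.001, "latency_ms": 20, "up": True},
         "dsl":  {"loss": 0.02,  "latency_ms": 45, "up": True},
         "lte":  {"loss": 0.05,  "latency_ms": 60, "up": False}}

def best_link(links):
    """Pick the live link with the lowest (loss, latency) score -
    a crude stand-in for quality-based per-packet path selection."""
    live = {name: l for name, l in links.items() if l["up"]}
    return min(live, key=lambda n: (live[n]["loss"], live[n]["latency_ms"]))

print(best_link(links))  # mpls
```

The real products do this per-packet and in both directions, which is where the "seamless virtual WAN" claim comes from; a dead link simply drops out of the candidate set.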

It's pretty flexible, then, and just to double-check, we're going to be let loose on the new product in the new year, so watcheth this space...



Physical Software Defined Networking...


Following on from last week's OD of SDN at Netevents, we have some proper, physical (ironically) SDN presence in the launch of an SDN controller from HP.

 

This completes the story I covered this summer of HP's SDN solution - the Virtual Application Network - which we're still hoping to test asap. Basically, the controller gives you an option of proprietary or open (OpenFlow), or both.

 

The controller, according to the HP blurb, moves network intelligence from the hardware to the software layer, giving businesses a centralised view of their network and a way to automate the configuration of devices in the infrastructure. In addition, APIs will be available, so that third-party developers can create enterprise applications for these networks. HP's own examples include Sentinel Security - a product for network access control and intrusion prevention - and some Virtual Cloud Networks software, which will enable cloud providers to bring to market more automated and scalable public-cloud services.

 

Now it's a case of seeing is believing - bring it on HP!


And here's my tip for next buzz-phrase mania - "Data Centre In A Box"; you heard it here (if not) first...


Paradigm 13 Shift 2 SDN (lost count)

Such was the count at the end of Day 1 of Netevents Portugal. Thirteen "paradigms" and two "paradigm shifts". Surprisingly there were no "out of the boxes" and only one "granularity" reference. It should also be noted that the "p" word was used by at least four different nationalities, so it's not a single-country syndrome.

But the winner for "fully embraced buzz-phrase" has to be SDN or Software Defined Network, something we've spoken about in this blog on more than one occasion, including yesterday. The thing is, regardless of whether it is simply what network management should have been all along (or not), something really IS going on here - real products, services, open (OpenFlow) and proprietary (the rest) and pilot customers. It is, therefore, realistic to suggest that we are going to move into phase III of IT; mainframe, then networked and now SDN - i.e. fully separating the control and management of network traffic (users and application) from the physical components - switches, routers etc. For it to be truly worthwhile, SDN has to enable us to manage the network on a per user, per application, per connection (end to end) basis.
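That per-user, per-application, end-to-end control idea is easiest to picture as a flow table: the controller installs match/action rules, and the switches just do lookups. A toy, OpenFlow-flavoured sketch (the users, apps and actions are all invented for illustration):

```python
# A toy flow table: match fields -> action, illustrating the separation of
# control (who writes these rules) from forwarding (who applies them).
flow_table = [
    ({"user": "alice", "app": "voip"}, "priority-queue"),
    ({"app": "backup"}, "rate-limit"),
    ({}, "forward-normal"),   # wildcard catch-all installed by the controller
]

def lookup(packet):
    """Return the action of the first flow entry whose fields all match
    the packet's metadata; the catch-all means nothing falls through."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"

print(lookup({"user": "alice", "app": "voip"}))  # priority-queue
print(lookup({"user": "bob", "app": "backup"}))  # rate-limit
```

The point being: change the policy and you rewrite table entries from one central place, rather than reconfiguring every box in the path.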

Is this feasible? Certainly. Does anyone have a true, fully-working solution? Watch this space - at least until Wednesday, when I can reveal one client of Broadband-Testing who has all the components; now just let us loose on the testing of said solution...







IT Is Prawn Cocktail ?

Bem Vindo from the Algarve, at the latest Netevents symposium.

One of my favourite topics in networking (and IT in general) is how often we revisit old "recipes". In the same way that prawn cocktail has become trendy again, so it is with networking and Netevents. Two panel debates in, we've already had seven "paradigms" (IT buzzword of the year, 1995) and several "visions" and a few "hype cycles".

Debate topics are pretty well predictable:

- BYOD
- SDN
- Mobile + Cloud = opportunity or risk?
etc...

The focus of the BYOD debate (and let's face it, people have been bringing their personal laptops into work and copying data onto them to work on at home, out of office hours, since the early '90s) was device management and security. But is the real issue here not the device, but the kind of applications that people are using on them, and adopting and managing those? In other words, at what point do applications such as Facebook become "enterprise" applications, and how do we then manage those, rather than simply block them (and the devices themselves)?

Now we're onto the subject of SDN - Software Defined Networking. The panel talk is about automation, removing the need for manual administration, control of mixed-vendor networks and so on. Isn't this called vendor-independent network management - i.e. what all the net' management vendors of the early '90s set out to achieve? So, it didn't get there - will SDN?

The debate goes on...



 

Tech Trailblazers Update

| No Comments | No TrackBacks
| More
Just a quickie update to all you vendors with mega technology out there re: the Tech Trailblazers awards wot I blogged about earlier this summer.

Entry levels have proved (as did Top Gear) that you can't have too many awards competitions, and entries are still open until 12th September (for late birds - the early-bird deadline has already closed) in the following categories, just to remind you all:

  • Big Data Trailblazers
  • Cloud Trailblazers
  • Emerging Markets Trailblazers
  • Mobile Technology Trailblazers
  • Networking Trailblazers
  • Security Trailblazers
  • Storage Trailblazers
  • Sustainable IT Trailblazers
  • Virtualization Trailblazers
There's over a million dollars up for grabs, so it's well worth entering. To do so, just go to:


Simple as...


At The End Of The Network

| No Comments | No TrackBacks
| More

One of the problems we've faced in trying to maximise throughput in the past has not been at the network - say WAN - level, but in what happens once you get that (big) data off the network and try to store it, at the same speed, directly onto the storage.

We saw this limitation, for example, last year, when testing with Isilon and Talon Data using traditional storage technology - the 10-gigabit line speeds we were achieving with Talon Data just couldn't be sustained when transferring all that data onto the storage cluster. While we believe that regular SSD (Solid State Disk) technology would have provided a slight improvement, we still wouldn't have been talking consistent, top-level performance end-to-end.

 

So it's with some interest - to say the least - that I've started working with a US start-up, Constant Velocity Technology, that reckons it has the capability to solve exactly this problem. We're currently looking to put together a test with them:  http://johnpaulmatlick.wix.com/cvt-web-site-iii - and another "big data" high-speed transfer technology client of mine, Bitspeed, with a view to proving we can do 10Gbps, end-to-end, from disk to disk.

 

Even more interesting, this is happening in "Hollywood" in one of the big-name M&E companies there. However, if any of you reading this are server vendors, then please get in touch as we need a pair of serious servers (without storage) to assist with the project!

 

Life beyond networking...


Technology At What Prize?

| No Comments | No TrackBacks
| More
Just wanted to give everyone with a good tech idea up their bit of T-shirt that covers the upper arm - given that it is summer :-) - a heads up about a new IT ideas competition called Tech Trailblazers - www.techtrailblazers.com - organised by my (and many others') old PR mate, Rose Ross. Well, when I say "old" I mean, er, long-standing...

So, the idea is - if you have a tech startup wot has got something truly interesting to offer in the current tick box fields such as clouds, emerging markets, virtualisation, sustainability and mobile, as well as "classics" such as networking, storage and security, then take a look at the website listed above and see if it makes sense to enter (go on, you know it does).

As one of the (many) judges, I will - of course - be open to casual bribes such as free lunches in Michelin-starred addresses while receiving The Full Monty as to why your tech is prize-worthy...  It's amazing how an excellent crab soufflé and a few glasses of Menetou Salon, or café gourmand and Tenareze Armagnac can heighten the understanding of new technologies. Someone should do a scientific investigation of the process. I'm happy to volunteer my services...

Anyway - I'll be updating on the competition as it develops - while continuing my focus on optimisation technologies that defeat the laws of physics - i.e. go beyond linespeed, starting with something called Constant Velocity Technology that I'm being given the low-down on this week. Watch this (virtual) space...


Hyperoptic 1 gigabit broadband, a user perspective

| No Comments | No TrackBacks
| More

In this guest blog post Computer Weekly blogger Adrian Bridgwater tries out a new 1 Gbps broadband service.

In light of the government's push to extend "superfast" broadband to every part of the UK by 2015, UK councils have reportedly been given £530m to help establish connections in more rural regions as inner city connectivity continues to progress towards the Broadband Delivery UK targets.

Interestingly, telecoms regulatory body Ofcom has defined "superfast" broadband as connection speeds of greater than 24 Mbps. But making what might be a quantum leap in this space is Hyperoptic Ltd, a new ISP with an unashamedly biased initial focus on London's "multiple-occupancy dwellings" as the target market for its 1 gigabit per second fibre-based connectivity.

Hyperoptic's premium 1 gig service is charged at £50 per month, although more modest 100 Mbps connectivity is also offered at £25 per month. Lip service is also paid to a 20 Mbps contract at £12.50 per month for customers on a budget who are happy to sit just below the defined "superfast" broadband cloud base.

Hyperoptic's managing director Dana Pressman Tobak has said that there is a preconception that fibre optic is expensive and therefore cannot be made available to consumers. "At the same time, the UK is effectively lagging in our rate of fibre broadband adoption, holding us back in so many ways -- from an economic and social perspective. Our pricing shows that the power of tomorrow can be delivered at a competitive and affordable rate," she said.

Cheaper than both Virgin's and BT's comparable services, Hyperoptic's London-based service and support crew give the company an almost cottage-industry feel, making personal visits to properties to oversee installations as they do. While this may be a far cry from Indian and South African call centres, the service is not without its teething troubles, and new physical cabling within residents' properties is a necessity for those who want to connect.

Upon installation users will need to decide on the location of their new router, which may be near their front door if cabling has only been extended just inside the property. This will then logically mean that the home connection will depend on WiFi which, at best, will offer no more than around 70 Mbps - the practical upper limit of the 802.11n wireless protocol.

Sharing the juice out

It is at this point that users might consider a gigabit powerline communications option to send the broadband juice around a home (or business, for that matter) premises, using the electrical wiring already in place in a home or apartment building.

Gigabit by name is not necessarily gigabit by nature in this instance, unfortunately: the "gigabit" in many of these products' names derives from the 10/100/1000 Mbps Ethernet port they have inside. If you buy a 1 gigabit powerline adapter today you'll probably notice the number 500 used somewhere in the product name - and this is the crucial number to be aware of, as it is a total made up of both upload and download speeds added together, i.e. 250 Mbps in each direction is all you can realise from the total 1 gigabit you have installed via the powerline route at this stage.
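As a back-of-envelope sketch, the arithmetic behind that "500" on the box works out as follows (using the figures quoted in this post):

```python
# Back-of-envelope sketch of the powerline numbers discussed above:
# the "500" on the box is upload + download combined, so each
# direction gets at most half - and real-world results come in lower.

advertised_aggregate_mbps = 500                          # the number on the box
per_direction_ceiling = advertised_aggregate_mbps / 2    # best case each way

measured_mbps = 180            # roughly what our tests achieved per direction
ofcom_superfast_mbps = 24      # Ofcom's "superfast" threshold

print(per_direction_ceiling)                            # 250.0
print(round(measured_mbps / ofcom_superfast_mbps, 1))   # 7.5
```

So even the 180 Mbps actually measured over powerline comfortably clears Ofcom's "superfast" bar by a factor of seven or so.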

Our tests showed speeds of roughly 180 Mbps in both directions, using a new iMac running Apple Mac OS X Lion. Similar results were replicated on a PC running 64-bit Windows 7.

[Image 1: Hyperoptic.jpg] The above image shows a wireless connection test, while the image below shows a hard-wired connection.

[Image 2: Hyperoptic.jpg] These criticisms being levied, powerline manufacturers will no doubt expand their product lines to accommodate speeds and standards at the edge of this market's current delivery capabilities. Further to this, Hyperoptic's 180 Mbps via powerline is only a fraction of what you can experience if your cabling geography allows it - and it is still over seven times faster than Ofcom's "superfast" 24 Mbps threshold.

Hyperoptic's service also includes an option to port your existing phone line over to its network, which takes two to three weeks. The company asserts that it is capable of transferring your old phone number over to its service or supplying you with a new one; the former option takes slightly longer but at no extra cost.

So in summary

It would appear that some of Hyperoptic's technology is almost ahead of its time, in a good way. After all, future proofing is no bad thing: house design architects looking to place new cable structures in 'new build' properties, and indeed website owners themselves, are arguably not quite ready yet for 1 gigabit broadband.

As the landscape for broadband ancillary services and for high-performing, transaction-based and/or HTML5-enriched websites matures, we may witness a "coming together" of these technologies. Hyperoptic says it will focus next on other cities outside London, and so the government's total programme may yet stay on track.