In this guest blog post Computer Weekly blogger Adrian Bridgwater tries out a new 1 Gbps broadband service.
In light of the government's push to extend "superfast" broadband to every part of the UK by 2015, UK councils have reportedly been given £530m to help establish connections in more rural regions as inner city connectivity continues to progress towards the Broadband Delivery UK targets.
Interestingly, telecoms regulatory body Ofcom has defined "superfast" broadband as connection speeds of greater than 24 Mbps. But making what might be a quantum leap in this space is Hyperoptic Ltd, a new ISP with an unashamedly biased initial focus on London's "multiple-occupancy dwellings" as the target market for its 1 gigabit per second fibre-based connectivity.
Hyperoptic's premium 1 gig service is charged at £50 per month, although a more modest 100 Mbps connection is also offered at £25 per month. Lip service is also paid to a 20 Mbps contract at £12.50 per month for customers on a budget who are happy to sit just below the defined "superfast" broadband cloud base.
Hyperoptic's managing director Dana Pressman Tobak has said that there is a preconception that fibre optic is expensive and therefore cannot be made available to consumers. "At the same time, the UK is effectively lagging in our rate of fibre broadband adoption, holding us back in so many ways -- from an economic and social perspective. Our pricing shows that the power of tomorrow can be delivered at a competitive and affordable rate," she said.
Cheaper than both Virgin and BT's comparable services, Hyperoptic's London-based service and support crew give the company an almost cottage industry feel, making personal visits to properties to oversee installations as they do.
While this may be a far cry from Indian and South African based call centres, the service is not without its teething troubles, and new physical cabling within residents' properties is a necessity for those who want to connect.
Upon installation users will need to decide on the location of their new router, which may be near their front door if cabling has only been extended just inside the property. This will then logically mean that the home connection will be dependent on WiFi, which, at best, will offer real-world throughput of no more than around 70 Mbps over the 802.11n wireless protocol.
Sharing the juice out
It is at this point that users might consider a gigabit powerline communications option to send the broadband juice around a home (or business, for that matter) premises using the electrical wiring already hard wired into a home or apartment building.
Unfortunately, gigabit by name is not necessarily gigabit by nature in this instance. The word features in many of these products' names, but it is derived from the 10/100/1000 Mbps Ethernet port that they have inside, not from the powerline link itself.

If you buy a "1 gigabit" powerline adapter today you'll probably notice the number 500 used somewhere in the product name. This is the crucial number to be aware of: it is a total made up of upload and download speeds added together, i.e. 250 Mbps in each direction is all you can realise from the 1 gigabit you have installed at this stage via the powerline route.
Our tests show that speeds of roughly 180 Mbps were achieved in both directions using a new iMac running Apple Mac OS X Lion. Similar results were replicated on a PC running 64-bit Windows 7.
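The arithmetic above is worth spelling out. A minimal sketch (the variable names are mine; the figures come from the adapter marketing and our own tests):

```python
# "500" on the box is an aggregate of both directions combined.
aggregate_mbps = 500
per_direction_mbps = aggregate_mbps / 2   # 250 Mbps theoretical each way

# Roughly what our tests measured in each direction.
measured_mbps = 180

# Fraction of the theoretical per-direction figure actually achieved.
print(measured_mbps / per_direction_mbps)  # 0.72
```

So even against the halved, per-direction figure, real-world powerline throughput came in at around 72% in our tests.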
So in summary
It would appear that some of Hyperoptic's technology is almost before its time, in a good way. After all, future proofing is no bad thing for house design architects looking to place new cable structures in 'new build' properties; and indeed website owners themselves are arguably not quite ready yet for 1 gigabit broadband.
As the landscape for broadband ancillary services and high performing transactions-based and/or HTML5-enriched websites now matures we may witness a "coming together" of these technologies. Hyperoptic says it will focus next on other cities outside of the London periphery and so the government's total programme may yet stay on track.
Back from Interop and my 'beloved' Vegas from which I escaped just in time before being air-con'd to death as my ongoing cough continues to remind me. Is it possible to sue "air"?
I don't know - maybe there are people out there (mainly the people who were "out there") who enjoy the delicious contrast of walking in from 42c temperatures into 15c, time and again, then in reverse, and the joy of being able to hear at least three different sorts of piped music at any one time, the exhilaration for the nostrils of seven or more simultaneous smells, 24 hours a day? Must be me being picky. I like my sound in stereo at least, but all coming from the same source...
Anyway - reflections on the show itself; easy when there's less smoke and more mirrors AKA taking away the hype. What I found was a trend - that others at the show also confirmed - towards making best of breed "components" again, rather than trying to create a complete gizmo. For example, we had Vineyard Networks creating a DPI engine that it then bolts on to someone's hardware, such as Netronome's dedicated packet processing architecture, that then sits - for example - on an HP or Dell blade server. I like this approach - it's what people were doing in the early '90s; pushing the boundaries, making networking more interesting - more fun even - and simply trying to do something better.
There are simply more companies doing more "stuff" at the moment. Take a recently acquired client of mine who I met out there for the first time, Talari Networks, enabling link aggregation across multiple different service providers - not your average WanOp approach. A full report on the technology has just been posted on the Broadband-Testing website: www.broadband-testing.co.uk - so please go check it out. Likewise, a report from Centrix Software on its WorkSpace applications. Reading between the lines on what HP is able to do with its latest and greatest reinvention of networking - Virtual Application Networking or VAN - as we described on this blog last week, along with buddy F5 Networks, I reckon there is just one piece of the proverbial jigsaw missing and that is something that Centrix can most definitely provide with WorkSpace. The whole of VAN is based around accurately profiling user and application behaviour, combining the two - in conjunction with available bandwidth and other resource - to create the ideal workplace on a per user, per application basis at all times, each and every time they log into the network, from wherever that may be.
Now this means that you want the user/application behaviour modelling to be as accurate as possible, so your starting point has to be, to use a technical term much loved by builders, "spot on". Indeed, there is no measurement in the world more accurate than "spot on". While HP's IMC is able to provide some level of user and application usage analysis, I for one know that it cannot get down to the detailed level that Centrix WorkSpace can - identifying when a user loads up an application, whether that application is "active" or not during the open session and when that application is closed down... and that's just for starters. I feel a marriage coming on...
"M" might stand for Murder in the London theatre world, but the ultimate "M" word in IT has to be "Migration".
Apply this word to the challenge that is moving from IPv4 to IPv6 and you can probably hear the howls of despair and mistake them for an attempted murder. There are, however, some fundamental tools/advanced features of IPv6 that are designed to ease this process. These have been adopted to a lesser or greater degree by different vendors, so it's worth noting the availability of these features when shopping around for IPv6 assistance and future proofing.
We'll start with three absolutely fundamental ways to manage your IP addresses and how these work in a migratory environment.
NAT: NAT (Network Address Translation) has become a pretty fundamental tool for alleviating the issues with limited IPv4 address space, with most companies enabling it on their network gateways and other devices. So how to transition this to IPv6? First, there is what is known as Carrier Grade NAT (AKA Large Scale NAT), whereby carriers/ISPs can allocate multiple clients to a single IPv4 address, standardising behaviour for IPv4 NAT devices and the applications running over them, using features such as "fairness" mechanisms - per-user port quotas and the like.
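The port quota idea can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation; the block size and starting port are made-up numbers:

```python
# Carrier Grade NAT fairness sketch: many subscribers share one public
# IPv4 address, each confined to a fixed block of source ports.
PORTS_PER_SUBSCRIBER = 2048   # illustrative quota per subscriber
FIRST_PORT = 1024             # skip the well-known port range

def port_block(subscriber_index: int) -> range:
    """Return the source-port range allocated to one subscriber."""
    start = FIRST_PORT + subscriber_index * PORTS_PER_SUBSCRIBER
    return range(start, start + PORTS_PER_SUBSCRIBER)

print(port_block(0))  # range(1024, 3072)
print(port_block(1))  # range(3072, 5120)
```

With a quota of 2,048 ports each, a single shared IPv4 address can serve around 31 subscribers before the 65,535-port space runs out - which is exactly why the quota ("fairness") mechanism matters.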
We also have specific transition technologies such as NAT64. This is a mechanism that allows IPv6 hosts to communicate with IPv4 servers. The NAT64 server is the endpoint for at least one IPv4 address and an IPv6 prefix (typically a /96, which leaves 32 bits free). The IPv6 client embeds the IPv4 address it wishes to communicate with in those final 32 bits and sends its packets to the resulting address. The NAT64 server then creates a NAT mapping between the IPv6 and the IPv4 address, allowing them to communicate.
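That embedding can be shown concretely. The sketch below uses Python's standard ipaddress module and the well-known NAT64 prefix 64:ff9b::/96 defined in RFC 6052; the IPv4 address is a documentation example:

```python
import ipaddress

# NAT64 address synthesis: place the 32-bit IPv4 address in the last
# 32 bits of the /96 NAT64 prefix.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")  # RFC 6052 well-known prefix

def synthesise(ipv4: str) -> ipaddress.IPv6Address:
    v4_bits = int(ipaddress.IPv4Address(ipv4))
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | v4_bits)

print(synthesise("192.0.2.33"))  # 64:ff9b::c000:221
```

An IPv6-only client simply sends traffic to that synthesised address, and the NAT64 box at the prefix boundary does the translation.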
DNS: As with NAT64, there is a companion technology for DNS: DNS64. The IPv6 end user's DNS requests are received by the DNS64 device, which resolves the requests.
If there is an IPv6 DNS record (AAAA record), then the resolution is forwarded to the end user and they can access the resource directly.
If there is no IPv6 address but there is an IPv4 address (A record), then DNS64 converts the A record into an AAAA record using its NAT64 prefix and forwards it to the end user. The end user then accesses the NAT64 device that NATs this traffic to the IPv4 server.
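A toy version of that DNS64 decision ties the two cases together. The record dictionaries and hostnames here are invented for illustration; a real DNS64 resolver queries upstream DNS servers rather than a lookup table:

```python
import ipaddress

NAT64_PREFIX = int(ipaddress.IPv6Address("64:ff9b::"))  # well-known /96 prefix

def resolve(name, aaaa_records, a_records):
    """Return an IPv6 address for name, synthesising one if only an A record exists."""
    if name in aaaa_records:
        return aaaa_records[name]           # native AAAA: hand it straight back
    if name in a_records:
        v4 = int(ipaddress.IPv4Address(a_records[name]))
        return str(ipaddress.IPv6Address(NAT64_PREFIX | v4))  # synthesised AAAA
    return None

# IPv4-only host: the client receives a NAT64-routable IPv6 address.
print(resolve("legacy.example", {}, {"legacy.example": "198.51.100.7"}))  # 64:ff9b::c633:6407
```

The end user never knows the difference: a native AAAA record passes straight through, while an IPv4-only destination comes back as an address inside the NAT64 prefix.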
Dual Stacks/DS-Lite: An obvious feature to look for is dual-stack support, where all IPv4 and IPv6 features can run simultaneously. In addition there is DS-Lite (Dual-Stack Lite), which enables incremental IPv6 deployment, providing a single IPv6 network that can serve both IPv4 and IPv6 clients. Basically this works by tunnelling IPv4 (from the customer's gateway) over IPv6 (the carrier's network) to a NAT device (the carrier's device allowing connection to the IPv4 Internet, which can also apply LSN/CGN). Because of IPv4 address exhaustion, DS-Lite was created to enable an ISP to omit the deployment of any IPv4 address to the customer's on-premises equipment, or CPE. Instead, only global IPv6 addresses are provided. (Regular dual-stack deploys global addresses for both IPv4 and IPv6.)
I've recently been in conversation with a number of network product vendors - from Cisco to Infoblox - users and test equipment vendors, with respect to what must be the ultimate in "let's sweep it under the carpet and forget about it for a while" IT topics and that is IPv6.
With the last of the public IPv4 address allocation now long gone and the Far East already deploying IPv6 big time, the reality is that we do all need to start thinking about moving from the "4" to the "6", albeit gradually in most cases. And with LTE around the corner in the mobile world, that being pure IP-based, how many new IP addresses will suddenly be demanded? And where are they going to get allocated from?
In the States recently I had a casual natter with Infoblox's Steve Garrison, who was saying how many companies still carry out IP Address Management (IPAM) using Excel spreadsheets (got to be in the "Top 10 misuses of a spreadsheet"). So how will they cope with the complexities of deploying IPv6?
Another worry, from a conversation with F5 Networks and others that dabble in L4-7 data "mucking about", is the potential performance hit when moving from IPv4 to IPv6. This is something that (quelle surprise!) vendors don't openly talk about, but F5 has seen up to a 50% performance hit on some rival products (tested internally) when moving from IPv4 to IPv6, and generally reckons its own products see up to a 10% performance loss in the same circumstances. This claim was substantiated in talks with other vendors large and small, such as a newly acquired load-balancing client of ours, Kemp Technology.
So, on the basis that someone has to do something about it, we are launching an IPv6 performance test programme, with a view to developing what is effectively an ongoing buyers' guide/approved list for companies to short-list their potential IPv6-related procurements with.
Over the next few days we'll be looking at some of the key elements of IPv6 deployment - think in terms of something akin to the Top 10 Considerations when moving to IPv6. Because, sooner or later, we're all going to have to do it...
So, when a brand spankers new report out from Ericsson (or should I say Sony?) tells us that mobile data traffic will go berserk over the next five years, I think beyond the "well Ericsson would say that wouldn't they" and say it should be paid close attention to. And, besides, I am Eric's son, so we have something in common.
Headlines from the report are:
- Mobile data traffic will grow 10-fold between 2011 and 2016, mainly driven by video.
- Mobile broadband subscriptions grew by 60 percent in one year and are expected to grow from 900 million in 2011 to almost 5 billion in 2016.
- By 2016, users living on less than 1 percent of the Earth's total land area are set to generate around 60 percent of mobile traffic.
So, we're looking at a 10-fold increase in mobile data traffic. That's why I test a lot of optimisation products for a living. And all those vendors, talking in recent years of the era of unlimited bandwidth? Very funny indeed...
And it's not just the mobile networks that will get choked. Consider if just 1% of companies moved their (let's say currently "in-house") IT activities onto the public cloud? Do you think the Internet would cope? Don't think I need to answer that one...
Take airlines for example. I've been doing far too much travelling for my own good over the past few months, including recent flights to LA (more on this in next blog), and they're all - at worst - full, or seriously oversubscribed, and I'm getting the same tale from mates of mine who are NOT in the IT industry.
Similarly, back to IT, vendors ARE spending serious money on acquisitions this year. Just been speaking with Stuart Paterson at SEP (the private equity guys), who recently sold my old client Zeus to Riverbed for $140m (which, trust me, was a GOOD price), and he tells me they've brought in £1.5bn from sales this year.
Is it simply a case of companies holding onto cash for a few years and now feel they need (through analyst pressure etc) to spend it? Hello Apple! Or is it that we have a new raft of genuinely interesting technologies that beg to be invested in? Certainly, from my perspective, there are stacks of innovative start-ups around at the moment. In the past few months I've become involved in everything from reinventions of network optimisation (essentially my specialist subject) to enterprise software and even project management being reinvented. The whole "cloud" clamour (and why do people look up into the sky when they talk about cloud computing?) is making the industry buy (panic buy in some cases?) into the technology to enable the very thing that they have created themselves, but didn't necessarily have the resource or expertise to pull off.
I'm currently speaking with the global biz dev guys from many of the top networking and comms companies in the world, because they are ALL on the lookout for new, enabling technologies. And we haven't even got to IPv6 and LTE/4G yet, at least not in Europe. So if anyone out there is interested in adding voice, LAN, WAN, browser-based web app, predictive data transfer (i.e. sending before it's requested) optimisation, completely reinvented enterprise software that is truly social-media aware, new takes on network management, compliance, SAM reporting, project management as a sell-on service and even VPNs reinvented, then please get in touch as we've tested and validated all of the above in recent months.
Talking of IPv6, watch out for an announcement on a new service we're offering at Broadband-Testing, in conjunction with CW and our test equipment partner Spirent, to offer proper IPv6 validation testing - i.e. not some scientific interop stuff, but does it really cut it, performance wise and with real-world traffic mixes, rather than just A talks to B = certification.
And, just for the record, those of you who know our old mate David Tebbutt, should know that he is NOT the poor sod of the same name who was murdered in Kenya yesterday....
We lost a Kewney last year; we can't afford to lose a Tebbutt as well. After all, who would I choose to pick on to start a grape foodfight in a Tex-Mex in Houston, Texas otherwise (a memory tester for those of you involved)?
Wine tip: This month I have been mainly drinking boxed French wine at 8 euros for 5-litres of excellent Minervois by any other name (i.e. its Vin de Pays equivalent). Amazing what you can get for the money just before the new harvest each year...
At the same time, HP is heavily promoting its "single pane management" concept that is IMC - the idea being that you can control (almost) everything on the network from a single interface - nothing new here in concept but then it's never been perfected either. However, the company might be getting closer than most - customers have told them that it manages their Cisco environments (remember this is an HP product) better than Cisco does... well - only one way to prove this and that's to put it to the test; again watch this space on that one too - the test environment will cover both wired and wireless networks, as well as hybrid HP/Cisco and possibly a few jokers thrown in too (where are those 3Com CoreBuilder switches I used once?). Any other suggestions are welcomed...
The point about these observations from the HP event is that one very common theme runs throughout: the CI data centre solution is based around H3C switch technology that came with the 3Com acquisition; the IMC product is pure 3Com in origin; the TippingPoint IPS tech also came with 3Com.
So, the primary product strategies of HP networking at the moment are all under-pinned by 3Com technology. When you consider how many billions have been wasted on dodgy acquisitions in the past, including by 3Com of old, the $2.7bn HP paid is starting to look like smart (and smartly used) investment already.
Footnote: I got to the event location (Stamford Bridge, Chelski - Imagine that as a Leeds fan...) on the Sunday, along with my mate Mr MOB who's in charge of MarComms EMEA for HP; so there was a certain high profile footie match on that late afternoon, involving Chelski - logical therefore to watch the match at a local hostelry? The problem is that Chelsea is far too HH (Hooray Hen) an area to show its own football team on TV in the local pubs, so consequently we missed half the match while trailing hopelessly from one near-deserted "Gastro Pub" to the next, in search of an elusive TV screen that was actually switched on....
And to add insult to injury: £9.61 for two pints of bitter and two packets of crisps? Welcome back Wakefield (£1.88 a pint for Clarke's Trad Blonde...).
Well - obviously not, but then it's still not as straightforward as it might be. For example, why was it that one US hotel in the Bay Area with free Wi-Fi appeared to be sending out my emails, POP3 style, when it turned out that they were neither getting to their recipients nor being bounced back to me? I only found out when I got an urgent request to deliver a report and quote for a press release that I'd actually delivered 36 hours earlier (or thought that I had). Then there are hotel networks that actively block outgoing POP3 email but will support browser-based outgoings; and you never know until you try. So could we please have some consistency in configurations, you WiFi world out there? BTW - tip for anyone using San Francisco International airport - it has an everlasting free WiFi service; basically you get 45 mins for free - then log in again and you get another 45 minutes, and so on (yes I did check in very early for my flight AND there is very little to do at SFO International terminal).
So - keeping the theme going; back to Europe via Heathrow T5 - where the hell is there a power socket to plug your (now exhausted) laptop or phone into? What happened to IT savvy designers when this was being constructed?
Then, once on the train (yes, this is beginning to sound like a parody of the old Python "Torremolinos" sketch) 3G coverage is still every bit as patchy as it ever was (interesting here that the US has "gotten" ahead again, with 4G coverage already available in a few regions, while we are 2+ years away from the delights of 45mins battery life in your SmartPhone) and tunnels are still a complete no-no. You would have thought that, since tunnels came before modern communications, they might have thought of a solution...
What I'm trying to say is, communications "on the go" is still way off the pace compared with what it could and should be. What is the point in me testing all this thoroughly good network traffic optimisation stuff, if there's no network available to optimise in the first place?
Quick mention here for a mobile service that was launched on the AppStore this week that I've been keeping an eye on via an industry mate - Bababoo.com - automatically chooses the cheapest network for your SmartPhone to use, both domestic/International, from one app without you having to make any decisions - assuming it can find any network coverage of any sort in the first place, that is...