HP 'reimagines compute' with new server haul

bridgwatera | No Comments
| More

According to Merriam-Webster, COMPUTE is a transitive verb meaning: to determine especially by mathematical means; also : to determine or calculate by means of a computer.

1: to make calculation : reckon
2: to use a computer

According to HP, it is the term we will now use to describe the ethos behind its next generation of ProLiant Gen 9 systems based on Intel's forthcoming Xeon E5v3 processor chips.

"The market for server-based technology has changed and will never be the same again," said Peter Schrady, VP and GM of Rack & Tower Lines for HP Servers Worldwide.

READER NOTE: A more complete story analysis is provided here on Computer Weekly:
HP launches cloud and SDDC-ready ProLiant servers.

Schrady and team were in residence at London's Shangri-La hotel at The Shard this week to explain how mobility and the use of Internet-connected devices are driving change back down the technology chain, from front-end devices all the way to servers.

The new HP ProLiant Gen9 portfolio is said to be a milestone in HP's 'compute strategy', which seeks to address IT demands with a pool of processing resources that can be located anywhere, scaled to any workload and available at all times.

The servers are optimised for convergence, cloud and software-defined environments.

"HP created the x86 server market 25 years ago, and we have led this market ever since with innovations that have dramatically transformed the datacentre, such as HP Moonshot and HP Apollo. Now, we're setting the stage for the next quarter century with HP ProLiant Gen9 Servers and compute, which combines the best of traditional IT and cloud environments to enable a truly software-defined enterprise," said Antonio Neri, senior vice president and general manager, Servers and Networking, HP.


Image: Glorious close-up inside the HP ProLiant DL380 Gen9, taken using a Nokia Lumia 1020 running Microsoft Windows Phone 8.

HP explains that ProLiant Gen9 Servers span four architectures:

  • blade,
  • rack,
  • tower and
  • scale-out

This (so says HP) provides triple the compute capacity and increased efficiency across multiple workloads, at a lower total cost of ownership, through design optimisation and automation.

Uniquely Gen 9?

HP couldn't stop itself using the word "unique" (Ed - ouch! I thought only snowflakes were unique) at the ProLiant Gen 9 systems launch and pointed to PCI Express workload accelerators and HP DDR4 SmartMemory (that increases compute capacity) as part of the goodies on offer here.

HP SmartCache and FlexFabric adapters also feature here; these have been included to improve performance and sit alongside converged management tools that span servers, storage and networking.

The new HP ProLiant Gen9 servers will be available through HP and worldwide channel partners beginning Sept. 8.

The hashtag for those interested in this news is #Gen9 -- but it is worth noting that this is also a popular hashtag used by the Christian community when discussing the merits of Genesis chapter 9.

Is Salesforce1 Community Cloud more friendly?


As we know, companies like HP and IBM now talk about "vertical servers".

These are big old boxes of compute power that have been pre-engineered with the right mix of Input/Output (I/O) technologies, or memory-specific components, or processing power, or other.

They're not exactly vertical as such - okay, they could be very well engineered for particular finance applications... but at the end of the day it's still basically the same box.

Cloud companies like to play this game too.

Salesforce.com sits close to Oracle with its Human Capital Management Cloud (that's HR, or personnel if you're stuck in the 1970s) and so on.

So when Salesforce this week launches its Salesforce1 Community Cloud for customer engagement, should we actually expect anything new?

The firm says that customers can task their software application developers with using this product to create what it calls "trusted destinations" for customers, partners and employees.

Virtual destinations

These virtual destinations will be personalised and mobile like LinkedIn, but connected to core business processes.

"More than 2,000 active communities have gone live since we first offered a communities product just over a year ago," said Nasi Jazayeri, executive vice president of Salesforce1 Community Cloud, salesforce.com.

Jazayeri says that based on the success his firm has seen with customers, salesforce.com is now "doubling down" on communities with its new Community Cloud.

"Any company can benefit from creating an engaged community," said Vanessa Thompson, research director of enterprise collaboration and social solutions, IDC. "Salesforce.com raises awareness of the immense value of community solutions with Salesforce1 Community Cloud by putting business processes at the center of engagement."

The Internet of Customers

This product forms part of what Benioff calls the Internet of Customers. According to IDC, spend on collaboration tools is forecast to grow to $3.5 billion between 2013 and 2018, framing the massive market for communities that could exist here and, crucially, the need for software application developers to create products for this still-growing area.

The Community Cloud is connected directly to Salesforce CRM and essential business processes. Now resellers can update leads, employees can create and escalate service cases and customers can review and rate products all from within the community.

Additionally, with new search engine optimisation (SEO) and unauthenticated access, companies can now attract potential new members through Internet search queries. For example, a musician can discover and join a brand's community based on a search for a specific guitar model.

Internet of Things apocalypse, Now


This is a guest post by Trevor Pott, professor emeritus of full-time nerdiness, systems administration, technology writing and consulting. Based in Edmonton, Alberta, Canada these days, Pott helps Silicon Valley start-ups better understand systems administrators and how to sell to them.

Safe definitions

First off, I think we need to define what we mean when we talk about the Internet of Things.


Some people talk of sensors, others about "wearable computing/quantified self" technologies and still others about "home automation." I think that we can safely define "the Internet of Things" as the collection of computers - big and small, from sensors to satellites - that are largely unattended and/or unmanaged.

I think this is an important distinction.

A computer that receives regular management or is regularly used by a human is very likely to receive regular updates, to have its behaviour monitored and for compromises on those systems to be noticed. Unattended computers, however, are the "scut work" technological robots of our society. Largely ignored unless they break or we need something from them, they idle away for years without maintenance.

Here you could put sensors

These range from Google's Nest to the array of sensors making sure oil pipelines keep working. Baseboard Management Controllers (BMCs) that provide lights-out management for servers are in this category: they are their own separate computer from the larger unit they serve, and yet the BMCs themselves are frequently ignored and left un-updated.

Throw in security cameras, ATMs, even the VoIP phones on your desk or the "public phones" that adorn your local airport and you begin to glimpse the barest fraction of what we're dealing with. There are hundreds of thousands of computers driving displays in cities all around the world. There are computers running - quite literally - planes and trains and automobiles.

A disaster with no realistic end

Wearables, iPods, even the army of computers in our cars are increasingly Internet connected (at least some of the time), and don't get the kind of "patch Tuesday" TLC we afford our primary systems. It's a disaster that has already happened, it will get worse, and I see no realistic end.

Internet of Things apocalypse, now

New standards, APIs, protocols and radio tricks aren't going to make the Internet of Things less of an accelerating security - and privacy - apocalypse. Like any "movement" in computing, the Internet of Things is here, now, today. It is largely a reclassification of that which was already occurring, but has now become enough of an issue - and an opportunity - to earn a cute moniker.

Literally thousands

There are literally tens of thousands of different models of device using thousands of APIs on hundreds of variants of the same 10 or so basic operating systems. Even if we stopped all development of new IoT computer systems tomorrow it would take us the next 50 or so years to find every installed unattended computer on the planet and secure it. And we're adding new computers at a rate that simply cannot be measured.

Future systems need a fundamental change in approach. We need to build our IoT devices with the idea in mind that they are compromised by default. We need to be adding hard firewalls with application-layer gateways and whitelisting the possible commands (and possibly the origin points of those commands) that the onboard computers of our IoT equipment will ever process.
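The whitelisting idea above can be sketched in a few lines of Python; the command verbs and gateway address below are purely hypothetical, not drawn from any real product.

```python
# Default-deny command filter for an unattended device's control channel.
# Hypothetical command verbs and management-gateway address.
ALLOWED_COMMANDS = {"get_status", "get_temperature", "reboot"}
ALLOWED_ORIGINS = {"10.0.0.1"}  # e.g. only the site's management gateway

def should_process(command: str, origin_ip: str) -> bool:
    """A command is processed only if BOTH the verb and its origin
    are explicitly whitelisted; everything else is dropped."""
    return command in ALLOWED_COMMANDS and origin_ip in ALLOWED_ORIGINS
```

The point is the default: anything not explicitly permitted never reaches the device's application logic.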

We need automated update systems and automated monitoring. We need a means to do all of this and more while still protecting the privacy of individuals and corporations. As scary as the idea is of someone turning your 50,000 IPv6 lightbulbs into a botnet that can form a platform for launching real attacks against your corporate network, the privacy implications of having every aspect of our lives monitored are so very much worse.

1984 cometh in 2014

Imagine what insurance companies - or governments - would do if they could track everything you eat, everything you excrete, how much of what exercise you're getting, how much you pay attention when driving, how engaged you are when presented with various images/slogans/policies/pornography/"seditious materials"...you name it. Now consider that the technology to track all of that - and far, far more - not only already exists, much of it is in our homes and we don't even realise.

Smart TVs have already been caught spying on us. Many come with cameras, and the Xbox is equipped with not only cameras, but enough sensors to detect if your heart goes pitter-pat that little bit faster when presented with blondes, or with redheads.

Start putting it all together, add in the fact that we're all supposed to connect everything to "the cloud", using our online identities, and storing all our information with the IT megaliths from the privacy-averse United States of America and I suspect you'll be able to connect the dots. 20 years ago this would have been the stuff of dystopic science fiction. In fact, 15 years ago it would have been considered the ultimate in tinfoil hat paranoia.

Today, the panopticon is taking shape all around us. The only question that really remains is who will ultimately have access to the data: cyber criminals who only want your money, or corporations and governments who both desire a far more insidious and total level of control.

Early adopters

Nowhere in all of this do I see an out for the average man or woman. The technologies embraced today only by a few "early adopters" will be mainstream in five years, socially mandatory in 10 and in all likelihood legally requisite in 25. Mark my words: we will look back on such gross social manipulation exercises as "think of the children" or "we need to fight the terrorists" with fondness -- the quaint concepts of a more naive time.

We already live in a world where the average person cannot hope to defend their technological footprint against a targeted attack from even a mediocre cyber-criminal. A skilled practitioner of the arts can bowl over the defences of even trained professionals. We are adding millions, eventually billions of devices onto the internet to track our every move and we have just barely begun to think about how we might defend them.

If that isn't bad enough, our future is one in which we will be monitored 24/7, and if we aren't doing "our share" for society we will be penalised. Fewer tax breaks, higher insurance... who knows where that ends?

What can we do?

Short of refusing to participate altogether, we are facing the true end of privacy within our lifetimes. Not some airy-fairy dotcom concept that "the evil Google bogeyman will see what you like and advertise at you". We're entering a world where anyone - criminal, corporation, government, spouse or more - with the motivation and skills will be able to tell what you are doing, how you're doing it, and to what degree you're enjoying it.

If you think I'm off my meds, remember that we can now use wifi to see through walls.

Imagine what I could do if I could log into an entire house full of wirelessly networked sensors and gizmos, none of which have been updated in years. How many things in your house have infrared sensors? Your phone has how many sensors? Do you ever turn your Xbox off?

The NSA is watching Ceiling Cat watch you masturbate, and within our lifetimes this will be the new normal. How will we cope with that world? How will our society deal with the idea that we have no secrets?

Companies like Supermicro are starting to invest in technologies to defend the next generation of devices. It's a welcome gesture, but they are one company amongst many millions working on IoT devices. For every Supermicro out there doing yeoman's work on behalf of the little guy, there are 100 others who just don't care.

We cannot stop what is to come.

Human nature - our apathy, our greed, our feeling of collective impotence and need to shift blame - is what stands in the way. We are our own worst enemy and we will bring the panopticon upon ourselves. It won't "get better". We won't suddenly get a handle on technology and slowly reverse a surveillance state that will have proven so politically and financially valuable to so many. It's absurdly naive to even entertain the notion.

Our society will change to accept this as normal. Unlike some, I don't think it will be a grandiose humanising revolution that will cause us to suddenly embrace one another's differences. I think we will fracture, factionalise, become even more polarised and we will feel all the more helpless and out of control besides. We are sleepwalking into an era of voluntary servitude.

Criminals, corporations and our own governments will all have more "visibility" into our lives than our own spouses. And the only good the technologists of today can hope to do is to slow this inevitable future down. If we're particularly lucky, it will be the legacy we leave future generations, but not one we ourselves have to live through.

In the meantime and betweentime, do try to enjoy the benefits of the IoT technologies. They are niche - and will continue to be for some time - but benefits do exist. These benefits are the carrot hiding the rather dark and ominous stick.

-Trevor Pott

Allons-y Kontinentaleuropa technologia!


Should the modern Europhile not be taking in conferences, exhibitions and symposia all around the continent this coming autumn (that's fall to our American cousins - Ed) to gain a complete Euro-wide impression of technology?


One would certainly hope so.

The reality is that many conferences seem to put up more barriers to entry than you would think.

A good proportion of CIO-focused analyst-sponsored events seem to have a ban on press attending -- what are they hiding we wonder?

They tell us that the CIOs in question would feel "inhibited" if press are present -- make up your own mind here as to what level of corporate spin and subterfuge is at work.

The analyst firm most guilty of this you ask?

Well, it's not Gartner (as Gartner is in fact very welcoming)... it's a firm with three letters in its name that denote its focus as Worldwide (think of another word) Information (think of another word) Association (think of another word).

I don't C the problem, but it does.

The other challenge for the would-be Europhile is language; amazingly, some of the events staged in France and Germany are presented in French and German.

As preposterous as this sounds, where simultaneous translation (or even the existence of some press and/or other information) doesn't exist, some of these events will be effectively off limits to us as native English speakers (making the wild assumption that you are if you are reading this).


So to pick one from many that does have:

a) open access to press,
b) internationalised materials and information, and
c) a strong feel for technology in its own domestic market...

... CWDN selects Mobility for Business (subtitled 'beyond mobile'), billed as L'événement des solutions et applications mobiles pour les entreprises (the event for mobile solutions and applications for business), on Oct 15 & 16 2014.

The event is described as a gathering of 130 exhibitors (manufacturers of terminals and devices, publishers, operators, wholesalers, integrators, and resellers) and 4,000 trade visitors for this, the fourth year of the event.

Primarily French, the show will nonetheless feature English content -- and the title/name is, of course, already offered to us in English.

Should English-only speakers wake up to the need to integrate more (at a language level and also a technology level) with our other European counterparts?

Or should we all just go back to school and be more international?

Liechtenstein, Luxembourg and San Marino for our next tech events everyone?

SAP extreme sailing: big data analysis twice as fast as the wind


Your average technical writer generally needs a really good reason before agreeing to get up at 5 am and take the early train to Cardiff Central...

... but it turns out that SAP Extreme Sailing is indeed reason enough.

What is Extreme Sailing?


Let's just be clear: the Extreme Sailing brand (and the reason we have allowed the use of it in full in this headline) is not an SAP brand. Sometimes also called "stadium sailing", this is a sailing programme specifically designed for audience enjoyment, staged in water where the boats run close to the shore -- and therefore, logically, close to the crowd.

The Extreme Sailing Series operates with eight stopovers around the world... places including Oman, Sydney, China, Turkey, Russia and (perhaps slightly less glamorously) Cardiff.

X2 twice the speed of wind

The boats run (and we will come on to SAP and big data analytics shortly) in an intense environment reaching speeds of 25 knots (in Cardiff at least), which is in fact twice the speed of the wind in the bay.

Really? Sailing faster than the wind? Apologies for a Wikipedia entry, but this appears to be true.

Sailing faster than the wind is the technique by which vehicles that are powered by sails (such as sailboats, iceboats and sand yachts) advance over the surface on which they travel faster than the wind that powers them. Such devices cannot do this when sailing dead downwind using simple square sails that are set perpendicular to the wind, but they can achieve speeds greater than wind speed by setting sails at an angle to the wind and by using the lateral resistance of the surface on which they sail (for example the water or the ice) to maintain a course at some other angle to the wind.

The reason these boats "heel over" and run on one of their two hulls is where we start to get to the fun part with the mathematics and algorithms: water is roughly 1,000 times denser than air, so sailing with one hull out of the water is faster.
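As a back-of-envelope sketch (the drag coefficient and wetted areas below are invented for illustration, not measured from these boats), the standard drag equation shows why halving the wetted area halves the water drag at a given speed:

```python
# F = 0.5 * rho * Cd * A * v^2 -- the standard drag equation.
RHO_WATER = 1000.0  # kg/m^3, roughly 1,000x the density of air (~1.2 kg/m^3)

def drag_newtons(rho, cd, wetted_area_m2, speed_ms):
    """Drag force in newtons for a body moving through a fluid."""
    return 0.5 * rho * cd * wetted_area_m2 * speed_ms ** 2

# Invented numbers: two hulls wetted vs one hull flying, at the same speed.
two_hulls = drag_newtons(RHO_WATER, 0.1, 2 * 0.5, 10.0)  # both hulls in the water
one_hull = drag_newtons(RHO_WATER, 0.1, 1 * 0.5, 10.0)   # one hull flying
```

Whatever the exact numbers, the ratio is what matters: lift one hull clear and the water drag term drops by half.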

... and this is where sensors and big data come in.

SAP tags each boat with a GPS so that the race progress can be monitored and displayed in a virtual computer graphic.

There are also sensors to monitor:

  • degree of heeling
  • front to back pitch of the boat
  • and wind information

Yaw (as in the side-to-side movement of an aircraft) is NOT measured -- as these are boats, yaw is simply taken as the "heading" of the craft.
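As a rough sketch only (the field names and sample values below are our own, not SAP's actual telemetry schema), the per-boat data described above might be modelled like this:

```python
from dataclasses import dataclass

@dataclass
class BoatTelemetry:
    boat_id: str
    lat: float            # GPS position for the virtual race graphic
    lon: float
    heel_deg: float       # degree of heeling
    pitch_deg: float      # front-to-back pitch of the boat
    wind_speed_kn: float  # wind information
    wind_dir_deg: float
    heading_deg: float    # yaw is not measured; heading stands in for it

# Hypothetical sample reading from a boat in Cardiff Bay.
sample = BoatTelemetry("SAP-1", 51.44, -3.16, 18.5, 2.1, 12.0, 240.0, 95.0)
```

A stream of records like this, one per boat per sampling interval, is what feeds the real-time analytics.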

How SAP BI works on a boat

The data captured from these boats is sent to an SAP cloud service for real time analytics.

"Data analysis here is presented via a dashboard using SAP BusinessObjects Crystal Reports," said Milan Cerny, Business Intelligence consultant for EMEA BI & big data services at SAP.

"Aggregate statistics from the sailor's activities will show 'patterns' which can ultimately be used by the teams to form their next set of tactics," added Cerny.

In break-out sessions at this event, Cerny explained that, mathematically, the algorithms used here could be extended as SAP performs analysis on the:

1) tacks and jibes
2) bearing away (from the wind) and bearing into it
3) unclassified elements that have yet to be agreed upon

... interestingly, SAP does NOT currently track the sailors' individual performance and behaviour using heart-rate-linked 'wearable' shirts (for example), but the firm's Cerny says that this is coming next.
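To illustrate the kind of analysis involved (this is not SAP's actual algorithm, just a sketch), tacks and gybes could be picked out of the GPS heading stream by flagging large changes between consecutive samples:

```python
def count_big_turns(headings_deg, threshold_deg=60.0):
    """Count heading changes larger than the threshold between
    consecutive samples. A real classifier would also use wind
    direction to tell tacks (turning through the wind) from gybes."""
    def angular_diff(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)  # shortest way round the compass
    return sum(
        1
        for a, b in zip(headings_deg, headings_deg[1:])
        if angular_diff(a, b) >= threshold_deg
    )

# Hypothetical heading trace containing two big turns.
turns = count_big_turns([40, 42, 41, 320, 318, 40])
```

Aggregated over a race, counts and timings like this are the "patterns" the teams could feed back into their tactics.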

The post-race automated reports use analytics solutions such as SAP BusinessObjects Explorer software and SAP Crystal Reports as well as the SAP HANA platform to handle the increasing amount of data gathered from sensors on the boats and across the race course.

These reports can be tailored to individual requirements, with the ability to break the data down to supply, for instance, an overview of a year, a specific team, an Act, a day or even a certain race. This is an important development for teams and media in particular as it is enabling them to review and compare specific moments in the Series in an easily consumable format.

"As a team we are committed to continuously improving our performance," commented Jes Gram-Hansen, co-skipper, SAP Extreme Sailing Team.

Can data analytics really help sailors sail better?

One does start to wonder whether data analytics can really help sailors sail better -- sailors have, after all, been sailing the seas, oceans and Cardiff Bay for thousands of years.

The SAP crew spoke to journalists during the event to explain that very often the data analytics results line up with what they thought might be the case anyway, i.e. what their human instinct told them -- so it acts as a solid affirmation.

You thought big data analytics was dull?

Try holding on to a catamaran trampoline with one hull in the air and your backside over the ocean and tell me it's dull.

Editorial Disclosure: Adrian Bridgwater works for ISUG-TECH, the wholly and completely independent technical user group dedicated to SAP programming and data management technologies -- SAP met all this journalist's expenses for this trip.

Rackspace DevOps Breakfast: DevOps is a learning process


The great and the good of the cloud computing community gathered at the Rackspace DevOps Breakfast Panel Debate this week in London's glittering Soho district.

Attending this month's "discussion panel breakdown session" were speakers from DevOps Guys, Dataloop, Skelton Thatcher, Eagle Eye and, obviously, Rackspace.

Stephen Thair, co-founder of DevOps Guys, said that he believes in DevOps automation as so many of his customers have problems with Continuous Delivery.

Chris Jackson, cloud technologist and head of Rackspace's DevOps Practice Area, said that he recognises there is a lot of automation in DevOps and that his company (with its very up-front 'Fanatical' support offering) now needs to address the intersection of support with automation.

The learning, learnings

The difference between ITIL and DevOps is that ITIL has a huge amount of information to draw upon, whereas DevOps (if it is done properly) proposes an alternative model with a perhaps more practical implementation these days, offering more iterative feedback into the ongoing state of the project...

... and it is this, centrally, that makes DevOps a learning process.

DevOps is a commitment to learning and experimentation (more so than a straight Waterfall development methodology).

Rackspace's Jackson is a huge fan of the CALMS acronym:

  • Culture,
  • Automation,
  • Lean,
  • Measurement or Metrics and,
  • Sharing.

This discussion moved (as might be expected) onward to whether DevOps was a technical issue or a human cultural issue -- despite audience protestations that it must be one or the other, the majority of speakers agreed that DevOps is both a human and a technical issue.

Speakers here suggested that a good route into DevOps (as a new cultural approach) could be to apply it to a smaller application inside the total IT stack and use this as a test bed to bring wider DevOps approaches into an organisation -- the challenge here will be finding an application that is "separated enough" from the rest of the IT stack... but it can be done.

"Organisations must be set up to enable software systems to evolve over time -- DevOps enables this. DevOps enables the flow of metrics-based intelligence from production back to development," said Matthew Skelton, co-founder and principal consultant at Skelton Thatcher Consulting Ltd.

"Successful DevOps adoptions address the interaction of technology AND teams to build and operate software systems effectively," added Skelton.

Old DevOps is a waste of time

Other suggestions emanating from this event included the idea that 'traditional DevOps' (i.e. not delivered as a cloud service) could in fact be a (comparative) waste of time for skilled systems administrators...

... what do we mean by waste of time?

If a sysadmin has to spend HOURS of time working to build operational servers, then isn't that a waste of skilled time if that server could be bought from a cloud supplier? The sysadmin could be doing something else more complex, more business-value-add and more live.

Yes, you would expect cloud (and DevOps-as-a-service) vendors to say this kind of thing, but it is arguably quite an interesting proposition.


Windows Phone 8.1 update for developers


Microsoft's recent Windows Phone 8.1 Update for developers includes the UK beta for Cortana.


For those not in the know, Cortana is Windows Phone's digital personal assistant (think Android Google Now, or Apple Siri, of course) and it is powered by Bing, obviously.

Cortana speaks British, don't ya know?

Cortana has been tailored to support UK spellings and pronunciations and the voice and accent is local. Cortana's personality in the UK has also been tweaked to be more locally relevant.

Microsoft did not make any specific comment on Cortana's ability to understand the Glaswegian accent.

Cortana is accessible through the SEARCH key and offers Bing local UK data on:

• sports teams,
• the London Stock Exchange
• commuter conditions
• instant recipes from the Bing Food and Drink app
• global and local news

Users can now organise applications into folders on the Start screen (like you can in OS X). Microsoft calls this Live Folders because the live tiles of apps appear in the tile of the folder, which is, arguably, a nice touch.

USER NOTE: To create a Live Folder, users will drag a tile over another tile and then name the folder.

"We made it easier for you to see the latest info about the latest apps and games available in the Windows Phone Store through its Live Tile. If you have the Store pinned to your Start screen on your device, you'll get updates on the newest titles - refreshed every six hours - streamed dynamically to you throughout your day," said Microsoft, in an official update announcement.

USER NOTE: Microsoft has also added the ability to select multiple SMS messages for deletion and forwarding.

... and there's more

With the somewhat over-cutely named Apps Corner, users will be able to specify which apps are displayed in a special "sandboxed" mode (Microsoft describes this as "like a protected Start screen") that restricts which apps are used.

BUSINESS USER NOTE: This feature is supposed to be for businesses so they can allow access to select apps in cases where a full MDM (mobile device management) solution isn't required.

Apps Corner can also be used to boot straight to an application and Microsoft provides an example of where this scenario would come in handy.

Let us imagine employees at a distribution centre whose Windows Phone devices boot straight into the inventory app they use to scan products in the warehouse. Apps Corner can also be used to set up retail demos. Retailers can export the profile of Apps Corner on one device and import it on to other devices. And developers can get data on usage from inside Apps Corner too.

According to Microsoft, "We've made some improvements in the Windows Phone 8.1 Update to keep your data and identity more protected on public networks. For example, we have added the ability for you to send and receive data through a virtual private network (VPN) when connecting to Wi-Fi hotspots giving you another layer of protection. If you're on your home wireless, creating a VPN provides anonymity to help shield your device from being identified by other devices on the network."


Image credit: GSM Arena

1&1 launches WordPress user/developer community portal


Everyone's favourite "we buy more double-page advertisements in the technical press than any other company" web hosting company, 1&1 Internet, has launched a new software application developer-cum-user community portal.


The portal is at this stage in beta form.

The firm's communications sage Richard Stevenson has suggested that the http://community.1and1.co.uk site will "unite an international community" of WordPress users with developers -- and 1&1 technical staff too, obviously.

Stevenson promises access to experienced technologists who will address a wide range of subjects for all levels of ability.

Everything from general overviews to complex concepts will be addressed in simple language, he said.

Background information, How-To's, and tips are available on subjects like themes, plug-ins, SEO, security etc. and organised in a structured way for users ranging from the novice to the fully skilled web developer.

"Each article is marked by topic category and skill level ("Beginner" to "Expert") based on the content. The platform also features website examples created by 1&1 community members. Those browsing the community are invited to provide feedback via a rating system and join in discussions by sharing on social media," said the company, in a press statement.

Greek style blog-hurt


This is a rare 'personal' technology blog entry; normal CWDN content resumes immediately after this posting.

It's rare that I ever have cause to spend a 24-hour period offline, but when Mrs B asked to go and see the 'jewel of the Med', a trip to the Aegean paradise of Santorini was inevitable -- but could I keep away from technology for a whole long weekend?


Firstly a word of travel know-how: kayak.com has a really usable location deal finder that lies somewhat hidden from the main page menus.

Click MORE > EXPLORE > and you will find a 'zoomable' map with price tags sitting on destinations that directly link to flight bookings.

This is how we got a flight deal for roughly half the normal cost -- it's fast and easy to look up.

Next we come to the matter of devices...

I travelled with an iPad mini (for BBC iPlayer downloads), a Nokia 1020 (for phone connection and camera), a Samsung Galaxy (for camera and maps), and finally an HP ElitePad tablet with keyboard and full Microsoft Windows 8.1 Pro (for work).

Yes, I know I could have just brought one device, don't let's go there.

Interestingly, in terms of WiFi connectivity:

  • both phones picked up the flaky Greek WiFi best of all,
  • next was the HP ElitePad tablet,
  • and last in terms of connectivity was the iPad - but Apple won't mind being last.

The HP ElitePad tablet is easy to love; the keyboard option transforms it into a working PC and the keys travel more easily than on the Microsoft Surface Pro, despite being smaller 'chiclet' style buttons. The touchscreen works extremely well (even though I still prefer a mouse) and there is the HP Mobile Connect SIM card option (which I have), meaning I can post this blog while on the train home from Gatwick.

Next we come to technology on the ground in Greece; there's not a lot of it.

I wrote a piece a few years back called a "Norwegian software odyssey" while I was working in Oslo as I was impressed by the general level of computerisation I found while travelling around.


Santorini isn't quite as technologically developed, obviously - and it isn't going to make it up there on the list of Top 10 great European cities with free WiFi, but you can pick it up in most bars if you are prepared to start some pretty heavy drinking whenever you want to log in.

That being said, the Greeks do appear to love their phones as much as anyone else --

Our bus driver thought nothing of lighting up a cigarette and then making a mobile phone call, all while driving down the side of an 800-foot ravine.

Using map apps during WiFi gaps

Many of you will know this already, but it's worth mentioning for those that don't. The iPad and Android map applications will keep a journey route in the memory cache and track your progress along it with a GPS locator 'dot' even when you leave WiFi coverage. This helped us lots of times when we were out and about.

So anyway, after five days away from the keyboard I was starting to suffer from too many Gyro-Souvlaki kebabs and develop Greek style blog-hurt... now back to cyberspace-proper and the rest of the world.


Are we confusing the Internet of Things with embedded, already?

bridgwatera | 3 Comments
| More

Surveys are the most important, most informative, most insightful and most expressive means of understanding what is going on inside the Information Technology industry -- right?


Well, let's assume that you are reading this because you're not fooled by manufactured un-spontaneous survey contrivance.

So the Internet of Things (IoT) is important and we need lots of surveys to assess its wider worth, correct?

Evans Data thinks so and has surveyed 1,400 developers worldwide, finding that 17 percent were already working on IoT-related applications... while 23 percent expected to begin projects by next January.

"We're still in the early stages of development for Internet of Things, even though forward-thinking companies like Cisco and IBM have been promoting and enabling development for an interconnected world for the last several years," said Janel Garvin, Evans Data chief executive.

But are we confusing the Internet of Things with embedded, already?

Evans perambulates loquaciously onward, "The technologies needed are now converging with cloud, big data, embedded systems, real-time event processing, even cognitive computing combining to change the face of the technological landscape we live in, and developers are leading the way."

There, she said it -- she said "embedded", right there.

In so many places we see that this Internet of Things expression is simply used to convey that which we would normally refer to as embedded development.

Don't be fooled by the IT industry renaming already established conventions simply for the sake of spin...

... and (perhaps most of all) don't be fooled by analyst surveys.

How to reach a software-defined operational state of bliss

bridgwatera | No Comments
| More

Cirba this week issued a statement suggesting that "intelligent control" and management processes are of ultimate importance if we are to be able to build the perfect Software-Defined DataCentre (SDDC) that IT managers currently go to bed dreaming about.

But it would say that though, right?

The firm is a cloud-centric software-defined infrastructure control solutions company after all.

Cirba sells automated controls for infrastructure management to help make datacentre infrastructures more software-defined.

How do network programmers use this then?

The firm offers "clever abstractions" to allow common hardware to be used to create special-purpose configurations.

What our software-defined future is NOT

However, the company says that SDDC nirvana is not achieved by simply bolting together:

• virtualisation,
• software-defined networking,
• other cutting-edge and software-defined technologies.

What our software-defined future nirvana IS (or, at least, might be)


It is an "operational state" achieved by eliminating current silos of compute, storage, network and software and adopting a new way of managing and controlling all the moving parts within the infrastructure.

With the trend toward software-defined infrastructure comes a new level of complexity that can only (says Cirba) be controlled through sophisticated analytics and purpose-built control software. The ability to make unified, automated decisions that span compute, storage, network and software resources, that are based on the true demands and requirements of the applications, and that are accurate enough to drive automation without fear, is the foundation of the next generation of control of IT infrastructure.

Image credit: B. Dehler

According to a press statement from Cirba, sophisticated control is key to aligning the capabilities of the infrastructure (supply) with the requirements of the applications (demand), which in many ways is the true goal of SDDC. Cirba calls this discipline Software-Defined Infrastructure Control (SDIC).

Cirba's 4-steps to software defined enlightenment

1. Demand Management - Much of the insight into the needs of applications (CPU and memory allocation requirements, software and compliance requirements, performance levels, storage tiers, workload profiles, etc.) exists in organizations today, but has been traditionally used to procure new hardware. SDIC allows this insight to be leveraged to match those applications to existing infrastructure or to programmatically define what the infrastructure should be, enabling IT to plan ahead and make better use of current infrastructure environments.

2. Capacity Control - Capacity management tooling is woefully inadequate in a world where the infrastructure is programmable and application demand changes on a daily basis. The old 'offline' model of infrastructure resource optimization must be replaced by an 'online' version that is constantly assessing supply vs. demand and making adjustments. SDIC makes it possible to achieve intelligent, automated control over the new decisions that need to be made every day in modern datacenters (where workloads can go, how much resource they should be assigned, and what the infrastructure must look like to deliver this).

3. Policy - At the heart of it all is the operational policy that governs how supply and demand are matched, aligned, and controlled. But if you look around most organizations today, all you will find is simplistic thresholds spread across operational tools, and individual staff who know all the details and subtleties of how the environments operate but have no way to codify them. To control a software-defined environment, or even to make a traditional environment more software-defined, these policies must be captured and used programmatically to plan and operate the environments.

4. Automation - Automation needs to go beyond just the VM provisioning process, but there is a lack of intelligence guiding most automation today. Critical tasks include automating the routing decision of where new VMs should be hosted, locking in the capacity, placing VMs, allocating resources, the ongoing optimization of infrastructure and forecasting future requirements. This requires accurate, detailed models of existing and inbound demands, fine-grained control over supply, and policies that bring them together. The move toward software-defined is invariably coupled to a move to a higher level of automation, and SDIC can help make this possible.
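Cirba's engine is proprietary, but the supply-versus-demand matching decision described in the steps above can be sketched in a few lines. Everything below (class names, fields, the scoring rule) is an illustrative assumption, not Cirba's actual API.

```java
import java.util.*;

// Hypothetical sketch of policy-driven workload placement: filter hosts
// that satisfy a workload's demands, then pick the one with the most
// CPU headroom. Purely illustrative; not Cirba's product.
public class PlacementSketch {
    record Host(String name, int freeCpu, int freeMemGb) {}
    record Demand(int cpu, int memGb) {}

    // "Where can this workload go?" -- empty if no host fits.
    static Optional<Host> place(List<Host> hosts, Demand d) {
        return hosts.stream()
                .filter(h -> h.freeCpu() >= d.cpu() && h.freeMemGb() >= d.memGb())
                .max(Comparator.comparingInt((Host h) -> h.freeCpu() - d.cpu()));
    }

    public static void main(String[] args) {
        List<Host> hosts = List.of(new Host("a", 4, 8), new Host("b", 16, 64));
        System.out.println(place(hosts, new Demand(8, 16)).map(Host::name).orElse("none"));
    }
}
```

A real control system would evaluate many more policy dimensions (compliance, storage tiers, forecasted demand), but the shape of the decision -- score supply against demand under policy -- is the same.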

SDIC bridges the gap that has opened up in the data center management ecosystem and in many ways is the heart of the SDDC.

Dell Software VP: lightweight app monitoring is, well, just too lightweight

bridgwatera | No Comments
| More

Dell sells software as well as hardware.

Well, honestly -- you knew that anyway... and what IT vendor doesn't now position itself as a services, cloud, datacentre, applications and software-centric business?

To be clear, Dell of course still sells an awful lot of hardware where some vendors have shrugged off previously more tangible product lines.

Dell Software (the actual company division) has been around in its current form for a handful of years now and came about on the back of somewhere approaching 40 major acquisitions, including SonicWall, Quest, KACE Networks and StatSoft to name just four.

Dell Software says its pedigree comes from its position in the app and IT infrastructure monitoring market.

But in the last year a bunch of start-ups focused specifically on creating what might be described as lightweight, web-based app monitoring tools have been winning over parts of what could have been Dell's potential customer base.

New Relic and AppDynamics are the two notable stars in the application monitoring space, yet Dell claims that these companies' offerings lack the "automated diagnostics and analytics" necessary to speed problem solving.

Dell says it purposefully took a year to build these products and its engineering team spoke with customers worldwide to find out exactly what they wanted in a next generation app monitoring tool.


Steve Rosenberg, VP & GM for Dell's performance monitoring division, explains Dell Software's position on next generation app monitoring tools:

CWDN: Why is app monitoring so important then?

Rosenberg: When the app is the business, nothing is more important than monitoring its performance to find and fix problems before they negatively impact the business. Application teams, particularly for web & mobile businesses, need to move quickly and have neither the time nor the interest to manage complex monitoring tools.

CWDN: So you will tell us that cloud makes the situation even more pressing then?

Rosenberg: Certainly yes. Supporting cloud applications has become critical to businesses of all sizes. What is needed is an entirely new on-demand app monitoring product that can explore, uncover and fix performance issues in an intuitive way.

CWDN: Tell us what has been happening with these start up "pretenders" to your crown then?

Rosenberg: As a result of this demand, in the last year a host of start-ups focused specifically on creating lightweight, web based app monitoring tools have entered the app and IT infrastructure monitoring market.

Organisations are taking a "big data" mindset approach to all parts of their business and they want solutions that give them all the data they might need to answer performance problem questions quickly rather than be restricted to summarized and averaged data to draw their own conclusions.

CWDN: And your message to software application developers then?

Rosenberg: What developers want is: the ability to record and preserve a catalogue of every transaction for historical reference along with the power to dive deep into every transaction, including mobile, browser, OS and app server. They need to be able to see all dimensions of the application sphere (browsers, carriers, EVERY request) and have a tool that captures details about every transaction running through the system to pinpoint problem areas surrounding badly performing transactions that are buried inside well performing transactions.

CWDN: Does Dell really understand how tough things are at the developer coal face?

Rosenberg: I think we do. Developers face two main pain-points: proactive problem identification and the ability to dive deep enough to solve problems quickly with minimal ongoing administration. Therefore, developers want actionable analytics and insight at the foundational (transactional repository) level. To get this they need a next generation app monitoring tool which preserves a record of every data point and transaction event throughout the life of the application.

CWDN: Your opinion of these lightweight operators then is that they are, well, pretty lightweight right?

Rosenberg: AppDynamics and New Relic lack the intelligent analytics required to accelerate the performance diagnostic process. What is really needed is a solution that offers simplicity and depth for the developer community.

With a rich history to draw from and the ability to go deeper than ever into underlying performance issues, developers, web and application managers, DevOps teams and IT admins need a solution that offers both simplicity of use and technological depth.

NASA developer challenge protects Earth from attack by deep space asteroids

bridgwatera | No Comments
| More

Bah humbug, not another developer competition surely?

Well yes, but if we said NASA and Asteroids would you read on?


NASA has been working with Appirio's [topcoder] community of 630,000 data scientists, developers and designers to kick off the "Asteroid Tracker Challenge".

The challenge begins July 25 2014 and competitors are tasked with optimising the use of an array of radar dishes when tracking Near Earth Objects (NEOs) from the time they become visible over the horizon to the point at which they cease to be visible.

NOTE: NASA defines Near Earth Objects as comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth's neighbourhood.

"Composed mostly of water ice with embedded dust particles, comets originally formed in the cold outer planetary system while most of the rocky asteroids formed in the warmer inner solar system between the orbits of Mars and Jupiter. The scientific interest in comets and asteroids is due largely to their status as the relatively unchanged remnant debris from the solar system formation process some 4.6 billion years ago."
... anyway, back to the challenge

This tracking (the kind developers have to do for the challenge) is meant to allow scientists to gather information from each object, such as composition and spin rate, among other properties.
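NASA hasn't published a reference algorithm for the challenge, but the dish-allocation problem it describes resembles classic interval scheduling over visibility windows. The sketch below is a minimal greedy illustration for a single dish, with all names invented; it is not the contest's actual scoring harness.

```java
import java.util.*;

// Illustrative only: pick the maximum number of non-overlapping
// visibility windows one dish can track, via the classic
// earliest-finish-time greedy algorithm.
public class DishScheduler {
    record Window(String object, int rise, int set) {}

    static List<Window> schedule(List<Window> windows) {
        List<Window> sorted = new ArrayList<>(windows);
        sorted.sort(Comparator.comparingInt(Window::set)); // earliest set time first
        List<Window> chosen = new ArrayList<>();
        int busyUntil = Integer.MIN_VALUE;
        for (Window w : sorted) {
            if (w.rise() >= busyUntil) { // dish is free when the object rises
                chosen.add(w);
                busyUntil = w.set();
            }
        }
        return chosen;
    }

    public static void main(String[] args) {
        schedule(List.of(
                new Window("2014-AB", 0, 5),
                new Window("2014-CD", 4, 9),
                new Window("2014-EF", 6, 10)))
            .forEach(w -> System.out.println(w.object()));
    }
}
```

The real challenge adds dimensions this sketch ignores (multiple dishes, slew time, signal quality per object), which is exactly what makes it a competition rather than a textbook exercise.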

NEO detection and characterisation is a critical need for NASA, it says.

Want to know why?

NASA has been directed to develop capabilities to observe, track and characterise NEOs and other deep space objects that could pose a threat to the Earth.

Developers, it is your duty, please go forth.

Does guaranteed datacentre PUE engender better applications?

bridgwatera | No Comments
| More

While software developers focused on the more cerebral, design-centric and user interface level of the application structure might not spend too much time thinking about the architectural back end, there could be a rationale for a little more introspection.

1. mobile relies on back end power to feed the device efficiently, so the theory states that a super efficient datacentre will serve apps better than a sloppy one
2. cloud applications mirror mobile in the same sense as point #1
3. our attention and interest towards the datacentre (as the "network is the computer" after all) is higher overall today, primarily perhaps because of point #1 and point #2

So then, shouldn't programmers care more directly about the Power Usage Effectiveness of the datacentre that underpins their applications?


NOTE: PUE is a metric used to determine the energy efficiency of a datacentre -- PUE is determined by dividing the total amount of power entering a datacentre by the power used to run the computer infrastructure within it.

Ark says it is "pioneering change" (yes, they all say that, but bear with us) in the datacentre industry by GUARANTEEING PUE rather than using it as some business barometer that is endlessly negotiated over as part of some flaky Service Level Agreement.

Ark CEO Huw Owen says that, "One of our biggest challenges in the datacentre industry is educating businesses and governments about our role in underpinning modern technology and our ability to do so in an efficient and socially responsible manner. That requires ownership and intelligent advocacy by all of us. In this regard, we are not yet where we need to be."

To give some perspective and real numbers, Ark finds that most companies tend to run at a building PUE of 2.5 or higher.

If you could attain a building PUE of 1.25 or less, there is potential to achieve savings of around £1.1 million per megawatt, per year. From an environmental perspective, that's 6,000 tonnes of carbon that you could potentially be taxed on.
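As a rough sanity check on those figures, the PUE definition above reduces to a single division, and the energy saved by improving PUE is proportional to the gap between the old and new ratios. A minimal sketch (the power figures below are illustrative assumptions, not Ark's numbers):

```java
// Sketch of the PUE arithmetic described above. Ark's £1.1m/MW/year
// figure folds in a tariff we don't know, so this only shows the
// energy side of the calculation.
public class PueSketch {
    // PUE = total facility power / IT equipment power
    static double pue(double facilityKw, double itKw) {
        return facilityKw / itKw;
    }

    // Annual kWh saved per kW of IT load when improving from oldPue to newPue.
    static double annualKwhSavedPerItKw(double oldPue, double newPue) {
        return (oldPue - newPue) * 24 * 365;
    }

    public static void main(String[] args) {
        System.out.println(pue(2500, 1000));                  // a "typical" 2.5 building
        System.out.println(annualKwhSavedPerItKw(2.5, 1.25)); // 10950.0 kWh saved per kW of IT
    }
}
```

At roughly 10,950 kWh saved annually per kW of IT load, a megawatt-scale facility saves around 11 GWh a year, which makes Ark's seven-figure claim at least plausible under ordinary commercial tariffs.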

"Data storage is seen as power hungry which makes our industry a potential political football if we don't share the facts available. The harsh reality is that with the majority of UK data stored in warehouses and shoe-horned into broom cupboards, our critics on one level are right. The good news is that modern, sophisticated, highly efficient, purpose built datacentres offer the solution. That does however have to be broadcast both loudly and effectively, accepted and then acted upon," added Owen.

Bridging the Java to .NET interoperability divide

bridgwatera | No Comments
| More

The following piece is a guest post for the Computer Weekly Developer Network by Wayne Citrin, an interoperability specialist at JNBridge.

Citrin is also CTO and the architect of JNBridgePro -- the company is a provider of interoperability tools to connect Java and .NET frameworks.



With an increasing number of today's enterprises using a mixture of both Java and .NET technologies, interoperability between the two platforms has become an imperative.

The various business reasons behind the need for interoperability include:

  • the reuse of existing skills, technologies and systems,
  • the need to reduce project costs,
  • and the requirement for faster time to market.

While the reasons for interoperability have remained the same, the landscape of approaches has endured some change. Indeed, of the class-level interoperability approaches, only one -- bridging -- has truly prevailed.

Approaches to Java-.NET class-level interoperability

Depending on the business reasons for interoperability and the requirements of the application, a company might choose either a service-oriented architecture (SOA) or class-level integration.

For the purposes of this article, we will focus on class-level integration.

Three basic approaches

Historically, there have been three basic approaches to Java-.NET class-level interoperability:

Porting the platform: Port the entire .NET platform to Java or vice versa. In addition, compile the developed code to the alternate platform.

Cross-compilation: Convert Java or .NET source or binaries to .NET or Java source or binaries.

Bridging: Run the .NET code on a .NET Common Language Runtime (CLR), and the Java code on a Java Virtual Machine (JVM) or a Java EE application server. Add a component to manage the communications between them.

Platform porting and cross-compilation certainly have some overlap. But cross-compilation involves only a subset of the code and will usually try to substitute the API calls of one platform with those of the other. Platform porting implies porting all APIs of one platform to the other. Additionally, cross-compilation normally happens once, with the result that the Java code is converted to a .NET language (or vice versa). After cross-compilation, the initial code base is no longer used.

When evaluating each approach, the following criteria are commonly used:

Performance: How much overhead is involved in inter-platform communication?

Direction of interoperability: Does the approach support Java calling .NET, .NET calling Java, or both? Are callbacks supported?

Binary compatibility: Can we use the approach to access Java binaries from .NET, or do we need source code?

Type compatibility: Does the approach offer full implementation inheritance? Using the approach, are values converted to native data types on the respective platforms, where possible?

Portability: Does the approach work only on Windows, or is it cross-platform?

Conformance to standards: Using the approach, is the behavior of the Java code guaranteed to conform to Java standards?

Ability to evolve: Will the approach break as either the .NET or Java platform evolves?

Both platform porting and cross-compiling -- while still in existence -- have fallen off the interoperability radar to a significant degree, mostly because they've failed to meet all or some of these evaluation requirements. Here, we take a deeper dive into each.

Porting the platform

One software vendor attempted at one time to reimplement the entire .NET platform in Java as a set of Java packages. This meant that the framework became available to be called by any Java code that imported the relevant package. While porting .NET to Java allowed Java to call the .NET APIs, in itself it didn't allow Java classes and .NET classes to call each other. To allow Java to call .NET, cross-compilation must also be used.


Platform porting offers a number of benefits, including low inter-platform overhead. But its pitfalls far outweigh its benefits, contributing to its near-demise. The .NET framework is quite large, and porting the entire framework is a daunting task. There are a number of namespaces in the framework that are tightly tied to the underlying .NET runtime, and it is not clear that it would be possible to fully implement them in Java. Even if these types of namespaces were available, the Java code loses its portability because it depends on the native Windows code. As .NET is a very large platform with many thousands of classes to port, this opens a Pandora's box of complexity. Platform porting quickly became a very unlikely method to succeed in achieving interoperability because of the sheer size and complexity of the task.


Cross-compilation

Cross-compilation can either compile Java source to MSIL (the Microsoft Intermediate Language that runs on the .NET CLR), thereby truly making Java a .NET language, or compile Java source to a .NET language such as C# or VB.NET.

If compiling Java source to MSIL, full inheritance between Java and other .NET languages is possible, and if done properly, the Java compiler can be fully integrated with other .NET development tools. Full inheritance is supported between Java and other .NET languages, and there is full interoperability in both directions, so that Java methods can call methods written in other .NET languages, and vice versa.

But with cross-compilation, there exist a number of shortcomings -- not least of which is that the resulting code will only run on Windows, unlike Java source code compiled into Java byte codes, which is cross-platform. Also, any Java code that calls .NET framework APIs is no longer portable, since it relies on calls to methods other than Java methods or Java APIs. Additionally, in order to use a Java-to-MSIL compiler, the Java source code is needed, which means that the option is not available if the user only has Java binaries (for example, a compiled Java library that the user purchased from a third party).

The differences between the Java and C# object models lead to some problems when integrating Java and C# classes. For example, a Java interface with constant fields can legally be compiled into MSIL and used by other MSIL-compiled Java classes, but any C# classes attempting to implement the interface would not see the constants.
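For readers unfamiliar with that mismatch, here is the Java side of the story: an interface carrying a constant field, which Java implementors inherit but which a C# implementor of a cross-compiled equivalent would not see (C# interfaces cannot declare fields). The names below are invented for illustration.

```java
// Java interfaces may declare constant fields, which are implicitly
// public static final and visible to every implementing class.
interface RetryPolicy {
    int MAX_RETRIES = 3; // the constant a C# implementor would not see

    boolean shouldRetry(int attempt);
}

class SimplePolicy implements RetryPolicy {
    @Override
    public boolean shouldRetry(int attempt) {
        return attempt < MAX_RETRIES; // constant inherited from the interface
    }
}
```

A cross-compiler must therefore either hoist such constants into a separate class or duplicate them, and either workaround changes the API that non-Java callers program against.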

When compiling Java source to a .NET language, there is no overhead for inter-platform communication. Full inheritance is possible. In the case of binary cross-compilation, the approach works only when Java binaries are available. When the MSIL is cross-compiled to Java byte codes, the result is cross-platform.

There are some disadvantages to this cross-compilation approach as well. It is necessary either to re-implement the APIs for one platform in the other (that is, re-implement the Java API in .NET or the .NET framework in Java), or to translate one platform's API calls to the equivalent on the other platform. Java byte codes translated to MSIL would only run on Windows. Finally, there is no guarantee that the behavior of Java code translated to MSIL will conform to Java standards.



Bridging

Bridging solutions address the conversion issue by avoiding it. .NET classes run on a CLR, Java classes run on a JVM and bridging solutions transparently manage the communications between them. To expose classes from one platform to classes on the other, proxy classes are automatically created that offer access to the underlying real class. Thus, to allow calls from .NET methods to Java methods, proxies are created on the .NET platform that mimic the interfaces of the corresponding Java classes. A .NET class can inherit from a Java class by inheriting from the Java class's proxy, and vice versa.
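The proxy idea can be shown in miniature. In this sketch the "bridge" is just an in-process forwarder and every name is invented; a real product such as JNBridgePro marshals the call across runtimes, but the caller-facing shape is the same: the client programs against a class that mirrors the remote one.

```java
// Minimal shape of the bridging-proxy idea. Illustrative only.
public class ProxySketch {
    // The "real" class, which in a genuine bridge lives on the other runtime.
    static class RealCalculator {
        int add(int a, int b) { return a + b; }
    }

    // What a generated proxy does: mirror the interface, forward the call.
    static class CalculatorProxy {
        private final RealCalculator target = new RealCalculator(); // stands in for the cross-runtime transport

        int add(int a, int b) {
            // A real bridge would marshal the arguments, send them to the
            // JVM/CLR on the other side and unmarshal the result; the
            // caller never sees that machinery.
            return target.add(a, b);
        }
    }

    public static void main(String[] args) {
        System.out.println(new CalculatorProxy().add(2, 3)); // 5
    }
}
```

Because the proxy has the same method signatures as the real class, client code (and even inheritance hierarchies) can treat it as if the remote class were local, which is the property the article's bridging discussion relies on.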

Bridging has a number of advantages over other interoperability approaches. For example, bridging can evolve as the platforms evolve. Future versions of Java and .NET will work with a bridging solution as long as they remain backward-compatible. As new versions of Java and .NET are introduced, they can be incorporated without having to update the bridging solution.

Bridging has the additional advantage that, since the Java runs on a JVM or a Java EE application server, it is not necessary to have source code; the solution will work when only Java binary is available. Finally, since the Java classes are still compiled to Java byte codes, they remain cross-platform.

Bridging solutions also often conform to standards. Since the actual runtime environment is a CLR or a JVM, and as long as the runtime environments and compilers conform to standards, the resulting code will exhibit conformant behavior.

In addition to these general advantages, bridging solutions support callbacks (allowing Java code to implicitly call .NET code without having to alter the Java code), both pass-by-reference and pass-by-value semantics, automatic mapping of collection objects between native Java and native .NET, and on-the-fly generation of proxies for dynamically generated Java classes.

Rationale and reasoning

There are a number of reasons why one would wish to interoperate Java and .NET code, most of which centre around preserving an investment in Java code or Java developers, and using existing Java code in a new .NET setting.

Each of the various approaches to interoperability (platform porting, Java compilation to MSIL, cross-compilation and bridging) offers advantages and is appropriate in different situations.

However, as the previous discussion shows, bridging solutions provide the best combination of portability, ability to evolve, conformance to standards and smooth interoperability. These advantages have ensured bridging's survival as the interoperability solution of choice now and into the future.

Institutionalised omnichannel commerce analytics

bridgwatera | No Comments
| More

The arrival of that job title we now call CAO (chief analytics officer) comes with a few other new realities for the 'next-generation' IT shop.

This next-gen IT Nirvana sees analytics now driven from a top-down perspective (i.e. the boardroom and the central IT function) and, also, successfully disseminated throughout every lower echelon and tier of the company (i.e. every employee is armed with an analytics-aware device) so that every worker's data streams are captured for the wider data analytics pool and not left redundant in a silo.

This, in real terms, is institutionalised analytics -- in a good way.


Firms will now combine institutionalised analytics with their ecommerce channels to complete the picture (for now) as they bring cloud-based financials and ERP in to direct the normal throughput of corporate information around their business.

NetSuite is aiming to form a logically structured and even wider virtuous circle here and create what it calls omnichannel commerce as the new norm.

The company recently acquired London-based Venda, a leading provider of ecommerce solutions, to build upon its own NetSuite SuiteCommerce footprint.

"By combining Venda's customer insight and years of experience delivering a real-time, scalable commerce platform with NetSuite's cloud leadership, we can bring new capabilities to B2B and B2C companies of all sizes and transform how they run their businesses," said Zach Nelson, NetSuite CEO.

Venda's Convergent Commerce Platform is an ecommerce platform for retailers and brands to use online, mobile, social and in-store.

This is institutionalised omnichannel commerce analytics... a term we are perhaps not quite used to yet.

NetSuite says it is aiming to enable companies to "re-platform core operational business systems" in the cloud -- and work to support the transformation of those core operational business systems to help organisations transform B2B and B2C commerce to support an omnichannel world.

... and you thought institutionalised was a bad word?

Relax, that's only if you're watching The Shawshank Redemption.

Will Apple Swift fly higher than Google Go?

bridgwatera | No Comments
| More

Swift is a popular term, name, noun and thing.

Quite apart from Swift as a Suzuki supermini, a bird, an alternative metal band from North Carolina and an Australian netball team -- swift crops up in technology circles several times.


OpenStack Swift, (sometimes also known as OpenStack Object Storage) is an object storage system licensed under the Apache 2.0 open source license designed to run on standard server hardware.

Apple's Swift

Swift is ALSO a new programming language from the Apple developer team.

Designed for Cocoa (the native API for the OS X operating system) and Cocoa Touch (the user interface framework for Apple's own iOS operating system), Swift enjoys syntax that is "concise yet expressive" (says Apple) and Swift code works side-by-side with Objective-C.

Swift has only been around since June, but it's already ranking well on the July Tiobe Index and the PyPL Popularity of Programming Language index.

While these indices are not always regarded as accurate tabulations of real programmer interest, the world developer community has shown particularly close interest in this Apple-originated product.

Tiobe managing director Paul Jansen has pointed out that Google's Go language also ranked highly when first released, but has dropped off considerably since launch.

Apple says that Swift was built to be fast using the high-performance LLVM compiler -- Swift code is transformed into optimised native code, tuned to get the most out of Mac, iPhone, and iPad hardware.

According to Apple, "The syntax and standard library have also been tuned to make the most obvious way to write your code also perform the best. Swift is a successor to the C and Objective-C languages. It includes low-level primitives such as types, flow control, and operators. It also provides object-oriented features such as classes, protocols, and generics, giving Cocoa and Cocoa Touch developers the performance and power they demand."

Your next generation of iPhone and iPad apps will all be written in Swift, eventually.


What is liquid computing?

bridgwatera | 1 Comment
| More

The Computer Weekly Developer Network spots a new industry term in the process of crystallisation this week.

Liquid computing.

As is so often the way with this kind of terminology and nomenclature, this is a user-driven trend rather than a programmer one... or is it?



If it sticks, it will very arguably have direct implications for the way application developers structure the next iteration and generation of the software they focus on.

Apple's Handoff

The term liquid computing appears to have been used to describe the process driven by Apple's Handoff feature, which will debut in iOS 8 and OS X Yosemite at the end of July 2014.

Apple tells us that now you can start writing an email on your iPhone and pick up where you left off when you sit down at your Mac -- or browse the web on your Mac and continue from the same link on your iPad.

Don't hold your breath

Not to be left out in the cold, Google and Microsoft are also said to be working on features that emulate this kind of functionality -- but we would advise you not to hold your breath waiting for technical details.

Apple explains that it all happens automatically when your devices are signed in to the same iCloud account.

"Use Handoff with favorite apps like Mail, Safari, Pages, Numbers, Keynote, Maps, Messages, Reminders, Calendar, and Contacts. And developers can build Handoff into their apps now, too," said Apple.
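As a rough illustration of what "building Handoff into their apps" involves, here is a minimal sketch using Apple's NSUserActivity API, which underpins Handoff. The activity type string and the userInfo keys below are invented for illustration; a real app would declare its activity types in its Info.plist and restore state in its app delegate.

```swift
import Foundation

// On the originating device: advertise what the user is doing right now.
// "com.example.notes.editing" is a hypothetical reverse-DNS activity type.
let activity = NSUserActivity(activityType: "com.example.notes.editing")
activity.title = "Editing a note"
activity.userInfo = ["noteID": "1234", "draft": "Dear all"]
activity.becomeCurrent()  // makes this activity eligible for Handoff

// On the receiving device, the system hands the same NSUserActivity back
// to the app (via application(_:continue:restorationHandler:) in UIKit),
// and the app rebuilds its state from activity.userInfo.
```

The heavy lifting -- discovering nearby devices over the same iCloud account and transferring the activity -- is done by the operating system; the developer's job is mostly to describe the current activity and to restore from it.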

Apple showed this feature off at its recent WWDC conference's public keynote.

The term liquid computing was used originally by Galen Gruman and we like it.

DevOops! I did it again

bridgwatera

It was just a minor mistyping, but when DevOops cropped up this week it was more than the Computer Weekly Developer Network could handle... it just had to be blogged.

If DevOps = Developer Operations then...

LOBDOPS = Line Of Business Developer Operations and so...

DevOops = That State Of Being When DevOps Is Discussed Too Much

That being said, recent blogs in this channel have covered DevOps perhaps more than ever before and now we're convinced that DevOps is a cultural practice and methodology to deliver products and services, not a thing.

Additional comment came in this week from TK Keanini (Ed - don't be shy, tell us your first name), who is CTO at Lancope.

Keanini (presumably TK to his friends) says that for those who live and breathe DevOps, it is a necessity driven by the needs of scale for Internet applications.

He writes below as follows:

"DevOps is not for everyone but for those who require it, it is a necessary part of the business. To understand this better, just go to a DevOps organisation and ask them why they cannot operate in the historical service models. What you will see is that the speed at which the business functions is lightning fast when compared to the non-DevOps methods and this tempo is necessary, not optional. The people, processes, and technologies all need to work in concert for DevOps to really take hold but when it does, a structure much more tolerant to the scale and hostility of the Internet emerges. This is the value of DevOps."

Lancope, Inc. is a provider of network visibility and security intelligence -- by collecting and analysing NetFlow, IPFIX and other types of flow data, Lancope's StealthWatch System claims to be able to detect attacks from APTs and DDoS to zero-day malware and insider threats.

A real Internet of Things smart home experience

bridgwatera

Five years from now, this story will sound ridiculous.

The rise of so-called 'smart home' technology and the plethora of devices emerging into the so-called Internet of Things (IoT) category is, of course, very rapid at the moment.

The Computer Weekly Developer Network blog has, for some months now, been talking about how software application developers now have the opportunity to program not just for the enterprise, but also for the toaster, fridge and microwave, as we connect these domestic appliances to the web and its communication protocols so that we can start to manage them and digitise our lives.


But when will the digital home become a reality?

There are some higher profile examples than the one you will read about below, but I have finally started to implement smart home technologies in my own house.

Let's start with the basics, a Fitbit.

Firmly in the so-called 'wearables' category, many users (myself included) can't put their trousers on without dropping their Fitbit into their pocket.

Since I started carrying a Fitbit One I have clocked up 1,000 miles in the last eight months, and I know that my climbing peak was 200 flights of stairs in one day.

Does that sound silly?

It's now a part of my life and I already have a spare one for when this one wears out.

Like I said, five years from now, this story will sound ridiculous -- everyone will be used to wearables and have one or two devices of their own.

Moving on... I am ashamed to tell you that my toaster still works by clockwork and my microwave is ever so conventional... but, my home heating and hot water system is much more exciting.

British Gas Hive

As a proud owner of a British Gas Hive system I am able to turn my heating on and off from the app on my iPad and Android smartphone -- the absence of an app on the Windows Phone store is a shame.

I can also see the temperature at home and adjust my heating controls, plus the complete schedule of when my heating (and hot water) comes on and off, from the app itself.


I have been in meetings over the last couple of months and opened up the app just to show people (usually quite techie people) that I can turn my hot water on when I'm not at home.

Everyone thinks this is so cool today, in 2014 -- and it is of course, but... as I keep saying, it won't be long before we accept these technologies as the norm and therefore start actively demanding them as consumers.

Like I said, five years from now, this story will sound ridiculous.

The fact that I can now be away in a foreign country in winter time and not have to come home to a cold house is... well, it's life changing if I am completely honest with you.

Hive is controlled from a hub that plugs into your broadband router so that your thermostat can connect to the Internet and be controlled remotely.

British Gas says that while we have seen innovation in transport, retail and leisure, the pace of innovation in many aspects of our home has been pretty slow.

Outside of the living room, technology hasn't really changed how we manage our homes. The way we heat, power and light our homes has not changed for decades. The last mainstream innovation in our home could be described as the mass adoption of central heating in the 1970s.

Until now, then, right?

So will my heating switch off if my broadband goes down?

No, the heating will continue to work, it will just need to be controlled directly from the thermostat itself.

I would argue that one of the best things about British Gas is its engineering staff and our unit was fitted by one Charlie Cole (no relation to Cheryl) who explained the system clearly and slowly.

So what else have we been connecting?


Now for the best bit, I also have a Piper.

What's a Piper?

The Piper is a home surveillance, security, home management, alert system -- or something like that.

Its makers call it a home automation system.

Piper is claimed to be the first device to combine panoramic video, Z-Wave home automation and environmental sensors in a single product.

There are zero service contracts or monthly fees.

"The ability to simply and easily interact with and secure your entire home -- not just one room -- when you're away has been a priority of ours since we first developed Piper," said Russell Ure, creator of Piper and executive VP & GM of Icontrol's Canadian business unit.

Using up to five Pipers, users can create independent security zones within their homes.

Each Piper operates as an "independent sentinel" and joins together to form an integrated security network.

Z-Wave integration allows (say its makers) for complete home awareness and automation control.

Shared environmental and motion sensor data, camera views and recorded videos provide the ability to track changes and movement through each zone.

Piper's features include two-way audio -- this means I can talk directly to occupants through Piper from the app on a mobile device. Users can also customise three security modes (home, away and vacation) and program a motion detector connected to a piercing 105-decibel siren.

There's also free cloud storage that provides Piper with a place to store event videos, send various types of notifications and perform additional login/connection negotiation.

Piper's HD Panoramic camera has a 180° fisheye lens, electronic pan, tilt and zoom and 1080p camera sensor.

The fact that I can now sit in a meeting and show people a live video stream of my home that I can talk into and interact with as other house occupants pass by is amazing, today, in 2014.

The fact that I can use Piper and Hive to see what the temperature is in my own house (Piper has a thermometer too) and turn my heating on and say hello to my dog as I watch him scamper around and wait for me at my front door is amazing, today, in 2014.

But I'm telling you, five years from now, this story will sound ridiculous.

