October 2011 Archives

Project management lessons from the chicken and the pig

bridgwatera | No Comments

You may already know this story, but for those who don't I hope this will provide an interesting distraction.

Mr Pig and Mr Chicken were taking a stroll through the farmyard one day and thinking about a life outside of the ranch. Mr Chicken says, "Hey Mr Pig, we should bust this coop and pig pen routine and get ourselves free, then we could live a new life and start a restaurant."

Mr Pig says, "Nice idea, what do you want to call it?"

Mr Chicken replies, "I thought maybe 'Ham 'n' Eggs' would work."

Mr Pig shudders and says, "Well, that sounds like some serious commitment from me, while you would only really just be 'involved' in passing."

Photo credit from ImplementingScrum.com

This 'fable' is often used to explain the different roles played by team members under the Scrum agile framework.

Pigs of course are totally committed to the project and "give their everything" to the cause of the work in hand. In the (human) world of scrum, these are the guys and girls that will produce detailed functional specifications for the project and get heavily involved in testing, project architecture and supporting documentation/annotations etc.

Logically then, the ScrumMaster is quite definitely porcine, swine-based and altogether porky.

The chicken division then (in human terms) is made up of stakeholders, temporary team members, supervisory managers and other individuals who will have smaller roles in the total project.

Although scrum chickens are sometimes referred to as "preening roosters" or "hopeless gamecocks", they are still needed for the scrum project to exist in the first place. Otherwise it's just roast pork and nothing else right?

You can read top tips for using scrum here on Computer Weekly.

Martha Lane Fox backs Coding for Kids programming tuition


Remember Computer Club at school? If you are a 40-something like me, you may remember the Commodore Pet, the BBC Micro and the daddy of them all, the Research Machines 380Z, with its whopping 56K of RAM, which cost over £3,000 back at the end of the 1970s.

Bringing us (thankfully) bang up to new millennium relevance is Coding for Kids, a new online support and lobbying movement that asks its supporters to pledge to take action in their community to improve the teaching of computer programming for British children.


Backed by UK digital champion Martha Lane Fox and others, Coding for Kids aims to teach children the "fundamentals of computer programming" in real, practical terms.

This new group aims to answer the issue raised by Google's Eric Schmidt who said this summer that he was, "...flabbergasted to learn that today computer science isn't even taught as standard in UK schools... Your IT curriculum focuses on teaching how to use software, but gives no insight into how it is made."

He makes a good point -- but then he would say that, wouldn't he?

Longtime darling of the UK tech scene and maverick businesswoman in her own right, Lane Fox is currently tasked with spearheading government efforts to get the entire population online by 2012. Reports suggest that she believes improving the teaching of coding is key to our long-term economic success.

"We have to address the fundamental skills in the UK - anyone with an interest in technology and the future of the UK must support Coding for Kids," said Lane Fox, in a press statement.


Anyone can get involved in this scheme through the website link shown above. While this blog is, one hopes, aiding the effort in some small part, what the organisation would clearly like me (or anyone) to do is volunteer to start computer clubs at schools, lobby exam boards and host coding workshops.

Supercomputer software, wind power turbines & big data analytics


With the creation and deployment of environmental "renewable energy" technologies now growing at a faster rate than ever, the 'touchpoints' for IT-driven infrastructural support for the eco-energy sector are also growing at a commensurate rate.

While IT firms selling to this space are having to be careful of so-called "greenwashing" (i.e. sticking a green badge on a project but failing to follow through with environmentally aware provisioning), the opportunity to push IT services to what is one of the planet's fastest growing verticals is a major driver for both software and hardware development just now.

As such, Danish energy company Vestas Wind Systems has this week said that it will use IBM big data analytics software and IBM hardware systems to improve wind "turbine placement" for optimal energy output.

Turbine placement is reportedly a major challenge for the renewable energy industry. Vestas is addressing the issue by using IBM BigInsights software and an IBM Firestorm supercomputer to analyse what has been called "petabytes of structured and unstructured data" -- weather reports, tidal phases, geospatial and sensor data, satellite images, deforestation maps and weather modelling research -- to pinpoint installation sites.


"Vestas turbines operate for decades and clients demand to know how much energy they will produce and what their return on investment will be before they are installed," said Lars Christian Christensen, VP of plant siting and forecasting, Vestas Technology R&D. "Using IBM software and systems, we can now answer these questions quickly to identify new markets for wind energy and help our clients meet aggressive renewable energy goals."

Beyond the siting process, once a turbine is operational, Vestas engineers will use the new software and supercomputer to predict its performance, analyse how each blade reacts to weather changes, and determine the best times to schedule maintenance. The company expects to analyse even more diverse and bigger weather data sets, reaching 20-plus petabytes over the next four years.
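To make the idea of data-driven siting concrete, here is a deliberately tiny sketch of a site-scoring pipeline. The field names, weights and normalisation are all hypothetical illustrations -- this is nothing like IBM's actual BigInsights workload -- but it shows the shape of the problem: score candidate sites from sensor readings, then rank them.

```python
from statistics import mean

def score_site(readings, min_speed=6.0):
    """Score a candidate turbine site from hourly wind-speed readings (m/s).

    Power available in wind scales with the cube of wind speed, so we
    reward high mean speed and reject sites below a viable average.
    """
    avg = mean(readings)
    if avg < min_speed:
        return 0.0
    return round(avg ** 3 / 1000, 3)  # arbitrary normalisation for display

def rank_sites(site_readings):
    """Return candidate sites ordered best-first by score."""
    scored = {site: score_site(r) for site, r in site_readings.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

candidates = {
    "ridge_a": [8.1, 7.9, 9.2, 8.4],
    "valley_b": [4.2, 5.1, 3.8, 4.9],
    "coast_c": [10.3, 9.8, 11.1, 10.6],
}
print(rank_sites(candidates))
```

The real system folds in tidal, geospatial and satellite data rather than a single wind-speed column, but the ranking step at the end is the same in spirit.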

So now we know that wind farms don't just get "plonked somewhere a bit windy"; there is science behind the placement and analysis of these machines, and it is software powered.

Even better, next time you drive past one you can make a knowing remark about the "petabytes of structured and unstructured data" being processed as a result of its operations.

Who said it wasn't good to be geeky?

Programming possibilities: recognising facial recognition technology


As impossible as it might have seemed 50 or 60 years ago that a journalist would one day be using an electronic slab to write a story 33,000 feet in the air, there was also a time when the notion of facial recognition technology was thought beyond the realms of scientific research.

Well that was 1950 perhaps. But this is 2011 and I am sat writing on board Lufthansa flight 2471 to Munich for a developer conference. In this same year of our lord (Brian), there is also news bubbling of new image processing software being used to track our facial expressions via the use of a webcam.

So what use is this?

Suddenly you're surfing the web and an ad for sports cars pops up. You don't drive, so you don't react much. Another car ad pops up, you start to frown. Your browser then sends you a quick apology message and records this user behaviour in the preferences settings that you have approved.

Potential new uses in gaming, education and healthcare come immediately to mind.

The foundations for facial recognition technology date back to the 1970s, when American psychologist Paul Ekman developed a facial coding system that is reportedly still used in many places - or at least serves as a foundation.


Software application developers with web, mobile and/or standard desktop application centricity should surely at least pay some passing heed to new work being carried out in this arena. London-based company Realeyes has apparently been applying algorithmic analysis to facial data, captured by webcams of much higher quality than ever before, to try to advance this electronically powered "emotional analysis".
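Ekman-style coding systems describe expressions as combinations of numbered "action units" (AUs) - individual facial muscle movements - and then infer emotions from which units fire together. The sketch below is a toy version of that idea with a handful of well-known AU combinations; it bears no relation to Realeyes' actual models, and a real detector would of course emit the active units from video frames rather than a hand-written set.

```python
# FACS-style action units (AUs) are numbered facial muscle movements,
# e.g. AU4 = brow lowerer, AU6 = cheek raiser, AU12 = lip corner puller.
EMOTION_RULES = {
    "happiness": {6, 12},       # cheek raiser + lip corner puller
    "sadness": {1, 4, 15},      # inner brow raiser + brow lowerer + lip depressor
    "anger": {4, 5, 7, 23},     # brow lowerer + lid/lip tightening units
}

def estimate_emotions(active_units):
    """Score each emotion by the fraction of its required AUs detected."""
    scores = {}
    for emotion, required in EMOTION_RULES.items():
        scores[emotion] = len(required & active_units) / len(required)
    return scores

# A frame where the (imaginary) detector reports AU6 and AU12 active:
print(estimate_emotions({6, 12}))
```

The frowning-at-the-car-advert scenario above is just this loop run continuously: detect active units per frame, score emotions, and feed the scores back into the user's approved preference settings.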

What next? Webcams with built-in heart rate monitors, if you believe industry watchers in this zone.

Remember how you no longer think that touchscreen tablet computers are a crazy idea? This could be one of the next "killer app" zones for software programmers to think about.

END NOTE: Dear Lufthansa, I know the "currywurst" is popular in Germany. But hot breakfast chicken korma rolls at 9:35am! What were you thinking?

How loosely coupled applications beat middleware


This is a guest piece written for the Computer Weekly Developer Network blog by Phil Lewis, business consulting director for UKIMEA at enterprise software vendor Infor.

A lot of people in business hate the software that runs their company. Executives grudgingly accept the painful shortcomings of their software because they feel they have few practical alternatives. That pain usually stems from the fact that systems A, B, and C simply refuse to get along.

But isolated applications written in proprietary languages need not be tolerated. For a business to operate at the speed it needs to function, processes have to be quick and comprehensive. This means the software that enables - and controls - those processes has to be linked together. There is no better demonstration of this than the moment those links break and complex business processes grind to a halt.

A business may be unable to ship product or to invoice customers. Cash flow can be interrupted. Operational reports that steer a business become useless. Users have to move from one application to another to find all the information they need. They can't search for data across corporate software or work on a smartphone away from their desk.

Editor's Note: It should come as no surprise to the reader to find out at this point that Infor specialises in enterprise software, from "systems of record" to ERP (enterprise resource planning) systems -- advocating a unified approach to enterprise-wide application consolidation and management is therefore the company's bread and butter.

So what do we do now?

Middleware began as an obvious answer: 'point-to-point' integration via custom-written code that translates from one application to the next. This soon became notoriously difficult to install, time-consuming to implement, and cumbersome to maintain.

Instead, the coupling between applications needs to be loose, without sacrificing security and integrity. This can be based on an Enterprise Service Bus (ESB), which transmits common business language documents, based on OAGIS messaging standards.
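The loose-coupling idea can be sketched as a minimal in-memory bus: applications publish standard business documents to a named topic and subscribe to topics, never calling each other directly. This is a toy illustration of the pattern, not an actual ESB product or the real OAGIS document schema.

```python
from collections import defaultdict

class MiniBus:
    """A toy service bus: publishers and subscribers share only a topic
    name and a document shape, never direct references to each other."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, document):
        # Deliver the document to every handler registered for the topic.
        for handler in self.subscribers[topic]:
            handler(document)

bus = MiniBus()
received = []

# The finance app subscribes without knowing who will send invoices.
bus.subscribe("ProcessInvoice", lambda doc: received.append(doc["invoice_id"]))

# The order app publishes an OAGIS-like business document.
bus.publish("ProcessInvoice", {"invoice_id": "INV-1001", "amount": 250.0})
print(received)  # ['INV-1001']
```

Swapping either application out touches nothing but its own subscription or publish call - which is exactly the point being made against brittle point-to-point integration.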

Users should be able to search for any data element including, for example, customer name, invoice number, work order, etc. This is currently difficult because most applications have their own data structures and don't enable data to be shared unless a master data warehouse is created -- a big and costly job that should not be necessary with intelligent integration.

Proactive searching capability is the next step. Keywords or data elements can be tracked across all the business applications in a company and email, SMS or even Twitter can alert a user every time the software senses a process or a status change that involves the defined data element.
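Mechanically, that proactive tracking amounts to matching each incoming event against a watch list and routing a notification. A minimal sketch follows - the event shape is hypothetical and the alert delivery (email, SMS, Twitter) is stubbed out as a print:

```python
def watch_events(events, keywords):
    """Yield an alert for every event whose text mentions a tracked keyword."""
    lowered = [k.lower() for k in keywords]
    for event in events:
        text = event["text"].lower()
        for keyword in lowered:
            if keyword in text:
                yield {"keyword": keyword,
                       "source": event["source"],
                       "text": event["text"]}

events = [
    {"source": "ERP", "text": "Work order WO-88 moved to status Complete"},
    {"source": "CRM", "text": "New note added for customer Acme Ltd"},
    {"source": "ERP", "text": "Invoice 4411 for Acme Ltd is overdue"},
]

alerts = list(watch_events(events, ["Acme Ltd", "WO-88"]))
for alert in alerts:
    # In a real system this would send email/SMS rather than print.
    print(f"ALERT ({alert['source']}): {alert['text']}")
```

A production version would subscribe to a live event stream rather than iterate over a list, but the match-and-notify core is the same.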

Integration also enables contextual information. By assessing where the user is - in a process - the system can present business intelligence, content and messaging that's appropriate at that moment.

Intelligent integration is vital for those businesses looking to gain competitive advantage in the current economy. Moving faster and working smarter than the competition is no longer top of the 'nice to have' pile - it is top of the survival list. So in order to do this, software across the business must come together and yield insight, deliver value and drive growth.

¡Ole! Spain drives legality into mobile services with Sybase 365


Mobile-focused software application developers might want to take more than a passing glance at what's going on in Spain just now.

Arguably not always known as the technology epicentre of Europe, Spain nonetheless has a high-tech hub in the northeast region of Catalonia. OK, so Mobile World Congress is held in Barcelona -- should that give us a clue?

It should in fact.


Rumour has it that Spain was one of the first countries to start to lay down laws relating to old non-registered pay-as-you-go SIM cards for anti-terrorism reasons i.e. you MUST tell the authorities your name and address and get a new SIM if you had one of the old anonymous ones.

Following on from this "mobile legality" theme, news this week bubbles of Sybase subsidiary company Sybase 365 working with Spanish mobile operator Yoigo. The two firms have joined forces to offer registered SMS, a new service allowing companies to send customers confirmation text messages with the same legal standing as registered mail.

According to Sybase, "Officially certified by the Spanish Real Casa de la Moneda (The Royal Mint of Spain) the Sybase 365 and Yoigo service recognises an SMS confirmation as legal proof of delivery of important documents and information. These certificates can then be used as evidence in judicial proceedings in Spain for enterprises wishing to demonstrate correspondence with their customers. This will enable companies and their customers to resolve disputes in a timely manner, avoiding the cost of court proceedings."

With registered SMS, financial institutions, utility companies and enterprises will be able to use SMS where previously they would have used registered mail.
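One way to imagine legal proof of delivery is a certificate that binds the message, recipient and delivery time into a tamper-evident record. The sketch below illustrates that general idea with a hash digest - it is purely illustrative and is not Sybase 365's or the Royal Mint of Spain's actual certification scheme.

```python
import hashlib
import json
from datetime import datetime, timezone

def delivery_certificate(sender, recipient, message, delivered_at=None):
    """Build a tamper-evident record of an SMS delivery.

    The digest commits to every field, so any later alteration of
    the record is detectable by recomputing the hash."""
    record = {
        "sender": sender,
        "recipient": recipient,
        "message": message,
        "delivered_at": (delivered_at or datetime.now(timezone.utc)).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_certificate(record):
    """Recompute the digest over the original fields and compare."""
    body = {k: v for k, v in record.items() if k != "digest"}
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == record["digest"]

cert = delivery_certificate("BankCo", "+34600000000", "Your statement is ready")
print(verify_certificate(cert))  # True
```

A scheme with real legal standing would add a signature from a trusted third party (the certifying authority) rather than a bare hash, but the record-and-verify shape is the same.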

Developers working to build legally approved services into mobile (or desktop, for that matter) applications should perhaps take note of Sybase 365's suggestion that SMS provides a number of advantages over registered mail, including a response rate five times better than traditional mail; the company also claims an SMS is read 288 times faster than an email.

"No other communication medium has the ability to reach more people than SMS, said Howard Stevens, senior vice president, global telco and international operations, Sybase 365. "Consumer acceptance and enterprise adoption of the mobile channel is fuelling the growth in volume, availability and sophistication of mobile services and the registered SMS services we're launching confirms this trend."

Why is it called cloud computing?


Why did we settle on the name cloud computing for the delivery of hosted and managed service-based computing? All we needed was a term to describe a cavernous domain with (potentially) limitless expanses of space.

Surely "dungeon computing" would have sufficed -- or would that have sounded too negative?

What about "ocean computing" -- or would that have lacked the alliterative bounce of cloud computing's double C?

We could have had "cosmos computing" too, wouldn't that have been even better?

In truth, the reason for cloud being chosen appears to relate to the work of network engineers who will typically spend many hours diligently mapping network devices, constituents and governing parameters with all the granular detail of a fine cartographer.

When these networks began to connect to "other networks" and indeed the Internet itself, the engineers needed a concept that would enable them to denote the existence of a domain about which not all details were known -- hence the term cloud was hit upon.

It's hard to measure a cloud, computing kind or cumulonimbus, after all. Isn't it?


My train of thought has been wandering thusly after I was sent a news story by CA regarding its snappily named CA ERwin Data Modeler for Microsoft SQL Azure, a product designed to help customers manage and integrate database infrastructures inside a Microsoft SQL Azure cloud database environment.

"Many customers have concerns about moving their data to the cloud, fearing the potential complexity and risk of designing and deploying systems in an off-premise environment. CA ERwin Data Modeler for Microsoft SQL Azure enables customers to make fact-based decisions about which data to move to the cloud, and which to keep on premise," said the company, in a press statement.

CA has suggested that working with cloud computing technologies effectively will necessitate controls to provide "visibility" into public, private and hybrid cloud environments.

So this is all about creating an inventory of data assets in the cloud (as well as on-premise data too), quite the opposite of the way the cloud was initially conceived (or at least "perceived") i.e. as an unknown domain.

Now that we are mapping and navigating the cloud computing landscape in this way we are clearly going to need a new name for this technology paradigm. You can send your suggestions for a new name for cloud to "Why I think cloud computing needs a new name and my idea is great competition" as a response to this blog.

Compuware's dynaTrace does APM, UEM and BTM with great TCO, but so what?


Compuware's dynaTrace division is taking its application performance management (APM) wares to market this month with an ebullient skip in its step. Using warm and fuzzy vendor-marketing terms like "unparalleled business value", "shortened time-to-value" and, wait for it... "lower total cost of ownership (TCO)", the company is positively bursting with news of its acronym-heavy PurePath Technology, which is said to unify APM with User Experience Management (UEM) and Business Transaction Management (BTM).

So APM, UEM and BTM with great TCO is all very well, but where's the so what factor?

Compuware insists that as applications become more business critical, managing performance from the "user's perspective" is a growing requirement for any modern APM system.

So hang on, through the muddled haze of acronyms and "technology value propositions", we're starting to talk about tools that help refine performance from a user's point of view right?

"By tightly integrating UEM with its transaction-pure APM system, dynaTrace 4 allows business owners to understand performance from the user's point of view. This includes all (web) page actions, whether these actions call content and resources in the data centre or in the cloud. All user click-paths are captured as visits and aggregated for behavioural analysis and accurate complaint resolution," said the company, in a press statement.

It would of course be lovely if Compuware cut straight to talking about service monitoring with dynaTrace 4 and its ability to identify performance degradation up front.

But perhaps we're just being picky and you have to swallow a slice of TCO messaging before you get to the burger & fries that lies beneath right?

Either way, according to recent Gartner research, on average only 8% of the TCO of an application over 10 years is the initial capital expenditure to build and deploy it -- the remaining 92% of the TCO goes on enhancing, fixing and operating the application, says the analyst house.

John Van Siclen, president of dynaTrace, is quoted as saying, "dynaTrace 4 reaches beyond the classic APM boundaries to offer customers more value for the lowest total cost of ownership in the industry. By unifying our 24x7 transaction-pure APM system with UEM and BTM, dynaTrace 4 gives both production operators and business stakeholders powerful new capabilities to make their lives easier and provide greater insight for faster, more accurate decision making."

VMware and the 40-foot ISO shipping container


VMware, Trend Micro and F5 Networks all gathered around one oblong-shaped roundtable late last week to discuss the state of the cloud, the future for the virtualised corporate network and the importance of 40-foot ISO shipping containers.

VMware's chief cloud strategist Joe Baguley escorted the attendees through a series of pithy one-liners that (admittedly) did encapsulate the state of cloud computing technologies and virtualisation with some aplomb.

In comments centred on the new era of corporate virtualised networks where the traditional LAN as we knew it ceases to exist, Baguley said that the public cloud is where we all want to be. "Virtualisation is the 40-foot ISO shipping container that we can wrap around your work," he said.

"If human beings had evolved at the same rate as technology then we would all be dead unless we had all moved to live in shrink-wrapped bubbles," added Baguley.

Note: I did mention that he was full of one-liners right?

Trend Micro's EMEA chief technology officer described this journey towards virtualisation and the cloud as not simply a migration of data, but "an architectural change of magnitude" to be undertaken -- where security should be at front of mind of course.

VMware's Horizon App Manager exists in this space to move applications to a newly virtualised (and secure) aggregation point. But of course with Trend Micro and F5 also present, the security discussion had to permeate deeper still...

Nathan Pearce, virtualisation lead for EMEA at F5 Networks, detailed his company's position here, saying: "F5 is about abstraction of connectivity in the data centre, so that we no longer manage identity based on IP addresses."

This means that F5 is able to "dynamically" apply security policies that will implement controls on data and applications as they move around a virtualised space. This in turn means that the protection stays with the data and apps in real time as they move.
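The "protection stays with the data" idea can be modelled as a policy attached to the workload object itself, so the same rules are enforced wherever the workload lands. This is a toy model of the concept, not F5's implementation:

```python
class Workload:
    """An application plus the security policy that travels with it."""

    def __init__(self, name, policy, host):
        self.name = name
        self.policy = policy  # e.g. {"allow": {"https"}, "encrypt": True}
        self.host = host

    def migrate(self, new_host):
        """Move the workload; the policy moves with the object, untouched."""
        self.host = new_host
        return self

    def allows(self, protocol):
        return protocol in self.policy["allow"]

app = Workload("billing", {"allow": {"https"}, "encrypt": True}, "dc-london")
app.migrate("cloud-eu-west")

print(app.host)             # cloud-eu-west
print(app.allows("https"))  # True
print(app.allows("ftp"))    # False
```

Contrast this with policies keyed to IP addresses, which evaporate the moment the workload moves to a new subnet - precisely the problem the abstraction is meant to solve.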

So this dynamic policy control follows from one shipping container (sorry, I mean cloud server!) to the next and then presumably from ship-to-ship if so-called "cloud brokering" is being used to shift cloud resources around to the lowest price point.

"We are able to implement controls in real time for each user dependent on each application for each device it is being accessed upon - and dependent on the parameters that each device can handle," said F5's Pearce.

So as the dockyards and shipyards of ISO container commoditised cloud computing now fill with willing customers, do we know where we are going if VMware (and others) are at the helm? Consider this (if you will) the shipping forecast; the next update follows shortly.


About this Archive

This page is an archive of entries from October 2011 listed from newest to oldest.

September 2011 is the previous archive.

November 2011 is the next archive.
