May 2012 Archives

Could API exposure destabilise our networks?

bridgwatera

There's a tussle going on in telecoms.

Telecoms operators (the companies, not the old ladies with the plugs and switches) are working hard to differentiate their services and rate-plans and at the same time help nurture the development of "exciting new applications" for us all.

But here's the rub...

In order to do this, operators must allow developers to connect to the network and access the information they need to allow them to create apps.

One might argue that the breadth and diversity of Apple's App Store and the Android Market reveal the potential of harnessing an open, independent developer community. Operators (it is argued) can utilise this so-called "vast pool of talent" to help ramp up the speed of innovation and broaden their portfolio of services.

It's not that easy -- here's the fear factor...

Jonathan Bell, VP of product marketing at telecoms software company OpenCloud, warns that there is a risk involved here: without a form of standardisation, or agreed protocols, third-party applications could inadvertently destabilise systems within the network.


"The alternative for operators is to expose network APIs. These provide a distinct interface that can allow operators to control developer interactions with their core network. API exposure can enable a variety of different business models for operators and allow developers to have the ability to create exciting, and margin-improving services, such as branded app stores and web mash-ups, built on the operators' APIs," said Bell.

But even exposing network APIs must be done with care, says Bell. Last year T-Mobile USA discovered this after its network performance was impacted following the release of an Android-based IM app that reconnected with the network so often that it caused network signalling overload in certain densely populated areas.
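The signalling-overload story is, at heart, a client-side retry problem. A common mitigation (not necessarily what the app in question used) is exponential backoff with jitter, so that thousands of handsets that lose connectivity at the same moment do not all retry in lockstep. A minimal sketch, with invented function names:

```python
import random
import time

def reconnect_with_backoff(connect, max_attempts=6, base_delay=1.0, cap=60.0):
    """Retry `connect` with exponential backoff and full jitter.

    Doubling the delay between attempts (and randomising it) spreads a
    fleet of devices out over time, rather than letting them hammer the
    signalling plane in unison after an outage.
    """
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            # Delay grows as base * 2^attempt, capped, with full jitter.
            delay = random.uniform(0, min(cap, base_delay * 2 ** attempt))
            time.sleep(delay)
    raise ConnectionError("gave up after %d attempts" % max_attempts)
```

In practice operators also enforce limits server-side, since they cannot rely on every third-party app behaving politely.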

NOTE: Fraudulent apps that dial or text premium numbers, without the end-user being aware, have also been launched onto operator networks.

Bell now asserts that in exposing network APIs, operators must strike the right balance between allowing app and service innovation and keeping their network secure and fully functional.

"Through the deployment of a flexible, open standards-based, service layer framework operators can safely expose network APIs to third party development," added OpenCloud's Bell.

Operators should look at supporting independent innovation by a hierarchy of groups:

• At one extreme, the extensive global developer community;
• at the other, groups of in-house developers;
• and, in between, groups of trusted third-party developers.

Each group should be supported with a different balance of exposed capability versus risk.
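Bell's tiered model can be made concrete with a rate limiter that grants each developer group a different request budget against the exposed API. The tier names and limits below are invented for illustration; a real operator gateway would be far more elaborate:

```python
import time

# Illustrative trust tiers: each gets a different request budget per
# window against the exposed network API (names and numbers invented).
TIER_LIMITS = {
    "in_house": 1000,  # requests per window
    "trusted": 100,
    "public": 10,
}

class TieredRateLimiter:
    """Fixed-window rate limiter keyed by (developer, window)."""

    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.counts = {}  # (developer, window_index) -> requests used

    def allow(self, developer, tier, now=None):
        now = time.time() if now is None else now
        key = (developer, int(now // self.window))
        used = self.counts.get(key, 0)
        if used >= TIER_LIMITS[tier]:
            return False  # over budget for this window: reject the call
        self.counts[key] = used + 1
        return True
```

The point of the sketch is the asymmetry: in-house code gets two orders of magnitude more headroom than the anonymous public tier, matching capability exposed to trust earned.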

"Once access to network assets has been granted, developers can enjoy more freedom to innovate and support the operators' service line-up. Offerings can be developed based on the protocols and connectivity provided by the service layer framework, cutting the cost of integration and speeding up the time to market," added OpenCloud's Bell.

What is IT operations management?

bridgwatera

Gartner has released news this week suggesting that IT operations is on the up, saying that worldwide IT operations management (ITOM) software revenue totalled US$18.3 billion in 2011, an increase of 8.7 per cent from 2010.

What is IT operations management - ITOM?

CWDN defines the term as follows: IT operations management (or more commonly, just "operations") is generally agreed to encompass the day-to-day tasks related to the management of technology infrastructure components and the more granular needs of individual applications, services, storage, networking and connectivity elements of a total IT stack in any given deployment scenario.

"The market showed growth for the second consecutive year, after a sharp decline in 2009, despite slow economic growth, tight IT budgets, and merger and acquisition activity," said Laurie Wurster, research director at Gartner. "We saw consistent resilience in 2011, with the ITOM software market expanding both in terms of revenue and worldwide markets."

In 2011 the top five vendors continued to dominate the ITOM software market and accounted for 53.5 per cent of its revenue (see Table 1).


Table 1
ITOM Software Vendors, Total Software Revenue, Worldwide, 2010-2011 (Millions of Dollars)

Digging deeper...

I asked Gartner's Laurie Wurster whether OPERATIONS is growing as a direct result of the wider growth of software DEVELOPMENT, in concert with it, despite it, or as a natural sister discipline...

One assumes that one cannot grow without the other, right?

Wurster said that one of the main drivers of ITOM spending is "cost containment", or in other words better utilisation of existing assets. The fastest-growing segments include tools for improved automation, management and performance, so the growth of this market has less to do with development growth than it does with these other factors.

"Maturity of this market is also a key driver. While the market is moving in the right direction, more maturity and industrialisation on both the vendor and IT organisation sides are needed to achieve promised value. However, economic conditions over the past 3+ years have 'enlightened' organisations to the importance of improved management of the data centre," she added.

From punch cards to speech: our evolving input methods

bridgwatera

We once considered punch cards to be at the cutting edge of human-computer interaction; years of industry innovation have since advanced speech recognition technology to the point at which I am writing this blog without touching my keyboard.

Note: let's not forget the intermediate years between punch cards and speech recognition, when we were perfectly happy with keyboards, mice and touchpads of various kinds.

After visiting Nuance in Boston last week and interviewing several of the company's executives on the subject of speech recognition, it seems only fair to put Dragon NaturallySpeaking through its paces and speak this blog straight into a Word document to be later posted online.

Dragon is really pretty powerful now and although a few niggles will crop up in any spoken paragraph, I think that with a little training (of both the computer and myself the user) I could become far more used to using this input method - although I will have to get used to thinking as I speak rather than thinking as I type, which is not necessarily as easy as it sounds.

To extend my analysis of Nuance and its work with natural language understanding I also spoke to another vendor and so connected with Dr. Ahmed Bouzid, who is Angel's senior director of product and strategy.

Note: Angel is a subsidiary of MicroStrategy and exists as a provider of on-demand customer engagement solutions.

I asked Dr. Bouzid how the speech recognition software application developer community differs now compared to five years ago. Is it easier to recruit now that the talent pool is richer?

Dr. Ahmed Bouzid -- Demand for speech scientists today far outstrips supply. The advent of Siri has been a galvanising event that has awakened the world to the possibilities of highly usable speech/voice user interfaces on the smart device. Evidence is the emergence of a whole new crop of voice assistants such as Evi, Cluzee, Eva, Ask Ziggy, and a couple of dozen more across the three mobile OS platforms (Apple, Android and Microsoft). Available speech scientists (or software developers) today are indeed very difficult to find. I would say that the vast majority of them are taken up, not surprisingly, by Apple, Google and Microsoft, but also by AT&T and Nuance.

Who is winning in the speech vs touch battle?

Dr. Ahmed Bouzid -- I think it is a mistake to view speech and touch as mutually competing interfaces. Speech is a highly compelling interface, but only in the right circumstances. You don't want to use speech in a noisy place, or in a setting where you are not able to engage your device privately (e.g., a financial transaction). On the other hand, when you are driving, you do not want to take your eyes away from the road -- or your hands off the wheel. For that setting, speech is ideal. So, I would say that the winner is going to be whoever is able to understand that value is not inherent in any given interface but rather in how that interface is introduced in the user's interaction stream. I would venture that today, Siri is not there yet, nor are any of the assistant apps out there. None of the speech/voice-enabled assistants today combine speech and touch in a compelling way that empowers the user to do what they want, the way they want it, and when they want it.

Will speech data now become part of the big data mountain in the cloud?

Dr. Ahmed Bouzid -- Yes, indeed. The recorded voice is a highly rich collection of data points, and at least for now, such data is transferred over the network for processing. Since the arrival of Siri, Apple has collected billions of audio snippets from actual people asking actual questions (serious as well as silly). Google has done the same and in fact has developed a highly accurate speech recognition engine as a result of the audio data that it has collected over the last few years. Microsoft also runs its speech engine in the cloud and similarly has a treasure trove of audio data. Such data will not only enable these companies to continue refining their speech engines, but may also push these engines to be resilient enough to be highly accurate in data mining other audio (e.g., podcasts).


Note: we still have some way to go with speech recognition, but the advancements that have been made make the technology really quite impressive and fascinating (I'm still talking not typing). We will still get problems with Homer names, homonyms -- that's better, when words do sound the same... As that live mistake just shows. But this has to be part of the way we start to use computing devices more in the future wouldn't you agree?

Admin gets sexy: ERP without Agile is worthless

bridgwatera

Enterprise Resource Planning (ERP) and its allied functions across the administrative backbone are of course the most maverick, exciting and progressive elements of any modern organisation.

ERP-related job functions such as IT asset and service management, along with supply chain logistics, are usually right up there with International Fire-Fighter and Stuntman on most young males' "ideal career lists" as they pass through adolescence.

I jest of course...

But there's a point to be made here: IT asset management is becoming increasingly recognised as a niche (or at least "distinguishable") technology skill and really progressive companies in this space are (arguably) using front-line software application development practices to ensure that their products are as precision-engineered as an F1 carburetor at 16,000 revs.

Swedish ERP specialist IFS says it has launched the new generation of its extended ERP suite, IFS Applications, entirely based on Agile development methodologies.

By using an Agile methodology, the company says, customer feedback is incorporated into the system throughout the development process.

"By relying entirely on an agile development methodology, IFS has reduced work in progress by 60 percent, increased its deliveries by a factor of 10 and achieved a 30 percent enhancement in quality. IFS's Agile development methodology is 80 percent based on SCRUM methodology and the iterations typically run for two to four weeks. Toward the end of each iteration, the solution is presented to an IFS customer or customer representative, who evaluates the functionality and provides feedback on possible improvements. By focusing on the expressed need of the customer, the right solution is developed in the right way, at the right time, thereby ensuring on-time delivery, low costs and high quality," said the company, in a press statement.

All good then? Well -- IFS and ERP plus ITAM and ITSM are all very well, but perhaps this story needs its own mini glossary:

ERP (Enterprise Resource Planning)
EAM (Enterprise Asset Management)
ITAM (IT Asset Management)
ITSM (IT Service Management)
SCM (Supply Chain Management)
... and even, MRO (Maintenance Repair and Overhaul).


Buggy legacy software is an 'ugly elephant'

bridgwatera

Chris Wysopal, co-founder and CTO of Veracode Inc., has recently been interviewed on the subject of how difficult it is to address legacy software in an organisation.

While Wysopal's comments make great reading, one has to question whether legacy software is always a "challenge" and therefore a problem. After all, there is a school of thought which says that older software can be a good thing -- as below:

Legacy software is not bad software; legacy software is software that still works!

But specifically, Veracode's Wysopal talks about the need to "make good" legacy applications from a security perspective. This is because programmers are faced with two options:

1. Retrofit and re-engineer older legacy code to bring it up to new security standards, taking into account current malware and penetration considerations, while in some cases also refreshing the programming language in use.

2. Write secure code on new code only, leaving legacy applications as they are.

While option #2 might sound like a good idea, it does arguably leave these older legacy applications in a state of bedraggled discombobulating slackness, so that they no longer represent the sharpest tools around.

"It's so much easier to write secure code on new code than to go back and retrofit old code," says Wysopal.

"The development team is gone, there are no resources and it's just built with older languages, frankly in a fairly ugly way. To me that's the big elephant in the room for application security. We just can't ignore all the applications that have been built prior to today. Some of these applications will last another decade, so they need to be secured at some point. That's a real challenge."

Wysopal goes on to talk about the need for every developer to skill up and embrace the need to build in robust application security as part of every project undertaken as we currently suffer under what he calls a "breadth gap" in terms of both awareness and skills.

Time to go elephant hunting?

Inventor of the Wiki: what technical debt really means

bridgwatera

Ward Cunningham is famously the inventor of the first wiki and also (perhaps less famously) the inventor of Class Responsibility Collaboration (CRC) cards -- a brainstorming tool used in the design of object-oriented software.

In a recent interview this month, Cunningham expressed his views on the concept of "technical debt", i.e. the idea that programmers coding today leave a forward legacy for future programmers to repay in maintenance if their code is not quite as robust or polished as it could be.

Or at least, that is how vendors who sell "code quality" analysis software tend to position the concept.

Cunningham himself is rather more even-handed with his definition, saying that the concept describes an arrangement in which some programmers write code while others maintain it.

Going further, we might look at the idea of this not perhaps being a "fix" for broken code, but a refactoring (sometimes called consolidation) of existing code...

"In other words, you figure out what it should have been, and you make it that. Whereas the prevailing wisdom at the time was, 'If it's not broke, don't fix it.' So the first time you got something working, you quit. That's not a way to make great code," said Cunningham.
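Cunningham's "figure out what it should have been, and make it that" is essentially refactoring. A hypothetical before-and-after (the pricing example is invented for illustration), in which duplicated branches are consolidated without changing behaviour:

```python
# Before: the first version that "worked", with copy-pasted branches.
def total_price_before(items, customer_type):
    if customer_type == "regular":
        total = 0
        for item in items:
            total += item["price"] * item["qty"]
        return total
    elif customer_type == "member":
        total = 0
        for item in items:
            total += item["price"] * item["qty"]
        return total * 0.9  # members get 10% off

# After: consolidated into what it "should have been" -- one summation,
# with discounts expressed as data. Behaviour is unchanged.
DISCOUNTS = {"regular": 1.0, "member": 0.9}

def total_price(items, customer_type):
    subtotal = sum(item["price"] * item["qty"] for item in items)
    return subtotal * DISCOUNTS[customer_type]
```

The "debt" in the first version is the duplicated loop: every future change must be made twice, and the interest compounds with each new customer type.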

Spin and subterfuge...

It's nice to be able to read a purist programmer's account of what technical debt might mean in the wider scope of how software application development projects are constructed without perhaps the scaremongering of "code quality" vendors trying to hijack the term for their own contrived spin and subterfuge.

"Whether you call it debt or not, quantifying the intellectual position that you are in at a particular time in the lifetime of a body of software is useful," says Cunningham.

This is fascinating stuff indeed. So don't read the next technical debt piece you see through a negative shroud, leaning towards the remedial software tools and code analysis functions being sold to you as a side order; instead, think about the fact that some technical debt is the necessary growth payment needed to augment and enhance a product over time.

Nuance pushes speech recognition towards full "Star Trek"-ness

bridgwatera

Humans have been "talking" to computers since Star Trek and 2001: A Space Odyssey through the 1960s and beyond, so one might reasonably argue that we have Hollywood to thank for our perception of where speech recognition technology should be today.

We can't quite record a piece of speech and just process it through speech recognition technology to output a full (completely accurate) transcript right now, but we're not far off suggests Peter Mahoney, senior VP and chief marketing officer of Boston-based Nuance.

In a series of press briefings held this week, the maker of the 'Dragon NaturallySpeaking' speech recognition product has sought to explain where we are with this technology today and detail just how close we are to a potential "paradigm" shift for developers and users alike.

"We've made huge advances, but we still have a few challenges before we can reach full 'Star Trek'-ness," said Mahoney as he explained how challenging speech recognition can be when it comes down to handling so-termed "homonyms" across the 51 languages that his company's technology supports.

NOTE: Wikipedia's "homonym" definition is succinct: "In linguistics, a homonym is one of a group of words that share the same spelling and the same pronunciation but have different meanings." Strictly speaking, there, their and they're -- which share pronunciation but not spelling -- are homophones, but they illustrate the recognition problem well.
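Disambiguating words that sound alike is fundamentally a context problem: the recogniser must score each candidate against its neighbours. The toy sketch below uses a handful of invented bigram counts to stand in for the statistical language models a real engine such as Dragon trains on huge corpora:

```python
# Toy disambiguation of "there/their/they're" using hand-made bigram
# counts (all numbers invented for illustration -- a real recogniser
# uses language models trained on vast amounts of text).
BIGRAMS = {
    ("over", "there"): 50,
    ("their", "house"): 40,
    ("they're", "coming"): 30,
    ("over", "their"): 2,
    ("there", "house"): 1,
}

def pick_homophone(prev_word, next_word,
                   candidates=("there", "their", "they're")):
    """Choose the candidate whose bigrams with its neighbours score highest."""
    def score(candidate):
        return (BIGRAMS.get((prev_word, candidate), 0) +
                BIGRAMS.get((candidate, next_word), 0))
    return max(candidates, key=score)
```

Even this crude scheme shows why context windows matter: with no neighbouring words at all, the engine can only guess.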

So are we at a tipping point for voice?

• Algorithmic advances have pushed real speech recognition forward by a quantum leap.
• NLU (natural language understanding) innovations have progressed significantly.
• Mobile device proliferation (think Apple Siri) has led to more users potentially using speech recognition (Nuance has a working relationship with Apple, but won't say any more).
• Computational advances in terms of processing power, parallelism and multi-core have also helped.
• Mixed modality designs -- a combination of images and voice makes it easier for users to interact with voice-based systems.


Beyond "robot-speak"

So does speech recognition really work now? Many of us will remember having used it over the last decade and having to resort to 'robot speech' in order to get any of this technology to actually understand what we are saying.

Nuance's Mahoney says that his company has delivered mobile speech on more than 500 million devices to date, and that this amounts to the automation of more than 10 billion caller interactions each year.

"The human voice is an incredibly rich, natural and efficient means of communication. Nuance builds solutions that enable computers, phones, tablets, automobiles, TVs and consumer electronics to understand the human voice, providing a natural interface between man and machine," he said.

Nuance talks extensively about the "nuances" (yes, I know!) of engineering individual algorithmic engines to accommodate for different languages and different accents, but it gets even more complicated than whether you happen to have a Mancunian or Liverpudlian lilt.

Imagine if you are a British Indian from a Punjabi background but living in Glasgow. The intonations make the brain spin. If you happen to have a cleft palate or some kind of speech impediment... or even learning difficulties, the software has even more hurdles to cope with.

"There's a lot of work to be done connecting to the different data domains in each region," explains Mahoney. OK so the company started by focusing on US English and connecting to Facebook -- but as it now looks to China's Facebook equivalents and other social networks and services in the region, a new challenge presents itself.

So where is this technology used today?

Consumers are typically a bit more "forgiving" with regard to speech technology. But in fields like healthcare we find application-specific use cases that demand extremely high levels of accuracy.

Nuance says that speech will now change the role of the medical transcriptionist from that of a writer, to that of an editor. Medical is a huge field for speech and software application developers have shown particular interest in supporting hospital and healthcare staff who are notoriously short of time and notoriously poor at 'written admin and reporting'.

The company will also use its sister imaging solutions business, combined with its speech knowledge, to work with multi-function printer (MFP) manufacturers to offer speech as an interface to command and control some core MFP functions at the device.

... and of course Nuance has "cracked the accuracy" (its words, not mine) of the Dragon desktop software offering.

The company says that it has slashed user "enrolment times" (the amount of time it takes to install and train the software), improved experiences by offering noise-cancelling microphones and generally improved the total engineering mechanics of its product.

Equally, text-to-speech is important here too: the Kindle uses Nuance's code to "read" books aloud.

So have we reached the Star Trek point yet -- just how close are we today?

Mahoney suggests that we could be close to full "Star Trek"-ness (or, to give it its proper name, "robust natural language" capability) in six to ten languages by the end of this year.

Trend Micro's Android infection realities... and other stories

bridgwatera

The Computer Weekly Developer Network blog ran a piece recently entitled "What can software application developers expect from InfoSec?" -- which, if anything, was an open invitation for additional commentary on this subject.

There was (arguably) a fair slice of vendor stuff and nonsense flying around at the InfoSec show itself with free T-shirts, jelly beans and so-called 'booth babes' slathered around like they were all going out of fashion. So as a result, we find some of the better commentary coming out in the wash a short while after.

Raimund Genes is chief technology officer at Trend Micro, and he and his team have answered some of the unanswered points and direct questions posed in the initial story we ran on the subject of developer issues at the security coalface.

We asked...

At what stage should software application development projects identify and classify their security/encryption/protection quotient and set out a concrete IT asset management "place at the table" for this element?

Raimund Genes -- "As soon as possible -- applications need to be designed with security in mind. A lot of attacks are due to application vulnerabilities, not just OS vulnerabilities anymore. Microsoft really has stepped up the OS patching, but how often do they patch the applications? We see a lot of targeted attacks using known vulnerabilities, for example in Adobe PDF and Flash. What is worrying is the fact that especially with mobile applications, the turnaround is very fast. This, combined with an open Application Market is the perfect storm - look at Android. At Trend Micro, we predict that 130,000 devices will be affected by Android malware by the end of this year."


As security is "architected in" to a software development project, how does the responsibility for its ownership transition from software architect to developer to IT asset manager and onwards?

Raimund Genes -- "It does not only need to be architected into the software development project, it needs to be architected into the complete lifecycle/ecosystem of a software development process. During development, things like how patches will be deployed, how the software could "self defend" with watchdog modules, integrity checks etc., all need to be defined. The IT asset managers, the buyers, should start to demand safe software -- this would make all of our lives easier! Or they should promote closed ecosystems - Apple is a great example. The individual components are not more or less secure than other vendors, but combining it all together into an ecosystem - where the vendor delivers the hardware, software and app store - means really tight control!"

How can engagement with the open source community contribution model help aggregate malware risk awareness, and how can that awareness be engineered into software products in production and post-production?

Raimund Genes -- "Open source is great, as many people look at the source code. Security by obscurity never worked! And with open source software you could alter it, recompile it, so it is not a monoculture anymore - which is good news short-term. However, it can cause issues with patching and updates - see Android again. Properly engineered into software, open source enables code review by multiple developers, constant checks moving forward, an architect team with veto rights... and so on."

How should software developers be "tutored" into security awareness at all levels? For example, if developer A is a user experience GUI specialist and developer B is a graphics rendering guru, then neither probably stops to think about security too much -- but as all data represents risk, and all risk is the concern of security, how should the "security mandate" be proliferated throughout all stakeholders in the software application development lifecycle?

Raimund Genes -- "They all need to be trained on security one-on-one, and how to use safe coding practices and watchdogs like Canary Value, for example. There are safe coding practices out there, but laziness, the "I don't care" mentality, and timeline pressures, combine to kill the security-mindedness of a lot of projects. Developers need to understand that their work lives on. If they don't get the security right now, they will need to deal with security breaches afterwards, and they will be producing patches to fix vulnerabilities late into the night! We need to give them more time for better coding, and we need to promote this attitude. Security should be the first consideration, not the last. Customers need to understand that paying for better coding will ultimately cause them fewer issues, but we need a change in mindset for this."
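Genes mentions canary values: sentinels placed next to a buffer and checked after writes, so that an overflow is caught before corrupted memory is acted upon. Real canaries are inserted by compilers into native code; the Python below is only a toy model of the idea, with invented names:

```python
import os

CANARY = os.urandom(4)  # random, so an attacker cannot simply replay it

def guarded_copy(dest_size, payload):
    """Toy model of a stack canary.

    A sentinel sits immediately after a fixed-size buffer. After an
    unchecked copy, we verify the sentinel: if an over-long payload
    clobbered it, we abort instead of carrying on with corrupt state.
    """
    frame = bytearray(dest_size) + bytearray(CANARY)
    # Simulate an unchecked copy that may run past the buffer's end.
    data = payload[:dest_size + len(CANARY)]
    frame[:len(data)] = data
    if bytes(frame[dest_size:]) != CANARY:
        raise RuntimeError("canary clobbered: buffer overflow detected")
    return bytes(frame[:dest_size])
```

The detection is after the fact, which is exactly the compromise real stack protectors make: they cannot prevent the overflow, but they can stop it turning into silent exploitation.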

Technical blogging advice for software developers

bridgwatera

Software programmers spend most of their time programming. Surely then, the programmer community should be an excellent source of advice for those who want to grasp the key issues of the day pertaining to any particular language, methodology, platform or device.

Sadly, not everyone feels comfortable writing "in public" (so-to-speak) on the web and many restrict their postulation and theorising to forums and newsgroups -- if, indeed, they say anything openly at all.

Antonio Cangiano wants to buck this trend and encourage programmers to start blogging. Himself a software developer and technical evangelist for IBM, Cangiano intends his logically titled book 'Technical Blogging' to reach out to programmers, technical people and technically-oriented entrepreneurs and teach them how to become successful bloggers.

"There is no magic to successful blogging; with this book you'll learn the techniques to attract and keep a large audience of loyal, regular readers and leverage this popularity to achieve your goals," writes Cangiano.

This book aims to provide a "step-by-step road map" to help developers plan, create, market, monetise and grow a popular blog.


Cangiano logically focuses on the need to keep "content as king" and encourages would-be writers to think about some of the mechanics and personal motivation challenges associated with developing a successful blog and keeping the substance appealing on an ongoing basis.

"You'll learn how to promote your blog, understand traffic statistics, and build a community. And once you've built it, you'll learn how to benefit from it: advance your career, make money from your blog, use it to promote your products or company, and take advantage of your blog to the fullest," writes Cangiano.

According to Cangiano -- Content is King, but Consistency is Queen.

NOTE: As a technical blogger of some years' experience, it's hard for me to comment on this book other than to say that it is interesting. I read it as an "affirmation and verification" tool, checking that I personally might be doing some of the right things if we accept Cangiano's word to be true.

What I do hope is that it encourages those programmers and technical professionals who really don't think they can write or blog to have a go. What Cangiano does NOT appear to cover (if there is one glaring omission) is the need to refine your grammar, vocabulary and general command of English -- but then perhaps that should be assumed as a given if you are about to dive in.

Software application development enters the litigious society

bridgwatera

It's always refreshing to see vendors talking about real issues on their product managers' and evangelists' blogs without trying to spin a press release out of every corporate gurgle and fart that percolates out of the company message set.

So it was that I came to read source code quality specialist Coverity's internal journal this week and came across the thoughts of product manager Jane Goh.

Goh recounts a recent visit to an RSA security conference session on "software liability" where she got to listen to revered "security gurus" (The Economist's words - not mine) Bruce Schneier and Marcus Ranum talk about how we should achieve better software quality in the future.

The question is, do we clamp down on software vendors via regulatory action (in the form of software liability) or do we let market demand settle the scores?

After all, now that we live in the American-inspired "litigious society", where people file lawsuits over their coffee being too hot, surely we should be able to hold software vendors to account more openly.

Goh says that Schneier came down on the side of software liability:

"[He took] the stand that software vendors should be liable for the malfunction of their products just as manufacturers of physical products such as cars, medical devices and chainsaws are held liable. Schneier argued that introducing product liability for software would dramatically improve the quality of software."

"Currently, software vendors are only concerned with insecure software costs that immediately impact them, not with the total costs of insecure software - for instance, the cost in millions of dollars from a data breach to the companies that use the software and to the end users who have lost personal and financial information. So dealing with externality costs here would improve things by moving costs to where it is most effectively spent, i.e. fixing the risk rather than just mitigating it."


Goh then details how the other speaker, Marcus Ranum, took the side of market demand, saying that if consumers continue to choose to settle for 'free and mediocre over good and expensive', then so be it.

"However, if corporations and consumers boycotted buggy software products by refusing to purchase them, this will put financial pressure on the software industry to change how security is viewed. He trotted out a car analogy saying that Japanese car manufacturers, by producing better quality cars, essentially destroyed the Detroit car industry. He argued that introducing more regulation was not the answer, as it would stifle innovation by giving unfair competitive advantage to large software companies who can afford the costs to ensure better software quality."

Interesting stuff indeed -- this blog was nearly titled: Excuse me; can I have a refund for my software please?

Will the market shift or stagnate from this point onward and do we need to fuel this debate with more fervour?


About this Archive

This page is an archive of entries from May 2012 listed from newest to oldest.

April 2012 is the previous archive.

June 2012 is the next archive.
