March 2012 Archives

Salesforce.com ups software application development ante

bridgwatera

Salesforce.com has set the date in stone for its annual Cloudforce event, which takes place this year on 22nd May 2012 at ICC ExCeL London. Labelled the Cloudforce Social Enterprise Tour, this year's event is distinguished by its heavier developer streams.

In what are being described as "extensive" developer sessions and zones, programmers are encouraged to learn how the company's Social Enterprise Platform really works.

According to Salesforce.com, "The Social Enterprise Platform lets companies of any size engage with customers and employees in dynamic and interactive new ways."

The platform brings together Force.com, Heroku, Database.com and Site.com, which combined provide a route to building enterprise applications and websites in the cloud.

Note: Heroku (pronounced her-OH-koo) is a cloud application platform with services that claim to let programmers spend "100% of their time on their application code", i.e. not managing servers, fussing over deployment tasks and/or concerning themselves with ongoing operations or scaling.

Cloudforce itself kicks off with an opening keynote address, where salesforce.com says it will share "new innovations" (as opposed to old ones?) around the Social Enterprise.

Developer sessions will include:

• Customise your social enterprise with the Salesforce platform
• Using Remedyforce to manage your IT help desk
• Siteforce - for building social websites
• Developing native iOS apps with the Force.com mobile SDK


The cloud is ready, what are we waiting for?

bridgwatera

Computer Weekly has this week reported the results of a Vanson Bourne survey which identified that 93% of financial decision makers believe cloud computing will be important to the success of their businesses over the next couple of years.

Meanwhile, back in the data centre....

Hosting provider Rackspace has polled IT teams from mid-sized UK and US businesses and found that these firms spend over half (56 per cent) of their time on server management and troubleshooting in a typical month -- and only 28 per cent on strategic, 'value-added' activities.

So we are left to ask the question: why are in-house IT functions still chained to server management when the cloud opportunity is out there for the taking?

Why, indeed, are firms clinging to physical servers?

Rackspace's Cloud Reality Check survey suggests that the majority of UK companies surveyed (59 per cent) admit they have either bought too many servers, which has wasted money, or bought too few, which has meant a lack of capacity.

Hoping for progress...

Given the amount of media buzz around connecting software application development needs with the responsibilities of the operations team whose job it is to look after deployment and ongoing maintenance, one might have hoped for more progress in this area by now.

Fabio Torlini, VP of cloud at Rackspace, has pointed out that the problems associated with having to manage and maintain servers are often readily solved by cloud and managed hosting services.


"In 2009, one-third (33 per cent) of businesses surveyed expected to outsource their in-house servers in the next two to five years. However, over two years later, the new study suggests that many mid-sized businesses are still chained to their servers, and may be spending unnecessary time and money on them," said Torlini.

Computer Weekly's original report on this subject, linked here, quotes Google's Thomas Davies, who reveals that, after initial cloud adoption was driven by the IT function, today his company is speaking to CFOs, COOs and CEOs with a view to embedding cloud advantages into customers' technology stacks for commercially driven reasons.

Compound Google's comments with Rackspace's findings, which show that a large percentage of UK and US IT decision-makers say they are under mounting pressure to support business growth and change (89 per cent), improve flexibility (88 per cent) and help drive internal innovation (88 per cent), and it is hard to fathom why adoption has not been as stridently forward-looking as many in the industry suggest it should be.

Rackspace's survey suggests that top barriers to moving to cloud hosting are down to questions regarding security (54 per cent), reliability (47 per cent) and ROI (42 per cent).

According to Torlini, the way the cloud and managed hosting market is maturing, as suggested by the study, represents a challenge to end-users and cloud and managed hosting service providers alike.

"The challenge for mid-sized businesses is to stop unnecessarily holding onto their in-house physical servers, and give themselves a chance to focus on more important and valuable work. The challenge for cloud service providers is to provide the right advice and services to help more of them overcome the barriers to doing just this."


Free 'Big Data' database for students & academia

bridgwatera

MarkLogic has produced a free-of-charge Academic License for students and educators to gain access to its operational database technology for mission-critical Big Data applications.

As with any "apparently altruistic" move of this nature, the company is arguably quite keen to seed usage of its technology for future generations of commercial users.

By learning how to implement technology tools designed to handle Big Data, MarkLogic hopes that students will have the skills necessary to unlock additional job opportunities after graduating.

"We're very excited to see the types of applications that are built with the MarkLogic Academic License," said Keith Carlson, EVP and COO, MarkLogic.

"There are a lot of students around the world that are capable of building remarkable applications, and putting the power of MarkLogic in their hands will lead to some very compelling use cases."

The MarkLogic Academic License is designed for Big Data research needs and has no data storage restrictions. It can be installed on clusters with hundreds of machines and petabytes of data.

The new license follows closely behind the release of MarkLogic Express, a license that allows developers to download a free version of MarkLogic and build production applications.


Devon Healthcare develops bespoke patient record apps

bridgwatera

The Northern Devon Healthcare Trust has this month teamed up with specialist software developers Blue Diamond and NDL to roll out a mobile working project involving around 800 community nurses and therapists.

The Trust will make use of both NDL's universal integration technology (awiSX) and its mobile working software platform (awiMX) to launch the Community Information Data Set (CIDS), a Department of Health directive for all community health providers, in April 2012.

According to NDL, the awiMX toolkit allows non-technical users to both build and distribute bespoke apps to smartphones or tablets running Android, BlackBerry or Windows operating systems.

Northern Devon Healthcare Trust will use the software in conjunction with its bespoke in-house patient information system for community health workers. This system will allow nurses and therapists to access and update information hosted on a back office system via devices such as smartphones or tablets, while they are visiting patients in the community.

Keri Storey, assistant director of health and social care at Northern Devon Healthcare Trust, has explained that by using lightweight mobile devices, nurses can input patient data at the time, rather than having to wait until they are back in the office.

"This pioneering venture puts us at the forefront of NHS Trusts using mobile technology to create a real-time record of patient care which can be accessed by multi-disciplinary teams. This supports the Trust strategy to help people live healthily and independently in their own homes and will make a huge difference to how we are able to care for patients in the future," said Storey.


"Although the 3G network has improved, in rural areas like Devon it's impossible to rely on it completely. The fact NDL's applications will continue to work on the device, even if there is no 3G signal, is vital."

Data is then synchronised automatically when signal becomes available. No patient data will be permanently stored on the devices so that confidentiality is protected.
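
For a flavour of how this offline-first pattern typically works, here is a minimal illustrative sketch in C# -- it is not NDL's awiMX API and all names are invented -- in which updates are held in an in-memory queue while there is no signal, pushed to the back office system once connectivity returns, and never written permanently to device storage.

using System;
using System.Collections.Generic;

// Illustrative only: queue patient record updates in memory while offline,
// flush them to the back office system when the signal returns.
public class OfflinePatientRecordQueue
{
    private readonly Queue<string> _pending = new Queue<string>();
    private readonly Action<string> _sendToBackOffice; // hypothetical upload call

    public OfflinePatientRecordQueue(Action<string> sendToBackOffice)
    {
        _sendToBackOffice = sendToBackOffice;
    }

    public void RecordUpdate(string update, bool hasSignal)
    {
        if (hasSignal)
            _sendToBackOffice(update);   // send immediately while connected
        else
            _pending.Enqueue(update);    // otherwise hold in memory only
    }

    // Called when the device regains a signal: synchronise everything queued.
    public void OnConnectivityRestored()
    {
        while (_pending.Count > 0)
            _sendToBackOffice(_pending.Dequeue());
    }
}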


Beautiful mobile applications, beautiful user experiences Part 1

bridgwatera

In this guest blog for the Computer Weekly Developer Network, Sybase technical and mobile evangelist Ian Thain discusses the new mobile application landscape, characterised by new and more beautiful user interfaces.


We've come a long way since the dark ages of computer programming, when business software was accepted and swallowed even if it looked dull and lifeless. Nowadays the best software beats its rivals by not only operating brilliantly, but also by looking visually appealing. I'd like to suggest that we have tech companies like Apple and Google to thank for this; and now that we're firmly moving into the mobile software era, this focus on beautiful and functional user interfaces is becoming even more important.

It doesn't matter if your application is the best news for business since the invention of the typewriter -- if it's horrible to use, then nobody will use it. If you want your users to love your product, you're going to have to make it look great as well as functional; and with the rise of the "prosumer" the demands on software application developers have become intense.

Call in the UX team

The key responsibility for any software interface design lies with the User Experience (also known as UX) team. It is their job to ensure that three crucial elements of the software work optimally: the front end user interface, the user interaction layer and the graphic design (i.e. the pretty icons and buttons).

If any one of these elements is out of kilter, the whole application can disintegrate and become a cumbersome mess. Ensuring that these elements work together properly requires a large amount of effort in terms of understanding the user and their needs.

"What does the user need to get out of the application and what are their expectations for ease of use and performance?"

Cross platform complexity

Mobile applications in particular need to deliver tightly integrated interfaces, which operate well across the professional and consumer space, if they're not to annoy the user and risk alienation. This becomes more complicated when we accommodate different platforms (Android, iOS, BlackBerry etc.), each of which has its own programming requirements and guidelines for development. Even the use of third party development tools and platforms won't remove the need to understand how each platform performs under pressure.

The process of mobile platform development itself is not that complicated as long as the development team follow some basic common sense rules.

Rule #1 -- Don't think you can simply transfer an existing computer application to the mobile arena by making it smaller.

Rule #2 -- Trying to shoe-horn complex multi-level navigational elements and traditional desktop computing paradigms onto a handset is just asking for trouble!

Rule #3 -- Experience has shown that delivering simple core functionality on a mobile device works significantly better than trying to offer a complex menu of features in the one package.

These introductory rules are especially important to remember with the growth of touch and multi-touch on devices like the iPad and other tablet computers.

In part 2 of this 3-part blog, Sybase's Thain looks at the importance of the Application Definition Statement (ADS).

Editorial disclosure: Adrian Bridgwater works in an editorial capacity for the International Sybase User Group, a completely independent association that represents thousands of users of Sybase products in more than sixty countries around the world. He is not an employee of Sybase but seeks to work with ISUG to support its work challenging and questioning Sybase product development and training.


Trending now: bloatware out, agile analytics & new programming languages in

bridgwatera

In what may or may not be a contrived means of generating media discussion on a trend bearing its company brand and moniker, software development vendor ThoughtWorks has issued what it calls its Technology Radar for March 2012.

This report cum debate cum publicity vehicle brings together the collected thoughts of the company's technology advisory board, a group comprising its own senior software gurus along with the ruminations of third-party partners.

Technology Radar seeks to provide insight into the techniques, tools, languages and platforms that are driving next-generation software development.

The company rejects bloatware vehemently - and rightly so.

Some of the latest "findings" include:

Techniques - Agile Analytics is a new field that seeks to apply agile techniques to deliver faster insights into Big Data as well as more relevant experiences for software end-users.

Tools - With the continued rise and acceptance of alternative data stores, commonly known as "NoSQL" databases, the notion of Polyglot Persistence has emerged. In many applications it makes sense to store data using more than one data store, based on the use cases and efficiency required.
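
As a rough illustration of the polyglot persistence idea (the interfaces and store names below are invented for this sketch, not taken from the radar), an application might keep transactional order data in a relational store while pushing high-volume user activity events into a schema-less key-value store, each chosen to suit its use case:

using System.Collections.Generic;

// Transactional data: relational store, where consistency matters.
public interface IOrderStore { void SaveOrder(int orderId, decimal total); }

// High-volume event data: key-value "NoSQL" store, where volume and speed matter.
public interface IActivityStore { void Append(string userId, string eventName); }

public class SqlOrderStore : IOrderStore
{
    public void SaveOrder(int orderId, decimal total)
    {
        // In a real system this would be an INSERT wrapped in a transaction.
    }
}

public class KeyValueActivityStore : IActivityStore
{
    private readonly Dictionary<string, List<string>> _events =
        new Dictionary<string, List<string>>();

    public void Append(string userId, string eventName)
    {
        if (!_events.ContainsKey(userId)) _events[userId] = new List<string>();
        _events[userId].Add(eventName); // schema-less, append-only by design
    }
}

// The application composes both stores rather than forcing one model on all data.
public class CheckoutService
{
    private readonly IOrderStore _orders;
    private readonly IActivityStore _activity;

    public CheckoutService(IOrderStore orders, IActivityStore activity)
    {
        _orders = orders;
        _activity = activity;
    }

    public void PlaceOrder(string userId, int orderId, decimal total)
    {
        _orders.SaveOrder(orderId, total);
        _activity.Append(userId, "order-placed");
    }
}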

With much more emphasis on engineering practices around client side code, JavaScript tooling and micro-frameworks are highlighted in this edition of the radar.

Languages - The industry is experiencing something of a renaissance in software programming languages, and the radar specifically calls out the need to care about languages.

Functional programming languages such as Scala, Clojure and F# are on the rise, along with related technologies such as ClojureScript and Google's Dart. 



"The radar represents discussions between globally diverse technologists. These debates are invariably impassioned and provocative, and the radar provides an external outlet to their outcome. While we try to stay on the cutting edge of technology, we also approach technology pragmatically, always trying to tie it back to real business value," said Neal Ford, director and software architect at ThoughtWorks.


Adobe cranks up web developer arsenal with personalisation tools

bridgwatera

Aiming to extend its reputation amongst the web developer and designer communities, Adobe has updated its Web Experience Management (WEM) offering with a set of product augmentations designed to drive sites that engender "web engagement with optimised (and personalised) landing pages" plus social tools.

Crucial updates in the Adobe CQ 5.5 suite include the 'Client Context' and 'PhoneGap' options.

These tools have been engineered with the intention of giving web developers greater personalisation power to contextualise website information to an individual user based upon their profile, as long as the user is happy to 'log in' and/or 'sign up' to the site's services for regular visits.

Client Context will extend to recognise not only a user's interests and browsing preferences, but also build in awareness of an individual's preferred devices and location should they wish to submit that information.

Adobe will also now include HTML5 mobile application publishing support for the simultaneous launch of content across websites, mobile sites and smartphone and tablet apps.

Also of note here are new enterprise-class Digital Asset Management (DAM) tools, which are integrated with Adobe creative tools and the forthcoming Adobe Creative Cloud -- a solution intended to streamline creation and re-use of rich media.

Adobe Creative Cloud will be available worldwide in the first half of 2012.

As previously discussed on Computer Weekly, Adobe defines "experience management" as the process of proactively identifying application content and managing its best delivery to users according to device, platform and application.


Image: Adobe CQ5 in use: "agile marketing meets rapid development"

Adobe's VP of product strategy and solution marketing Kevin Cochrane suggests that web content management as we previously knew it simply doesn't cut it any longer.

"Not for CMOs tasked with enhancing corporate image and contributing to the bottom line in an age when the expectations of the CEO and customer have never been greater."

Perhaps more pertinently, Adobe's Cochrane also says that consumers now expect highly pertinent information at the right time, whether on a desktop, tablet, or smartphone.

"Our Web Experience Management solution and Adobe CQ 5.5 were built with this in mind and provide the power to personalise each experience, across channels, to build brand, drive demand and reach new audiences," he said.

... and your hyphenated takeaway here is?

Remember how you've heard expressions like state-of-the-art, best-of-breed and out-of-the-box one too many times now?

Get ready for "contextualised-content from channel-to-channel on a device-to-device basis" -- this is the way of the new web.

Or at least it will be if Adobe exerts its influence effectively.

Data, today, is nothing without analytics

bridgwatera

Data is nothing without analytics. Or to be more specific, so-called Big Data is nothing without Big Analytics.

Those are not branded or trademarked statements. Neither are they marketing taglines or advertising slogans -- and to be quite honest, that's just a bit surprising.

Big "enterprise" scale IT vendors from SAP, to Oracle, to IBM etc. can barely go a press release without mentioning analytics today. This is the world of almost 'added value data' if you like.

Or at least, this is data from which additional value has been extracted.

As we know, IBM's Sam Palmisano famously re-aligned his firm's focus towards services at the start of the millennium. As the firm's desktop and laptop division gently played out its swansong, the IBM Global Business Services division grew into the global behemoth (in a nice way) that it is today.

So as Big Blue now rolls out its latest batch of analytics software with new predictive technologies from (and I quote) "dozens of companies IBM has acquired", where do programmers, team leaders and (of course) C-level decision makers look for the pressure points and USPs in these new offerings?

IDC estimates enterprises will invest more than £75 billion by 2015 to capture the business impact of analytics, across hardware, software and services.

IBM combines consulting services and software in this field to (and yes this could become the firm's tagline) -- transform big data from a threat into an opportunity, one that will be their most valuable natural resource.

Here's a practical example of how the firm's Smarter Analytics Signature Solutions technology could work...

"Each year, health care fraud tops US$250 billion, according to the FBI. Tax fraud costs billions more. IBM's adaptive systems learn from the latest data, helping to protect against emerging fraud. The solution embeds advanced algorithms directly into business processes, providing government agencies and insurers with the ability to detect fraud in real time - before funds are paid out. Using sophisticated analytics, the solution recommends the most effective remedy for each case, optimising an organisation's finite resources. For example, the system might recommend that a simple letter requesting payment be sent to resolve one case, while recommending that a full investigation be opened in another case.

Image: excerpt from IBM's Big Data infographic

You can view the full IBM Infographic on Big Data here; an excerpt is shown above.

The company says that software application developers tasked with implementing these kinds of technologies will be able to plug into applications management services to help "architect in" a Big Data platform at the enterprise level.

END NOTE: If IBM does want to officially brand "data is nothing without analytics" then please drop me a line -- my rates are reasonable.

FreeCause Codinization project, a first taste of programming?

bridgwatera

Loyalty and customer engagement company FreeCause has launched an initiative to mandate that all its employees must learn how to code by the end of 2012.

The Codinization Project uses the Codecademy platform, a web-based programming tutorial employing an interactive library of JavaScript, Ruby on Rails and Python courses.

It also provides companies with tools to create their own "custom content" specific to their technical and industry needs. The programme tracks an employee's results and allows users to follow the progress of fellow employees, providing (in theory) motivation and engagement for a company's workforce.

FreeCause asserts that to date more than one million people have signed up on Codecademy.

FreeCause's "Codinization Project" was inspired by its parent company Rakuten's "Englishnization Project". An effort that Rakuten CEO and founder Hiroshi Mikitani drove to mandate all 12,000 Rakuten employees to learn to speak English.

"When we announced the Codinization Project, we did so because the Rakuten showed us what was possible - that challenging yourself and rising to the occasion was something they had done by learning English - and our goal is similar - to also learn a language - but this language is Javascript," said Michael Jaconi, CEO of FreeCause.

"This initiative will underscore our dedication to inspiring loyalty through innovation, to building stronger technology, and also, to raising the collective IQ of our company."


Five software application development trends for 2012

bridgwatera

March is a little late for 'year-ahead' prediction stories; the looking back - looking ahead story tactic is usually reserved for the Christmas silly season when we're all a little more amenable to lighthearted (or serious) postulating.

So do technology prognostications ever warrant any credence? After all, it was American baseball player Casey Stengel who said sometime back in the 20s, "Never make predictions, especially about the future."

Image: David Intersimone, Embarcadero

Of some note perhaps are the industry ruminations of David Intersimone, VP developer relations and chief evangelist at Embarcadero, a company known for its software tools for application developers & database professionals.

Intersimone has laid down his top five major software development trends for 2012 backed by examples of how developers and enterprises are already starting to use these developments.

Trend #1 -- HTML5 vs. native applications in the context of desktop and mobile convergence. With the consumerisation of IT, enterprises are now looking to provision the more specialised B2B desktop applications (e.g. CRM, ERP) in the mobile environment - going beyond the standard applications such as email and calendars. For developers there is a dilemma - should these mobile applications be developed in native code or using HTML5?

Trend #2 -- Cloud computing. For developers cloud computing presents a huge opportunity, but they need to understand the "use" case of cloud applications better -- it is not just about developing applications for the cloud, but also maintaining them.

Trend #3 -- Big Data and NoSQL. Relational databases are falling short in their ability to store and manage the exponential data growth, resulting in NoSQL databases gaining mindshare. As with most technologies, there are benefits and challenges. What approach can developers take to ensure easy storage and access to data for their applications?

Trend #4 -- Next generation user interfaces. Enterprise users are beginning to expect consumer-style UIs, including voice, touch, gestures and Kinect, in business applications. From a developer's perspective, delivering against this requirement is important as it will enable end users to get the most out of applications, greatly increasing adoption of the software. What constitutes a next generation UI in the practical sense, and what considerations must developers bear in mind when developing one?

Trend #5 -- GPU computing. Many business applications still offer limited intuitive and interactive elements - making them cumbersome to use and difficult to learn. Developers must take full advantage of hardware to drive rich and interactive business applications by maximising CPU and GPU usage equally to create visually-engaging, front-end applications; and ensure performance and connectivity to back-end systems and data.

Editorial note: If you heard that a database and developer tools company was trying to predict developer futures listing cloud, user interfaces and Big Data as key drivers, I wouldn't blame you for turning off. But Intersimone is clearly a code purist and developer champion without a single marketing flavoured bone in his body. Yes his agenda is somewhat led by his employer, but how can you not like a man in a smiley face T-shirt with a Twitter profile that lists his location as Planet Earth?

User Tutorial: ALM in SharePoint Online 2010 in Office 365

bridgwatera

In this guest blog post, Jeremy Thake, enterprise architect and Microsoft SharePoint MVP at AvePoint, looks at exporting artifacts and some of the other real world issues you may face when taking a SharePoint project into a live production environment.


As discussed in the previous article, the original promotion of a v1.0 solution from development to production is relatively easy out of the box by saving the site as a template and restoring it. The problem with this, however, is that when you modify v1.0 in development to make v1.1 and are ready to promote it, you cannot use this approach because it will overwrite anything in production and therefore lose all production data.

Repeating development in production

The only out-of-the-box approach to promote subsequent versions is to repeat all the steps made in development in the production environment. This in itself introduces risks, such as incorrectly repeating steps with misconfiguration or simply omitting steps that are later discovered.

Some artifacts can be exported individually and imported into existing sub sites relatively easily:

• Copying and pasting file contents from one SharePoint Designer 2010 window to another, from one sub site to another

• Exporting web parts from pages and importing them onto the target pages

• Exporting and importing a declarative SharePoint Designer 2010 workflow from one sub site to the other

Configuration settings in existing sub sites, content types, lists and list items require a manual export/import out of the box. For simple solutions, this could be a matter of minutes. However, in more complex solutions this may require too many steps and lead to downtime of the existing solution in production, potentially causing business disruption issues.

Sandboxed solution...

A way to automate these changes from one version to another is to step up to the more advanced tier: Visual Studio 2010 development of sandboxed solutions. One thing to take into account with this approach is that it still requires you to write the incremental changes needed to go from one version to the next, declaratively in XML and imperatively in managed code.
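
By way of a hedged sketch (assuming the standard SharePoint 2010 feature upgrade mechanism; the list, field and upgrade action names below are invented), a feature receiver can apply just the delta needed to move a provisioned site from v1.0 to v1.1 without touching the data already in production:

using System.Collections.Generic;
using Microsoft.SharePoint;

// Illustrative upgrade receiver: adds one column to an existing list as the
// incremental change between versions, rather than re-provisioning the site.
public class ContractsUpgradeReceiver : SPFeatureReceiver
{
    public override void FeatureUpgrading(
        SPFeatureReceiverProperties properties,
        string upgradeActionName,
        IDictionary<string, string> parameters)
    {
        if (upgradeActionName != "AddReviewDateColumn") return;

        SPWeb web = properties.Feature.Parent as SPWeb;
        if (web == null) return;

        SPList list = web.Lists.TryGetList("Contracts");
        if (list != null && !list.Fields.ContainsField("ReviewDate"))
        {
            list.Fields.Add("ReviewDate", SPFieldType.DateTime, false);
            list.Update();
        }
    }
}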

Sandboxed solutions are really a subset of the full-trust solutions that are built for on-premise environments. Why? Certain managed-code server-side APIs, such as those requiring elevated security permissions, are not available for use.

There are two important reasons for this:

• First, there are security concerns about managed code in the Office 365 SharePoint 2010 Online multi-tenant environment accessing site collections not owned by the customer.

• Second, the managed code blocks contained in sandboxed solutions are executed in their own worker process and monitored for certain resource counters such as the number of exceptions thrown and CPU cycles consumed.

This allows SharePoint to disable sandboxed solutions that consume more than their limit, so that they do not affect other site collections and customers within the same SharePoint multi-tenant farm.

Visual Studio 2010 SharePoint sandboxed solutions support Team Foundation Server (TFS) as well as other providers for source control. One of the core tenets of application lifecycle management is continuous integration, with automated builds based on source control check-ins.

This immediately becomes very complex in SharePoint 2010 Online and simply cannot be done without the advanced tier. In addition to the sandboxed solutions, a strong knowledge of PowerShell is required to remotely automate these builds into environments.

There are vendors that produce products that will automate this promotion of artifacts from one environment to the other for those without the appropriate resources.

In the next article, we will discuss migrating solutions from SharePoint 2010 on-premise to SharePoint 2010 online.

Software application quality lessons on a Hawaiian yacht

bridgwatera

Software application testing company Coverity held a press and customer breakfast meeting yesterday morning in the City of London area to analyse the code quality refinement work carried out by Adobe, a customer of its products.

After a full English, half a bottle of HP sauce and a full pot of coffee, attendees got the full spiel from Todd Heckel, director of engineering and quality at Adobe.

Heckel explained that he and his team were focused on software application development testing across the Adobe suite -- the sort that leads to real and tangible actions that can be carried out with a view to removing defects that exist in any product.

This can include peer code reviews, unit testing, static code analysis and as many as four or five more stages of code "filtering"... and it is this filtering element that provides the colour for our story.

The Watermaker Lesson

Heckel and his wife are clearly outdoor adventurous types; they embarked upon a voyage to Hawaii from California some time back with all the kit they needed on a lovely vessel, which came with its own "watermaker", a mechanical filtering system capable of turning seawater into drinking water.

It was an 18-day voyage, but on day nine the watermaker packed in.

The system worked by removing larger pieces of kelp from the water to start with and then gradually carried on filtering to clean out smaller and smaller particles, eventually getting down to a small membrane that could remove salt and finally make fresh water.

But it turned out that the filter at the initial stage was broken, which let big chunks of kelp all the way through into the final clearance process! The result was system meltdown.

So what's the lesson here for software engineering? You have to get the big problems out of the way first or they risk killing the project mid lifecycle.


So back to the real world, Adobe says that it spent considerable effort on defect testing in 2011. The company now hopes to use Coverity products to realise a $35 million saving in 2012.

But, says Heckel, static code analysis with Coverity's tools is relatively new to teams at Adobe, so one might expect the defect detection improvement curve to be exponentially positive as time moves forward. "So far Adobe has found 5,000 defects through using Coverity," said Heckel.

Heckel said his teams enjoy the tool's ability to find "real and meaningful" issues and that he places great importance on metrics -- so much so that, in his view, testing and defect tracking without accurate metrics are worth very little.

Heckel also had a message for staff management in the testing arena. "Team managers should not use early defect data for individual programmer performance management; this potentially leads to individuals learning to hide defects early so that they are not picked up on individually... It is always more important to analyse team performance as a whole," he said.

All in all a good morning and a few lessons were learnt.

Lesson #1: If you're sailing to Hawaii, check your water filters ALL the way through before you leave.

Lesson #2: Software testing early rather than later is ALWAYS the most prudent course of action.

Lesson #3: From a timeline point of view, if a software bug has been around for 10 years and it is not causing problems then it probably isn't an issue, but triage tactics to address serious problems first are still applicable.

Lesson #4: Even Adobe's seemingly polished products still need professional testing tools applied to them.

Lesson #5: The Mercer Restaurant on Threadneedle Street knocks up a decent Full Monty and even serves HP sauce. A slightly poor show on the teeny mustard pots though; will flag this as a core defect if I am invited again.

Image: Spectra Watermakers



Unisys AMPS: management consultancy for software applications

bridgwatera

Not a company to shy away from the opportunity to create a lengthy acronym, Unisys labels one of its core brands AMPS for SOA.

Application Modernisation Platform-as-a-Service (AMPS) for Service-Oriented Architectures (SOA) is the company's approach for firms looking to undertake complex application modernisation projects.

In some ways, this is akin to a management consultancy service for enterprise applications, i.e. when you want to know which way to steer the corporate IT ship, you bring in some higher-level direction and control.

But when and why do we need application modernisation?

Application modernisation in this instance might result from a much needed refocus on service oriented (or orientated) computing, which will logically sit comfortably with the cloud computing model of service-based computing.

Unisys offers a subscription-based service with a "pre-built software platform" designed to enable organisations to "jumpstart" their application modernisation initiatives without the need for upfront software licensing, hardware and systems integration investments.

"The pace of business today demands that organisations respond in near real-time to changing customer and competitive requirements - and yet most businesses and agencies continue to be constrained by old, inflexible applications that can take years and cost a fortune to change," said Andy Gordon, AMPS director, Unisys.

AMPS includes an initial assessment service and a subsequent strategy element, where Unisys SOA experts work with the client's internal team to help craft an enterprise-wide SOA strategy.

There is also a governance model and an operational model for onward application development and modernisation work.

Image: Unisys AMPS overview

Unisys uses strong terms to describe the fifth element in AMPS, which is focused on full application lifecycle operational support:

"Unisys works with the client to create an ongoing operational model, institutionalise the governance model across the enterprise, and provide the support structure for information systems backup, integrity, contingency, incident response, maintenance, and awareness and training processes," said the company, in a press statement.

Plastering over the cracks of technical debt in software development

bridgwatera

This Computer Weekly Developer Network blog is a guest post written by Michael Vessey, a senior consultant with software quality provider SQS.

If you do just enough to keep your car running -- i.e. you just check the oil and tyre pressures but never take it in for a proper service -- eventually, something bad will happen and your car will break down or become very expensive to run.

Similarly, with Agile methodologies like SCRUM, if you only ever do 'just enough' to build software that is ready for market, you are probably building up a technical debt that will need to be paid.

What is technical debt?

Technical debt is not a new idea in computing circles; it does, however, seem to have come into fashion lately. To understand technical debt better, consider how your house may have been affected by the high winds across most of the United Kingdom in January. Many houses may have suffered structural damage such as cracked brickwork and fallen chimneys due to structural movement. Let's consider these cracks as technical debt; the building still performs its intended purpose, however we know we will have to fix them at some point.

The right thing to do is to have the damage inspected and the structure of the building repaired before any damage spreads and the cost of repair increases. Instead, we often choose to cover over the ugly cracks and render the outside of the house, losing all visibility of how the cracks are developing underneath the rendering.

In 1992 Ward Cunningham summarised this issue for software engineers:

"Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite... The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organisations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise."

With more and more companies switching development from traditional waterfall methods to Agile and SCRUM we are seeing development team leads focusing more intensely on technical debt.

The main benefit of Agile to these companies is that you can release small portions of the product at early stages of the lifecycle by doing "just enough" to get the component ready for market.

But -- by doing "just enough", the risk of technical debt is increased.

Static analysis can help development leads monitor some aspects of technical debt. Microsoft has recognised this and incorporates static analysis in its development tools. It is incredibly easy to run code analysis within Visual Studio 2010, however understanding the results can be more complex.

The results of even a basic code analysis can be very noisy, with lots of warnings making it difficult to spot the most important issues. Even simple sections of code can generate results from a "basic correctness" analysis.

Take the following snippet of designer generated code:

private void InitializeComponent()
{
    this.SuspendLayout();
    //
    // Form1
    //
    this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
    this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
    this.ClientSize = new System.Drawing.Size(284, 262);
    this.Name = "Form1";
    this.Text = "My First Form";
    this.ResumeLayout(false);
}

The basic correctness rule set throws up the following warning:

Warning - CA1303 : Microsoft.Globalization : Method 'Form1.InitializeComponent()' passes a literal string as parameter 'value' of a call to 'Control.Text.set(string)'. Retrieve the following string(s) from a resource table instead

While this is valuable information, it's all too easy for anyone with the correct knowledge to suppress the warning, hiding it from your automated static analysis. The line of code below will do this.

[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Globalization", "CA1303:Do not pass literals as localized parameters")]

Some developers might be tempted to 'paper over the crack' by suppressing warnings for apparently valid reasons: to focus on other messages; because there is a SQL injection warning but the developer who sees it is not a SQL developer; or because there is an internationalisation issue and the developer is unaware that the company will later be launching a multi-language version.

Suppressing warnings does, however, introduce two major risks. The first of these is that a developer can choose to hide issues from a code inspection; the second is that the scope of the suppression is not limited to a single line of code.

If a new instance of the same warning is introduced into the same section of code then you have two instances of technical debt and yet no new warnings; new cracks can appear and you would not be aware of them because they are hidden beneath the paper.
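
A contrived sketch of that risk follows (a hypothetical method, not taken from the designer file above): the suppression was added for the first literal, but because its scope is the whole method, a second literal added later raises no new CA1303 warning either.

using System.Diagnostics.CodeAnalysis;
using System.Windows.Forms;

public class StatusForm : Form
{
    // Suppression added for the first literal, but it covers the whole method...
    [SuppressMessage("Microsoft.Globalization",
        "CA1303:Do not pass literals as localized parameters",
        Justification = "UI copy not yet localised")]
    private void ShowStatus()
    {
        this.Text = "Processing...";         // the original, knowingly suppressed debt
        this.Text = "Completed with errors"; // added later: new debt, but no new warning
    }
}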

When managing technical debt issues in code, just as with cracks beneath the rendering of a house, we need to carefully consider the implications before suppressing errors. If we fail to take the proper steps and care, we won't know the extent of the problem -- until something as disastrous as a wall falling down occurs.

These tools are provided for a reason. Make sure you don't plaster and paper over the cracks and hide your technical debt.

Healthcare IT & the machine that goes ping! But only once...

bridgwatera

There have always been 'issues' when it comes to technology in the healthcare market.

• Security of patients' records and the move to make these electronic has been a matter of constant debate.

• Implementations of technologies such as the Qt application and GUI framework in healthcare IT solutions have been extensive and successful.

• The log in/out process for doctors using applications that need multiple sign in procedures has been a thorn in the side of healthcare IT. We need a machine that logs in and goes ping! for sign in, but only once.

It is this one ping, this one log in, this one application sign in issue that is the crux of the argument here.

So what can we do and can software application development save the day?

Here's the problem...

If you take your PC, for example... imagine if you had to log into every application you used: Word, Excel, Outlook, the internet, Twitter etc. Well, in healthcare it is much the same but with higher security protocols -- for example, a clinician needs to log into one application to retrieve patient files, another to access x-rays, and so on. All of these applications must have a different username and password, which causes havoc for IT departments.

Imprivata is a provider of technology to the healthcare market that simplifies logging in and out of IT systems. The company provides 'Single Sign-On' technology so that just one username and one password will allow a clinician to access the applications they need.

Here's the cool part -- this can be achieved with either a password, a smartcard, a fingerprint or facial recognition. The company calls it No Click Access.

... and the news angle?

Imprivata's new developer programme allows 3rd party vendors to take this OneSign technology and integrate it with any device they want -- in a healthcare environment, this could be bedside terminals, printers, anything that the 3rd party wants to integrate.

"The Imprivata developers programme will essentially allow 3rd party vendors to incorporate Imprivata's technology into any hardware they want. It provides access to the OneSign ProveID™ Software Developer Kit (SDK), training, support and the Imprivata Ready certification program. Through the ProveID APIs partners can integrate directly to the rich set of OneSign capabilities including out of the box support for a broad range of authentication modalities and devices, automated password change management, policy engine, event logging and reporting," said the company, in a press statement.

So with Imprivata, will healthcare staff be able to sign in and ping! just once?





Software developers "demand" social networking tools

bridgwatera

Software application developers love community. The existence of forums, developer programmes, user groups and now social networks has always formed an intrinsic part of the way programmers come together.

But why is this so?

It could be an inherent mistrust of the vendors' technologies, which they have to engineer with on a day-to-day basis. It could be because they feel that they speak their own language and they like to form a social clique. Or it could be just because it's a cool way to stay connected.

From early bulletin board systems (BBS) dedicated to the sharing or exchange of messages or other files on a network -- right through to social networks in the form that we know them today -- these "systems of collaboration" have always been particularly popular among the developer cognoscenti.

So where does this leave us in 2012?

While it used to be sufficient for vendors to host developer programme sites with technical support options, so-called "knowledge base" information and hard and fast tooling options including SDKs, now - "two-thirds of developers want and expect social networking features to be included in developer relations websites."

These are the findings of Evans Data's Developer Relations Survey 2012.


The survey of over 400 software developers also found that developer activity in social networks has increased by over 60% in the last two years and that 74% visit social networks at least several times a week.

"Social networking transformed the landscape of the web and developers have embraced the paradigms that define a social network." said Janel Garvin, CEO of Evans Data, "We've seen the interest level in social features rise dramatically among software developers in the last two years, until now any vendor with a developer program had better be providing features to stimulate an active social community"

The survey showed that the most important features that developers look for in social networks are:

• active communication with peers, such as blogs
• chat and an active community
• tagging, dashboards and bookmarks -- important, but less so on balance


About this Archive

This page is an archive of entries from March 2012 listed from newest to oldest.
