Why it pays not to skimp on testing

Getting your e-business up-and-running quickly is important but testing that it can handle the load is vital. Failure at key junctures can cost you dear, writes Nick Booth

In the race to get online, companies are often tempted to skimp on the amount of testing they do. The assumption is that speed to market is the primary consideration. Not true. In a competitive environment, speed to market is important, but nowhere near as vital as holding onto your customers. If your site crashes, or runs slowly, you stand no chance of doing that.

There's nothing new about developers rushing the unfinished article onto the market. It's practically a prerequisite in the software industry, which is why few IT managers buy the first version of any Microsoft product. Practically all IT projects overrun their development deadlines, and some of the lost time is often recouped by skimping on the testing that takes place before the system goes live.

Actually, the slogan-mongers who say e-business is a totally different ball game are half right when it comes to testing. You can skimp on testing a client-server development project and, if anything goes wrong, it's unlikely to be a major catastrophe. Launch an untested e-business on the market and, should the system not work, it can be fatal.

"If you've gone live with an internal application and it falls over, your users will report it and you'll be able to fix the problem," says Gareth Evans, marketing director of testing tools maker Cyrano, "If it's an e-business, your customers won't tell you and they won't come back."

Doing business on the Internet involves much wider risks because you've no idea how many users you are likely to have and, given the range of platforms that will run your system and the number of devices used to access it, "guesstimating" performance will be incredibly difficult. This is why it is far more important to get an e-business platform tested as thoroughly as possible before launch.

"You only get one chance in e-commerce," says Evans. "Blow that and you can kiss good-bye to all that money you spent trying to make people look at your site."

In the business to consumer e-commerce sector, considerable amounts of money have to be spent on marketing to gain customers. Often, the marketing campaign has booked TV and radio advertising that is time-dependent, which explains why some companies have been tempted to launch their sites without rigorously testing their capability.

The consumer goods site Jungle.com, for example, ground to a halt on the day of its launch. Its founder, Steve Bennett, attributed this to the company being a victim of its own success. Having spent an estimated £10 million on advertising and marketing, the company drew far more users than predicted, and thousands abandoned their purchases in disappointment. Since Bennett has a background in IT - he once headed an IT distribution company - it's probably kinder to say that the marketing spend forced him to launch the site on schedule, even though it clearly hadn't been adequately tested.

"E-businesses that flop on their launch aren't victims of their own success," says Evans, "they're victims of their own stupidity. Even if you don't know how many customers you have, you can fairly accurately gauge the demand and prepare for it."

The essential disciplines of testing an e-business's robustness are the same as those you'd apply to previous generations of computer system. There are four key issues, argues Evans: function, performance, availability and security.

Jungle.com failed because it did not have enough bandwidth to cope with all the customer enquiries that were flooding onto its servers. The computing resources and network capacity needed could have been calculated more accurately if the company had made a better guess at the number of users expected on the site.

The Halifax share dealing Web site is a good example of an e-business that didn't pay enough attention to function. After making a change to the application, the site's managers went live without testing the impact of the new routine on the system. Consequently, end users found that they could suddenly access, and even trade, other customers' share portfolios - a security breach hardly likely to inspire confidence, though Halifax claims to have lost only three customers after the episode.

"This was a fundamental mistake. You have to test a site every step of the way," says Evans. "Every incremental change necessitates the site being tested again."

If this sounds laborious, it also offers the key to making the test process more streamlined. You can test every module of a system as it is being developed. When you think you have the finished article, all that needs testing is the integration of all these modules, says Nange Yianni, technical manager for testing at Compuware.

But isn't the testing of software modules in isolation meaningless? Surely it's the finished article, where they are all integrated and interrelated, that is relevant.

"Yes, the finished article is important, but you can get 90 per cent of your system testing finished before you reach the final stage," he says. "That way you don't have to enter into a lengthy period of testing and risk holding back your launch date."

One of the keys to assessing performance lies in not just guessing how many users will hit your site - which can never be an exact science - but what type they will be. Casual visitors, for example, make much less of a demand on your system than committed buyers. Once a visitor starts to make a purchase, they move from being a passive observer of published information, to someone interacting with the company database and order purchasing system. If you played safe and assumed all visitors were buyers, you would be making an expensive mistake, because the amount of processing power and server capacity bought in would far exceed your needs. "E-businesses are loath to spend their budget on capacity they don't need," says Yianni. "After all, there's so much more they could be spending that money on. Like marketing and security."
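
To make the arithmetic concrete, here is a back-of-envelope sketch of that traffic-mix calculation. The cost weightings are assumptions for illustration - a buying session is taken to cost eight times the server work of a browsing one - but they show how far the all-buyers assumption inflates a capacity estimate.

```python
# Rough load model distinguishing browsers from buyers.
# browse_cost and buy_cost are invented relative weightings: a buyer
# touches the database and ordering system, a browser mostly reads
# published pages.
def required_capacity(visitors_per_hour, buyer_fraction,
                      browse_cost=1.0, buy_cost=8.0):
    """Relative server load per hour for a given traffic mix."""
    buyers = visitors_per_hour * buyer_fraction
    browsers = visitors_per_hour - buyers
    return browsers * browse_cost + buyers * buy_cost


# Assuming all 10,000 hourly visitors are buyers inflates the
# estimate more than sixfold compared with a 3% buying mix.
mixed = required_capacity(10_000, buyer_fraction=0.03)
worst = required_capacity(10_000, buyer_fraction=1.0)
print(f"mixed traffic: {mixed:,.0f} units, all buyers: {worst:,.0f} units")
```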

Before going live, he advises you to allow selected customers to use the site and observe patterns of usage. This will tell you, for example, what percentage of users are involved in transactions at any one time, which pages are the most popular, which products most in demand and which parts of the application seem to function most laboriously. Some companies launch their sites without making their existence widely known. This gives them time to study the site in action before their marketing machine starts to buy them customers.
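
The kind of pre-launch observation Yianni recommends can start as simply as counting page requests and transaction starts in the access log. A minimal sketch, with an invented log layout and page names; real Web server log formats carry far more detail.

```python
# Crude usage-pattern analysis over a hypothetical access log.
# Each line is "user page"; layout and paths are assumptions.
from collections import Counter

log_lines = [
    "user1 /catalogue/cds",
    "user2 /catalogue/dvds",
    "user1 /order/checkout",
    "user3 /catalogue/cds",
]

pages = Counter(line.split()[1] for line in log_lines)
users = {line.split()[0] for line in log_lines}
buyers = {line.split()[0] for line in log_lines
          if line.split()[1].startswith("/order")}

print("most requested pages:", pages.most_common(3))
print(f"users involved in transactions: {len(buyers) / len(users):.0%}")
```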

The real challenge lies in integrating a traditional business system with its new Web presence. While clicks-and-mortar businesses enjoy the advantage of an established culture of system testing, this is neutralised by the complexity of their e-commerce systems. "The trouble with adding a Web front end to a traditional back-end business system is you end up with a network that's an eclectic mix of computing environments and platforms," warns Chris Ambler, global test leader of Tanning Technology.

In these circumstances it becomes very difficult to identify where a bottleneck is building up as orders are taken on a Web server, passed to an inventory system that resides on a mainframe and then made to link up with, say, a finance system on another completely different type of server.
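
One way to locate such a bottleneck is to time each hop in the chain separately. A sketch under stated assumptions: the three stage functions below are placeholders for calls to a Web server, a mainframe inventory system and a separate finance server.

```python
# Per-tier timing to find where an order pipeline slows down.
# The sleep() calls stand in for real cross-platform hops.
import time


def take_order(order):        # placeholder: Web front end
    time.sleep(0.02)
    return order


def update_inventory(order):  # placeholder: mainframe inventory hop
    time.sleep(0.15)          # deliberately the slow stage here
    return order


def post_to_finance(order):   # placeholder: separate finance server
    time.sleep(0.03)
    return order


order = {"item": "CD", "qty": 1}
for name, stage in [("web", take_order),
                    ("inventory", update_inventory),
                    ("finance", post_to_finance)]:
    start = time.perf_counter()
    order = stage(order)
    print(f"{name}: {(time.perf_counter() - start) * 1000:.0f} ms")
```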

This is why many of the testing tools for e-commerce fail to highlight problems: they don't look at the big picture, according to Colin Armitage, MD of Original Software, which claims to tackle the real problem facing clicks-and-mortar e-businesses. "It's all very well testing the user interaction of your site, but that doesn't tell you how good it is as a business tool," says Armitage. "What good is a two-second response time if the goods don't actually get delivered for two weeks?"

The present generation of testing tools aren't good enough, argues Armitage. They were developed for the less complicated world of client server, and modified for e-commerce. Unfortunately, they don't take into account the complexity of the network over which a typical e-commerce application runs.

Most testing tool vendors offer record-and-playback facilities, he says, where every possible transaction that can be made on a site is mimicked by a software routine and response times monitored. What businesses really need is accurate operational feedback. "The new generation of testing tools will now give an accurate picture of the whole chain of events, from initial submission, to order processing, to despatch of the goods," says Armitage.
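
In outline, the record-and-playback idea looks something like the sketch below: replay a recorded set of transactions against the site and watch the response times. The base URL and transaction list are placeholders; commercial tools do this at far greater scale, with many concurrent virtual users.

```python
# Replaying recorded transactions and monitoring response times.
# BASE_URL is an assumed local test deployment, not a real site.
import time
import urllib.request

BASE_URL = "http://localhost:8000"
recorded_transactions = ["/catalogue", "/catalogue/cds", "/order/checkout"]

for path in recorded_transactions:
    start = time.perf_counter()
    try:
        urllib.request.urlopen(BASE_URL + path, timeout=5).read()
        status = "ok"
    except OSError as exc:
        status = f"failed ({exc})"
    print(f"{path}: {time.perf_counter() - start:.2f}s {status}")
```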

If Tesco's home shopping experiment hadn't relied on record-and-playback tools, he says, it wouldn't have run into the problems it experienced on launch. Tesco's problem was that half its information seemed to get lost between the Web server and the mainframe used to control stocks. Armitage advises companies to simplify the process by using the same platform to run their Web server as they use for the heavy-duty number crunching at the back end - traditionally mainframes and minicomputers. Using an IBM AS/400 at both ends, for example, will let you see everything from initial submission to order processing to despatch, says Original Software, which sells AS/400 testing tools.

In an environment where customer loyalty is nil, the integrity of a system is critical. The further down the development line you get before identifying problems, the more expensive they are to rectify.

Catch a bug in design, says IBM, and it will cost you $1. Catch it in testing and the cost is $10. Catch it in systems testing and it's $100. Once it's made it onto your live system it will cost you $1,000. IBM hasn't yet arrived at a figure for how much it will cost you on an e-business system - which, in a way, sums up the whole testing process in e-commerce: it hasn't been able to keep up with the pace of change.

The ten disciplines of E-system testing

1. Architectural planning

2. Site reporting

3. Database tuning

4. Code analysis and tuning

5. Representative staging

6. Load and regression testing

7. Server tuning

8. Spike readiness

9. Recovery planning

10. Monitoring

Tips on testing

  • Expect the conversion rate of customers to online access to be anything between 5 and 20% in the first half year of trading for both B2B and B2C sites

  • Over time, as customers get used to how to order, the number of clicks per order is reduced, but this is counter-balanced by new people coming to the site

  • In calculating the hit rate, you have to take the browsing/ordering ratio into account: the proportion of users that actually order compared with those that just look at a catalogue. In a B2C environment, this is usually 1-3%; in B2B, 5-60%

  • Load distribution: assume that 50% of hits will happen within a two-hour window, usually at about 4 o'clock in the afternoon

  • Add all these variables together to get a peak-rate figure for hits per second and, for B2B Web sites, multiply this by two: that is what you test for (a worked sketch follows after this list). Set yourself a maximum response time that you will accept - it should be no more than two seconds per page

  • Set up a scenario where you access the site from various areas to hit your catalogue and ordering interfaces and check that they can cope. With B2C Web sites, multiplying your peak-rate figure by two may not be enough, depending on your marketing effort. Do not start serious marketing until the site is up, running and tested. If your site fails or runs slowly even once, customers simply move on to the next one - there is no such thing as loyalty

  • At first, market the site to a select number of customers, monitor the hits - then roll out to the rest of the customer base

  • Hardware and software must be scalable - if you go for a hosted service, ensure that you are not locked into using a particular piece of hardware. Make sure you can add to it

  • Integration with ordering and distribution systems is vital. If the order placed on the Web site generates an e-mail, and someone re-keys that order into another system, it is likely to be error-prone and time-consuming, so these links need to be thoroughly tested too. Everything must be integrated. You must not go live without your order fulfilment in place
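
As promised above, here is a worked version of the peak-rate rule of thumb. The visitor and page-hit figures are example inputs; the 50%-in-two-hours load distribution and the factor of two for B2B sites come straight from the tips.

```python
# Peak hits-per-second target from the rules of thumb above.
# daily_visitors and pages_per_visit are illustrative inputs.
def peak_hits_per_second(daily_visitors, pages_per_visit=10, b2b=False):
    daily_hits = daily_visitors * pages_per_visit
    peak_window_hits = daily_hits * 0.5      # 50% of hits in two hours
    rate = peak_window_hits / (2 * 60 * 60)  # spread over 7,200 seconds
    return rate * 2 if b2b else rate         # tips: double it for B2B


# Example: 20,000 daily visitors at 10 page hits each.
print(f"B2C target: {peak_hits_per_second(20_000):.1f} hits/sec")
print(f"B2B target: {peak_hits_per_second(20_000, b2b=True):.1f} hits/sec")
```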

    The UK's three biggest testing failures so far

    Site: Tesconet

    Type: Business-to-consumer, goods

    Failure: Goods not delivered for weeks, lost orders

    Result: Lost customers

    Reason: Lack of integration between Web server and mainframe

    Site: The Halifax

    Type: Share dealing online

    Failure: Application bug allowed clients to access other clients' accounts

    Result: Lost customers. Potential fraud

    Reason: Lack of application testing

    Site: Jungle.com

    Type: Business-to-consumer, home entertainment

    Failure: Slow response times

    Result: Lost revenue estimated at £100,000 - according to MD Steve Bennett

    Reason: Lack of capacity planning
