Performance testing is often seen as an ideal but unreachable goal. What most companies do not realise is that system performance can be improved with simple techniques on limited time and budgets - you can even download high-quality testing tools free from a number of websites.
The various terms used to describe performance and performance testing are often confused, to the point where it can be almost impossible to understand what is meant by a particular test.
The British Computer Society has commissioned a study group to define performance testing, along with other non-functional tests such as security testing. In the meantime there is no single reference that defines these terms, so here are a few examples:
Full databases are slower than empty ones. Long lists of fields for products, company names and so on take longer to transmit than short lists (especially over a slow modem link).
Production applications often need housekeeping routines to keep the database growth in check. All these issues need to be tested.
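The effect of database growth on response times can be demonstrated with a minimal sketch. The `products` table, the row counts and the use of SQLite here are all invented for illustration; a real test would grow a copy of the production schema towards production volumes.

```python
import sqlite3
import time

def timed_query(conn, sql):
    """Run a query and return (row count, elapsed seconds)."""
    start = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    return len(rows), time.perf_counter() - start

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")

# A near-empty table, as on a fresh development system.
conn.executemany("INSERT INTO products (name) VALUES (?)",
                 [(f"product-{i}",) for i in range(100)])
small_count, small_time = timed_query(conn, "SELECT * FROM products")

# Grow the table towards production-like volumes, then re-time the same query.
conn.executemany("INSERT INTO products (name) VALUES (?)",
                 [(f"product-{i}",) for i in range(100, 100_000)])
full_count, full_time = timed_query(conn, "SELECT * FROM products")

print(small_count, round(small_time * 1000, 2), "ms")
print(full_count, round(full_time * 1000, 2), "ms")
```

Timing the same query at both volumes makes the slowdown visible, and re-running the test after a housekeeping routine shows whether it keeps the growth in check.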
Load testing concentrates on the response times of the system under typical conditions as the load varies. This includes page download times and end-to-end timings where the Web server relies on additional systems (say, a credit card authorisation service). Load testing allows current behaviour to be compared with the ideal, where throughput increases linearly with load.
As systems are never 'ideal' it is more realistic to ensure that the system works without error as the load increases and to keep throughput as close to the ideal as possible.
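The shape of a basic load test can be sketched as follows. This is a stand-in, not a real test harness: `fake_request` simulates a fixed 10 ms service time where a production script would issue an HTTP request, and the user counts are arbitrary. The point is the measurement loop - step up the number of concurrent users and record throughput at each step, then compare the curve against the linear ideal.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real page download; replace with an HTTP GET."""
    time.sleep(0.01)  # simulate a 10 ms request

def measure_throughput(concurrent_users, requests_per_user=20):
    """Return requests per second achieved at a given level of concurrency."""
    total = concurrent_users * requests_per_user
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for _ in range(total):
            pool.submit(fake_request)
    elapsed = time.perf_counter() - start
    return total / elapsed

# Step the load up and record throughput at each level.
for users in (1, 2, 4, 8):
    print(users, "users:", round(measure_throughput(users), 1), "req/s")
```

In this idealised simulation throughput scales almost linearly; against a real server the interesting result is the load level at which the measured curve starts to flatten or errors appear.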
How well does the system cope when it is hit with a heavy load, and how quickly does it recover? Does it lose any resources or generate errors in the process? Often a burst of requests will end up queuing on the server; how quickly will the server clear the backlog?
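The backlog question above can be explored with a simple queue simulation. Everything here is hypothetical - a burst of 50 requests, a 2 ms service time, a single worker standing in for the server - but the same structure (enqueue a burst, drain it, time the drain) carries over to a real test.

```python
import queue
import threading
import time

backlog = queue.Queue()

# A burst of 50 requests arrives at once and queues on the 'server'.
for request_id in range(50):
    backlog.put(request_id)

served = []

def server_worker():
    """Drain the backlog one request at a time, as a single server would."""
    while True:
        try:
            request = backlog.get_nowait()
        except queue.Empty:
            return  # backlog cleared
        time.sleep(0.002)  # simulate 2 ms of work per request
        served.append(request)

start = time.perf_counter()
worker = threading.Thread(target=server_worker)
worker.start()
worker.join()
drain_time = time.perf_counter() - start

print(len(served), "requests cleared in", round(drain_time * 1000), "ms")
```

A real spike test would also check that nothing in `served` is lost or duplicated, and that the server releases any resources it acquired while the queue was full.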
How does a site cope with brutal attacks? Is the behaviour acceptable when the site is at the limit of its capacity? Does it lose orders? How can the application design be changed to improve the behaviour of the application under these conditions? Stress testing is everything outside 'normal operating parameters'.
Inevitably, traffic loads can exceed the capacity of a single computer. Mainframe vendors used to respond by adding CPUs, memory and disks to the computer, and minicomputers take a similar approach.
Intel PCs tend to have limited expansion capabilities, so people have tried to share the load by running PCs in parallel. The relatively low cost of Intel-based PCs has contributed to the massive growth of Web farms.
Enabling a Web application to run across a Web farm provides resilience: the loss of a single server does not stop the service. This is useful for both scheduled and unscheduled outages.
Applications can be developed to run in parallel, or a sub-set of functionality may be migrated to dedicated servers. In the Web world, one of the first optimisations was to serve static files, such as images, from dedicated servers.
Security, particularly for websites, often has a dramatic effect on the performance and capacity of the Web server. Features such as secure connections via SSL (secure sockets layer) increase network traffic and the load on the processor(s) to such an extent that responses take three times as long and each server is limited to about a third of the original number of simultaneous users.
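The link between the two figures quoted above can be shown with some rough capacity arithmetic. The per-request CPU cost is a made-up number; only the threefold overhead factor comes from the text.

```python
# Rough capacity arithmetic with a hypothetical per-request cost.
cpu_seconds_per_request = 0.005   # assumed cost of serving one plain-HTTP page
ssl_overhead_factor = 3           # the threefold slowdown quoted above

plain_throughput = 1 / cpu_seconds_per_request
ssl_throughput = 1 / (cpu_seconds_per_request * ssl_overhead_factor)

# If each user generates requests at a fixed rate, the number of simultaneous
# users a server can support scales with its throughput - so it also falls
# to about a third when SSL is switched on.
print(round(plain_throughput), "req/s plain,", round(ssl_throughput), "req/s with SSL")
```

The same reasoning explains why offloading the cryptography restores capacity: it removes the overhead factor from the per-request cost.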
Use of 'hardware accelerators' can restore the performance of a secure site to that of the original, non-secure site. Regardless of the test you are using, try to ensure that you develop tests that are simple, repeatable, and sufficiently flexible to cope with the dynamic nature of the website.
The test environment needs to be consistent, and external influences, such as other load on the servers, need to be taken into account. Ensure that the test processes are well documented so that others can repeat the tests reliably and consistently in future.
Don't forget that testing is only a means to an end. While it helps to identify problems, the underlying objective is to improve performance. The test scripts, techniques and processes will need to be reviewed and refined in order to improve the quality, efficiency and effectiveness of the testing.
Remember that things will go wrong. It is important to determine how well you have prepared for failures and whether you can isolate and fix the problem, and the underlying cause, quickly and effectively. Ensure that whoever is responsible for problems has the authority to deal with them and to enlist the support of the various IT teams - otherwise prepare for a long night and lots of heated discussions while the problem goes unresolved.
Finally, cheer up. If the e-business system is worth using, it's worth supporting, and you can take pride in delivering a fast, reliable service for your customers and clients.
Julian Harty is managing director of e-business infrastructure specialist Commercetest.