Better user acceptance testing through log management

Application log management provides insight into application behaviour, which can be used to build better software and gain a better hold on user acceptance testing

Software has become more granular. Not just in terms of components, de-coupled services, containers and the way we separate parallel programming streams, but also in terms of process.

More specifically, the mechanics of application software development have become more individually definable and therefore increasingly visible as a result.

As we now seek to reap the advantages of this new proximity to the inner workings of our software, we will look to achieve granularity from front to back – or end to end, if you prefer. This means using application log management for insight into application behaviour as a means of building better software and gaining a better hold on user acceptance testing (UAT).

To set out our stall, let's define the terms we are playing with in simple business-technical language.

Log management covers the storage, archiving, analysis and ultimate disposal of time-stamped log events after they have been generated and transmitted by a computer system. Logs can record access events, data transaction events or wider database-related events.
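That lifecycle – generate a time-stamped event, keep it for analysis, dispose of it when it has aged out – can be sketched in a few lines of Python. The event fields, categories and 90-day retention window below are illustrative assumptions, not anything prescribed by the article:

```python
from datetime import datetime, timedelta, timezone

def make_event(category, message, ts=None):
    """Create a time-stamped log event. Categories follow the article's
    split: access, data transaction or database-related events."""
    ts = ts or datetime.now(timezone.utc)
    return {"ts": ts.isoformat(), "category": category, "message": message}

def due_for_disposal(event, retention_days=90, now=None):
    """Log management ends with disposal: events older than the
    retention window can be archived or deleted."""
    now = now or datetime.now(timezone.utc)
    age = now - datetime.fromisoformat(event["ts"])
    return age > timedelta(days=retention_days)
```

In practice a log management product handles storage and retention for you; the point of the sketch is only that every event carries a timestamp, a category and a payload, and that disposal is part of the job.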

User acceptance testing, meanwhile, is the final testing that occurs after functional, system and regression testing has been undertaken. Its aim is to validate the software against the business requirements to which it was built. After UAT, if successful, the software can, in theory, go into live production.

Given the logical relationship that exists between logs at the back end and how users react to software at the front end, how do we use one to manage the other?

Logging season is open

"Theoretically, in the UAT phase of software development, the new code should run perfectly. Sadly, pure theory is rarely reflected in real applied life," says Sumo Logic co-founder and chief technology officer (CTO) Christian Beedgen. "What you can always be sure of is that there’s an error somewhere – and if you don’t find it during UAT, your users will point it out to you later when it’s in production."

Beedgen asserts that logs hold the key to this problem for development teams. He says that as a dataset, logs are an opportunity to run queries and analysis to identify errors and exceptions, as well as model behaviour and alert deviations.
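Beedgen's idea of treating logs as a dataset to query for errors and deviations can be sketched as follows. The log lines, level format and 5% baseline error rate are made-up assumptions for illustration:

```python
import re
from collections import Counter

LINES = [
    "2015-07-01T10:00:01 INFO login ok user=alice",
    "2015-07-01T10:00:02 ERROR NullPointerException in CartService",
    "2015-07-01T10:00:03 WARN slow query 1200ms",
    "2015-07-01T10:00:04 ERROR timeout calling payment gateway",
]

def count_levels(lines):
    """Tally log levels so error and exception rates can be queried."""
    levels = Counter()
    for line in lines:
        m = re.match(r"\S+\s+(\w+)", line)  # timestamp, then level
        if m:
            levels[m.group(1)] += 1
    return levels

def deviates(levels, baseline_error_rate=0.05):
    """Alert when the observed error rate deviates from an assumed
    baseline - a stand-in for modelling behaviour and alerting."""
    total = sum(levels.values())
    return total > 0 and levels["ERROR"] / total > baseline_error_rate
```

A real log analytics platform replaces the regular expression with a query language and the baseline with a learned model, but the shape of the question is the same: how many errors, and is that normal?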

"So beyond UAT, once in production, logs will continue to capture the information that enables you to conduct root cause analysis and troubleshooting, so you can remediate issues that you find. For the testing, development and production environments, collecting and analysing logs will support your mission to find and eliminate anything standing in the way of uptime and quality user experience," says Beedgen.

What are we looking for?

But what can we learn from logs and how should they connect to UAT? Essentially, we are looking for any activity we would deem abnormal with respect to the anticipated execution of the application at runtime. From this, we hope to deduce which elements of the software development lifecycle need to be escalated as problems to be resolved.

But we need to be careful here: one could say that user acceptance testing and log management are, by nature, worlds apart, argues Balázs Scheidler, CTO of log management infrastructure company Balabit.

Why should this be so? Scheidler says that it is because system and application logs are usually managed by operations and security operations teams, to ensure the continuous operation or the continued security of an IT system. UAT on the other hand is performed by quality assurance (QA) people at the end of the deployment/delivery process.

"User acceptance testing tends to be black-box testing, using a set of predefined test scenarios or just freestyle testing without too much concern about the internal structure – or under‑the‑hood behaviour – of the application. It is simply assumed that this under‑the‑hood behaviour was properly validated during the earlier stages of the QA process using unit tests, integration tests and system tests," says Scheidler.

Wasted insight opportunity

However, this can be a great waste of insight. This is because application logs tend to include a lot of information about internal behaviour as they are the primary means of troubleshooting problems encountered in production. With a few techniques in place, log analysis can uncover the rough edges of the application while performing acceptance testing.

"UAT is generally wide in scope and shallow in depth. That is to say it tries to cover most of the functionality without trying all the combinations, whereas earlier testing stages usually take care of the depth but concentrate on units or components at a time," says Scheidler.

"During UAT, the application needs to work end to end, however, in most cases, only a handful of transactions are performed. It’s often the case that the application functions properly for a few requests but a bug leaves a trap somewhere in the application’s state which can trip the next user. This is exactly what we intend to uncover before production," he says.

Weapons in the UAT arsenal

Balabit’s Scheidler lists the application awareness opportunities available to us if we include logs in our UAT arsenal. He says logs will give us the chance to:

  • Look at various known bad patterns – error, failure, warning and so on;
  • Track known good messages and find the exceptions – a technique also called artificial ignorance;
  • Use clustering and other machine learning techniques to find differences between known good data.
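The second technique above, artificial ignorance, is the easiest to sketch: throw away every line that matches a known-good pattern, and whatever remains deserves a human look. The patterns below are illustrative assumptions:

```python
import re

# Messages we have already decided are routine and safe to ignore.
KNOWN_GOOD = [
    r"session opened for user \w+",
    r"request completed in \d+ms",
    r"healthcheck ok",
]

def artificial_ignorance(lines, known_good=KNOWN_GOOD):
    """Discard lines matching known-good patterns; anything left over
    is, by definition, an exception worth investigating."""
    patterns = [re.compile(p) for p in known_good]
    return [line for line in lines
            if not any(p.search(line) for p in patterns)]
```

The known-good list grows over time: each time a leftover line turns out to be harmless, its pattern is added, so the residue keeps shrinking towards genuinely novel events.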

Putting these steps into practice requires a process. Trevor Pott, an IT consultant and network administrator in Edmonton, Alberta, Canada, stipulates that we need to correlate user activities with actual events, remembering that access logs and error logs are usually separate. If we can narrow down an individual user’s access and timeframe, we can see what might have caused their errors.
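Pott's correlation step can be sketched as a join between the two separate logs: establish one user's access window, then pull the error-log entries that fall inside it. The sample data and field layout are hypothetical:

```python
from datetime import datetime

ACCESS = [  # (user, timestamp) pairs from the access log
    ("alice", "2015-07-01T10:00:00"),
    ("alice", "2015-07-01T10:05:00"),
    ("bob", "2015-07-01T12:00:00"),
]
ERRORS = [  # (timestamp, message) pairs from the separate error log
    ("2015-07-01T10:02:30", "unhandled exception in checkout"),
    ("2015-07-01T11:00:00", "disk nearly full"),
]

def errors_in_window(user, access, errors):
    """Narrow down one user's access timeframe, then return the
    error-log entries inside it - the likely causes of their problem."""
    times = [datetime.fromisoformat(t) for u, t in access if u == user]
    if not times:
        return []
    start, end = min(times), max(times)
    return [msg for t, msg in errors
            if start <= datetime.fromisoformat(t) <= end]
```

With real logs the join key would also include session or request IDs where they exist; a timeframe is simply the weakest correlation that always works.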

Pott urges us to remember that system administrators are users too. Logs need to be usable, he says. "They can’t just spew forth so much white noise that nobody ever checks them. Thus services such as BigPanda, which scans alert emails to make sure that you are paying attention to the logs you actually need, are an important part of UAT as well."

It is a bigger game all round if we do address logs in this way, which means UAT logging should be fairly intensive – more so than the sorts of logs we might collect during regular operation.

"If companies are using a single logging server infrastructure for UAT, testing, development and production, log management will be required to separate one class of logging – and the insane levels of alerts it will generate – from the other," says Pott.
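Separating one class of logging from another can be sketched with Python's standard logging module: one logger per environment, with UAT kept deliberately verbose and production quiet. The environment names and level choices are assumptions for illustration:

```python
import io
import logging

def build_logger(environment):
    """One logger per environment class, so the intense UAT log volume
    is kept apart from production rather than drowning it out."""
    logger = logging.getLogger(f"app.{environment}")
    # UAT logs everything; production only records warnings and worse.
    logger.setLevel(logging.DEBUG if environment == "uat" else logging.WARNING)
    stream = io.StringIO()  # stand-in for a per-environment log sink
    handler = logging.StreamHandler(stream)
    handler.setFormatter(
        logging.Formatter(f"[{environment}] %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.propagate = False  # keep environments from bleeding together
    return logger, stream
```

On a shared logging server the same separation is usually done with facilities, tags or index names rather than in application code, but the principle is identical: route by environment first, filter by level second.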


Virtuous circle

So what happens if you get the connection from log management to UAT right?

The log management-UAT state of nirvana brings us to a place where we achieve wider overall domain knowledge in terms of the entire scope and breadth of the software we are trying to develop and deploy. This in turn leads us into a virtuous circle of intelligence where we can potentially learn more about the actual functionality that manifests itself in the software that has been produced.

Onward from here, we get fewer errors in live production, better-performing software and happier users, with fewer risks associated with security, compliance, governance and licensing.

The concepts here are simple, or at least logical enough, in terms of the way they look on paper. Working these conceptual methods through into live application production takes a little more blood, sweat and tears. Early signs are that it is worth pushing through the pain barrier.

This was last published in July 2015


Join the conversation

4 comments


It's always helpful to know in advance what the user patterns associated with a product's use are. It helps testers build 'persona'-related tests, as well as figure out and prioritize the things and features that matter.

One other key benefit I see of this technique is that it can help testers do impact analysis that might benefit the business. For example, if the change being made impacts the most-used feature of the product, testers can analyse the impact and give that information to stakeholders so they can make an informed decision.
We use our application logs for research and troubleshooting constantly. They are a huge source of information for us. However, we've struggled to strike a balance between too little data and too much data. Too much data can also cause space issues, and it got to the point that we needed to automate a solution to delete all of our old log files on a regular basis.
We use Stackify's (http://stackify.com) log management and I agree with what is said here: a strong log management solution lets you find issues faster, as it allows us to track error rates and specific logs, directing the developers' attention to the more important customers.
There’s always a difference between how we think users use the system and how they actually use it. Anything we can do to close that gap in our understanding should be used to adjust acceptance tests.
