
Security Think Tank: Smart log monitoring and analysis key to security success

How can log management be used to bolster information security and improve incident response without infringing user privacy?

User privacy in the context of a company and its various systems is an often misunderstood area, and responsibility for it falls squarely on company governance. If, through a company’s acceptable use policy, people are allowed to use IT systems for private purposes – email or word processing, for example – then those private documents must be kept separate and must not be viewable by other company staff. That is a tall order, generally addressed by a catch-all clause in the contract of employment stating that any staff communication is liable to be monitored.

However, that means that if a private document or communication is leaked, you have a data protection issue. So how do you monitor staff use of IT without raising the spectre of an Information Commissioner's Office (ICO) investigation?

The basic answer is to maintain good log and audit files, and have the smarts in place to undertake analysis and report generation. Those smarts can, of course, cost money and/or resources to implement. While Microsoft provides tools such as Log Parser, IIS log analysis, the performance analysis of logs (PAL) tool and a Microsoft Speech Server content-extract log analysis tool, there are a number of third-party products available from companies such as LogRhythm and SolarWinds, plus a number of open-source products such as Nagios and commercial platforms such as Splunk.

As with any tool, it is in the set-up and configuration that success or failure is born. Reports need to be meaningful and generated at a sensible cadence – frequent enough to be useful, but not so frequent that they are ignored. For example, inactive accounts can be reported monthly, while the top 10 users by activity should be reported weekly.
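As a sketch of the kind of report generation involved, the snippet below counts successful logins per user from a simplified log format and returns the top 10. The `LOGIN_OK` marker and field layout are hypothetical, for illustration only – a real deployment would parse whatever format your systems actually emit:

```python
from collections import Counter

def top_users(log_lines, n=10):
    """Return the n most active users by successful login count.

    Assumes each line looks like:
        '2024-05-01T09:12:03Z LOGIN_OK alice'
    which is an illustrative format, not a real log schema.
    """
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        # Only count well-formed, successful login records
        if len(parts) == 3 and parts[1] == "LOGIN_OK":
            counts[parts[2]] += 1
    return counts.most_common(n)
```

Scheduling a script like this weekly (via cron or Task Scheduler) gives the regular "top 10 users" report without anyone trawling raw logs by hand.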

Strange or anomalous behaviour should be defined and issued as an immediate alert. A few suggested anomalous behaviours are modification or deletion of a system file, detection of malware on a server, unexpected export of a file, heavy use of resources such as CPU or internet bandwidth, multiple user authentication failures, and systems unexpectedly going offline.
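Taking one of those behaviours – multiple authentication failures – the sketch below shows how such an alert rule might work: flag any user who fails to authenticate a threshold number of times inside a rolling time window. The event format, threshold and window are illustrative assumptions:

```python
from collections import defaultdict, deque

def auth_failure_alerts(events, threshold=5, window=300):
    """Flag users with >= threshold failed logins within a rolling window.

    events: iterable of (timestamp_seconds, user) tuples for failed
    logins, assumed sorted by timestamp -- a hypothetical format.
    window: rolling window size in seconds (default five minutes).
    """
    recent = defaultdict(deque)  # per-user timestamps of recent failures
    alerted = set()
    for ts, user in events:
        q = recent[user]
        q.append(ts)
        # Drop failures that fell outside the window
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) >= threshold:
            alerted.add(user)
    return alerted
```

In practice a SIEM product would evaluate rules like this continuously and raise the alert in real time rather than in batch.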

Retention time for log/audit information needs careful consideration, as large volumes of data can be generated; for a typical business, a retention period of around three to six months is recommended. While it may seem obvious, all parts of an IT system need their clocks synchronised to a single central source, and that source should in turn be synchronised to an external reference – for example, a radio time signal traceable to a time standard such as an atomic clock.
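Enforcing a retention period can be as simple as a scheduled job that removes rotated log files older than the cut-off. The sketch below assumes rotated logs are plain `*.log` files in one directory, and uses 90 days – the low end of the three-to-six-month range suggested above:

```python
import time
from pathlib import Path

def prune_logs(log_dir, retention_days=90):
    """Delete rotated log files older than the retention period.

    Assumes rotated logs are plain '*.log' files under log_dir --
    an illustrative layout; adjust the glob for your rotation scheme.
    Returns the names of the files removed, sorted.
    """
    cutoff = time.time() - retention_days * 86400
    removed = []
    for path in Path(log_dir).glob("*.log"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return sorted(removed)
```

Whatever mechanism you use, make sure the retention period is written into policy first, so deletion is a documented decision rather than housekeeping.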

Having good-quality alerts issued to an appropriate set of personnel will greatly assist in incident response, as will the availability of time-stamped log and audit reports. Holding log and audit files on a dedicated log server (or servers), rather than on each server or device, will assist in any investigation. Also, making the log store write-only (append-only) will protect the files from unauthorised access and modification, and will allow forensic investigations to take place. Good-quality logs, analysis and reporting will feed into the process of improving security post-incident.
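Shipping logs off each machine to a dedicated collector can be done with standard syslog forwarding. The sketch below configures Python's standard-library `SysLogHandler` to send application logs to a central server over UDP; the hostname is a placeholder assumption – point it at your own collector:

```python
import logging
import logging.handlers

def make_central_logger(host="loghost.example.com", port=514):
    """Send application/audit logs to a dedicated log server via syslog.

    'loghost.example.com' is a placeholder -- substitute your own
    collector. Keeping logs off the originating machine makes it
    harder for an intruder to cover their tracks.
    """
    logger = logging.getLogger("app-audit")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=(host, port))
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(name)s %(message)s")
    )
    logger.addHandler(handler)
    return logger
```

For tamper resistance, the collector itself should accept writes but not expose the stored files for modification – which is the append-only property described above.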


Peter Wenham is a committee member of the BCS Security Forum strategic panel and director of information assurance consultancy Trusted Management.
