While version 1.5 of the guide was released in August 2010, no firm deadline has been set for meeting the GPG13 requirements. However, the guide forms part of the Code of Connection (CoCo), a standard that organisations need to meet in order to gain access to the Government Secure Extranet, the government's secure transaction and file-sharing system. Also, with the Information Commissioner's Office handing out larger fines for data breaches, compliance with GPG13 -- and CoCo compliance in general -- can't be approached as just a tick-box exercise.
Although it is a sizeable document, GPG13 is full of advice and builds on existing HMG regulations and policies that already mandate monitoring and auditing of system events. The aim of GPG13 is to ensure system logs -- and logs from devices such as firewalls and intrusion detection systems -- are used effectively to provide alerts and forensic evidence in the event of a security breach.
By improving upon the Check and Act phases of the Plan-Do-Check-Act security lifecycle, the guide strengthens an important layer in a network's defences: detecting, investigating and responding to unusual activity to narrow the gap between a security breach occurring and its detection.
Compliance with GPG13 requires a protective monitoring policy for each ICT system to ensure system administrators know exactly what's happening on their networks and are alerted in real time if anything suspicious occurs. This policy should be based on a risk assessment to ensure the level of monitoring is appropriate and justifiable.
GPG13 splits protective monitoring into three core processes: accounting, monitoring and auditing. These need to be formalised in the policy along with management roles, responsibilities and procedures.
Accounting is the collecting and recording of event data to support the reports and alerts deemed necessary by the risk assessment. The GPG13 appendix catalogues the types of data that can be collected to support the 12 protective monitoring controls (PMCs) and the potential sources from which it can be gathered.
The PMCs provide coverage of all technical compromise methods to which a system may be vulnerable. There is also a useful applicability matrix of compromise methods covered by each PMC, and a detailed definition of each control. To meet the baseline control set requires implementing the following PMCs:
- PMC1: Accurate time in logs.
- PMC4: Recording of internal workstation, server or device status.
- PMC7: Recording of session activity by user and workstation.
- PMC9: Alerting critical events.
- PMC10: Reporting on the status of the audit system.
- PMC11: Production of sanitised and statistical management reports.
- PMC12: Providing a legal framework for protective monitoring activities.
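To make the accounting process concrete, the sketch below shows one way an event record supporting PMC1 (accurate time in logs) might be captured. The record format and field names are hypothetical illustrations, not prescribed by GPG13, and the sketch assumes the host clock is kept accurate, for example via NTP:

```python
import json
from datetime import datetime, timezone

def record_event(source, event_type, detail):
    """Build an accounting record with an unambiguous UTC timestamp (PMC1).

    The JSON layout here is a hypothetical example; GPG13 catalogues what
    data to collect, not a specific record format.
    """
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),  # ISO 8601, UTC
        "source": source,          # e.g. a firewall, IDS or server name
        "event_type": event_type,  # e.g. logon, config_change, alert
        "detail": detail,
    })

print(record_event("fw01", "logon", "admin session opened"))
```

Recording timestamps in a single, unambiguous time zone makes events from different devices directly comparable, which matters later when logs are correlated centrally.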
Gathering and analysing log data from a range of different devices and applications can be a huge task. Comprehensive logging creates vast amounts of data, which needs to be securely stored with strict access control and handling procedures, because log files will undoubtedly include sensitive data. Adding real-time analysis to reams of data means additional hardware and human resources will be needed. Retention and disposal procedures will also need to be put in place. Network capacity may need to be increased so logging activities do not impair overall system performance.
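A retention and disposal procedure can be as simple as comparing archive dates against a cutoff. The sketch below is a minimal illustration; the 180-day figure is an assumption for the example, as the actual retention period must come from your own policy and risk assessment:

```python
from datetime import date, timedelta

# Hypothetical retention period -- set this from your organisation's policy.
RETENTION_DAYS = 180

def due_for_disposal(archive_dates, today):
    """Return the archive dates that have exceeded the retention period."""
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return [d for d in archive_dates if d < cutoff]

archives = [date(2010, 1, 5), date(2010, 11, 1)]
print(due_for_disposal(archives, date(2010, 12, 1)))  # [datetime.date(2010, 1, 5)]
```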
Realistically, protective monitoring can only be efficiently and effectively achieved by using automated tools. Tools from vendors such as LogRhythm Inc. and Tripwire Inc., as well as the open source tools SNARE and OpenXDAS, can consolidate the diverse data sets produced by a variety of devices, databases and applications into consistent and usable reports. Centralising log management greatly improves event correlation, trending and analysis, while allowing for a consolidated dashboard and reports. Alerts are also more easily prioritised for investigation and response.
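The core of what such tools do is normalisation: mapping each device's native log format onto one consistent record structure so events can be correlated centrally. The sketch below illustrates the idea with two invented log formats; real collectors ship parsers for many more, and the patterns here are assumptions for the example only:

```python
import re

# Hypothetical line formats for two device types -- illustration only.
PATTERNS = {
    "firewall": re.compile(r"(?P<time>\S+) DENY src=(?P<src>\S+)"),
    "ids": re.compile(r"(?P<time>\S+) ALERT sig=(?P<sig>\S+) from (?P<src>\S+)"),
}

def normalise(source, line):
    """Map a raw log line onto a consistent record, or None if unrecognised."""
    match = PATTERNS[source].match(line)
    if not match:
        return None
    record = match.groupdict()
    record["source"] = source  # tag the record with its originating device type
    return record

print(normalise("firewall", "12:00:01 DENY src=10.0.0.9"))
```

Once every event shares the same field names, correlating a firewall deny with an IDS alert from the same source address becomes a straightforward query rather than a format-by-format exercise.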
With budgets being tight, any protective monitoring technology should not be over-engineered, but should match the organisation's level of risk, and the appendix in GPG13 does help explain what is and isn't necessary, given a certain level of risk. Open source tools, such as the network intrusion prevention and detection system Snort, greatly reduce the technology costs of compliance, while certain network blind spots, such as data leakage via thumb drives, are getting a lot easier to control by default with Windows BitLocker data encryption.
While relying on a purely automated system of monitoring will enable you to tick a box, it can give a false sense of security when it is just installed, turned on and forgotten. Effective security will require active reviews of logs and alerts. This means one of the biggest costs to implementing GPG13 will probably be human resources.
Staff need to be trained to get the most out of any product, and must have the time to monitor alerts, possibly on a 24x7 basis, depending on the size of your organisation and the sensitivity of your data. For protective monitoring to really be effective, it needs to be conducted within a management framework that actively reviews and acts upon the outputs. Senior management will need to regularly review the overall performance of the protective monitoring and incident management functions. Any information security incidents that are detected must not only be dealt with; information security staff must also take steps to prevent a recurrence.
A GPG13 implementation project, even on small, enclosed networks, may seem like overkill to many network and security administrators who are already burdened by requirements of the Data Handling Review. But protective monitoring does play an important part in a network's defences by detecting new and emerging threats and providing an audit trail of security-relevant events. GPG13 aligns closely with ISO 27001 (particularly clauses 10.10 Monitoring, 15.3 Information systems audit considerations, and parts of 13.2 Management of information security incidents and improvements), so it can be seen as guidance that is applicable to any organisation wanting to improve its network monitoring and auditing processes based on best practices. And, in the end, adding a security monitoring policy, controls and personnel is a lot less costly than a fine from the ICO, or the reputational damage that results from a data breach.
About the author:
Michael Cobb CISSP-ISSAP, CLAS, is a renowned security author with more than 15 years of experience in the IT industry. He is the founder and managing director of Cobweb Applications, a consultancy that provides data security services delivering ISO 27001 solutions. He co-authored the book IIS Security and has written numerous technical articles for leading IT publications.
This was first published in December 2010