Over many years, my associates and I have advocated “security in-depth”. In our book, relying solely on perimeter security was never enough.
Security in-depth aims to protect not just data, be it at rest or traversing a network, but all the assets within a network, such as Ethernet switches, servers, routers and so on. This is done by implementing multiple layers of controls and defences throughout an IT infrastructure. The choice of controls and defences deployed will depend on the technologies employed and the value of the assets to be protected.
In the following paragraphs I’ll go through a typical scenario that I believe an infosec professional should advocate and build strategies to achieve.
Strategies to achieve a good, sustainable level of defence in-depth – one that in turn provides better protection of data from both internal and external threats – will depend on a number of variables.
Those variables include the business’s risk appetite; budget availability and the budget cycle; the attitude of the company’s board and senior managers towards IT and IT security; the age and architecture of the company’s IT estate, including any bring-your-own-device (BYOD), internet of things (IoT) or visitor Wi-Fi access; and, of course, the value and location of company data.
Start with the basics
The starting point for good defence in-depth – and no apologies for restating it – is addressing the basics. This includes ensuring that:
- The IT estate is up to date with software and firmware patches.
- All default passwords have been changed.
- IT administrators and technicians have two accounts, one for day-to-day (email, report writing, and so on) and one for working on the IT estate.
- Only IT administrators and technicians have administrator privileges in the live network (users must not be given administrator access, even to their own company-provided PC).
- Good password policies are enforced, and access privileges match each user’s role and function (for example, sales staff should not be able to access HR files, and people who only need to read files are restricted to read-only access).
- Unused accounts are regularly decommissioned or removed from the access control system.
- The IT estate as a whole is regularly backed up, and there are easy-to-access policies, standards, procedures and work guides that are maintained and used.
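The password-policy point above can be sketched as a simple validation routine. This is a minimal illustration, not a production policy engine, and the specific thresholds shown are assumptions rather than any formal standard:

```python
import re

# Illustrative policy threshold (an assumption, not a mandated standard)
MIN_LENGTH = 12

def password_meets_policy(password: str) -> bool:
    """Return True if the password satisfies a basic complexity policy:
    minimum length plus upper-case, lower-case, digit and symbol classes."""
    if len(password) < MIN_LENGTH:
        return False
    checks = [
        re.search(r"[A-Z]", password),         # at least one upper-case letter
        re.search(r"[a-z]", password),         # at least one lower-case letter
        re.search(r"[0-9]", password),         # at least one digit
        re.search(r"[^A-Za-z0-9]", password),  # at least one symbol
    ]
    return all(checks)
```

In practice such checks would sit alongside directory-enforced rules (lockout, history, expiry) rather than replace them.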
Moving on to protecting the IT infrastructure itself, we would still look to deploy some form of border security to provide an initial protection of the infrastructure from other networks, such as the internet or a partner company’s network.
Border security could take the form of traditional firewalls with a demilitarised zone (DMZ) where proxy servers and the like provide separate interfaces to the outside and internal world – mail transfer agents, virus and spam scanning services, web servers, web proxies and reverse web proxies, and so on.
The essence of this border security is to ensure that all traffic to and from the external networks and to and from the internal networks originates from and terminates on one of the DMZ-mounted services. User authentication would typically be handled by DMZ services as a proxy for internal authentication services such as Microsoft Active Directory.
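The traffic rule described above – every flow crossing the border must originate from or terminate on a DMZ-mounted service – can be expressed as a small policy check. This is a conceptual sketch only; the zone names are assumptions, not the syntax of any real firewall:

```python
def flow_allowed(src_zone: str, dst_zone: str) -> bool:
    """Permit a flow only if at least one endpoint sits in the DMZ.
    Direct external-to-internal (or internal-to-external) flows are denied."""
    if "dmz" in (src_zone, dst_zone):
        return True
    # Traffic staying within a single zone is a matter for internal
    # segmentation, not border policy; anything else crossing the
    # border without touching the DMZ is denied.
    return src_zone == dst_zone
```

A real border firewall encodes the same idea as interface-to-interface rules, but the invariant – no direct external-to-internal path – is what matters.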
Within the network itself, virtual LAN (VLAN) technology, “on-device” firewalls and multiple segregated networks where each network is firewalled from other networks continues the defence in-depth strategy.
Here, critical servers such as databases would be placed on a dedicated network separated from the network(s) that users are connected to. Users accessing from external locations and on-premise Wi-Fi should be terminated on their own networks, possibly with a DMZ and proxy devices providing authentication, antivirus and web proxy services to decouple direct access to internal facilities.
This decoupling is important where BYOD or unknown non-company devices need to access internal systems, such as email or the company intranet. Storage area networks (SANs) and network-attached storage (NAS) should also be on their own dedicated networks. All servers and user devices should, in addition, have their own individual firewall and antivirus functionality enabled where it is available.
At the application level – and this is where data leakage is most likely to occur – coding and testing applications against OWASP guidance (such as the OWASP Top 10) should be standard practice.
Strong consideration should also be given to the deployment of application-level firewalls. Software source code is often rich in comments and metadata, and while commenting code is recommended practice, comments and metadata that survive into delivered HTML might reveal useful information to a hacker. A critical review of comments and metadata should be carried out to see whether there is potential for data leakage.
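Part of that review of delivered HTML can be automated. The sketch below flags comments containing obviously sensitive keywords; the keyword list is an assumption and would need tuning for a real estate:

```python
import re

# Keywords that often indicate sensitive information in comments
# (an assumed starting list, to be extended per organisation)
SENSITIVE = re.compile(
    r"password|passwd|todo|fixme|internal|secret|api[_ ]?key",
    re.IGNORECASE,
)

def leaky_comments(html: str) -> list:
    """Return the HTML comments in a page that mention potentially
    sensitive terms and so warrant manual review."""
    comments = re.findall(r"<!--(.*?)-->", html, re.DOTALL)
    return [c.strip() for c in comments if SENSITIVE.search(c)]
```

Running this across a site’s rendered pages gives a shortlist for human review rather than a definitive verdict.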
Looking at data at rest, databases often provide the ability to encrypt data fields, and these features should certainly be used where personal and sensitive company information is being stored. Bulk encryption of data should be employed where available, be it on a user’s PC or laptop, or on a file server or storage system.
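Field-level encryption of the kind databases offer can be sketched in a few lines. This example assumes the third-party Python `cryptography` package; real databases provide built-in equivalents (column-level encryption, transparent data encryption), and in production the key would come from a key-management system, not be generated inline:

```python
# Sketch of field-level encryption for sensitive values before storage,
# using the third-party "cryptography" package (an assumption for
# illustration; databases offer built-in column encryption features).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, fetch from a key-management system
cipher = Fernet(key)

def encrypt_field(value: str) -> bytes:
    """Encrypt a single sensitive field, e.g. a national insurance number."""
    return cipher.encrypt(value.encode("utf-8"))

def decrypt_field(token: bytes) -> str:
    """Recover the plaintext for an authorised reader."""
    return cipher.decrypt(token).decode("utf-8")
```

The point of the sketch is the shape of the control: sensitive fields never hit disk in the clear, and access to the key, not the file, gates access to the data.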
Data on the move can be protected using encryption technologies such as IPSec encrypted tunnels over LANs to keep data paths segregated, HTTPS for web use, and virtual private networks (VPNs) for mobile and remote users connecting over the internet. The use of X.509 certificates can provide mediated mutual authentication (of a VPN tunnel, for example) as well as encryption.
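For HTTPS and other TLS-protected paths, Python’s standard `ssl` module shows the client-side settings that make the X.509 machinery bite. A minimal sketch:

```python
import ssl

# A default client context verifies the server's X.509 certificate chain
# against the system trust store and checks the hostname -- the basic
# building blocks of authenticated, encrypted data in motion.
context = ssl.create_default_context()

assert context.verify_mode == ssl.CERT_REQUIRED  # server cert must verify
assert context.check_hostname                    # hostname must match the cert

# For mutual (two-way) authentication, as in a VPN-style tunnel, the
# client would additionally present its own certificate and key
# (file names here are hypothetical):
# context.load_cert_chain(certfile="client.pem", keyfile="client.key")
```

The lesson carries beyond Python: whatever the stack, confirm that certificate verification and hostname checking are actually on, rather than assuming the defaults.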
The above covers some of the things that need to be addressed in a strategy to improve the protection of a company’s data – it is not exhaustive. Many of today’s software products come with a range of security features built in. My advice is simple: know the packages and use their features. This could save (a lot of) money because you are not buying the latest whizz-bang appliances, which might only duplicate what you already have.