Today’s fast-paced and connected world demands that corporate network boundaries become increasingly porous. While this is essential to keep the wheels of commerce turning, it also creates headaches for those responsible for ensuring that network traffic is non-malicious.
A lack of network perimeters therefore needs to be matched with technology that can prevent damage. As with most security principles, there are two distinct aspects to consider: prevention of the threat, and minimisation of the impact, should the threat actually materialise.
The design and behaviour of the network infrastructure should aim to prevent IT assets being exposed to threats such as rogue code. By giving the network a programmable definition, via protocols, of what legitimate traffic should look like, software-defined networks allow people to perform only the tasks that their role legitimately requires.
Containerisation, meanwhile, offers controls that minimise the impact of any disruption by preventing malicious code from spreading unhindered across the entire network. By restricting applications to specific “contained” areas of the network and stopping them communicating outside this area, the risk of contamination is limited to this smaller segment of the network.
This also makes it much easier to have a multi-layered approach to securing the network. For example, applications that are used only for highly confidential activities could be placed into a container with a very strict security policy.
Access points can also be restricted, so that this network is accessible only through known devices and from specific locations/static IP addresses with additional virus scanning and encryption software deployed. In contrast, a time and expenses application may be accessible from any device, any location, from any access point simply with an approved user name and password.
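The contrast between the two policies described above can be sketched in code. This is a minimal illustration only: the policy fields, device IDs and IP addresses are invented for the example, not taken from any real product or API.

```python
from dataclasses import dataclass, field

@dataclass
class AccessPolicy:
    """Rules governing who may reach applications in one network segment."""
    allowed_devices: set = field(default_factory=set)  # known device IDs
    allowed_ips: set = field(default_factory=set)      # static source IPs
    require_scan: bool = False                         # extra virus scanning
    require_encryption: bool = False

    def permits(self, device_id: str, source_ip: str) -> bool:
        # An empty allow-list means "any device / any location".
        if self.allowed_devices and device_id not in self.allowed_devices:
            return False
        if self.allowed_ips and source_ip not in self.allowed_ips:
            return False
        return True

# Strict container for highly confidential applications.
confidential = AccessPolicy(
    allowed_devices={"laptop-042"},
    allowed_ips={"203.0.113.10"},
    require_scan=True,
    require_encryption=True,
)

# Permissive container for a time-and-expenses application:
# any device, any location; authentication is handled separately.
expenses = AccessPolicy()

print(confidential.permits("laptop-042", "203.0.113.10"))  # True
print(confidential.permits("tablet-007", "198.51.100.5"))  # False
print(expenses.permits("tablet-007", "198.51.100.5"))      # True
```

In a real deployment these rules would live in the network fabric itself (firewall rules, SDN flow tables or container network policies) rather than in application code, but the layered principle is the same: the stricter the data, the tighter the allow-list.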
Both technologies are a way of creating self-contained, autonomous “nodes” within a network that keep specific data and applications separate and prevent unwanted traffic, such as rogue code, running freely across the enterprise system.
Encrypting data adds a further layer of protection; applied across the whole network, it prevents legitimate traffic from being read or tampered with by any bad actors that have gained entry to the system.
Defining and designing a network using containerisation, ring-fencing and encryption protocols leads to a strong preventative control environment. This, if you like, is the “theoretical wonderland”. Real-life, of course, doesn’t make it that straightforward.
Running applications in different nodes is fine if they are logically separate, such as a specific database of recipes or bill of materials (BOM) for a product. Enterprise resource planning (ERP) however doesn’t work on this basis, requiring as it does a system that integrates coherently. Putting an extensive number of applications in one container can be an option, but seriously compromises business processes if the “mix” is wrong.
For example, car production requires thousands of component parts, all of which may be needed, depending on the planned volume of units to be built. If the inventory management systems and the production planning systems are all segregated without sharing information, it becomes very difficult to know the quantity of a particular component required to meet production demand.
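The calculation that breaks when these systems cannot talk to each other is simple enough to sketch: component demand is the planned build volume exploded through the bill of materials, netted against whatever inventory is on hand. The part names and quantities below are invented for illustration.

```python
# Components required per single unit built (from the BOM system).
bom = {
    "wheel": 4,
    "headlamp": 2,
    "engine": 1,
}

# Current stock (from the inventory management system).
on_hand = {"wheel": 1200, "headlamp": 0, "engine": 150}

# Planned build volume (from the production planning system).
planned_units = 500

# Gross requirement per part, netted against stock, gives the
# quantity that must be sourced from suppliers.
for part, per_unit in bom.items():
    gross = planned_units * per_unit
    to_source = max(0, gross - on_hand.get(part, 0))
    print(f"{part}: need {gross}, order {to_source}")
```

Each line of this calculation draws on a different system; segregate those systems into separate containers without a sanctioned data path between them, and the order quantities can no longer be derived.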
This is exacerbated if many of the components are shipped on demand from third-party suppliers. To make this “container” effective, all the inventory, production planning and sourcing systems need to be connected via the network, which must also be open to certain external parties to satisfy the requirements for sourcing those additional components. If the same production process was used to manufacture more sensitive items, such as military equipment, then this level of inherent connectivity may not be deemed appropriate.
The real world also makes it almost a certainty that the network will be compromised. This makes it key to know what “good” looks like. Is a noticeable change a bona fide business activity – a partner changing its billing details, for example – or does it signify a threat to the network? And that takes us back to the consistent message that monitoring is necessary to manage attacks.
Monitor to manage attacks
As well as deploying preventative measures, organisations need to plan for being hacked, putting in place systems that monitor for attacks. This means that, once identified, a breach can be shut down as quickly as possible, minimising the impact on the enterprise in terms of compromised data, IT infrastructure impairment, financial losses and reputational damage.
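Knowing what “good” looks like, as discussed above, is in essence baseline comparison: flag any traffic that deviates from a known-good profile and pass it to a human or a rules engine to decide whether it is a bona fide business change or a threat. The flow names and event format below are assumptions made purely for illustration.

```python
# Known-good profile: sanctioned (source, destination) flows.
baseline = {
    ("billing-app", "partner-gateway"),
    ("erp", "inventory-db"),
}

# Observed network events, e.g. from flow logs.
events = [
    {"src": "erp", "dst": "inventory-db"},
    {"src": "billing-app", "dst": "unknown-host-77"},  # deviation
]

def flag_anomalies(events, baseline):
    """Return events whose (src, dst) pair is not in the baseline."""
    return [e for e in events if (e["src"], e["dst"]) not in baseline]

for e in flag_anomalies(events, baseline):
    print(f"alert: unexpected flow {e['src']} -> {e['dst']}")
```

A real monitoring system would of course build its baseline statistically and inspect far more than endpoint pairs, but the principle is the same: deviations are surfaced quickly so that a genuine breach can be shut down before it spreads.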
Software-defined networks, containerisation and encryption are all powerful tools in the endless pursuit of keeping the bad guys at bay. However, they are only part of the solution. Good business practice demands that robust IT security is a many-sided operation that draws on a range of technologies, knowledge and skill-sets.