Break free from traditional network security

Highly connected businesses put pressure on traditional network security. We look at how corporate networks are breaking out of their boundaries

Today, there is no longer a hard network perimeter. Businesses need to forge close links with partners, employees increasingly use their own devices for work and software platforms interconnect business ecosystems.

The traditional approach to network security, which relies on a hard border and access control to the corporate network, is unsuitable for this dynamic business environment.

From a security stance, the network is becoming perimeterless, and rather than a hard network barrier, the corporate network needs to be porous; security inside the network has to be zero-trust.

The experts Computer Weekly contacted regarding perimeterless network security generally agree that such an architecture is not easy to achieve, but software-defined networking (SDN) and containerisation offer network security architects a sound foundation on which to implement a perimeterless network security strategy.

Perimeterless networks are the future

Gareth Haken, senior analyst at the Information Security Forum (ISF), says perimeterless networks and zero-trust security represent the future for some very large organisations. Google’s BeyondCorp is a good example of such a network architecture.

“The removal of a defined network boundary poses a range of new challenges for security practitioners, particularly as business models and the threat landscape evolve,” says Haken, adding that creative thinking becomes more important as solutions to such challenges may be found in both traditional controls and new developments not intended as security solutions.

Dynamic network configurations

While it can be argued that perimeterless network security will become essential to keep the wheels of commerce turning, Simon Persin, director at Turnkey Consulting, says: “A lack of network perimeters needs to be matched with technology that can prevent damage.”

In a perimeterless network architecture, the design and behaviour of the network infrastructure should aim to prevent IT assets being exposed to threats such as rogue code. Persin says that by understanding which protocols are allowed to run on the network, an SDN can allow people to perform the legitimate tasks required by their role. Within a network architecture, a software-defined network separates the forwarding and control planes.

Paddy Francis, chief technology officer for Airbus CyberSecurity, says this means routers essentially become basic switches, forwarding network traffic in accordance with rules defined by a central controller.
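
To make the idea concrete, the sketch below (plain Python, with invented names such as FlowRule, Switch and Controller) mimics a central controller pushing match/action rules down to forwarding elements. A real deployment would use a protocol such as OpenFlow rather than direct method calls; this is illustrative only.

```python
# Toy sketch of the control/forwarding plane split: the controller decides
# policy, the switches only forward according to the rules they are given.
# All class names and addresses here are invented for illustration.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FlowRule:
    src_subnet: str   # e.g. "10.0.1.0/24"
    dst_subnet: str
    dst_port: int
    action: str       # "forward" or "drop"

@dataclass
class Switch:
    name: str
    rules: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        # A real forwarding element would programme this into its flow tables.
        self.rules.append(rule)

class Controller:
    """Central control plane: computes policy and pushes it to every switch."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, rules):
        for switch in self.switches:
            for rule in rules:
                switch.install(rule)

edge = Switch("edge-01")
core = Switch("core-01")
controller = Controller([edge, core])
controller.push_policy([
    FlowRule("10.0.1.0/24", "10.0.2.0/24", 443, "forward"),  # app tier to data tier over TLS
    FlowRule("0.0.0.0/0", "10.0.2.0/24", 23, "drop"),        # block telnet to the data tier
])
print(edge.rules)
```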

What this means from a monitoring perspective, says Francis, is that packet-by-packet statistics can be sent back to the central controller from each forwarding element. “This can allow an intrusion detection system [IDS] to be applied in every forwarding element,” he says. In effect, every data flow across the network can be analysed.

This can include east-west (server-to-server) and north-south (client-to-server) traffic, which gives far more comprehensive monitoring than would usually be available in a fixed network.
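
The following toy sketch, again with invented names and an arbitrary threshold, shows how per-flow byte counts reported by forwarding elements could be aggregated centrally and classified as east-west or north-south before a simple check flags suspicious flows. A real IDS would apply far richer analysis than a byte threshold.

```python
# Toy sketch: per-flow statistics reported back to a central point, with a
# naive threshold check standing in for an IDS. The server subnet and the
# threshold are assumptions made for this example.
from collections import Counter
from ipaddress import ip_address, ip_network

SERVER_NET = ip_network("10.0.0.0/16")   # assumption: servers live here
BYTES_THRESHOLD = 50_000_000             # arbitrary example threshold

flow_bytes = Counter()

def record_packet(src: str, dst: str, size: int) -> None:
    """Called by each forwarding element for every packet it sees."""
    flow_bytes[(src, dst)] += size

def direction(src: str, dst: str) -> str:
    src_internal = ip_address(src) in SERVER_NET
    dst_internal = ip_address(dst) in SERVER_NET
    return "east-west" if (src_internal and dst_internal) else "north-south"

def flag_anomalies() -> None:
    for (src, dst), total in flow_bytes.items():
        if total > BYTES_THRESHOLD:
            print(f"ALERT {direction(src, dst)} flow {src} -> {dst}: {total} bytes")

record_packet("10.0.1.5", "10.0.2.9", 60_000_000)   # server-to-server burst
record_packet("192.0.2.10", "10.0.3.4", 1_500)      # client-to-server request
flag_anomalies()
```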

Responding to demands

In effect, a software-defined network enables the organisation to respond to the demands of variable workloads using dynamic networking, separating the flow of data from the network routing instructions.

Maxine Holt, a research director at Ovum, says this separation allows security policies to be applied directly to individual workloads.

A final cautionary statement: Basic security hygiene should not be neglected

Employing virtual machine and/or containerisation technologies in combination with virtual LANs, with or without encryption, can only improve the security of an organisation’s IT provided that:

  • Any software and hardware (including network devices) used is within the manufacturer’s support and maintained to the current manufacturer’s software or firmware release level;
  • All security patches are applied in a timely fashion;
  • All systems, services, operating systems and network devices are properly configured (not using default out-of-the-box configurations);
  • The IT environment is subject to a full IT security health check at least annually and whenever a major change is implemented.

Source: Peter Wenham, a member of the BCS Security Community of Expertise

Containerise networked applications

Containers have provided application developers with a way to avoid the large footprint associated with running an operating system in a traditional virtual machine (VM) environment. They also increase the portability of applications.

The container runs on top of the operating system and is configured with the specific libraries and functions required to run a given application.

An application configured to run in a container is said to be containerised. Moreover, applications can also be deconstructed such that different functional components run in several containers and communicate with each other over encrypted links.
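
As a rough illustration, the sketch below uses the Docker SDK for Python (pip install docker) to run two components of a hypothetical order-processing application on a private bridge network. It assumes a local Docker daemon and publicly available images (redis:7-alpine, python:3.12-slim), and leaves the encryption of the link between the components to TLS inside the services themselves, which is not shown.

```python
# Rough sketch using the Docker SDK for Python. Assumes a local Docker daemon;
# image names and the network name are placeholders for this example.
import docker

client = docker.from_env()

# A private, internal-only bridge network so the two components can reach
# each other but not the outside world.
client.networks.create("orders-internal", driver="bridge", internal=True)

db = client.containers.run(
    "redis:7-alpine",            # backing data-store component
    name="orders-db",
    network="orders-internal",
    detach=True,
)

api = client.containers.run(
    "python:3.12-slim",          # stand-in for the containerised API component
    command=["python", "-c", "print('api component would connect to orders-db here')"],
    name="orders-api",
    network="orders-internal",
    detach=True,
)

api.wait()                       # let the example component finish
print(api.logs().decode())
```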

“Containers package the workload in a format that allows security policies to be applied to just a specific workload, as well as supporting the concept of portability,” says Holt. “Furthermore, containers can be encrypted, which can be used for protecting information.”

Francis says: “While the concept of containers was not developed for security and their security features are not generally mature, they can provide isolation between applications as well as between applications and the operating system.

“Containers can also be given permissions to control their execution and be easier to update and roll back. This reduces the attack surface, because the applications only have the services they need and more fine-grained permissions are possible.”
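
The hedged example below, again using the Docker SDK for Python with a placeholder image, shows the kind of fine-grained permissions Francis describes: every capability dropped and only one added back, a read-only filesystem, an unprivileged user and a memory limit.

```python
# Hedged example with the Docker SDK for Python: a container launched with
# most privileges stripped away. The image and flag values are illustrative.
import docker

client = docker.from_env()

hardened = client.containers.run(
    "python:3.12-slim",
    command=["python", "-c", "print('running with a reduced attack surface')"],
    user="65534:65534",                      # run as an unprivileged user (nobody)
    cap_drop=["ALL"],                        # drop every Linux capability...
    cap_add=["NET_BIND_SERVICE"],            # ...then add back only what is needed
    read_only=True,                          # immutable root filesystem
    security_opt=["no-new-privileges:true"], # block privilege escalation
    mem_limit="128m",
    detach=True,
)
hardened.wait()
print(hardened.logs().decode())
```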

Haken says: “Each application is self-contained, meaning rogue code should not be able to access other applications or data on the same physical or virtual machine.”

Since containers are often grouped alongside DevOps in terms of new technology trends, they work well in an agile, fast-changing IT landscape. Rogue code running in a container can quickly be quarantined.

“Should the rogue code attempt to access, exfiltrate or modify data, more traditional controls can be implemented, such as encryption, to help retain the confidentiality and integrity of information assets,” says Haken.

Network isolation and application partitioning

In the face of growing cyber security challenges and external pressures, Mary-Jo de Leeuw, director of cyber security advocacy for Europe, the Middle East and Africa at (ISC)2, says it is now very important to implement measures that deliver aspects of isolation and air-gapping to the IT estate – be that physical or in the form of software abstraction layers.

Haken says that, unlike traditional network security, the idea of a strong front door with everything behind it being secure does not apply in a perimeterless network. This means visibility and administration of internal network infrastructure is crucial.

For Vladimir Jirasek, managing director at Foresight Cyber, the paradigm of zero-trust networks, software-defined datacentres and containerisation delivers an exceptional level of security through automation, asset management, self-healing policies and application partitioning.

But, as with anything in IT and cyber security, he warns that an exceptional technology, operated by untrained and undisciplined people following poorly thought-through and poorly documented processes, is bound to fail.

“For companies to benefit from these advanced technology patterns, they must rethink their processes, eliminating the human element where possible,” says Jirasek. “Rethink security policies by moving more to industry standards rather than bespoke, and, most importantly, train people to use, manage and monitor new technologies.”

Modern containerisation

In containerisation, the host hardware runs a single Linux operating system (OS) that supports LXD extensions, and a containerisation engine is then run on top of this Linux OS.

This engine in turn replicates the Linux OS into multiple containers and each container can then run a Linux application.

Peter Wenham, a member of the BCS Security Community of Expertise and director of information assurance consultancy Trusted Management, says that as with the more traditional hypervisor approach to virtual machines (VMs), it should be possible to interconnect the various containers via configuration of the containerisation engine and the underlying LXD-compliant Linux OS.

“The advantage of running dedicated service VMs and containerised applications is that should one application become compromised by rogue software, the underlying VM or Linux kernel and associated containerisation engine (Linux LXD) should protect the other VMs or containers,” he says.

When combined with the use of virtual local area networks (LANs), multiple networks can be created, effectively segregating data flows and therefore reducing or eliminating the potential for malicious software to gain a hold over the whole network.
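
By way of illustration, the sketch below drives the LXD command line from Python to create two LXD-managed bridges and attach one container to each, mirroring the segregation described above. The image alias and flags assume a reasonably recent LXD installation and may need adjusting.

```python
# Illustrative only: using the LXD CLI from Python to place two containers on
# separate LXD-managed bridges, so rogue code in one segment has no direct
# layer-2 path to the other. Image alias and flags are assumptions.
import subprocess

def lxc(*args: str) -> None:
    subprocess.run(["lxc", *args], check=True)

# Two isolated bridges acting as segregated network segments.
lxc("network", "create", "seg-app")
lxc("network", "create", "seg-data")

# One container per segment.
lxc("launch", "ubuntu:22.04", "app1", "--network", "seg-app")
lxc("launch", "ubuntu:22.04", "db1", "--network", "seg-data")

lxc("exec", "app1", "--", "ip", "addr")   # confirm which segment the container joined
```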

Modern containerisation is capable of providing self-healing features, which means it moves from a preventative control to a corrective one, says Ramsés Gallego, a former board director of Isaca and strategist and evangelist at the office of the chief technology officer at Symantec.

If something bad happens, such as rogue code infecting the container, it can revert to its original state, he says. 
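
A minimal sketch of that corrective behaviour, using the Docker SDK for Python with an assumed known-good image, might look like this: if the running instance stops or degrades, it is destroyed and recreated from the pristine image.

```python
# Minimal "revert to original state" sketch with the Docker SDK for Python.
# The image name and polling interval are illustrative assumptions.
import time
import docker

client = docker.from_env()
IMAGE = "nginx:alpine"        # assumed immutable, known-good image
NAME = "web-frontend"

def launch():
    return client.containers.run(IMAGE, name=NAME, detach=True)

container = launch()
for _ in range(3):                      # bounded loop for the example
    time.sleep(10)
    container.reload()                  # refresh status from the daemon
    if container.status != "running":
        print("container degraded; reverting to original image")
        container.remove(force=True)    # discard the possibly tampered instance
        container = launch()            # corrective control: clean replacement
```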

(ISC)2’s De Leeuw adds: “The success of containerisation stands or falls based on the success of common adoption. In this way, organisations can combine their use of SDN, containerisation and encryption to prevent rogue code from running freely across a corporate network in an undefended state.”

Multitiered network design

Wenham says a multitiered network architecture could be designed with a data storage tier, an application-to-application tier, a front-end user access tier and a demilitarised zone (DMZ) tier.
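
Expressed as data, such a tiered design might look like the short Python sketch below. The allowed flows are invented for illustration; in practice they would be enforced by the SDN controller or firewall policy rather than application code.

```python
# Sketch of the tiered model as data: which tier may talk to which.
# The tier names follow the article; the policy itself is illustrative.
ALLOWED_FLOWS = {
    ("dmz", "front-end"),          # internet-facing traffic terminates in the DMZ
    ("front-end", "application"),  # user access tier calls application services
    ("application", "data"),       # only the application tier reaches the data stores
}

def flow_permitted(src_tier: str, dst_tier: str) -> bool:
    return (src_tier, dst_tier) in ALLOWED_FLOWS

assert flow_permitted("application", "data")
assert not flow_permitted("dmz", "data")      # the DMZ never reaches storage directly
```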

“The addition of encryption to the mix of VMs/containerisation and SDNs can improve the security of a data flow by preventing effective eavesdropping on a data flow,” he says.

Encrypting data stores can prevent the leakage of usable data by malicious software. However, as Wenham points out, encryption is not a silver bullet. “Ransomware that encrypts files will still encrypt a file irrespective of whether it is in plain text or not.”
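
As a hedged illustration of encrypting data before it reaches a store, the sketch below uses the third-party cryptography package (pip install cryptography). Key management is deliberately omitted, and, as Wenham’s caveat makes clear, ransomware could still re-encrypt the resulting ciphertext.

```python
# Hedged example of encrypting a record before it lands in a data store.
# In practice the key would live in a KMS or HSM, never alongside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in reality, fetched from a key store
cipher = Fernet(key)

record = b'{"customer": "ACME", "card_last4": "4242"}'
stored = cipher.encrypt(record)      # what is written to the data store
print(stored)

assert cipher.decrypt(stored) == record   # only key holders recover usable data
```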
