
Trusty methods to keep out intruders

In today’s hyper-connected world, trust can no longer be taken for granted as a way to keep networks secure – enter the zero-trust model

In the past, IT security focused on protecting the corporate network by securing the perimeter. If the perimeter was secure, then only legitimate users, such as employees, had access to the services and applications available within it.

After all, these people were vetted when they joined the organisation; they are responsible individuals and would never deliberately reveal their passwords, access data that is supposed to be off limits or steal company secrets. But the hyper-connected world of today means trust can no longer be taken for granted.

According to Tim Holmes, CEO at 2-sec security consultancy, zero trust sums up what should and should not be done when securing IT. “Trusting that all your users, bring-your-own devices and systems on the network won’t hack you is a mantra that is still being played back to me today by companies that should know better, with more than enough budget, resource and common sense to sort this all out,” he says.

Holmes recommends that IT security professionals consider zero trust as a means to achieve security that is designed to be infallible. The whole point of a penetration test, he says, is to gain unauthorised access to systems, so part of the test will usually involve spoofing a system and/or pretending to be someone else. This tests what is exposed once an attacker appears to have authorised access. “If I can be that system or that person, then I’m trusted to do what that system or person is expected to do,” says Holmes.

Networks should be built on the principle that all machines are publicly exposed on the internet, he says. “You’ll soon get the idea of what needs doing. Intra-system communication should be encrypted, just as your connection to [Microsoft] Office 365 or other cloud services is encrypted. That would stop me dead in my tracks, as network sniffing would just spew out garbage,” adds Holmes.
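To make the idea concrete, the sketch below shows what an encrypted, mutually authenticated internal service call might look like in Python. It is an illustration only: the certificate paths and the `inventory.internal.example` hostname are hypothetical placeholders, not part of any system Holmes describes.

```python
# Minimal sketch: calling an internal service over mutual TLS.
# All hostnames and file paths below are hypothetical placeholders.
import requests

INTERNAL_CA = "/etc/pki/internal-ca.pem"   # CA that signed the service cert
CLIENT_CERT = ("/etc/pki/client.crt",      # this workload's certificate
               "/etc/pki/client.key")      # and its private key

# The server certificate is verified against the internal CA, and the
# client presents its own certificate, so both ends are authenticated
# and the traffic is encrypted - a network sniffer sees only ciphertext.
resp = requests.get(
    "https://inventory.internal.example/api/stock",
    verify=INTERNAL_CA,
    cert=CLIENT_CERT,
    timeout=5,
)
resp.raise_for_status()
print(resp.json())
```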

While a hacker or penetration tester may still try to spoof people, applications and systems, if multifactor authentication (MFA) is enabled on internal systems then, he says, such attacks have no chance of succeeding unless the MFA key fobs or public key infrastructure (PKI) are somehow compromised.
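As a rough illustration of the server-side check an MFA fob or authenticator app implies, consider this sketch using the open source pyotp library to verify a time-based one-time password (TOTP). The enrolment and secure storage of the per-user secret are assumed, not shown.

```python
# Sketch of server-side verification of a time-based one-time password
# (TOTP), the mechanism behind many MFA apps and key fobs.
# Requires the pyotp package; the secret is a stored per-user value.
import pyotp

def verify_second_factor(user_totp_secret: str, submitted_code: str) -> bool:
    """Return True only if the submitted six-digit code matches the
    user's TOTP secret for the current time window (+/- one step)."""
    totp = pyotp.TOTP(user_totp_secret)
    return totp.verify(submitted_code, valid_window=1)

# Example: a spoofed login that lacks the second factor fails here,
# even if the password (the first factor) has been stolen.
secret = pyotp.random_base32()   # in practice, provisioned at enrolment
print(verify_second_factor(secret, "123456"))  # almost certainly False
```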

Zero trust requires strict verification of all systems and people interacting with the ecosystem. One of the pillars of this approach is the concept of “least privilege”. Cyber security industry veteran Eoin Keary says this means IT security is implemented so that users and systems are granted access only to the resources they actually need.

“The result of this approach is segmentation of the network and systems within the perimeter – also called micro-segmentation – whereby the network is broken into zones, each of which requires authorisation to access and utilise,” he says. “It is fundamentally a ‘trust but verify’ approach to security.”
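A minimal sketch of the default-deny logic behind the micro-segmentation Keary describes might look like the following; the zone and service names are invented for illustration.

```python
# Sketch of a default-deny, zone-based access check, illustrating
# micro-segmentation: every flow must match an explicit rule.
# Zone and service names are hypothetical.
ALLOWED_FLOWS = {
    ("web-zone", "app-zone", "orders-api"),
    ("app-zone", "db-zone", "orders-db"),
}

def is_flow_allowed(src_zone: str, dst_zone: str, service: str) -> bool:
    """Least privilege: deny unless an explicit rule grants the flow."""
    return (src_zone, dst_zone, service) in ALLOWED_FLOWS

print(is_flow_allowed("web-zone", "app-zone", "orders-api"))  # True
print(is_flow_allowed("web-zone", "db-zone", "orders-db"))    # False: no direct path
```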

Among the technologies IT buyers are likely to come across as they start to assess zero trust is privileged access management (PAM). Keary says this refers to a class of products that help secure, control, manage and monitor privileged access to critical assets, and it is a core component of the zero-trust model. But he warns: “Least privilege is only the starting point of zero trust. Many systems and architectures have not taken into account least privilege or PAM, and to retrofit such a model would be a significant project.”

This has led to a slower take-up of zero trust by firms trying to build it into existing IT infrastructure. Keary says organisations have most success when they start with a clean sheet on a greenfield site.

Jason Revill, who leads the security practice for UK and Ireland at Avanade, says: “With all of my UK clients, I don’t see anyone raising zero trust as a priority.” In Revill’s experience, many organisations have flat networks that are complex to segment.

Historically, IT infrastructure was built up from the deployment of peer-to-peer (P2P) and distributed systems, which were simply not architected to meet the micro-segmentation requirements of zero trust. P2P models include Windows operating systems and wireless mesh networks. “P2P breaks the zero-trust model as systems communicate in a decentralised manner, which breaks the micro-segmentation model,” says Keary. Peer-to-peer systems also share data with little or no verification, which means they break the least privilege model. “If your architecture or processes support shared access accounts, implementing zero trust would be difficult,” warns Keary.

Such legacy infrastructure leads to technical debt and, as Revill points out: “The business case to change to a zero-trust model is very expensive.” Rather than re-engineering the legacy infrastructure to provide zero trust, Revill urges IT decision-makers to consider deploying applications in the cloud to improve security. “By being in the cloud, the network is segmented,” he says.

Revill says single sign-on using Azure Active Directory can then be used to control access to these cloud applications. Although there has not been much customer demand for zero trust, Revill says: “Customers who move to Azure Active Directory for single sign-on achieve zero trust. They are able to control access to resources on trusted and managed devices.”
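For example, a back-end service authenticating through Azure Active Directory with Microsoft’s MSAL library for Python might look like the sketch below. The tenant ID, client ID, secret and scope are placeholders, and this is one illustrative pattern rather than Avanade’s own implementation.

```python
# Sketch: a service authenticating to Azure Active Directory with MSAL
# before calling a protected API. The tenant ID, client ID and client
# secret shown here are hypothetical placeholders.
import msal

app = msal.ConfidentialClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)

# Every token request is evaluated centrally by the directory, where
# conditional access policies (device state, location, MFA) can apply.
result = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"]
)

if "access_token" in result:
    print("token acquired; attach as a Bearer header to API calls")
else:
    print("denied:", result.get("error_description"))
```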

Clearly, this is easy if firms use software-as-a-service (SaaS) applications from Microsoft or applications that integrate well with Azure Active Directory. For example, Avanade’s own staff use cloud applications such as Power BI and Office 365, which are integrated via Azure Active Directory. But, says Revill, it is far harder to re-engineer an application that uses the lightweight directory access protocol (LDAP).

In an article on the Microsoft website describing the company’s zero-trust journey, Microsoft says it started by implementing two-factor authentication (2FA) via smartcards for all users accessing the corporate network remotely. This evolved to phone-based 2FA and, later, the Azure Authenticator 2FA app. In the article, Microsoft states its ambition to deploy biometrics for user authentication: “As we move forward, the largest and most strategic effort presently under way is eliminating passwords in favour of biometric authentication through services like Windows Hello for Business.”

The next phase of Microsoft’s own deployment involved device enrolment. “We started by requiring that devices be managed (enrolled in device management via cloud management or classic on-premises management),” it explains. “Next, we required devices to be healthy in order to access major productivity applications such as Exchange, SharePoint and Teams.”

Like many organisations, Microsoft regards its transition to zero trust as very much a multi-step journey. It says it is working to make the primary services and applications that users require reachable from the internet. This means transitioning from legacy corporate network access to internet-first access, with virtual private networks (VPNs) used only where needed. Ultimately, it wants to reduce its reliance on VPNs, which it says will remove the need for users to access the corporate network in most scenarios.

Recognising there will be instances of contractors, suppliers or guest users requiring access from unmanaged devices, Microsoft plans to establish a set of managed virtualised services that make applications or a full Windows desktop environment available.

Google’s BeyondCorp

Last year, the Google security blog provided an update to the company’s five-year roll-out of BeyondCorp, its zero-trust security model. In the update, Google programme manager Lior Tishbi, product manager Puneet Goel and engineering manager Justin McWilliams write: “Our mission was to have every Google employee work successfully from untrusted networks on a variety of devices without using a client-side VPN.”

The trio outlined three core principles that make up BeyondCorp. First, the network does not determine which services a user has access to. Second, access to services is granted based on what the infrastructure knows about the user and their device. And third, in BeyondCorp, all access to services must be authenticated, authorised and encrypted for every request.
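Those three principles could be sketched as a single per-request decision function, as below. The attribute and group names are illustrative assumptions, not BeyondCorp’s actual schema.

```python
# Sketch of a per-request access decision in the spirit of BeyondCorp's
# three principles. All attribute names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class User:
    authenticated: bool
    groups: set

@dataclass
class Device:
    managed: bool
    disk_encrypted: bool
    patched: bool

def authorise(user: User, device: Device, service: str) -> bool:
    # Principle 1: the source network is never consulted.
    # Principle 2: the decision rests on user and device data alone.
    # Principle 3: this check runs for every single request, which is
    # assumed to arrive over an authenticated, encrypted channel.
    if not user.authenticated:
        return False
    device_trusted = device.managed and device.disk_encrypted and device.patched
    return device_trusted and f"access-{service}" in user.groups

alice = User(authenticated=True, groups={"access-wiki", "access-crm"})
laptop = Device(managed=True, disk_encrypted=True, patched=True)
print(authorise(alice, laptop, "wiki"))     # True
print(authorise(alice, laptop, "payroll"))  # False: no entitlement
```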

Five years on, the blog post recognises the importance of executive buy-in when implementing any zero-trust initiative. In it, Tishbi, Goel and McWilliams also stress the importance of accurate data. “Access decisions depend on the quality of your input data,” they write. “More specifically, it depends on trust analysis, which requires a combination of employee and device data. If this data is unreliable, the result will be incorrect access decisions, suboptimal user experiences and, in the worst case, an increase in system vulnerability, so the stakes are definitely high.”

Conditional access

The experts Computer Weekly spoke to believe that making zero trust integrate seamlessly into the way users work is vital to success. One approach to improving the zero-trust user experience is conditional access, which Avanade’s Revill says can make security visible only when it is needed.
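A simplified sketch of that idea follows: a conditional access rule that stays invisible in familiar contexts and steps up only when the signals look risky. The signals shown are assumptions for illustration, not any vendor’s policy schema.

```python
# Sketch of conditional access: security surfaces only when the context
# is risky, keeping the experience seamless otherwise.
def required_action(known_device: bool, usual_location: bool,
                    sensitive_app: bool) -> str:
    """Decide what, if anything, to demand of the user for this sign-in."""
    if not known_device:
        return "block"         # unmanaged device: deny outright
    if sensitive_app or not usual_location:
        return "require_mfa"   # step up only when the context warrants it
    return "allow"             # familiar context: no extra friction

print(required_action(known_device=True, usual_location=True,
                      sensitive_app=False))   # allow
print(required_action(known_device=True, usual_location=False,
                      sensitive_app=False))   # require_mfa
```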

But a centrally managed zero-trust framework goes against some of the flexibility and agility that firms are striving to achieve through digital transformation, according to Keary. “Implementing zero trust in a DevOps environment needs additional technology, and impacts the processes used to segment and enforce this paradigm, given that such an environment is very dynamic,” he says. “Applying zero trust in a DevOps environment without some form of automation and removing the manual aspect would simply not be scalable and would slow down pipeline throughput dramatically.”
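One way to picture the automation Keary calls for is a pipeline step that derives segmentation rules from a declared service manifest, so the rules are regenerated on every deployment rather than maintained by hand. The manifest and service names below are hypothetical.

```python
# Sketch: segmentation rules generated in a CI/CD pipeline from a
# declared service manifest. Anything undeclared stays denied by default.
MANIFEST = {
    "checkout":  {"talks_to": ["payments", "inventory"]},
    "payments":  {"talks_to": ["ledger"]},
    "inventory": {"talks_to": []},
    "ledger":    {"talks_to": []},
}

def generate_allow_rules(manifest: dict) -> list[tuple[str, str]]:
    """Emit explicit allow rules for declared dependencies only."""
    return [(src, dst)
            for src, spec in manifest.items()
            for dst in spec["talks_to"]]

for src, dst in generate_allow_rules(MANIFEST):
    print(f"allow {src} -> {dst}")
```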
