In my last blog, I highlighted the ongoing debate within IT security around zero trust or, to give it its full title, Zero Trust Network Access (ZTNA), and noted that – in theory, at least – it is all or nothing, like a basic firewall rule: allow all or deny all.
In the real user world, obviously, that is somewhat, er, limiting – kind of like babysitting toddlers, who have the ultimate “deny all” mechanism in the form of the word “shan’t”. I also mentioned that I would be referencing vendors who are actually trying to make some sense of ZTNA, and one example is Cato Networks, which has just (paint still drying) added what it defines as “device context” to its SASE offering. The idea is straightforward and logical: every user is different and carries a different risk profile. Device context therefore allows SecOps to set policies that factor in a user’s full context when granting access to data and applications (the zero-trust element) and – this is the clever bit – to the actual capabilities within an application.
First and foremost, user devices are often seen as the easiest “back doors” for a cyber-attack. After all, why would a non-techy user even want to understand the mechanics of cyber security? It impacts not only their day-to-day work but their lunch break too – “Back Door Device Access For Dummies”, anyone? Yet the device is both the obvious point to secure from a ZTNA perspective AND the point at which application access needs to be properly defined and controlled. Access based purely on user ID is a 90s concept – has no one heard of identity spoofing, or device spoofing?
Or – and this is a classic – devices that are not configured to IT-defined standards. Hello, back door…
This refocusing on what is effectively risk-based application access control is a far more realistic approach to locking down network access. After all, if we don’t allow for flexibility within a modern IT infrastructure, in order to maximise the access and application options that now exist, we might as well revert to old-school mainframe methodology. Secure? Yes. Limited? Very. Cato’s view is that defined policies will allow companies to embrace the full user context, adding control not simply over application access, but over which capabilities and features within those applications can be used. This, for me, is where it is taking a true next step. Of course, policy control in networking has been around – and largely ignored – for decades, but taking it to the intra-application level makes far more sense than a basic allow-deny strategy. And, naturally, the concept extends to wherever those users wander – internal, Internet, cloud…
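To make the distinction concrete, here is a minimal sketch of what capability-level policy looks like compared with plain allow/deny. All of the names here (UserContext, POLICY, allowed_capabilities, the “crm” app) are my own illustrative inventions, not Cato’s API – the point is simply that a policy can return a *set of in-app capabilities* based on user and device context, rather than a single yes/no.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    identity: str
    device_managed: bool    # company-provided endpoint vs BYOD
    device_compliant: bool  # meets IT-defined configuration standards

# Hypothetical policy table: each rule maps a user context to the
# capabilities permitted *within* an application, not just to the
# application itself (contrast with a firewall-style allow/deny).
POLICY = {
    "crm": lambda ctx: (
        {"read", "write", "export"} if ctx.device_managed and ctx.device_compliant
        else {"read"} if ctx.device_compliant   # compliant BYOD: view only
        else set()                              # non-compliant device: deny all
    ),
}

def allowed_capabilities(app: str, ctx: UserContext) -> set:
    """Return the in-app capabilities this user context is entitled to."""
    rule = POLICY.get(app)
    return rule(ctx) if rule else set()

byod = UserContext("alice", device_managed=False, device_compliant=True)
print(allowed_capabilities("crm", byod))  # {'read'} – can view, but not export
```

The same user on a company-provided, compliant laptop would get the full capability set; the old-school alternative would have been to deny the BYOD session outright.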
So, how does this work in practice? Cato is embedding continuous device context assessment throughout its software stack, meaning it will continuously assess the posture of a user’s device, automatically taking remedial action if that device becomes non-compliant. As part of this overall assessment, the platform already analyses the fundamentals – identity, the network, data and so on – so it’s no one-trick pony. It also means the user’s controls are not tied to a specific device; when using their own device, for example, they might have very different permissions from those on a company-provided endpoint. This adds extra flexibility to the work-from-office and/or work-from-home scenario too, especially since it also covers geo-location: if the user is trying to access the network from an untrusted location, it can simply block access.
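The continuous-assessment idea can be sketched as a simple loop: posture is re-evaluated on an ongoing basis, not just once at login, and the outcome (full access, limited access, quarantine, block) falls out of the current device state. Again, every name below – assess, monitor, the posture fields, the trusted-country list – is a hypothetical illustration of the concept, not Cato’s implementation.

```python
import time

TRUSTED_COUNTRIES = {"GB", "DE", "NL"}  # assumption: a geo allow-list stands in for "trusted location"

def assess(device: dict) -> str:
    """Re-evaluate one device's current posture and return the resulting access level."""
    if device["country"] not in TRUSTED_COUNTRIES:
        return "block"        # untrusted location: simply block access
    if not device["disk_encrypted"] or device["av_outdated"]:
        return "quarantine"   # became non-compliant: automatic remedial action
    # Compliant BYOD still gets a reduced permission set vs a company endpoint
    return "full" if device["company_owned"] else "limited"

def monitor(devices: list, interval: int = 30) -> None:
    """Continuous loop – the key point is that posture is re-checked, not assessed once."""
    while True:
        for d in devices:
            d["access"] = assess(d)
        time.sleep(interval)

laptop = {"country": "GB", "disk_encrypted": True, "av_outdated": False, "company_owned": True}
phone = {"country": "GB", "disk_encrypted": False, "av_outdated": False, "company_owned": False}
print(assess(laptop), assess(phone))  # full quarantine
```

If the laptop’s antivirus lapses or the user roams to an untrusted country between checks, the next pass of the loop downgrades its access automatically – which is the “continuous” part of continuous device context assessment.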
From a productivity perspective, it means the “secure” versus “let the user do their job” conundrum is being addressed in two ways. From a ZTNA perspective, a user can be restricted to specific, trusted resources. From a broader perspective, such as CASB – which we covered in a recent blog – the ability to use device context at an application level means users can work from anywhere, on any application located anywhere, and the controls still apply. And it’s totally scalable. Nice 😊