
Why transparency and accountability are important in cyber security

If we accept that the humans who build technology and systems are naturally fallible and that mistakes are inevitable, and then deal with that with good grace, we could do much to improve cyber security standards, writes Bugcrowd's Casey Ellis

Let me introduce you – if you’re not already familiar with them – to two individuals who never met, and whose lifetimes didn’t even overlap: Auguste Kerckhoffs and Claude Shannon.

Auguste Kerckhoffs was a 19th-century, Dutch-born cryptographer. In what became known as Kerckhoffs’ Principle, he asserted that any cryptosystem should be secure even if everything about the system, except the key, is public knowledge. More than 50 years later, the American mathematician Claude Shannon rephrased Kerckhoffs’ Principle into what is known as Shannon’s Maxim, stating that "one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them".

If you want to be particularly succinct, think of it as: “The enemy knows the system”.

Both Kerckhoffs and Shannon were pioneers of “disclosure thinking”. They both understood that any system needed to be constructed under the assumption that it had already been broken. Such thinking is still widely adopted by cryptographers today.
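
To make the principle concrete, the sketch below is a minimal, illustrative example in Python, using the standard library’s HMAC implementation rather than anything specific to the systems discussed here. Everything about it is public – the enemy can read every line – and its security rests solely on the secrecy of the key.

```python
# Minimal sketch of Kerckhoffs' Principle: the algorithm (HMAC-SHA256) is
# public knowledge; the only secret in the system is the key.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)  # the sole secret
message = b"transfer 100 credits to account 42"

# Anyone - the enemy included - can see exactly how the tag is computed...
tag = hmac.new(key, message, hashlib.sha256).digest()

def verify(k: bytes, msg: bytes, t: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(k, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, t)

# ...but without the key, full familiarity with the system does not let an
# attacker forge a valid tag (Shannon's Maxim in action).
print(verify(key, message, tag))                      # True
print(verify(secrets.token_bytes(32), message, tag))  # False
```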

The renowned security technologist Bruce Schneier has suggested that both Kerckhoffs’ Principle and Shannon’s Maxim apply beyond cryptography to security systems in general. Schneier said: “Every secret creates a potential failure point. Secrecy, in other words, is a prime cause of brittleness – and therefore something likely to make a system prone to catastrophic collapse. Conversely, openness provides ductility.”

It’s a view of cyber security that I strongly share. Back in 2020, a question appeared on what was then Twitter: “What is one piece of advice you’d give to people in security?” My answer was: “corporately accepting that the humans build your software and systems, while awesome, are also fallible. the aim of the game is to find them, fix them, learn from them, and repeat.  life is learning :)” [sic].

Why this answer? Because, as with cryptography, we expect little or no variability in the performance of applications and systems as they go about their intended task: they operate according to defined algorithms built on mathematical systems and logic. The humans who create and operate those systems, however, are a different story. To illustrate the point in one of my favourite ways: the keyboard I’m using to type this article is operating perfectly, but I’ve hit backspace or delete a hundred times already.

Kerckhoffs’ Principle summarises how we need to think about the intersection of security and design thinking – that is to say, the point at which attackers, users and the system interface. When human-induced errors are overlaid onto a system designed to do exactly what it’s told, the result is bugs… and, sometimes, vulnerabilities.

I am convinced that humans – whether the contributors themselves, management, the organisation, or the market – are responsible to a significant degree for the current state of security. We neglect to recognise that people are absolutely not machines: we praise the good stuff but decline to address the negative stuff. And we hope – hope! – that, by ignoring the ugly, it will eventually disappear.

This is a mindset that needs to change. To adopt Schneier’s terminology: we will all be less brittle if:

  1. We recognise that mistakes are inevitable; and
  2. When a mistake is identified, we extend grace – provided the mistake is acknowledged, dealt with, and lessons both learned and applied to prevent any repeat of the same failure.

In other words: we adopt the dual policies of transparency and accountability.

Transparency and accountability lead to increased trust and improved reputations

When I say “we”, I mean “we”. At Bugcrowd we lead by example, and the values of transparency and accountability are baked into everything we do; they are part of our corporate DNA. Trust and accuracy are vital, so our security practices are an open book, forging trust with our partners and the public. When our researchers uncover a vulnerability, we aim to ensure it is fixed by the developer or publisher so that the same flaw does not continue to exist elsewhere.

Organisations that work with Bugcrowd to implement bug bounty or vulnerability disclosure programmes demonstrate the same transparency-and-accountability mindset. Their starting point is to assume that mistakes may have been made. They accept accountability for uncovering any resulting vulnerabilities and take transparent steps to address the possibility. That is much cheaper and simpler than finding themselves accountable for the consequences of an exploited vulnerability.

But transparency and accountability are not, of course, outcomes; they describe how to do things. The outcomes of doing things in a transparent and accountable way – and another reason they are important in cyber security – are increased trust and improved reputations. For example, we see that organisations are increasingly engaging with Bugcrowd to proactively safeguard their brand and intellectual property, and communicating to their own customers the measures they are taking. Their objective is to grow public confidence by positioning themselves as fearless defenders of their customers’ safety.

In another example, Bugcrowd was recently involved in an event organised by the Election Security Research Forum (ESRF), as part of that organisation’s aim “to help shape a clear and concerted approach to establishing processes by which security researchers and US election technology providers can work together under principles of Coordinated Vulnerability Disclosure (CVD) to enhance the security of elections technology and increase overall confidence in US elections.”

To me, the value of that event was something more important than “we tested election equipment”. To me, the key value was that, in the context of a highly emotive and important subject, everyone involved promoted transparency: we tested very publicly, jointly, and declared that we were doing so. The result of such an activity must, inevitably, be increased trust in the entirety of the process. (And, had any vulnerability been found, it would have been declared and addressed; accountability and transparency in action.)

The accountability and transparency mindset is certainly gaining traction across cyber security. But it's worth acknowledging that it can be difficult to prove, to the C-suite, the return on an investment intended to prevent something from occurring: if nothing ever happens, who is to know whether it would have happened anyway? Such an argument quickly runs out of steam once the C-suite is reminded to focus on outcomes rather than process: most organisations invest heavily across their entire operation to cultivate trust and reputation, precisely because both are such highly prized attributes.

Highly prized in peacetime, at least – but both Kerckhoffs and Shannon had rather different agendas in mind when they came up with their Principle and Maxim respectively. I like to believe they’d be delighted to see their critical thinking applied to more positive effect, so many years later, in a very different world.

Casey Ellis is founder and chief strategy officer at Bugcrowd
