
Security Think Tank: Responsible vulnerability disclosure is a joint effort

By working hand-in-hand, developers and security researchers can both play a vital role in ensuring newly discovered vulnerabilities are addressed appropriately, writes Paddy Francis of Airbus CyberSecurity

We all know the importance of identifying and managing vulnerabilities in our systems, as well as patching them as soon as we can, taking into account the need to test critical system patches before full deployment.

However, the generation of patches and the prioritisation of vulnerabilities to be addressed are underpinned by responsible disclosure and management of those vulnerabilities, including the provision of information about each vulnerability.

Vulnerability researchers are an important part of this ecosystem, and software developers should encourage and reward disclosure. Most will therefore have a clear published vulnerability disclosure process, as set out in ISO/IEC 29147:2018, to be used by vulnerability researchers and others who identify vulnerabilities and report them to the developer.

Developers, on their side, should always acknowledge the contact quickly and tell vulnerability researchers how and in what timescale they will address the report – ultimately giving them confidence that the problem will be addressed. Developers also have a responsibility to develop and distribute a patch that eliminates the vulnerability in a timely manner (typically within 90 days).

Consequently, the published process should set out the timescales within which a report will be acknowledged and addressed, as well as information on any incentive for the reporting vulnerability researcher.

Moreover, the software developer’s reporting process should be online and include a dedicated email address for reporting, along with a mechanism for encrypting the report (typically a PGP public key or equivalent).
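As an illustration, the sketch below shows how a researcher might encrypt a report to a vendor's published PGP key before sending it to the dedicated reporting address. It assumes the python-gnupg package and a local GnuPG installation; the key block, report text and email address are placeholders, not a real vendor's details.

# Sketch: encrypting a vulnerability report to a vendor's published PGP key.
# Assumes the python-gnupg package (pip install python-gnupg) and a local
# GnuPG installation; the key block and addresses below are placeholders.
import gnupg

VENDOR_PUBLIC_KEY = """-----BEGIN PGP PUBLIC KEY BLOCK-----
... (paste the vendor's published key here) ...
-----END PGP PUBLIC KEY BLOCK-----"""

report = (
    "Product: ExampleApp 2.3.1 (hypothetical)\n"
    "Summary: authentication bypass in the login endpoint\n"
    "Steps to reproduce: ...\n"
)

gpg = gnupg.GPG()                              # uses the default GnuPG home directory
imported = gpg.import_keys(VENDOR_PUBLIC_KEY)  # import the vendor's published key
encrypted = gpg.encrypt(
    report,
    imported.fingerprints,   # encrypt to the imported key(s)
    always_trust=True,       # the key comes from the vendor's published policy
)

if encrypted.ok:
    # The ASCII-armoured ciphertext can be emailed to the dedicated
    # reporting address, e.g. security@vendor.example (placeholder).
    print(str(encrypted))
else:
    print("Encryption failed:", encrypted.status)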

Likewise, those reporting the vulnerability must act responsibly and not publicly disclose the vulnerability until the developer has been able to develop a patch. In most cases, if the developer has not responded and/or produced a patch within a reasonable time, the vulnerability researcher may choose to publish, but should still act responsibly and maintain a dialogue with the developer.

Nevertheless, in some jurisdictions there are legal considerations when it comes to disclosing vulnerabilities, and following the developer's disclosure process can help protect the vulnerability researcher.

In the case of large companies with a history of updating their software promptly, a delay usually has a good reason behind it – and is a reminder that immediate disclosure may not be the best course of action. Publicly disclosing a vulnerability while no patch is available is a big step for a vulnerability researcher to take, and they should at least make their intent clear and give the developer a last chance to respond before disclosing.

However, if a developer is clearly dragging their feet and there is little prospect of a patch being deployed, limited disclosure may be justified. After all, if one researcher can find a vulnerability, it is only a matter of time before a malicious actor discovers and exploits it without warning. While public disclosure will allow attackers to generate exploits for the vulnerability, at least users of the software will be aware of the risk and may be able to develop mitigations. 

In some cases, typically with larger tech companies, the developer and vulnerability researcher will be part of the same organisation, but the basic process should be the same. The incentive to act, however, may not be as strong.

As part of the disclosure and patching process, a Common Vulnerabilities and Exposures (CVE) record will be produced, typically initiated by the vulnerability researcher. The information contained in the CVE record is an important part of managing vulnerabilities on a system.

Vulnerability management systems that scan for and report vulnerabilities rely on CVE information to detect missing patches and report the severity of an extant vulnerability. Also, where extensive testing of a patch is required, information on the vulnerability can often be used to mitigate the risk of exploitation through the use of firewall or intrusion detection system rules while the patch is tested. 
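To illustrate how such tooling consumes CVE data, the sketch below queries the public NVD API for a CVE record and extracts its CVSS base score and severity. The CVE identifier is only an example, and the field names reflect the NVD 2.0 JSON schema, so they should be verified against the current NVD documentation.

# Sketch: looking up the severity of a published CVE via the public NVD API.
# Field names follow the NVD 2.0 JSON schema and should be double-checked
# against the current documentation before relying on them.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cve_severity(cve_id):
    """Return (base_score, severity) for a published CVE, or None if not found."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return None
    metrics = vulns[0]["cve"].get("metrics", {})
    # Prefer CVSS v3.x scores where they are present, falling back to v2.
    for key in ("cvssMetricV31", "cvssMetricV30", "cvssMetricV2"):
        if key in metrics:
            entry = metrics[key][0]
            data = entry["cvssData"]
            severity = data.get("baseSeverity", entry.get("baseSeverity", "UNKNOWN"))
            return data["baseScore"], severity
    return None

if __name__ == "__main__":
    # Example lookup; any published CVE identifier would work here.
    print(cve_severity("CVE-2021-44228"))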

The creation of CVEs is an important part of this process, particularly for critical vulnerabilities. While the CVE is in itself a disclosure of the vulnerability, we need to remember that issuing a patch allows an attacker to reverse engineer the patch and identify both the code being replaced and the vulnerability being patched. This can be done in a matter of minutes, and an exploit can sometimes be developed within hours.

Therefore, if critical vulnerabilities are patched as part of a routine software update without a CVE being issued, users will be unaware of the risk and unable to mitigate it while the patch is being tested for their environment. Also, once a patch has been issued, vulnerability researchers may feel they are able to publicise or demonstrate exploitation of the vulnerability to boost their profile. 

Ultimately, the vulnerability disclosure process cannot be legally enforced and is based purely on trust in people to do the right thing, incentivised by mutual benefit and the need to avoid the inevitable publicity when things go wrong. On the whole, responsible disclosure works pretty well but, as with everything in life, there is always room for improvement on all sides.
