Security concerns continue to hinder organisations in adopting cloud computing, at least for mission-critical or sensitive data applications. Worries about sensitive data sitting on infrastructure shared with competitors linger, but the power of cloud computing is now being put forward as an effective way of dealing with increasingly dynamic and advanced threats.
Some security suppliers are even looking at cloud computing to give them the competitive edge in detecting and mitigating previously unknown threats in near real time.
So can cloud computing tackle new and emerging cyber threats, or is this just a new round of security industry marketing hype?
For quite some time, security researchers have been saying signature-based technologies can no longer cope with the latest threats. Because attacks are updated so frequently, by the time something is recognised as a threat, a new variant has been released, rendering signature-based security systems impotent.
Research by security firm Imperva has shown that fewer than 5% of 40 top anti-virus systems were able to detect previously uncatalogued viruses on an initial scan.
The research, which used more than 80 previously uncatalogued viruses, also showed many systems took a month or longer after the initial scan to update their signatures.
“Enterprise security has drawn an imaginary line with its antivirus solutions, but the reality is that every single newly created virus may subvert these solutions,” said Amichai Shulman, CTO, Imperva.
“We do not believe enterprises are achieving the value of the investment of billions of dollars in anti-virus solutions, especially when certain freeware solutions in our study outperformed paid solutions,” Shulman said.
In the light of this and other similar studies, those at the forefront of security research agree the time has come for a different approach. Organisations need to detect new threats quickly and mitigate them before too much damage is done, but is cloud computing the answer?
At the very least, security firm Webroot believes cloud computing is key to the future of defences against malware.
Only by using cloud infrastructure is it possible to scan, analyse and compare unknown software with a variety of malware databases, according to George Anderson, Webroot’s senior enterprise product marketing manager.
Rather than put a comprehensive malware signature file on each endpoint, malware intelligence and assessments are conducted in Webroot’s cloud environment.
Because the client does not have to receive and process signature files, it has a much smaller footprint than traditional software clients.
A cloud-based approach, Webroot claims, removes the need for continual updates of the software client, and brings faster scans, low impact on system resources and improved effectiveness.
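The thin-client model described above can be sketched in outline. This is purely illustrative: Webroot's actual protocol is not public, and the in-memory dictionary below is a hypothetical stand-in for a cloud reputation service that would really be queried over HTTPS.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-in for the cloud reputation database; a real
# endpoint agent would query a remote service, not a local dict.
CLOUD_REPUTATION = {
    sha256_of(b"known good program"): "good",
    sha256_of(b"known malware sample"): "malicious",
}

def cloud_verdict(file_bytes: bytes) -> str:
    """Return 'good', 'malicious' or 'unknown' for a file.

    Only the hash leaves the endpoint, so no signature files need
    to be downloaded, stored or processed locally.
    """
    return CLOUD_REPUTATION.get(sha256_of(file_bytes), "unknown")
```

The footprint saving comes from the lookup direction: the endpoint sends a small hash out, rather than pulling a large signature database in.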
Webroot backs up the low-performance-impact claim with benchmark tests by PassMark Software, in which the security supplier scored 78 out of 80 (97.5%), compared with the 55 out of 80 (69%) scored by its closest competitor.
According to Forrester Research, the move to using a cloud-based intelligence database to deliver real-time threat protection is an established trend, with most of the major security players making investments in this area.
Security firms have realised that, by leveraging their install base, they can collect information about file behaviour and start to make trust-based decisions.
This encompasses the simple white- and black-listing of files, yet steps beyond this, allowing users to define their own level of risk tolerance for unknown files, said Andrew Rose, principal analyst in security and risk at Forrester Research.
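The trust-based decision Rose describes could, in outline, look like the sketch below. The function and threshold names are invented for illustration; no vendor's actual scoring logic is shown here.

```python
def decide(verdict: str, trust_score: float, risk_tolerance: float) -> str:
    """Map a cloud verdict plus a trust score in [0, 1] to an action.

    'risk_tolerance' is the user-defined threshold Rose mentions:
    a cautious organisation sets it high, so more unknown files
    are held back for monitoring instead of being allowed to run.
    """
    if verdict == "good":        # simple whitelisting
        return "allow"
    if verdict == "malicious":   # simple blacklisting
        return "block"
    # Unknown file: apply the organisation's own risk tolerance.
    return "allow" if trust_score >= risk_tolerance else "monitor"
```

The step beyond plain white- and black-listing is the final branch: unknown files are neither automatically trusted nor automatically blocked, but judged against a threshold the customer controls.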
However, he said, although the cloud-based solution has many benefits, he has some concerns.
“Relying entirely on cloud leaves the endpoint to fend for itself when it is offline. Although sandboxing may offer some assistance, I would be seeking assurances that the local security agent would be sufficiently resilient and flexible to enable sophisticated functionality and ensure protection in an operating system built for collaboration, rather than segmentation,” said Rose.
Similarly, he said, the level of protection relates directly to the strength of the provider's intelligence network, an area where established players such as Symantec and McAfee have a significant advantage, with billions of existing file trust records and a growth rate of tens of millions each week.
“Although a cloud-based solution has lots of value, I am still drawn to the hybrid approach, where expansive cloud intelligence networks are supplemented with local behavioural analysis of files, local file activity restrictions and resilient local sandboxing,” said Rose.
This is where Webroot seeks to differentiate itself from traditional signature-based systems as well as other security firms that have seen the potential of cloud-based security intelligence.
Webroot’s systems focus on the behaviour of files that try to execute on a system, regardless of whether Webroot has seen that file previously and has a cloud-based signature for it.
Any unknown file is monitored and its behaviour recorded as it tries to execute, said Webroot’s George Anderson.
“Once it is deemed malicious, it is placed in a sandbox on the client for isolated execution and deeper behaviour analysis, while any actions the file may have taken are automatically rolled back to return the system to the last known good state, reversing only the changes that the suspicious file made,” he said. This means that even while unknown malware is active, systems are protected.
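Rollback of the kind Anderson describes implies journaling every change an unknown file makes so those changes, and only those changes, can be reversed later. The class below is a minimal sketch of such a journal, with system state reduced to a dictionary; Webroot's real implementation is not public.

```python
class ChangeJournal:
    """Record changes made by a monitored program so they can be
    reversed if the program is later judged malicious."""

    def __init__(self):
        self._log = []  # (key, previous_value) pairs, oldest first

    def record(self, state: dict, key: str, new_value):
        # Save the prior value (None here means the key did not exist).
        self._log.append((key, state.get(key)))
        state[key] = new_value

    def rollback(self, state: dict):
        # Undo newest-first to restore the last known good state,
        # touching only the entries the suspicious program changed.
        for key, previous in reversed(self._log):
            if previous is None:
                state.pop(key, None)
            else:
                state[key] = previous
        self._log.clear()

# Usage: monitor an unknown program's writes, then reverse them.
system = {"registry/run": "explorer.exe"}
journal = ChangeJournal()
journal.record(system, "registry/run", "malware.exe")    # modification
journal.record(system, "files/dropper.dll", b"\x90")     # new file
journal.rollback(system)
# system is back to its pre-infection contents
```

The key design point is selectivity: rollback reverses only the journaled changes, so legitimate work done elsewhere on the system while the malware was active is untouched.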
Webroot seeks to address the concern about protection while offline by using offline heuristics tuned to the endpoint’s pre-offline software profile to identify and block threatening behaviours from a new software program introduced while the device is offline.
The Webroot client also records changes to files, registry keys and memory locations associated with new software introduced while the device is offline. This process is beneficial if the heuristics did not trigger blocking but the new software is, in reality, malware.
Once the endpoint is back online, a threat assessment is conducted in the Webroot cloud. If the program is determined to be malware, the malicious file is removed and Webroot returns the endpoint back to its last known good state. However, this is possible only with some behavioural analysis capability.
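Taken together, the offline flow Anderson describes amounts to: block on a heuristic match, otherwise journal the new program and re-assess it in the cloud once connectivity returns. A hypothetical sketch, with invented heuristic names standing in for the tuned, profile-specific rules the article mentions:

```python
# Assumed heuristic triggers; real offline heuristics would be tuned
# to the endpoint's pre-offline software profile.
SUSPICIOUS = {"keylogging", "self-replication", "disable-av"}

def handle_offline_program(behaviours: set, journal: list) -> str:
    """Offline path: block on a heuristic hit; otherwise let the
    program run but journal its behaviours for later assessment."""
    if behaviours & SUSPICIOUS:
        return "blocked"
    journal.append(behaviours)  # changes recorded for possible rollback
    return "monitored"

def reassess_when_online(journal: list, cloud_is_malware) -> list:
    """Back online: ask the cloud about each journaled program.
    A 'remove' verdict implies deleting the file and rolling back
    its recorded changes to the last known good state."""
    return ["remove" if cloud_is_malware(b) else "keep" for b in journal]
```

The journal is what makes the second stage safe: even if the heuristics missed the malware while offline, its changes were recorded and can still be reversed after the cloud verdict arrives.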
While cloud computing does appear to have the potential to tackle new and emerging cyber threats, it also appears that this alone will not be enough and needs to be paired with a comprehensive behavioural analysis capability to deal with zero-day threats and any periods where systems are offline.
While acknowledging that the Imperva research confirms a trend identified by other similar studies, Rik Ferguson, research director at security firm Trend Micro, believes its methodology is flawed.
“Simply scanning a collection of files – no matter how large or how well sourced – misses the point of security software entirely; the actual file, the payload is simply one link in a long chain of events, and one that is pretty much towards the end of that chain,” said Ferguson.
The Imperva study, he contends, did not expose the security products to threats in the way they would be exposed in the wild.
According to Ferguson, to decide whether or not a threat would be blocked, it must be processed in a test in the same way it would be delivered to the victim.
“File reputation only represents one layer of security, one interlinked technology among many in any security solution worthy of the name,” he said.