On 3 January 2018, Google’s Project Zero reported that a technique used to improve the performance of modern microprocessors could be abused to efficiently leak information, leading to what it described as “arbitrary virtual memory read vulnerabilities across local security boundaries”.
A proof-of-concept of Spectre was shown to work on Intel Haswell Xeon, AMD FX, AMD PRO and ARM Cortex A57 processors.
The flaw could allow cyber criminals to steal the entire memory contents of affected computers, from mobile devices and personal computers to servers running in cloud networks.
While the attack is complex and requires a deep understanding of microprocessor architecture, Spectre relies on the way modern processors optimise how they run instructions.
Preying on processing flaw
Mike Hamburg, a senior security engineer at Rambus, is one of the researchers who independently discovered the flaw.
“Most complex processors, regardless of whether they are based on reduced instruction set [Risc] or complex [Cisc] architectures, utilise speculative execution, where the processor makes guesses at future computations to help boost performance,” he said. “But, in doing so, processors can end up performing work that may not have been intended.”
Hamburg said Spectre exploits the mistakes processors make when guessing future instructions to pry privileged information from memory, through what is known as a side-channel attack.
“While Spectre is hard to mount, it is much harder to defend against, as that will require physical changes to the processor,” he warned.
“The Spectre vulnerability happens because CPUs [central processing units] don’t only run the instructions you tell them to. They also look ahead, trying to guess what instructions your program will run next so they can get a head start.”
For example, if a program has gone around a loop five times, the processor might speculate that it will go around a sixth time. This leads to big performance improvements if the processor has correctly guessed what the program wants to do next and can make changes to memory and data caches in advance. But if its prediction is wrong, it needs to undo these changes.
Raiding processor memory
Hamburg said high-end processors can look ahead at hundreds of program instructions, but when their predictions are wrong, they do not undo all of the changes they have made: data fetched speculatively into the cache, for example, is not removed.
The researchers demonstrated that they were able to exploit the flaw in the microprocessor to read physical memory from a PC at 503KB/s, without the need for privileged access. This could enable an attacker to obtain secret information stored in memory, such as a password or encryption key, very quickly.
Given that processors make their predictions based on behaviour from programs that ran recently – within the past few seconds or milliseconds – a hacker can create a rogue program to train the processor to predict incorrectly, said Hamburg.
To protect against vulnerabilities, Hamburg said new processor designs need to be secured at the processor’s core: “Embracing a hardware-first strategy, chips should be designed to run cryptographic functions in a physically separate secure core, siloed away from the CPU.”
While Spectre has shown how the microprocessor itself can be compromised simply by understanding how it performs instruction optimisations, it has also drawn attention to a wider class of timing attacks, in which time itself is used to reverse-engineer secret data.
Rambus has looked at how the side-effects of computation, such as timing, power consumption and radio emissions, can be exploited to leak information about what a computer is doing.
“Timing information is a very important side channel because it can be exploited directly in software, without physically accessing the machine. Lots of things affect timing of computer chips – not just the instructions themselves, but memory caching, branch prediction, heat, even power consumption,” warned Hamburg.
Since this area has been studied for a long time, there are best practices in defending software from timing channels, according to Hamburg.
All processors use what is known as a program counter to determine which instruction in a computer program is run next.
Hamburg said a common practice in secure coding is to prevent secret information from flowing to this program counter, because “different instructions will almost always have different timings”. If intruders could determine how long each instruction in an encryption program takes to run, they could potentially reverse-engineer a security key.
Since the program counter and the instructions being run will affect the performance of other software on the system, Hamburg warned: “If information about your secret flows to the program counter, then other software on the machine can measure that information, which could leak your key to otherwise unprivileged malware.”
This type of attack was identified as far back as 1996 by security researcher Paul Kocher. In a paper describing such an attack, Kocher stated: “By carefully measuring the amount of time required to perform private key operations, attackers may be able to break cryptosystems.”
Hamburg said a change in the instructions being run on a microprocessor would also change the time required for signing or decryption operations, and “these changes may be visible through the network”.
“Even when a program is written to defend against timing attacks, the defences can be breached if the CPU ‘speculates’ that the program will do something wrong – and an adversary on the same machine can influence that speculation,” he said.
Hamburg said it was extremely difficult to get speculative execution right, making security much more complex. He said the semiconductor industry needed to work together to formulate a new set of best practices for more securely designing computer chips.
“From our perspective, securing processors should start at the core. Embracing a hardware-first strategy and implementing the necessary functionality on the system on a chip level is a key element of fully securing devices and platforms across multiple verticals.”
According to Hamburg, embedding a separate security core into a system on a chip (SoC) can help manufacturers design devices, platforms and systems that remain secure throughout their respective lifecycles.
There are three variants of the speculative execution vulnerability. Variants 1 and 2, collectively known as Spectre, appear to affect AMD, ARM and Intel processors, while the researchers could show a proof of concept for Variant 3, called Meltdown, only on Intel chips.
In his keynote presentation at the Consumer Electronics Show in Las Vegas, Intel CEO Brian Krzanich said: “For our processor products introduced in the past five years, Intel expects to issue updates for more than 90% of them within a week and the remaining by the end of January.
“We believe the performance impact of these updates is highly workload-dependent. As a result, we expect some workloads may have a larger impact than others, so we will continue working with the industry to minimise the impact on those workloads over time.”
AMD said: “The research described was performed in a controlled, dedicated lab environment by a highly knowledgeable team with detailed, non-public information about the processors targeted.”
AMD has issued a patch for Variant 1, which is being made available to system providers. It claimed its processors were not affected by Variant 2. ARM has issued kernel patches to mitigate Variant 2 and provides instructions for protecting against Variant 1 on Linux.
Read more about Spectre and Meltdown
- Amazon, Google and Microsoft rush to fix chip flaws that could leave cloud customers at risk of having their data accessed or stolen by other users.
- Apple has confirmed that all iPhones, iPads and Mac computers are affected by the recently discovered microprocessor exploits as the financial services industry assesses the risk.
- Computer Weekly looks at how enterprise IT and security professionals should be approaching the Meltdown and Spectre threats.