It’s been an unusual few weeks. Since the massive Sunburst supply chain compromise, in which attackers planted a backdoor in organisations’ SolarWinds Orion network management software, my team’s day-to-day activities have changed: we’ve spent a lot of time doing vulnerability and compromise assessments for companies alongside our usual work of remediating actual breaches and cyber incidents.
Naturally, organisations that use SolarWinds are concerned that their networks may have been exposed to the vulnerability, or have been breached.
So we’ve spent a lot of time on calls with companies, walking them through the relevant steps to find out if they were using the vulnerable versions of the SolarWinds Orion suite and, if they were, helping them to assess if their systems had been compromised and guiding them through the process of removing the backdoor and updating their systems. The good news is that most of our assessments resulted in no breaches being found.
Then, just when this Sunburst-related work was starting to tail off, news of the Hafnium exploits of Microsoft Exchange vulnerabilities broke, launching my team into another round of compromise assessments and helping companies to patch and update their systems. It reminded me of the situation in cyber security five to 10 years ago, when web shells were common.
Back then, good security practice involved finding out which web servers were exposed to the internet, and mitigating risks through regular patching and updates against vulnerabilities, deploying a demilitarised zone (DMZ) between web-facing servers and internal networks, closing unused ports, and deploying two-factor authentication (2FA) for admin access to servers.
The SolarWinds and Exchange vulnerabilities highlight just how relevant those security fundamentals still are today.
Journey to the DarkSide
After a lot of compromise assessment calls with companies, you can find yourself thinking that it would be nice to have a cyber incident that you can really get your teeth into. Well, be careful what you wish for…
A call comes in from a large organisation that’s been hit by ransomware. We find that it’s the relatively new and aggressive DarkSide ransomware, which we’re seeing more and more of.
Initially, the attack didn’t seem too different from other ransomware incidents – the attackers find a way onto the target network, exfiltrate data, deploy the ransomware from a domain controller, and leave instructions for the victim to contact them to negotiate the ransom. But it turned out to be far from routine.
We spent days working with the customer, trying again and again to find any trace of the root cause of the attack while the customer’s IT team recovered its systems and data. But the group behind the attack had anticipated our actions and created a group policy object that sets up a scheduled task on every machine to delete event logs every 12 hours.
This means any evidence we could use to trace the attack disappears. The company’s firewall logs don’t last long either and are not exported to a SIEM system, so by the time we’ve got to the logs, there’s nothing that covers the time of the ransomware deployment, let alone the time before the deployment when the attackers were exploring the network.
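Hunting for this kind of anti-forensics measure can be partly scripted. Below is a minimal sketch of the idea: scan the command lines of collected scheduled tasks for known log-clearing commands. The task names, command lines and the exact patterns are illustrative assumptions, not artefacts from this incident.

```python
import re

# Command-line patterns commonly associated with clearing Windows event
# logs. Illustrative, not exhaustive.
LOG_CLEARING_PATTERNS = [
    re.compile(r"wevtutil(\.exe)?\s+(cl|clear-log)", re.IGNORECASE),
    re.compile(r"Clear-EventLog", re.IGNORECASE),
    re.compile(r"Remove-EventLog", re.IGNORECASE),
]

def find_log_clearing_tasks(tasks):
    """Given (task_name, command_line) pairs collected from endpoints or
    GPO-deployed scheduled tasks, return the names of tasks whose command
    line matches a known log-clearing pattern."""
    return [name for name, command in tasks
            if any(p.search(command) for p in LOG_CLEARING_PATTERNS)]

# Hypothetical collected tasks: one benign, one resembling a 12-hourly wipe
tasks = [
    ("Adobe Updater", r"C:\Program Files\Adobe\updater.exe /silent"),
    ("SystemMaint", r"cmd.exe /c wevtutil cl Security & wevtutil cl System"),
]
print(find_log_clearing_tasks(tasks))  # ['SystemMaint']
```

In practice the input would come from something like `schtasks /query /xml` output or an EDR inventory, but the matching logic stays the same.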
So we deploy scanning technology to see what we can find. We see lots of infected machines, PowerShell leftovers, multiple remote admin tool leftovers – but, unfortunately, these aren’t really clues about what happened; it’s more like examining the debris after a bomb explosion.

We still have no firm idea how the attackers got in, where they’ve been on the network or what tools they’ve used, let alone anything we can attempt to block, mitigate or contain.
Finding the enemy within
A couple of days in, we get an urgent phone call from the customer late in the day: they’ve just received a message from the attacker that was sent via their internal network. S**t!
The attacker has been able to cover their tracks and is either still inside the network, or still has remote access. We’re on the phone with the customer until 2:30am, trawling through logs and firewall alerts to decide what, who and where to block.
Then I discover something new that gives us a breakthrough: Microsoft Office 365 logs record a DeviceID alongside the IP address, and that DeviceID can be searched in Azure Active Directory to reveal a specific machine’s name.

The IP address itself is no use, as it belongs to the customer’s datacentre, through which the attacker came in – but being able to identify the actual machine from which the attacker sent the message is the vital clue we need to start resolving the incident.
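The first half of that pivot – pulling DeviceID values out of exported audit log records – can be sketched as below. The JSON shapes here are assumptions for illustration (real unified audit log records vary, and DeviceID can appear either as a direct key or inside name/value property pairs); the Azure AD lookup step is noted only as a comment.

```python
import json

def extract_device_ids(audit_records):
    """Walk parsed audit log JSON and collect DeviceID values, whether
    stored as a direct key or as a {"Name": ..., "Value": ...} pair."""
    found = set()

    def walk(node):
        if isinstance(node, dict):
            # Direct form: {"DeviceId": "..."}
            for key, value in node.items():
                if key.lower() == "deviceid" and isinstance(value, str):
                    found.add(value)
            # Name/value-pair form: {"Name": "DeviceId", "Value": "..."}
            if (str(node.get("Name", "")).lower() == "deviceid"
                    and isinstance(node.get("Value"), str)):
                found.add(node["Value"])
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(audit_records)
    return found

# Hypothetical exported record, loosely shaped like an audit log entry
raw = '''[{"Operation": "Send",
           "ClientIP": "203.0.113.5",
           "DeviceProperties": [{"Name": "DeviceId", "Value": "abc-123"},
                                {"Name": "OS", "Value": "Windows 10"}]}]'''
print(extract_device_ids(json.loads(raw)))  # {'abc-123'}
```

Each extracted DeviceID can then be looked up against Azure AD device objects to recover the machine’s display name – the step that gave us the breakthrough.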
Several days later, we’re still speaking with the customer on a daily basis as they find something else in their environment that is concerning them. This is quite common after an organisation has been breached – their IT and security teams are naturally worried that they may have found signs of a new attack, so things can appear suspicious even when they are not.
We suggest the company sets more aggressive firewall rules to block the majority of outbound traffic and only allow what’s absolutely necessary for the business. We’ve also suggested they work with a partner organisation that delivers a managed security information and event management (SIEM) service to help with identifying further indicators of compromise. Case closed, hopefully – and all because I learned a new trick.
The Secret IR Insider works at cyber security services and solutions supplier Check Point. A specialist in incident response (IR), they are on the front lines of the ongoing battle against malicious cyber criminals, ransomware, and other threats. Their true identity is a mystery.