In a recent Virtual Conference held by ISACA, Atul Shah, a senior security strategist with Microsoft, quoted a study conducted by his company saying that "even though the majority of respondents are very interested in the possibilities of cloud computing, a whopping 90% are concerned about security, access and privacy of their data in the cloud".
And rightfully so! As it turns out, security in the cloud is a different kettle of fish entirely. Unlike the proverbial fish, though, these ones don't start rotting from the head.
But first let's take a step back and recap typical public clouds, which provide many instances of virtual machines. The term "cloud" also implies that users don't know exactly where the server running their instance is located. In the popular Amazon EC2 cloud, for example, the administrator can only choose between four data centres located in three countries: the USA, Ireland and Singapore.
Cloud instances are predominantly based on Windows Server 2003 or 2008 or some flavour of the Linux/UNIX operating system, e.g. SuSE, CentOS, RedHat, Debian or Ubuntu. Linux is very popular because of its stability, its ease of remote management and patching, and the absence of OS licensing costs. A large number of applications and databases are open source as well, which can make the same server instance 30% to 50% less expensive.
The virtual machines (VMs) usually have one interface with a public IP address and a private one connected to the cloud provider's internal network, allowing VMs within a site to exchange data without being charged for internet traffic. Of course, these interfaces are all virtual as well: the physical server might run 40+ virtual machines but have only three or four physical network cards installed.
The operating systems, all the applications, the application data and the databases are always stored on physically separate devices, either in a storage area network (SAN) or network attached storage (NAS). This is fundamental: it allows our virtual machines to run on different physical servers as the cloud provider sees fit. The new marketing term for this is 'elastic'. It means that the service provider shuffles virtual machines across physical machines as long as it can keep the service level agreement (SLA) with its customers.
In many cases, what they offer is deliberately vague, like a 'small', 'large' or 'extra large' instance, allowing them to oversubscribe their hardware resources (processor, memory and bandwidth) until performance issues crop up. This strategy is understandable, as they want to maximise their profit and minimise their hardware expenses.
So how does IT security in a cloud compare to the level we expect in our own infrastructure? And how does it compare to classic hosting solutions?
At a high level, we can break down these questions into four general areas: physical security, OS or platform security, network security, and application security.
Going top to bottom in the "IT stack", or head to tail with respect to our kettle of fish, application security is the one area that is actually on par between the cloud and a local server.
We would run the same applications and software releases with the same potential weaknesses and bugs, because our business processes rely on them. Keeping the applications up to date ensures that known holes are hopefully plugged before someone decides to attack us. Whether our webshop is running in the cloud, at a hosting provider downtown or next door doesn't matter to the bad guys.
A common misconception here is to think that "we are a small company, not many people know us and we don't have much information that would be of interest to someone else".
Firstly, your customer database and price calculations will always be of value to a competitor at the other end of town. More importantly, though, the majority of attacks are not targeted at you or your business but at the infrastructure you could unintentionally provide. As unbelievable as it sounds, the average time it took for an unpatched Windows XP machine connected to the internet to become infested with malware was nine minutes a few years ago. Depending on the sources you quote, this number has since come down to four to five minutes.
This is due to automatic port scanners that probe each of the roughly 4 billion possible IPv4 addresses for a set of open ports, running applications and known vulnerabilities. In most cases these scanners are otherwise innocent servers or home PCs that have been infected with malware running silently in the background, unbeknownst to the user, and now form part of a botnet.
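The probe itself takes almost no effort to write, which is why the scanning never stops. A minimal sketch in Python (a hypothetical helper for illustration, not any real scanner's code):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection: an open port accepts, a closed one refuses."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A real scanner simply loops a check like this over whole address ranges and port lists, then matches the services it finds against a database of known vulnerabilities.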
Some years ago, a big botnet called Toxbot commanded more than 100,000 computers used to run denial-of-service attacks against the web servers of big US companies in an extortion attempt; it was shut down in 2005 by Dutch police. Earlier this year, the Waledac botnet churned out 1.5 billion spam emails per day from about 70,000 zombie PCs. In the Waledac case, more than 200 domains were used as a home base to make it harder for law enforcement agencies to shut them all down.
Several more botnets are scanning and spamming at the moment. One of the biggest active ones is Zeus, even though more than 100 people involved in it were arrested in the US and the UK in October 2010. It is believed to have laundered US$260 million already.
Hence, patching all applications to the latest available software release before the VM ever goes online is the best approach here. You also need to set time aside to configure the apps in the most secure way according to the manufacturer's best practice guidelines, no matter whether they run in the cloud or in your own server room.
Network security
Going down in our "IT stack", we reach the TCP/IP network that allows us to exchange mail, use services on remote machines or place cheap phone calls via voice-over-IP (VoIP). A local server in our DMZ can readily be protected by a state-of-the-art Web-2.0-enabled firewall. Some hosting providers also offer firewall services, albeit mainly stateful port-based filters.
Once we have opened ports 80 and 443/tcp for our customers to access our web shop, or for our employees to read their web-based e-mail while on the road, these devices don't care about anything else that happens to our online presence, our customer relationship management (CRM) software or the precious databases connected to it.
So on the network layer there is a low level of protection for our assets in the cloud and it starts smelling more and more fishy. The best protection here is an intrusion prevention or detection system plus a deep-packet inspection firewall in front of it.
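The difference is easy to illustrate: a port filter sees only addresses and ports, while an intrusion prevention system inspects the payload itself. A toy Python signature matcher captures the idea (the patterns are deliberately simplified and nothing like a production rule set, which contains thousands of regularly updated rules):

```python
import re

# Illustrative attack signatures only; real IPS rule sets are far
# larger and far more sophisticated than these three patterns.
SIGNATURES = [
    re.compile(r"(?i)union\s+select"),   # SQL injection probe
    re.compile(r"(?i)<script[\s>]"),     # cross-site scripting
    re.compile(r"\.\./\.\./"),           # directory traversal
]

def looks_malicious(payload: str) -> bool:
    """Return True if an HTTP payload matches a known attack pattern."""
    return any(sig.search(payload) for sig in SIGNATURES)
```

A port-based filter would pass a request like `id=1 UNION SELECT password FROM users` straight through to the web shop on port 80; payload inspection is what flags it.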
Operating system security
Working our way further down, we reach the operating system. Whether it's UNIX or Windows, it needs patching, and the sooner the better, since the bad guys don't wait for Microsoft's second-Tuesday-of-the-month patch to launch their attacks. Nor do they care about Adobe's quarterly patch cycle for its bug-riddled PDF reader, AIR and Flash player implementations. To make things worse, sometimes patches don't just close holes but break something else that used to work just fine.
So who takes the responsibility here? If you rent a virtual machine in the cloud, the cloud provider offers you a plethora of operating system flavours they can provision up-front. Afterwards, who is in charge of it? Well, more often than not, the onus is on the cloud customer. If the cloud provider offers to patch the OS, can you tell them at what time your maintenance window is? Or do you have to adapt to their patching cycle and perform your regression testing when they dictate it?
The relief here is that today's operating systems rarely contain critical bugs that are remotely exploitable. That is one of the main reasons security professionals have seen a shift away from kernel and network attacks towards application-level attacks, such as the widely popular DNS redirects, cross-site scripting and SQL injection used by the botnets described above.
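SQL injection in particular thrives on applications that paste user input straight into a query string. A minimal demonstration with Python's built-in sqlite3 module shows both the hole and the fix:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the input becomes part of the SQL statement itself,
# so the WHERE clause is always true and every row leaks.
unsafe_rows = db.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safe: a parameterized query treats the input strictly as data.
safe_rows = db.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()
```

Here `unsafe_rows` contains every secret in the table, while `safe_rows` is empty: the parameterized version never interprets the attacker's quotes as SQL.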
So the OS itself in the cloud is just as secure as the one we deploy in a hosting environment or our DMZ, as long as it's being patched religiously.
Physical security
At the tail end, we have physical security.
The issue here is that cloud providers generally don't allow physical access to their data centres (DCs). Not even for an auditor, unless it's required for the provider itself to get audited. This is understandable, as it would be too expensive for them to coordinate access for every auditor of every client they have in the cloud. They also don't want to reveal their physical security measures to every customer. Hence we have to rely on the enforcement and audit of their security policies. For a small business with very basic or no access limitations to its in-house server room, the physical security of a cloud VM can be seen as higher than what they would have in house. But for a company that's serious about access control to their digital crown jewels, the cloud cannot compete.
A rogue or disgruntled employee of the cloud provider can gain access to the primary storage area, which contains the virtual hard disks of all the VMs in that environment. She can also gain access to secondary storage and backup tapes, unless those are properly encrypted, and even then only if you are the only one who holds the key to your backups.
Again, there are ways to mitigate this problem. A hosted solution at a local provider would be a sensible compromise for an SME, since they can physically audit the environment whenever they want but don't have to invest heavily in access security, uninterrupted power supply, air conditioning and network failover for the internet link.
If the VM has to be in the cloud, the virtual hard disk should be encrypted with a strong passphrase that has to be entered manually when booting the machine. That way, the key is never stored on a disk outside our control. A typical Windows cloud server only gets rebooted after a major service pack has been installed; a UNIX machine maybe once or twice a year.
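The principle behind such boot-time passphrase schemes is key derivation: the actual encryption key is recomputed from the passphrase every time and never written out. A sketch using Python's standard library (the salt handling and iteration count are illustrative, not any particular product's parameters):

```python
import hashlib
import os

def derive_disk_key(passphrase: str, salt: bytes,
                    iterations: int = 200_000) -> bytes:
    """Derive a 256-bit volume key from a passphrase typed at boot.

    Only the salt and iteration count live on disk; the key itself
    exists in memory only after the passphrase has been entered.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = os.urandom(16)   # stored in the volume header; need not be secret
key = derive_disk_key("correct horse battery staple", salt)
assert len(key) == 32   # suitable for e.g. AES-256
```

Because the derivation is deliberately slow, brute-forcing the passphrase from a stolen disk image becomes expensive, while the legitimate administrator pays the cost only once per reboot.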
The silver lining
So what else can we do to improve our security stance above what's being offered by one of the big cloud providers?
From my experience as an auditor, encryption of data in transit and at rest sometimes causes headaches. This can be tackled by encrypting the storage for the application data, as mentioned above. One good, and free, product is TrueCrypt (http://www.truecrypt.org). If the provider then takes a snapshot or tape backup of the raw disk, it is already encrypted and, most importantly, you are the only one who has the keyfile or passphrase!
Encryption of data in transit, as well as secure administrator access to the VMs, can be handled by an IPsec, SSH or other VPN-type tunnel.
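Where a full tunnel is overkill, individual services can be wrapped in TLS instead. Python's standard library already ships with sensible client-side defaults; the sketch below shows them, with the actual connection left as a comment since it needs a reachable server:

```python
import ssl

# A default client context verifies the server's certificate chain
# and checks that the certificate matches the hostname we asked for.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# Wrapping an admin connection to a cloud VM would then look like:
#   with socket.create_connection((vm_host, 8443)) as raw:
#       with ctx.wrap_socket(raw, server_hostname=vm_host) as tls:
#           tls.sendall(b"...admin traffic, now encrypted in transit...")
```

The important point is that both verification checks stay enabled; disabling them to silence certificate errors quietly reduces the tunnel to obfuscation.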
Furthermore, a tight host-based firewall configuration prevents attacks on ports that are not used by our applications but would otherwise be left open by the underlying operating system or supporting software.
The cloud is here to stay and there are many reasons to use it to its full extent. As long as the limitations are understood and risks are mitigated through proper controls, the cloud provides an easy, reliable, flexible and affordable platform for our IT needs.
Nevertheless, it doesn't take the responsibility away from the business owner to ensure the three pillars of IT security: confidentiality, integrity, and availability.
Hence the guild of security professionals will have even more work in the future to keep the rotten catch at bay.
Heinz Zerbes is a senior security consultant at SureCity.