Cloud computing provides the convenience and cost-saving benefit of placing computing and storage resources on demand, all without the need for internal infrastructure. As the technology arrangement becomes more popular, however, additional cloud computing security measures are necessary to ensure the continued protection of the integrity, confidentiality and availability of enterprise data.
In part 1 of this chapter excerpt from The Shortcut Guide to Prioritizing Security Spending, author Dan Sullivan reviews the data security and compliance benchmarks that must be established between you and your cloud provider.
The Shortcut Guide to Prioritizing Security Spending:
Chapter 3: Security and the Dynamic Infrastructure
There are many ways to organize cloud computing offerings into various taxonomies; for our purposes, we will focus on two distinct categories: desktop software replacement services and back-office infrastructure.
Desktop Software Replacement
The time and cost of maintaining desktop software may be reduced with the advent of cloud-based desktop software replacements. With a cloud-based service, organizations may lower software licensing costs and reduce maintenance overhead. Google Docs, Zoho, and ThinkFree Office are a few examples of cloud-based alternatives to traditional desktop software. These services provide the core functionality one would expect from a desktop office suite, including word processing, spreadsheets, presentation software, and in some cases, databases. Zoho, for example, shows how far cloud-based services can be pushed with additional support for online document management, project management, customer relationship management (CRM), and human resources applications. Virtualized desktops running office suites on servers within the enterprise have some similarities to cloud services but are a distinct model that is different from cloud computing.
Back-Office Infrastructure
Back-office infrastructure includes servers and storage arrays as well as higher-level, application-specific functionality. Collectively, the higher-level functions are described as "X as a Service," where X could include middle-tier services, such as databases and application services, or broader services, such as CRM, HR, and security management.
One of the distinguishing characteristics of back-office offerings is the level of control over management and design the customer retains. Consider three example scenarios:
- Scenario 1: A customer purchases access to server and storage services as needed. The customer determines which OS is run on the servers, when the servers are started, how long they run, and what level of access controls will be applied to the server. Storage is allocated as needed, and the customer retains responsibility for backup and disaster recovery. (Alternatively, the customer may assume that the cloud provider's redundancy in the storage service is sufficient, but that is a risk-management decision that may not be appropriate for all customers.)
- Scenario 2: The customer purchases a database and application server service. The customer determines which database and application server it will run and the number of instances of each. The cloud service provider manages the physical aspects of the database, ensuring space is allocated on underlying file systems and the database is sufficiently patched and properly configured. The customer designs the overall database architecture and monitors performance, but the cloud provider attends to implementation details.
- Scenario 3: A customer purchases a cloud-based enterprise application service, such as an HR management system. The customer manages data in the system and determines user access and privileges, but relies on the service provider to ensure availability of the system, appropriate backup and recovery operations, architecture and application design of the system, patching and performance monitoring.
As these three scenarios demonstrate, cloud computing is a range of services defined by customers and service providers finding an appropriate distribution of labor between the two. In Scenario 1, the customer retains the most control but also has the most responsibility for developing and maintaining applications. In Scenario 3, the customer has the least responsibility for management details and, presumably, the least control over implementation details. Choosing the right combination of services is largely dictated by the customer's core competencies, ability to design and maintain IT applications, and the fit of service provider offerings to the customer's business strategy. Security considerations, in many cases, will factor heavily in cloud computing decisions.
Security Considerations with Cloud Computing
Regardless of whether a customer uses desktop software services or some combination of back-office applications and services, there are security issues to take into consideration:
- Encryption and other data security measures
- Availability and service level agreements (SLAs)
- Compliance with government and industry regulations
- Security of cloud-based applications
As with the types of services offered, cloud service provider offerings can vary in their security characteristics.
Encryption and Other Data Security Measures
One of the first questions to consider about cloud security is: What could happen with your data? Confidentiality is an obvious concern and encryption is usually part of the solution when confidentiality is required. There are different ways to approach the use of encryption.
The cloud provider could encrypt data stored in its data centers. This is an approach taken by Amazon's S3 storage service. Customers generate a key that is associated with an account, and data is encrypted with that key when it is stored in the cloud. The advantage of this approach is that all data stored to the cloud is encrypted. The disadvantage, at least to some, is that the cloud provider controls the encryption process.
An alternative approach is to encrypt data locally before sending it to the cloud. This setup might appeal to those who need to maintain finer-grained controls over the encryption process, but there is the possibility that someone could upload confidential data that has not been encrypted.
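The local-encryption approach can be sketched in a few lines. This is a minimal illustration using the Python `cryptography` package's Fernet recipe, not any particular cloud provider's SDK; the upload step is left as a placeholder:

```python
from cryptography.fernet import Fernet

# Generate and retain the key locally; the cloud provider never sees it.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"contents of a confidential document"
ciphertext = cipher.encrypt(plaintext)  # encrypt before upload

# ...upload `ciphertext` to the storage service here...

# On download, only a holder of `key` can recover the data.
recovered = cipher.decrypt(ciphertext)
assert recovered == plaintext
```

The tradeoff noted above applies in reverse as well: because the provider never holds the key, losing it means losing the data, so key management becomes entirely the customer's responsibility.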
Another advantage of encryption is that cloud providers have less need to sanitize data blocks after a customer deallocates them. Because the data is encrypted, the next customer assigned a block could not make sense of any residual data, even if they could read it before writing to it (assuming strong encryption and that the previous user's private keys are unknown to the current user). If data is not encrypted, there is a greater need to sanitize storage before allocating it to another user.
One way to evaluate cloud storage is to consider how the cloud provider's practices compare with the ones used with physical devices in your own company or organization. For example, when a server is removed from service, the hard drives are probably overwritten using some kind of hard drive overwrite software so that data cannot be recovered after you lose physical control of the device. Is the functional equivalent of disk overwriting available from the cloud provider?
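For reference, the file-level analogue on a system you control might look like the following, using GNU coreutils `shred` (device-level wiping follows the same idea, run against the block device rather than a file):

```shell
# Overwrite a sensitive file in place, zero it, then unlink it.
f=$(mktemp)
echo "customer records" > "$f"
shred --iterations=3 --zero --remove "$f"
test ! -e "$f" && echo "file destroyed"
```

Note that on journaling or copy-on-write file systems, overwriting a file in place is not guaranteed to destroy every copy of the data, which is one reason the question is worth asking of a cloud provider as well.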
Availability and SLAs
Highly distributed architectures, such as those used in cloud environments, can take advantage of redundancy to ensure availability. If a data center on the East Coast of the US is inaccessible, customers could still access their applications and information through a data center in the Midwest. A bad controller in one disk array would not result in lost data because the same data is written to multiple other storage devices. This is the theory, at least, when it comes to availability. In practice, well-defined SLAs trump theory. Availability and SLA issues with cloud computing include:
- The total amount of contracted downtime over some period of time (for example, per month or per year)
- The longest acceptable continuous period of downtime; downtime in excess of that presumably results in compensation to the customer
- Backup services, if any
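To put the first bullet in concrete terms, an availability percentage translates directly into a downtime budget. A quick calculation (assuming a 30-day month for illustration):

```python
def downtime_minutes_per_month(availability_pct, days=30):
    """Minutes of permitted downtime per month at a given availability level."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% availability -> {downtime_minutes_per_month(pct):.1f} min/month")
```

The difference between "three nines" and "four nines" is roughly 43 minutes versus 4 minutes of allowable downtime per month, which is why the exact figure in the SLA matters.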
Regarding backup services, highly redundant systems raise less concern about losing data to a hardware failure because the latest data can be recovered from other data blocks. There are cases, however, in which rolling back to earlier versions of data becomes necessary. For example, if an application bug corrupts data but is not discovered for days, would it be possible to restore the database to its last-known good version?
One should also consider cases in which cloud services are not available. If a cloud-based application or data storage service is unavailable for hours or days, how would that affect operations?
An essential but much more difficult question to assess is how likely an outage is. From a risk analysis perspective, one could use past performance as a basis for estimating the likelihood of an outage; however, past conditions may not match current or future conditions. Cloud providers may have many more customers in the future and have to accommodate larger volumes of data. Will their architectures continue to scale? Are there potential bottlenecks outside their control, such as an ISP that cannot scale up bandwidth as fast as a data center needs it for peak demand? Of course, serious cloud providers build redundancy and sufficient capacity into their infrastructure, but these are still questions to consider when outsourcing computing and storage services.
Compliance with Government and Industry Regulations
Compliance issues will also require careful consideration. A CTO asked to sign off on a Sarbanes-Oxley Act compliance report will want to know that the cloud provider's procedures and practices are sufficient to maintain compliance. A range of topics could fall under compliance:
- Access controls that ensure only users authorized by the customer have access to data
- Protections against potential abuse by administrators and other privileged users operating the cloud infrastructure
- Irrecoverable deletion of data, including all redundant copies and backups, if any
- Sufficient logging and monitoring to meet compliance requirements
Shifting some of a company's compliance responsibilities to a cloud provider should be done only after ensuring the provider can actually meet audit and compliance requirements.
Infrastructure Security in the Cloud
When we put money in a bank, we usually assume it is safe. Banks have developed a security infrastructure and risk management procedures that have, at least until recently, been presumed sufficient to protect depositors' assets. Even in cases in which individual banks fail, federal government guarantees virtually eliminate the risk of loss. Someday, we may have the same level of trust and guarantees in the cloud computing industry, but they are not in place yet. Customers conducting due diligence on cloud providers will want to understand the providers' policies and procedures with regard to physical security in data centers, access controls, identity provisioning and de-provisioning, protection for data during transmission, disaster recovery procedures and guarantees, and employee background checks, to name a few.
Cloud computing is changing the economic equation of IT services, but along with the benefits come variations on long-understood security concerns. As consumers of cloud-computing services, we need to adapt our security strategy to accommodate these new concerns.
Cloud computing is not the only service that is changing how information is delivered. The ability to move information quickly and inexpensively has enabled global business relationships, but it has also challenged security professionals to track data as it moves among manufacturers, headquarters, and distributors around the world.
Another significant way in which IT service delivery has changed is the demise of traditional organizational boundaries with respect to information sharing. The benefits of specialization and the ability to move information quickly and inexpensively around the globe are among the enablers of globalization. Distributed information flows are so prevalent now that we can, in the words of Thomas Friedman, view the world as flat. A business with headquarters in Chicago could have a manufacturing partner based in Shanghai, receive accounting and finance services from a company in Mumbai, look to a firm in Brussels for legal advice, and collaborate with a distributor in Buenos Aires.
Once again, we have an example of a compelling economic argument for an innovative way of doing business with significant security implications. We will consider three:
- Protecting data in transit and the demise of network boundaries
- Sharing data with trusted business partners
- Employees and personal information devices
As we will see, distributed information flows must be protected at a macro level (business to business) and at a micro level (business to employee).
Protecting Data in Transit and the Demise of Network Boundaries
Data moving between organizations can give the impression that network boundaries no longer exist. This is an exaggeration, but an illustrative one. Of course, businesses and organizations continue to use firewalls, network segments, and other means to isolate resources. At a physical and architectural level, boundaries still exist, but at the logical level of data flows, these boundaries are more porous than a network architecture diagram might indicate. Orders can flow from a sales management system to a manufacturing partner, who then transmits data to the accounts receivable system, which then issues an invoice to a distributor halfway around the world.
Protecting data in a highly distributed, multi-organization system such as this requires attention to:
- Data classification—Businesses need to know what data to protect. Not all data is created equal; some requires more protection than others, either for regulatory or business strategy reasons. Personally identifying information (PII), credit and financial information, and trade secret information should be governed by appropriate controls.
- Data in transit—Businesses need to know where protected data flows. Manufacturing partners may need some insight into a trade secret related to a product design but do not need customer accounting information. Information flows are dynamic, but they should not be free form.
- Confidentiality—Businesses, government agencies, and other organizations maintain substantial amounts of private information on individuals and businesses. State, provincial, national, and transnational regulations dictate protections of such information in many parts of the world. A data breach in a Mumbai data center can have multiple implications when lost data includes information on customers from California to the European Union (EU).
Encrypting communications is one control, but knowing appropriate data classifications and implementing controls on where data flows is also required to protect data in transit.
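One way classification can drive flow controls is a simple policy lookup that blocks a transfer unless every control required for the data's label is in place. The labels and control names below are hypothetical, purely for illustration:

```python
# Map hypothetical classification labels to the controls required in transit.
REQUIRED_CONTROLS = {
    "public":       set(),
    "internal":     {"tls"},
    "pii":          {"tls", "encrypt_payload", "partner_agreement"},
    "trade_secret": {"tls", "encrypt_payload", "restricted_recipients"},
}

def transfer_allowed(classification, controls_in_place):
    """Permit a transfer only if every required control is in place."""
    return REQUIRED_CONTROLS[classification] <= set(controls_in_place)

assert transfer_allowed("public", [])
assert transfer_allowed("pii", ["tls", "encrypt_payload", "partner_agreement"])
assert not transfer_allowed("pii", ["tls"])  # missing controls: block the flow
```

Real data loss prevention products implement far richer versions of this idea, but the principle is the same: classification determines which controls a flow must carry.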
Sharing Data with Trusted Business Partners
Sharing data with trusted business partners has security implications similar to those of cloud computing. First, you need some way to establish with whom you want to share data. Federated identity management systems support this by providing the means to verify that a party is a trusted business partner. Once you have identified your trusted business partners, there remain issues of compliance and data loss prevention.
With regard to compliance, a business must understand how the data shared with business partners relates to compliance requirements. A well-formed and well-managed data classification system can help organizations understand how data flowing out of the organization should be protected. Agreements between business partners can be used to bind parties to particular responsibilities regarding data protections, including measures to protect against data loss.
Employees and Personal Information Devices
Sharing data with other businesses or organizations is just one way protected data can leave the controlled infrastructure of a business. Employees using personally owned information devices are another.
The increasing use of personal devices for work-related tasks has created something of a grey area for IT security. On the one hand, these devices are not owned by the business or government agency, so the organization is generally not at liberty to dictate what device the employee should purchase, what OS to run, or which applications the employee should use. On the other hand, individuals downloading corporate data have a responsibility to protect that data. The meeting ground seems to be that businesses should establish policies and practices that define minimum security requirements for devices that will house company data. These can include:
- Establishing policies on the use of encryption, limits on the amount or types of data that can be downloaded, restrictions on backing up corporate data from a personal device, and requirements for the use of passwords or other means of authentication on the device.
- Network security professionals can also use network access controls to prevent devices that do not meet minimum security standards from connecting to the network. These standards can include proper OS patch levels and up-to-date antivirus software.
- Organizations can also provide security awareness training with an emphasis on data loss prevention and social engineering attacks. Corporate and government information is flowing more easily to devices controlled by other companies, agencies, and in some cases employees. The drive for efficiency and the willingness to adapt innovative processes will likely perpetuate and perhaps accelerate this process. Attending to the security implications is best done sooner rather than later in the adoption process.
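The network-access-control idea in the second bullet can be sketched as a device posture check run before admitting a device to the network. The attribute names and policy thresholds here are illustrative, not taken from any particular product:

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    os_patch_level: int        # e.g., a build or patch number reported by the device
    antivirus_age_days: int    # days since the last antivirus signature update
    disk_encrypted: bool

# Hypothetical policy thresholds set by the organization.
MIN_PATCH_LEVEL = 2023
MAX_AV_AGE_DAYS = 7

def admit_to_network(p: DevicePosture) -> bool:
    """Admit a device only if it meets the minimum security baseline."""
    return (p.os_patch_level >= MIN_PATCH_LEVEL
            and p.antivirus_age_days <= MAX_AV_AGE_DAYS
            and p.disk_encrypted)

assert admit_to_network(DevicePosture(2024, 1, True))
assert not admit_to_network(DevicePosture(2024, 30, True))  # stale antivirus
```

Commercial network access control systems gather these attributes automatically and quarantine noncompliant devices; the sketch only shows the policy decision itself.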
To read the rest of Chapter 3: Security and the Dynamic Infrastructure, download the .pdf.