Cloud computing is in its early stirrings, and the results so far are mixed. Totally contained cloud services, such as salesforce.com, Concur and Transversal, show how full-service functions held in the cloud can facilitate a business's processes without costly hardware - but will this remain the case going forward?
The problems will grow as cloud becomes less contained. If there is a move toward the composite application - one where different functions are brought together on the fly to deal with a process issue - there will be a whole new raft of issues. Here, Clive Longbottom sets out how to choose the right service provider to avoid networking problems affecting composite applications in your business processes:
Looking at how a composite application works, a user makes a request for a task or for a process to be dealt with through their access device. This will generally be handled through a trusted partner. The trusted partner takes responsibility for identifying and pulling together the functions to facilitate that task or process. Some of these functions may already be available within the user's own datacentre (through a private cloud, or just through a standard function in an existing application); others may be hosted by the aggregator itself; many will have to be sourced from other providers.
It may well be that other providers already pull together multiple functions to provide a "functional assembly" that deals with a large part of the task. This has multiple benefits. Such a provider can:
- Reduce the number of links in a chain;
- Minimise the number of contracts involved;
- Improve response rates by aggregating functions in a specific manner, for example through using providers who are all hosted within a single co-location facility.
The aggregator will have to negotiate on-the-fly technical and business contracts with other providers that it knows and trusts, and ensure that everything meets the needs of the end customer.
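To make the idea concrete, the aggregation described above can be sketched as a registry of functions chained together against a shared context. This is a minimal illustration only - the function names, registry layout and credit limit are invented for the example, not any provider's real API:

```python
# Hypothetical sketch of an aggregator composing a task from functions.
# In practice some functions would be local datacentre services and
# others remote cloud endpoints bound by negotiated contracts.

from typing import Callable, Dict, List

# Registry mapping function names to callables (all invented examples).
REGISTRY: Dict[str, Callable[[dict], dict]] = {
    "validate_order": lambda ctx: {**ctx, "validated": True},
    "check_credit":   lambda ctx: {**ctx, "credit_ok": ctx["amount"] < 1000},
    "book_shipment":  lambda ctx: {**ctx, "shipped": ctx.get("credit_ok", False)},
}

def run_process(steps: List[str], ctx: dict) -> dict:
    """Run the named functions in order, threading a shared context through."""
    for name in steps:
        ctx = REGISTRY[name](ctx)
    return ctx

result = run_process(["validate_order", "check_credit", "book_shipment"],
                     {"amount": 250})
print(result["shipped"])  # True for an order under the credit limit
```

Each link in this chain is a point where a provider, a contract and a network connection must all hold up - which is where the issues below come in.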
So far, so good. But now let's look at the underlying network issues that raise their ugly heads in such a scenario. There are a number of areas where cloud networking may be an issue, and a number of actions which can be undertaken to resolve them:
The internet is under multiple ownership. This is fortunate, as it offers a multi-path network capability such that, if any one part of the internet goes down, availability tends to remain due to the capability for alternative paths to be taken. But it is also unfortunate, as it means root-cause analysis for network issues can be, at best, painful. Most good cloud function providers will use multiple networks to ensure high availability - but some may only have a single provider, and even if the internet itself is OK, a break in service from that single provider can leave a core function of the composite application unavailable. Look for providers who have multiple network providers and, where possible, multiple datacentre facilities to ensure functional availability. Also, ensure you have multiple network providers yourself - a break in your connection will mean no access to any functionality.
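From the consuming side, the same principle applies in software: a composite application should be able to fall back to an alternative provider of a function when one is unreachable. A minimal sketch, with invented endpoint names and a simulated network call standing in for a real one:

```python
# Illustrative failover across multiple providers of the same function.
# fetch() is a stand-in for a real network call; endpoints are invented.

def call_with_failover(endpoints, fetch):
    """Try each endpoint in turn; return the first successful response."""
    last_error = None
    for ep in endpoints:
        try:
            return fetch(ep)
        except ConnectionError as err:
            last_error = err  # note the failure, fall through to the next
    raise RuntimeError("all providers unavailable") from last_error

# Simulated providers: the first link is down, the second responds.
def fake_fetch(ep):
    if ep == "https://provider-a.example/fn":
        raise ConnectionError("provider A link down")
    return {"provider": ep, "status": "ok"}

resp = call_with_failover(
    ["https://provider-a.example/fn", "https://provider-b.example/fn"],
    fake_fetch)
print(resp["status"])  # ok - served by the second provider
```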
The very capabilities of general availability outlined above mean performance across the internet is difficult to guarantee. As a packet of data can take any route, delivery times can vary with network conditions. Look for cloud providers who offer quality and priority of service using technologies such as Multiprotocol Label Switching (MPLS) and 802.1p/q. Also, look at the use of other tunnelling or direct connecting services, such as leased or dedicated lines where core functions are concerned.
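At the application end, traffic can at least be marked for priority treatment on the local hop. The sketch below sets the DSCP bits on a socket - note that this only helps if the carriers along the path (e.g. via MPLS traffic classes) actually honour the marking, and the exact behaviour is platform-dependent:

```python
# Minimal sketch of marking a socket's traffic for priority treatment.
# DSCP 46 is Expedited Forwarding (RFC 3246), commonly used for
# latency-sensitive traffic. Honouring the mark is up to the network.

import socket

EF_DSCP = 46                 # Expedited Forwarding code point
tos = EF_DSCP << 2           # DSCP occupies the top six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Confirm the option took effect before connecting to the service.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # typically 184 on Linux
sock.close()
```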
Should there be a failure in the chain, what happens? A good cloud provider will have multiple instances of a function running, and should be able to failover gracefully to another instance. This will require the maintenance of certain network information, however, otherwise in-flight transactions may be lost or duplicated.
All functions must be fully cognisant of what they are doing, and make this information available, should any failure occur. Store and forward messaging is required in the cloud, so that any break in service leaves a known state - one from which processing can resume automatically once the failure has been addressed. Look for a cloud provider who offers a fully audited store and forward capability as part of their service.
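The core of such a capability is that every message is persisted before any delivery is attempted, so a failure always leaves a resumable, auditable record. A minimal sketch using an in-memory SQLite table as both the durable queue and the audit trail (table and status names are assumptions for the example):

```python
# Minimal store-and-forward sketch: persist first, deliver second,
# so a break in service always leaves a known, resumable state.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE outbox (
    id      INTEGER PRIMARY KEY,
    payload TEXT NOT NULL,
    status  TEXT NOT NULL DEFAULT 'pending'  -- pending -> delivered
)""")

def enqueue(payload: str) -> None:
    """Persist the message before any delivery attempt (known state)."""
    db.execute("INSERT INTO outbox (payload) VALUES (?)", (payload,))
    db.commit()

def deliver_pending(send) -> int:
    """Resume delivery of anything still pending after a failure."""
    delivered = 0
    for msg_id, payload in db.execute(
            "SELECT id, payload FROM outbox WHERE status = 'pending'"):
        send(payload)  # would raise on failure, leaving status = 'pending'
        db.execute("UPDATE outbox SET status = 'delivered' WHERE id = ?",
                   (msg_id,))
        delivered += 1
    db.commit()
    return delivered

enqueue("order-123")
enqueue("order-124")
print(deliver_pending(lambda p: None))  # 2 - both messages audited as sent
```

The `outbox` table doubles as the audit trail: every message's lifecycle is queryable after the fact, which is the property to look for in a provider's store and forward service.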
A knock-on issue from providing failover capabilities is that it can become easier for a chain to be hijacked by a malicious user. If an attacker can inject themselves into the chain when the failure happens (a failure which may well have been initiated by the attacker in the first place), the process can continue, blithely unaware that the chain has been compromised. Ensuring no part of the chain can be hijacked in this way - through the use of full contextuality, audit trails and cloud-based intrusion detection capabilities - will mitigate this issue.
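One standard building block for this is message authentication: if every link signs its messages, an injected function cannot produce valid traffic and is detected immediately. A simplified sketch with a single shared secret (real deployments would use per-link keys and proper key management):

```python
# Sketch: each link in the chain signs its messages so an injected
# function cannot join undetected. Key handling here is deliberately
# simplified - the shared secret is an illustrative assumption.

import hashlib
import hmac

SECRET = b"chain-shared-secret"   # example only; use managed per-link keys

def sign(message: bytes) -> str:
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(sign(message), signature)

msg = b"resume step 4 of process 77"
tag = sign(msg)
print(verify(msg, tag))                          # True: legitimate link
print(verify(b"resume step 4 (injected)", tag))  # False: tampered message rejected
```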
As well as ensuring that the process chain itself is not hijacked through functional injection, you must ensure the information in the chain cannot be compromised. Data-leak prevention, content inspection and encryption will help here. Look for cloud providers who offer these as functions - but remember the information is yours, and as such, the overarching responsibility for your cloud computing security issues resides with you.
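As an illustration of the content-inspection side, a pre-transmission pass can scan outbound payloads for sensitive patterns before they leave your control. The pattern and the redact-everything policy below are assumptions for a toy example - nothing like a complete data-leak prevention product:

```python
# Illustrative content-inspection pass: redact anything resembling a
# payment card number before a payload is handed to a cloud function.

import re

# Rough pattern: 13-16 digits, optionally separated by spaces or dashes.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(payload: str) -> str:
    """Replace anything resembling a card number before transmission."""
    return CARD_PATTERN.sub("[REDACTED]", payload)

print(redact("Customer paid with 4111 1111 1111 1111 yesterday"))
# Customer paid with [REDACTED] yesterday
```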
To meet the promise of a fully functional environment, the cloud needs a solid foundation, and this will be dictated by the network it depends on. Second-rate cloud providers will be the ones who give cloud a bad name through poor performance, availability and security. By meeting the above criteria, cloud aggregators will be able to provide enterprise-class services, providing certain functions themselves and using functions from others in a fully managed and audited manner.