As SOA environments built on distributed, composite applications become more prevalent, monitoring, managing and reporting on application and service performance become both more critical and more complicated. There are significant differences in effectiveness between management based on end-user experience monitoring and what is known variously as end-to-end or runtime monitoring. A more detailed analysis by Ptak, Noel & Associates is in the works, but the subject's increasing importance merits an introduction to some of the major issues -- hence, this column.
Early on, it was commonplace to deduce application performance and availability by monitoring infrastructure performance. The next step forward was to monitor and report on the experience of the end user -- tracking and reporting inquiry response times, for instance, along with attempts to correlate these times with siloed information about infrastructure performance.
As application structures changed to dynamic assemblies of components -- for example, information drawn from multiple sources and presented as portlets on a single Web portal -- such techniques became less and less helpful. Simply tracking response times neither reports actual service levels accurately nor provides significant aid to problem identification and resolution. These techniques raise the alarm when SLAs are not being met, but they cannot provide sufficiently detailed data for increasingly critical operational tasks such as problem avoidance and remediation.
This is true because traditional implementation of end-user-focused monitoring doesn't provide:
- Enough information to localise the problem;
- Enough information to perform root-cause analysis;
- A proactive view of events -- only an after-the-fact look at the problem.
The result is dissatisfied customers and disillusionment in the benefits of SOA approaches.
One problem-solving approach asked operations staff to identify, buy, integrate and configure a number of independent tools to capture the additional cross-functional performance data needed for proactive monitoring -- a major challenge in its own right. For now, we will ignore the added challenge of getting actionable information on the cross-silo (functional) operational-level agreements needed to support a truly comprehensive, service-oriented SLA. Even then, the challenge of localising the problem to a specific server or runtime task remains.
There are products designed to monitor and report on runtime performance, among them CA's Wily Introscope, Tidal Software's Intersperse, and Symantec's evolving Application Performance Management solutions. These have been designed to:
- Provide sufficient information to localise the problem;
- Locate the problem not just to a server but often to a specific component or resource within that server;
- Allow correlation analysis of symptoms to pinpoint a specific root cause;
- Provide proactive alerting on the basis of trend analysis of resource utilisation and component performance.
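The last of these capabilities -- proactive alerting on the basis of trend analysis -- can be illustrated with a minimal sketch. The class, method and threshold names below are hypothetical, not drawn from any of the products mentioned: the idea is simply to fit a least-squares trend line to recent utilisation samples and raise an alert when the projected value will breach a threshold within a short look-ahead window, rather than waiting for the breach itself.

```java
import java.util.List;

// Minimal sketch of trend-based proactive alerting (illustrative names):
// fit a least-squares line to recent utilisation samples and warn if the
// projected value will cross a threshold within the look-ahead window.
public class TrendAlert {
    /** Least-squares slope of samples taken at t = 0, 1, 2, ... */
    static double slope(List<Double> samples) {
        int n = samples.size();
        double meanT = (n - 1) / 2.0;
        double meanY = samples.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double num = 0, den = 0;
        for (int t = 0; t < n; t++) {
            num += (t - meanT) * (samples.get(t) - meanY);
            den += (t - meanT) * (t - meanT);
        }
        return den == 0 ? 0 : num / den;
    }

    /** True if utilisation is projected to exceed threshold within lookAhead intervals. */
    static boolean shouldAlert(List<Double> samples, double threshold, int lookAhead) {
        double latest = samples.get(samples.size() - 1);
        double projected = latest + slope(samples) * lookAhead;
        return projected > threshold;
    }

    public static void main(String[] args) {
        // Utilisation climbing roughly five points per interval; threshold 90%.
        List<Double> samples = List.of(60.0, 66.0, 71.0, 75.0, 81.0);
        System.out.println(shouldAlert(samples, 90.0, 3)); // projected ~96 -> true
    }
}
```

In a real product the samples would come from agents instrumenting each server and component; the point of the sketch is only that the alert fires before the SLA is breached, not after.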
No perfect solution exists because tool functionality and the amount of integration required vary from vendor to vendor. For example, tracing the performance of an individual instance of a business process or end-user interaction at the component level from server to server or from server to back-end database may work only if you use the same server product each step of the way.
Others, such as ClearApp's QuickVision 7, provide tracing through more of the system by following the performance of user interactions from the front-end portal to back-end EJB servers. Still others, like the recent versions of Tidal Software's Intersperse, allow tracing through more sophisticated systems, such as those including portals and components, and also through BPM engines such as WebLogic Integration and across messaging layers, such as JMS.
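Cross-tier tracing of this kind generally rests on tagging each user interaction with a correlation identifier that travels with the request from the portal through the component and messaging layers. The following sketch uses hypothetical names and a plain in-memory map standing in for the real transport (HTTP headers or JMS message properties); it is an illustration of the technique, not any vendor's implementation.

```java
import java.util.*;

// Minimal sketch of correlation-ID tracing across tiers (illustrative names).
// A real product would propagate the ID in HTTP headers or JMS message
// properties; here a plain map stands in for that transport.
public class TraceSketch {
    // correlation ID -> ordered list of "tier:elapsedMillis" records
    static final Map<String, List<String>> traces = new HashMap<>();

    /** The front-end portal assigns the ID when the interaction begins. */
    static String startTrace() {
        String id = UUID.randomUUID().toString();
        traces.put(id, new ArrayList<>());
        return id;
    }

    /** Each downstream tier records its elapsed time against the same ID. */
    static void record(String correlationId, String tier, long elapsedMillis) {
        traces.get(correlationId).add(tier + ":" + elapsedMillis);
    }

    public static void main(String[] args) {
        String id = startTrace();   // front-end portal
        record(id, "portal", 12);
        record(id, "ejb", 48);      // back-end EJB server
        record(id, "jms", 7);       // messaging layer
        record(id, "db", 230);      // back-end database
        // The slowest hop is now visible per interaction, not just in aggregate.
        System.out.println(traces.get(id));
    }
}
```

Because every tier's record carries the same identifier, the monitoring tool can reassemble the full path of a single interaction and localise a slow response to the specific component or resource responsible -- the capability the column describes above.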
The final word
The ability to monitor and manage across functional and technological silos will only become more critical as SOA deployments proliferate. Organisations that get the combination right will reap the benefits of an SOA. Those that don't will simply chalk up SOA as another computing fad and retrench -- to eventually join the laggards in adoption.
Here are some steps to keep in mind to avoid being in the latter group:
- Get educated about runtime and end-to-end monitoring in order to be prepared to proactively manage SOA deployments from the very start.
- Consider all the layers, servers and protocols you support, so you can avoid buying expensive add-ons that require costly framework integration later.
- Understand that you want tools that will meet the needs of the entire SOA support team, including operations staff and SOA infrastructure administrators, as well as the developers producing the business logic.
- Realise that any chosen solution should be designed for easy deployment and should be noninvasive.
Following these steps will provide a good start down the path to success with an SOA approach.
Seen or heard of an interesting product? Have comments or suggestions? Send them to me at: email@example.com.
About the author: Richard Ptak is founder and partner at Ptak, Noel & Associates. He has more than 30 years of experience in systems product management. He was VP at Hurwitz Group and D.H. Brown Associates and worked at Western Electric's Electronic Switch Manufacturing Division and at Digital Equipment Corporation. He is frequently quoted in the trade press and is the author of Manager's Guide to Distributed Environments.