Continual service improvement (CSI) is a key tenet of IT service management, a fundamental part of the ITIL service lifecycle and the principle that underpins the ISO/IEC 20000 service management standard. Without a mechanism in place for managing and measuring the effectiveness of services, and a means of gathering input to that process from all relevant parts of the organisation, it becomes extremely difficult to identify opportunities for improvement.
Part of the problem is putting the right metrics in place. IT professionals are traditionally very good at gathering performance data that appears to prove they are doing their job effectively, and there are numerous tools on the market to help them slice and dice this data in different ways. But do these metrics provide answers to the right questions? Technical data only becomes information when it is understood by both the supplier and the consumer of the service to which it refers, and can assist both sides in determining where potential problems lie and how to solve them. Identifying the right metrics, and incorporating them into the service level agreements that define the relationship between IT and its customers, are key to building successful CSI. As industry pundit Rob Stroud commented in a recent issue of ServiceTalk magazine, “We need to identify which metrics are suitable for aggregating and exposing to the business, and remove the remainder and leverage them for the purpose for which they were designed.”
But while the measurement of service effectiveness is a key part of CSI, it’s equally important to build a CSI culture across the organisation that allows everyone to contribute to the process of improvement. Recent research by itSMF UK shows that some businesses are clearly much better at this than others. In our research among service managers, 76% of respondents said they have a formal CSI programme within their organisation. This is a very encouraging sign, but it raises the question of how far the programme pervades the business and who is driving it.
When we asked respondents whether anyone has direct responsibility for CSI, the picture was more mixed: 32% said they have a full-time service improvement manager, 32% a part-time one, and 34% no improvement manager at all. One can only speculate how successful an improvement programme with no one at the helm is likely to be.
Slightly under half of our respondents said they track the business value of service improvement, while 56% incorporate CSI into their team or individual objectives. Again a very mixed picture, and as CSI consultant Jo Johns observed, “Only by tracking and reporting on business value and benefit are we likely to be able to continue with meaningful CSI activities in the future. Remember the adage, 'If you don't measure it, you can't manage it' - and you certainly can't provide evidence-based feedback to the decision makers in the organisation unless you are tracking, monitoring and reporting.”
But perhaps the most worrying sign in this survey was that over half of organisations had no clear mechanism in place for staff to contribute improvement ideas. Without such a mechanism, we’re losing a golden opportunity for feedback from the service users, who are often the people best placed to identify scope for improvements. To attempt CSI without involving all parties is surely a pointless exercise.
There are many ways to encourage service improvement across the organisation: training, networking, special interest groups and professional development schemes such as the itSMF’s priSM initiative all play an important part. But opening up the CSI discussion in a clear and simple way to the whole user community is perhaps the most important element of all, and will do wonders to enhance the IT/business relationship and the perceived value of IT.
Mark Lillycrop is marketing and publications manager at itSMF UK