A configuration management database (CMDB) is supposed to be the source of accurate information about the environment IT is managing. With accurate information every IT task or process is more effective and efficient:
- Troubleshooting is faster
- Compliance auditing becomes a breeze
- Analyzing resource allocation is easier
- Infrastructure changes produce fewer service outages that end up as front page news
IT administrators' daily to-do lists are getting longer and longer, and the information they need to complete these tasks is changing faster and faster. Gone are the good old days when application architectures were simpler, users were employees, each application had a clear set of predefined, testable transactions, service level agreements were based on server availability and software was upgraded on a quarterly schedule.
The good old days
Today's computing environment puts a lot of stress on IT administrators. Compliance means IT has to update systems as soon as patches are released. Web application architectures result in more clusters that must all have exactly the same configurations -- no drifting allowed. Virtualization and dynamic provisioning have converted the infrastructure into a vast ocean of constant change. Meeting service level agreements requires visibility into how technologies interact, and how infrastructure changes affect those interactions.
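The "no drifting allowed" requirement above can be checked mechanically. Here is a minimal sketch of the idea in Python; the node names, config fields, and majority-vote baseline are my own illustrative assumptions, not any product's method:

```python
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    """Hash a node's configuration so identical configs compare equal."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def find_drifted_nodes(cluster: dict) -> list:
    """Return names of nodes whose config differs from the cluster majority."""
    fingerprints = {node: config_fingerprint(cfg) for node, cfg in cluster.items()}
    values = list(fingerprints.values())
    # Treat the most common fingerprint as the cluster baseline (an assumption;
    # a real tool would compare against an approved reference configuration).
    baseline = max(set(values), key=values.count)
    return [node for node, fp in fingerprints.items() if fp != baseline]
```

For example, if two web servers are at patch 5.1 and a third is still at 5.0, only the third is flagged as drifted.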
The only way to get these daily to-do lists done is to minimize the effort required to access the information IT administrators need. That is what the CMDB is for -- a source of accurate information for this crazy world. However, a CMDB is not magic. It is a solution and, like any other important business solution, it is prone to pitfalls and problems that can leach away the IT and business benefits. Let us look at some common pitfalls and how to avoid them.
The problem of the plus-sized SQL database
Many people start off assuming that CMDB data is nothing more than a list of assets and attributes, but nothing could be further from the truth. The DB part of the name makes people think it's like Oracle or SQL Server -- a thing that can be bought and installed, and that, once the data is loaded, takes care of itself. This assumption needs to be corrected at the beginning, because it can cascade into a whole set of other problems.
An SQL database is an implementation mechanism; it is not a complete CMDB solution.
IT data has lots of complex relationships. Even if companies start the project with a list of IT assets, they will have to start linking the assets in complex ways. Each asset has a wealth of information that is important to different people at different times. Storing an asset's entire upgrade history, financial information, performance profiles and configuration profile all in one place may not be the best idea. Additionally, information about firmware, patches, applications, updates, and so on changes rapidly; therefore, a CMDB must have ways to manage change.
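The point about relationships and change history can be made concrete with a small sketch. The class and field names below are my own illustration of why a flat asset list is not enough, not any CMDB product's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ConfigChange:
    """One entry in an asset's change history."""
    attribute: str
    old_value: str
    new_value: str
    changed_at: datetime

@dataclass
class Asset:
    """A configuration item with typed relationships and a change history."""
    name: str
    attributes: dict = field(default_factory=dict)
    # Typed links to other assets, e.g. {"runs_on": [...], "depends_on": [...]}
    relationships: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def update(self, attribute: str, new_value: str) -> None:
        # Record the change rather than silently overwriting the old value,
        # so "what changed, and when?" stays answerable.
        old = self.attributes.get(attribute, "")
        self.history.append(ConfigChange(attribute, old, new_value, datetime.now()))
        self.attributes[attribute] = new_value
```

Even this toy model shows the three things a plain table lacks: typed links between assets, per-audience attribute groupings, and an audit trail of every change.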
Education is the way to avoid this mistaken assumption. But I'm not talking about formal classroom instruction. I am referring to on-the-ground education -- talking to people and having a set of examples that will resonate with the different audiences that need to be solution advocates. For example, the analogy of pulling together information about a customer will resonate with business managers. Discuss how addresses, email histories, support history and buying patterns are all in different places but must be linked together to manage the customer better. Similarly, discussions with application administrators should revolve around their need for a system that keeps accurate configuration histories so it is easy to see what changed when a problem arises.
Failing to link the project to business needs
This pitfall can happen at different times for different reasons. For example, a hot new trend can refocus business executives during the planning phase, which can derail the project if it is viewed as only an "IT thing." This can also creep into projects during executive changes. In an effort to please new bosses, project planners start shifting the focus, which invites scope creep if the initial business value is not clear. This can also happen when IT is trying to plan a new implementation phase, such as expanding use to another application or to include network data. The next phase can stall because the business has not clearly seen the early benefits, because the IT-business links were not clearly defined for the first phase of the project.
The problem with having weak links to the business is that different advocates end up with wild expectations, which are fuelled by news articles or vendor case studies and not by the reality of the particular enterprise situation. The business may also underfund the first project or lack the strong commitment needed to continue funding even during economic slowdowns. This strong commitment is also required in the long run, as the project will eventually cut across departmental boundaries.
Avoiding these problems means figuring out who the business advocates are, what their hot-button issues are, and how to measure and report success in terms they will understand. For example, if the business need is to reduce the number of performance problems, then the CMDB project must report a decrease in the number of problems in order to demonstrate its business value. In addition, project planners must also spend some time thinking about what to do when those advocates have moved on, whether through firing, promotions, or retirement. There must be some level of succession planning so that it is easy for the replacement to decide to continue the project without scope creep.
Failing to manage how IT data changes
This tends to happen as planners and project architects get into the weeds of designing IT meta-data structures and initially populating the solution with data about thousands of systems, devices and relationships. It is easy to forget that the data has to change and that data changes must be controlled for the solution to be accurate and complete enough to be usable beyond the first week or two.
This failure can also creep in while IT administrators are using the solution. If data-change management processes have weak controls, they will be ignored. For example, system administrators are told to use ABC tool to make changes to Linux servers because that tool automatically updates the CMDB. However, if they can still log into the servers to make manual changes, then those ad-hoc changes may never make it into the CMDB. When this happens, IT has lost the value of an accurate information source: because the IT information is wrong or incomplete, staff cannot arrive at the correct conclusions or decisions. No one will use the CMDB once people find themselves wasting enormous amounts of time validating the data in the solution.
To avoid this problem, project designers must determine not only where the CMDB will get its data from (so that it can automate collection, federation, and reconciliation), but also how that data changes. There can be many sources of data changes. People interacting with the solution, automated provisioning policies, and automated power-saving policies are just a few examples. IT must find noninvasive ways of discovering actual changes and controlling them. These are important in part because people do not like straitjackets, but also because it is the only way the system will be able to keep up with the pace of infrastructure change, especially in large enterprises.
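Noninvasively discovering actual changes usually comes down to comparing what the CMDB says about an asset with what a discovery scan actually finds. A minimal reconciliation sketch, with attribute names that are purely illustrative:

```python
def reconcile(recorded: dict, discovered: dict) -> dict:
    """Compare the CMDB's record of an asset with discovered reality.

    Returns {attribute: (recorded_value, discovered_value)} for every
    mismatch, so drift can be reviewed instead of silently accumulating.
    A value of None means the attribute exists on only one side.
    """
    drift = {}
    for attr in set(recorded) | set(discovered):
        rec, disc = recorded.get(attr), discovered.get(attr)
        if rec != disc:
            drift[attr] = (rec, disc)
    return drift
```

Run against a server that a system administrator patched by hand, this surfaces the ad-hoc change (say, a Java upgrade the CMDB never heard about) as a reviewable item rather than a silent inaccuracy.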
Ignoring the people using the information
Many technology enthusiasts believe that if you build it, people will figure out how to use it. This is not true for IT management solutions. Only people with time and/or passion will play with a new system long enough to figure it out. Most IT professionals are too harried to have either time or passion. For IT staff to adopt a new solution, it must slide smoothly into how they actually work right now. Once those current tasks get easier, they will have more time to figure out what else it can do for them.
Another related problem occurs after the solution is adopted. Most users want to tweak the solution once they have used it for some time, and the grumbling starts if the CMDB designers do not build some user flexibility into the solution. Consider Salesforce.com: the reason many people like the solution is that they can tweak it as they use it. They can create their own reports and position the data input fields as they like. Basically, the solution feels like it is customized for each user, but the integrity of the information system is maintained.
Avoiding this pitfall means internalizing the fact that people don't want a CMDB; they really want access to information to do their jobs on their terms. Delivering access to information requires understanding what people look up, what level of detail they need, when they do it, what info they need to share, and what they need to hand off to other people in the process. It also requires standard interfaces that get people started quickly, but are also flexible enough to personalize without jeopardizing data integrity.
Assuming that there will be only one CMDB technology
This assumption is the natural follow-up to the idea that a CMDB is one big data store. However, it is usually incorrect even before the CMDB project gets started. Most enterprises already have multiple CMDB technologies simply because most enterprises use different solutions to manage different technology silos. They always have and they always will; that is not going to change.
What has changed is that every IT management vendor is using CMDB technology as part of their solution strategies. The lesson the vendors learned from their earlier adventures in management frameworks is that application integration without data integration and IT processes does not work. Therefore, this time around they are using CMDB technology to implement the necessary data integration across their product lines. This means that when companies upgrade their network, database, or server management solutions they will have a version of that vendor's CMDB technology embedded in the solution.
CMDB diversity also occurs when different IT departments take charge and implement a small-scale CMDB for their specific purposes. For example, the network management team, in dire need of faster device changes, implements a CMDB-based solution to automate policy-based changes, while the Web application manager, tired of getting blank stares when asking "what changed" while troubleshooting, deploys an application monitoring solution that provides "a timeline of configuration changes." These different solutions will have to be integrated somehow with the enterprise-wide CMDB effort, which creates potential problems. Enterprises end up being integrators of multiple CMDBs, which is expensive to implement and maintain. Alternatively, IT departments get locked into unproductive political battles over which CMDB technology is the best one, and no one really wins those battles.
The best way to avoid this is to have an IT-wide, standards-based CMDB architecture that all parties agree to from the beginning. Large-scale, enterprise-wide SOA projects have reference architectures, so why shouldn't this large-scale IT information system have one as well? With a reference architecture in place it becomes easier to demand that management solution vendors provide integration adaptors that streamline and simplify interaction between multiple CMDB technologies. Even with every vendor's commitment to adhere to basic Web services integration standards, cross-vendor integration will not happen organically. It happens because there is strong customer demand.
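One way a reference architecture keeps multiple CMDBs manageable is by defining a common adaptor interface that each vendor-specific or departmental CMDB must implement. The sketch below is hypothetical; the class and method names are my own assumptions, not any vendor's API:

```python
from abc import ABC, abstractmethod

class CMDBAdapter(ABC):
    """Interface every departmental/vendor CMDB implements, so the
    enterprise-wide layer can query them all uniformly."""

    @abstractmethod
    def get_item(self, item_id: str) -> dict:
        """Return a configuration item normalized to the shared schema."""

    @abstractmethod
    def search(self, attribute: str, value: str) -> list:
        """Find all items whose attribute matches the given value."""

class NetworkTeamCMDB(CMDBAdapter):
    """Hypothetical adaptor wrapping the network team's own CMDB."""

    def __init__(self, records: dict):
        self._records = records

    def get_item(self, item_id: str) -> dict:
        return self._records[item_id]

    def search(self, attribute: str, value: str) -> list:
        return [r for r in self._records.values() if r.get(attribute) == value]
```

The enterprise layer then talks only to `CMDBAdapter`, so swapping or adding a departmental CMDB means writing one adaptor rather than re-integrating every consumer.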
Designed to win
There is no getting around the fact that IT needs a better information system to support the business, so it is in everyone's best interest to avoid these pitfalls. Most of these pitfalls start off as mistaken assumptions: that it is an IT-only project with no business implications; that it is a big repository of asset data that staff will figure out how to use on their own; that people interact with data in only one way all the time; or that there will be only one right implementation. These sorts of assumptions lead to disastrous or unusable implementations.
Avoiding these problems involves spending some time thinking about where IT data lives, how people actually work, the use cases that can deliver some early wins, and the politics of adoption. Only by considering these issues can IT design a solution capable of delivering measurable benefit in the short term and flexible enough to expand to incorporate new infrastructure, processes, use cases and business needs.