The hardware and software costs of a PC represent a small portion of the total cost of ownership (TCO). What contributes to hidden costs and what can companies do to reduce them?
The root of the problem lies, clearly, in allowing the end user the flexibility to make changes to their PC configuration. Tackling this problem brings immediate cost benefits, as the user no longer has available such work-displacement activities as downloading and installing a new screen saver or changing their desktop layout.
The aim should be to create as standardised an environment as possible. This means that support staff do not have to spend time working out where to find things: they know how the application software is configured, and they can build up a repertoire of fast fixes to apply across the organisation rather than working things out from scratch time and time again.
The first step in this process is to find out what hardware and software is actually installed on the network. Network development in many organisations has been unplanned to a greater or lesser extent, and many organisations do not have a complete picture of which devices are connected to their servers. Indeed, experience has shown that some large organisations do not even know how many servers they have installed, let alone the number of workstations and other devices.
Building up an inventory of hardware and software need not be an especially painful task. There are numerous software products available which can create an asset management database automatically. McAfee/Saber's SiteInventory and Computer Associates' AimIT are two examples of software which can be used to gather inventory data automatically across an enterprise network. This software builds a database of hardware configuration information listing such details as hard disk capacity, processor type, installed memory, screen type and so on. This information is of considerable value to an organisation. For example, it means that support personnel can find out the hardware configuration of a machine without having to look at it physically (or asking the end user to find out and supply the information). It makes the analysis of hardware problems easier and makes it possible to discover whether machines are under- or over-specified for their tasks. Finally, it can track changes to the hardware: if the memory on a PC has suddenly decreased, then either a chip has failed or someone has opened the machine and walked off with the memory (and this does happen!).
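The kind of per-machine record such tools build up can be illustrated with a minimal sketch. This is not how AimIT or SiteInventory work internally; it simply shows, using only the Python standard library, the sort of hardware profile an inventory agent might report back to a central database.

```python
import json
import os
import platform
import shutil

def hardware_inventory():
    """Collect a minimal hardware profile of the local machine.

    Commercial inventory tools gather far more detail (installed
    memory, screen type, etc.); this sketch shows the principle.
    """
    total, used, free = shutil.disk_usage(os.path.abspath(os.sep))
    return {
        "hostname": platform.node(),
        "processor": platform.processor() or platform.machine(),
        "os": f"{platform.system()} {platform.release()}",
        "disk_capacity_gb": round(total / 1024**3, 1),
        "cpu_count": os.cpu_count(),
    }

if __name__ == "__main__":
    print(json.dumps(hardware_inventory(), indent=2))
```

An agent like this, run at logon, would let support staff query the central database instead of visiting the machine.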
Creating a software inventory is just as vital. Not only does the inventory tell you what software is installed on each machine, it also lists the versions. This makes the management of software licences an easier task, and may provide cost savings in doing so. It also means that steps can be taken to ensure that users do not install unauthorised software and that upgrades are not performed on an ad hoc basis by end users. If all users are on the same version of an application, such as Word or Excel, then support personnel can be geared up accordingly.
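The licence-management benefit follows directly from having version data on record. The following sketch, with hypothetical machine names and applications, shows how a software inventory reduces licence management to a simple count per application/version pair:

```python
from collections import Counter

# Hypothetical inventory data: one record per (machine, application, version).
installed = [
    ("PC-001", "Word", "97"),
    ("PC-002", "Word", "95"),
    ("PC-002", "Excel", "97"),
    ("PC-003", "Word", "97"),
]

def licence_counts(records):
    """Count installed copies of each application/version pair --
    the figure to reconcile against the licences actually purchased."""
    return Counter((app, ver) for _machine, app, ver in records)

counts = licence_counts(installed)
# counts[("Word", "97")] is 2: two Word 97 copies in use across the network
```

Comparing these counts against purchase records shows immediately where the organisation is over- or under-licensed.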
Creating the inventory is a first step, but once it is in place it needs to be kept updated and any changes to it flagged. Any changes which are not authorised can then be acted upon, ensuring that compliance with company standards is constantly enforced. However, despite its importance, this remains a reactive process: the end user makes an unauthorised change, and you flag it and act upon it afterwards. Are there more active steps that can be taken to ensure end-user compliance?
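The flagging step amounts to comparing a fresh scan of a machine against its recorded baseline. A minimal sketch, with hypothetical application names:

```python
def flag_changes(baseline, current):
    """Compare a machine's recorded software baseline with a fresh
    scan and report anything added, removed, or re-versioned.
    Both arguments map application name -> installed version."""
    added = {app: v for app, v in current.items() if app not in baseline}
    removed = {app: v for app, v in baseline.items() if app not in current}
    changed = {app: (baseline[app], current[app])
               for app in baseline.keys() & current.keys()
               if baseline[app] != current[app]}
    return added, removed, changed

baseline = {"Word": "97", "Excel": "97"}
current = {"Word": "95", "Excel": "97", "Doom": "1.9"}
added, removed, changed = flag_changes(baseline, current)
# added flags the unauthorised install; changed flags the downgrade of Word
```

Each flagged entry is then a support ticket: investigate, and restore the standard build.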
Many operating systems, including Windows 9x and Windows NT, provide network administrators with the tools to restrict the range of activities that end users can perform. Common activities such as changing the desktop wallpaper or colour scheme can be disabled, removing common time-wasting activities. More importantly, it is possible to restrict access to configuration options, such as the various 'control panel' applications within Windows. It is even possible to configure the operating system so that only applications on an approved list can be executed. Obviously, where such a decision is taken, it is important to restrict access to such items as the DOS command line and the 'Run...' option on the Start menu. All of these options are available for Windows 9x (via the policy editor program poledit, which is well hidden on the Windows 9x CDs), as well as for the more security-conscious Windows NT operating system.
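Under the hood, poledit works by writing restriction values into the user's registry. As an illustration only, a registry fragment along the following lines removes the 'Run...' option and limits the user to an approved application list; the value names shown are the standard Explorer policy values, but any deployment should be built and tested through poledit itself, since a mistaken RestrictRun list can lock the administrator out too.

```reg
REGEDIT4

; Illustrative policy values of the kind poledit manages.
; Apply only to a test machine first.

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
; Remove the Start menu 'Run...' option
"NoRun"=dword:00000001
; Only allow the applications listed in the RestrictRun subkey
"RestrictRun"=dword:00000001

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\RestrictRun]
"1"="winword.exe"
"2"="excel.exe"
```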
An alternative to configuring the operating system in the manner just described (which may be difficult to administer on a heterogeneous network running different desktop operating systems) is to use third-party software to enforce standards. Products such as Reflex's Disknet Data Security for Windows NT and Computer Associates' AimIT can provide very high levels of control. The latter product, for example, can track changes to the configuration and version of all software on machines connected to the enterprise network. Where it detects an unauthorised version of software, or where the configuration has changed sufficiently, it will automatically de-install the offending article and install the standard version or configuration.
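The detect-and-restore logic such products apply can be sketched as follows. This is an illustration of the idea, not AimIT's actual mechanism: the `deinstall` and `install` callables stand in for the real package operations, and the standard build list is hypothetical.

```python
# Hypothetical standard build: application -> approved version.
STANDARD = {"Word": "97", "Excel": "97"}

def enforce(machine_software, deinstall, install):
    """Bring a machine back to the standard build: anything at the
    wrong version (or not on the approved list) is removed and,
    where a standard version exists, reinstalled."""
    actions = []
    for app, version in machine_software.items():
        standard = STANDARD.get(app)
        if version != standard:
            deinstall(app, version)
            actions.append(("deinstall", app, version))
            if standard is not None:
                install(app, standard)
                actions.append(("install", app, standard))
    return actions
```

Run against a machine carrying an old Word and an unauthorised game, this would de-install both and reinstall the standard Word, leaving a compliant Excel untouched.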
Automated distribution of software across the network goes hand in hand with this type of standards enforcement. Rolling out software piecemeal is time-consuming, and it means that for extended periods there may be a number of different versions of software, and/or different applications, to support. Automatic distribution ensures rapid deployment of software upgrades and new installs, and ensures that a standard configuration is used across the network.
In theory this sounds like a relatively straightforward task. The install package is loaded onto a server, and when a user logs on for the first time the install program is invoked and the process proceeds smoothly. In practice the task is a good deal more complicated. Software is increasingly complex, with numerous dependencies on operating system files such as DLLs and executables. In an environment where end users, or indeed support personnel, have installed different applications and versions of software, there may be different versions of these DLL files on different machines, some of which may not be compatible with the software you wish to install automatically. This problem is particularly prevalent across the different flavours of the Windows operating system.
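A pre-flight dependency check is one way an install package copes with this. The sketch below (with hypothetical file names and version numbers) shows the comparison an installer must make before touching a machine:

```python
def missing_dependencies(required, present):
    """Return the shared files that would block an automated install:
    either absent altogether or older than the version the package
    needs.  Versions are compared as tuples, e.g. (4, 72)."""
    problems = {}
    for dll, needed in required.items():
        have = present.get(dll)
        if have is None or have < needed:
            problems[dll] = (have, needed)
    return problems

required = {"comctl32.dll": (4, 72), "msvcrt.dll": (6, 0)}
present = {"comctl32.dll": (4, 70), "msvcrt.dll": (6, 0)}
problems = missing_dependencies(required, present)
# comctl32.dll is flagged as too old; msvcrt.dll passes
```

A machine that fails the check is either upgraded first or flagged for support, rather than left with a half-working install.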
Automated software installation packages must therefore be sophisticated enough to cater for this kind of eventuality. However, as should be clear by now, an environment where standards compliance is enforced reduces this problem considerably. It should also be clear that such automated installation procedures should be thoroughly tested before being deployed across the network.
Ensuring a standard environment will reduce the scope for end users to play around with their computers rather than perform their job functions. But there is obviously a price to pay: an investment in the software and support infrastructure needed to create a standard environment. And, of course, standard environments can only stay standard by being subject to a high degree of control. It only takes one end user to bring in a CD from a computer magazine, or a floppy disk from home, to deviate from the standard. Apart from having an acceptable use policy in force which expressly forbids this kind of activity, what else can a company do?
Firstly, access to hardware can be restricted. Even where machines have floppy disk and/or CD drives, these can be disabled in software. Some software, such as Reflex's Disknet Data Security, ensures that access to removable storage devices is strictly controlled. It even has the capability of 'branding' a company's diskettes so that they are unreadable on external machines.
Secondly, a more radical alternative is to move to thin clients rather than fully functional PCs. Thin clients are workstations with no local storage which run directly from the network. They function much like the dumb terminals attached to the mainframe computers of yesteryear, the main difference being that thin clients usually feature good-quality colour monitors and on-board memory and processing power.
The advantages of thin clients are numerous: they can only run standard software from the network; there is little room for the end user to configure the system; they are centrally controlled; and the end user cannot install non-standard software. And, of course, thin clients are low-cost machines compared to fully functional PCs. However, where you already have a population of PCs, migrating to thin clients may not be an immediate option. It is, though, an option to explore, and in the long term the cost benefits may make it the best one.
Finally, having inventoried and secured the PC, created the standard environment, reduced the time the end user spends tweaking their computer and improved the efficiency of dedicated support staff, is there anything else to do? The answer is, sadly, yes. Granting internet access to all users is a mixed blessing. Not only does surfing the internet represent a very tempting alternative to working, it also eats into the bandwidth available to those personnel who are using it as part of their job function. Again this incurs an appreciable cost: time spent not performing work, and extra time spent on the internet by those who do need it, because of the reduced bandwidth available.
The internet also makes new software, and upgrades to existing software, easily available. An upgrade may be only a mouse click away, representing yet another source of non-standard software. It is therefore advisable to restrict internet access to those employees who explicitly require it as part of their job function, and to make sure that these people are aware of your organisation's acceptable use policy, which, amongst other things, should forbid software downloads. Alternatively, software needs to be put in place so that FTP (the file transfer protocol commonly used for internet downloads) is restricted even if web access is not.
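The essence of that restriction is a proxy or gateway that passes ordinary web traffic but refuses file-transfer requests. A minimal sketch of the filtering decision, assuming a proxy that sees each requested URL:

```python
from urllib.parse import urlparse

# Web browsing is permitted; the FTP download protocol is not.
ALLOWED_SCHEMES = {"http", "https"}

def permitted(url):
    """Proxy-style check: allow ordinary web requests but refuse
    ftp:// URLs, so casual file downloads are blocked even though
    web access remains open."""
    return urlparse(url).scheme.lower() in ALLOWED_SCHEMES
```

A real deployment would do this at the firewall or proxy server rather than in application code, but the policy being enforced is the same.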
The aim, remember, is to reduce the cost of ownership of PCs. Even if an organisation does not go for the optimal solution of switching to thin clients, the range of policies detailed here will turn a PC into a 'virtual' thin client. The latest figures from the American IT research firm Datapro show that switching to thin clients can save an organisation around 80% in support costs, and that thin clients have a total cost of ownership some 25-35% lower. These figures are compelling enough reason to address configuration management as a matter of urgency.