The past 40 years have witnessed a steady transformation in the way computers are used and who has ultimate control over IT.
In the mainframe era, when a company ran a single large computer, the data processing manager was king of the hill. But as technology advanced, the number of computers multiplied, and each time it did, IT lost a little more control.
In the desktop PC explosion of the 1990s, analysts warned of the huge cost of supporting end-users running their own PCs. Their answer was simple: IT departments needed to bring back the old-school command and control of the mainframe era.
This managed desktop environment, which persists to this day, allows a central IT team to control the installation of anti-virus software and applications, manage user log-ins and fix problems remotely.
It has worked for a while, but the times they are a-changin'.
The global nature of business means users often need to share information on an ad hoc basis with external partners. For such work, the tightly controlled access to IT systems that has become the norm is both inappropriate and expensive to administer.
Thanks to wikis, blogs and websites that host images, audio and video, end-users can build feature-rich websites without any involvement from IT. A tech-savvy user can even pull together their own applications, using mash-ups to integrate tools such as Google Earth.
This is the future of IT. It is important to embrace these developments rather than lock them out. But such flexibility comes at a cost, and IT directors will need to ensure that the new end-user freedom does not become a burden on an already overstretched IT department.

Comment on this article: [email protected]