Three years ago Microsoft chairman Bill Gates sent an internal memo announcing a new focus for Microsoft. Security, privacy and reliability were to become the watchwords of the organisation.
The birth of Microsoft's Trustworthy Computing initiative was at first greeted with scepticism by IT departments and analysts. In their minds, Microsoft was responsible for the vulnerabilities that were leaving systems open to attack, and for the awkward patching regime that IT departments had no choice but to follow.
Three years on, the Trustworthy Computing initiative looks to be substantially more than a publicity exercise. Scott Charney, vice-president for the programme, said security, reliability and privacy are now firmly embedded in the fabric of the organisation. The initiative has changed the way Microsoft designs and develops its code, he added.
The most fundamental change is the creation of what Microsoft calls its 'security development lifecycle'. This means that security is considered in all new code development, from the initial concept through to final testing.
'We now build documented threat models, and as we architect the code, we test against the threat models,' said Charney.
Fewer critical vulnerabilities
The programme is making a demonstrable difference. Recent products, including Windows Server 2003 and the latest Exchange 2000 and SQL Server 2000 service packs, have seen a 67% fall in the number of publicly reported serious or critical vulnerabilities, said Charney.
Microsoft is also developing ways to make it easier for IT departments to upgrade from less secure legacy systems. One answer is virtualisation, the technology that lets organisations run legacy applications on the same hardware as current versions of the software.
Virtualisation will give organisations the time to migrate their business applications to more secure operating systems, said Charney.
Patching is another area where users are beginning to see results. Microsoft has integrated its business units, reducing the number of patch installers that customers need from eight to two.
When combined with Microsoft's decision to release patches on the same day each month, these changes are making life easier for IT departments.
Patch sizes have been reduced by about 70%, said Charney, and patches are now reversible. This means that IT departments no longer have to worry that installing a patch may have unpredictable consequences that cannot be stopped, he said.
Each patch also registers with the operating system in a different way, making it possible for organisations to use scanning tools to check which patches they have in place.
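The registration and rollback behaviour Charney describes can be illustrated with a short sketch. Everything here is invented for illustration (the patch IDs, the in-memory "registry", the function names); it is not Microsoft's actual update mechanism, only the idea of recording patches so a scanning tool can spot gaps:

```python
# Hypothetical sketch of patch registration, rollback and scanning.
# This is NOT Microsoft's real update mechanism -- just the concept.

installed = set()  # stands in for the operating system's patch registry


def install_patch(patch_id):
    """Record a patch with the 'operating system' when it is applied."""
    installed.add(patch_id)


def rollback_patch(patch_id):
    """Reversible patches: remove the record when a patch is backed out."""
    installed.discard(patch_id)


def scan(required):
    """A scanning tool: report which required patches are missing."""
    return sorted(set(required) - installed)


install_patch("MS05-001")
install_patch("MS05-002")
rollback_patch("MS05-002")  # the unexpected happened; roll it back
print(scan(["MS05-001", "MS05-002", "MS05-003"]))
# → ['MS05-002', 'MS05-003']
```

The scan reports both the patch that was rolled back and the one that was never applied, which is the inventory picture an IT department needs before deciding what to deploy next.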
These developments will make it easier for organisations to deploy patches quickly before hackers exploit vulnerabilities.
This is urgently needed. In 2001, 331 days elapsed between the release of the patch and the appearance of the Nimda worm. In January 2003, Slammer appeared 180 days after its patch. In August the same year, Blaster struck just 25 days after its patch. And most recently, the Witty worm appeared only 48 hours after the patch.
In the longer term, Microsoft is working on tools that will help organisations focus their patch testing on the applications and functions that are most likely to be affected.
'The default state today is to test the patch but to be reluctant to deploy it just in case. The default state in the future will be to test the patch with an effective testing model and then deploy the patch. And if the unexpected happens, we can just roll it back and keep the business functioning while we look at what the issue may be,' said Charney.
Microsoft's next priority will be to do for privacy what it is doing for security. This means incorporating features into products that will help organisations ensure that the personal data of their customers is not only kept secure from external hackers, but can only be accessed by those people within the organisation who are entitled to see it.
The next version of the SQL database, for instance, will contain built-in encryption. Combined with Microsoft's Active Directory, this will enable IT departments to segregate data, making it accessible only to those who have the right to view it.
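The segregation idea can be sketched in a few lines. The group names, users and records below are invented for illustration; in practice this would be SQL Server permissions mapped onto Active Directory security groups rather than a Python dictionary:

```python
# Toy sketch of role-based data segregation. All names and records are
# hypothetical; the real mechanism would be database permissions tied
# to Active Directory group membership.

directory_groups = {   # stand-in for Active Directory membership
    "alice": {"hr"},
    "bob": {"sales"},
}

records = [
    {"table": "salaries", "needs": "hr", "row": "alice: 40000"},
    {"table": "leads", "needs": "sales", "row": "Acme Ltd"},
]


def visible_rows(user):
    """Return only the rows whose required group the user belongs to."""
    groups = directory_groups.get(user, set())
    return [r["row"] for r in records if r["needs"] in groups]


print(visible_rows("bob"))  # → ['Acme Ltd']
print(visible_rows("eve"))  # unknown user sees nothing: []
```

A user outside every group sees no rows at all, which is the point: access follows directory membership, not database-wide credentials.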
Identity management is another priority, said Charney. Microsoft is working on ways to incorporate two-factor authentication into its products, while at the same time trying to ensure that the log-on process is kept as simple as possible for end-users.
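Two-factor authentication of the kind Charney mentions is commonly built on one-time passwords generated by a hardware token or card. As an illustration of the mechanism (not of any Microsoft product), here is a minimal counter-based one-time-password generator of the sort later standardised as HOTP in RFC 4226; the shared secret below is the RFC's published test value:

```python
import hashlib
import hmac
import struct


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password (the HOTP scheme, RFC 4226).

    The server and the user's token share `secret` and stay in step on
    `counter`; presenting the short code proves possession of the token,
    the 'second factor' alongside the ordinary password.
    """
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 4226 test vector: secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Because the code changes with every use, a password stolen in transit is worthless a moment later, while the user only ever types a six-digit number, keeping the log-on process simple.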
If the Trustworthy Computing initiative is to work, it cannot remain solely a Microsoft initiative. It needs to involve other suppliers, from hardware manufacturers to application developers.
Microsoft is already working in partnership with AMD, Intel, Dell and other hardware and software suppliers to agree standards for the hardware and software needed for the next generation of secure PCs, which will incorporate digital rights management technology.
But, said Charney, governments also need to be involved, particularly to give backing to long-term security research that the suppliers cannot fund.
'One of the gaps that governments could practically fill is doing the basic long-term R&D in security and how to build survivable distributed systems,' he said.
What Trustworthy Computing promises
- Trustworthy Computing will be the basis for a user's decision to trust a system
- Users can expect that systems are resilient to attack, and that the confidentiality, integrity, and availability of the system and its data are protected
- A user is able to control data about themselves, and those using such data adhere to fair information principles
- Users can depend on the product to fulfil its functions when required to do so
- The supplier of a product behaves in a responsive and responsible manner.
This was first published in April 2005