Computer security, and e-security in particular, is one of the hottest IT topics at the moment. Security breaches at several high-profile companies within the UK - breaches that have almost certainly been mirrored worldwide - have moved the issue onto the big stage.
As a result, Butler Group firmly believes that e-security is the IT community's number one concern, and as such represents the greatest threat to the future growth of e-trading. It has to be said that IT security now needs to be all-inclusive. But it isn't and, realistically, security never was just an optional extra. From a business perspective it has been a factor within IT ever since the first air-conditioned data centres were created with perimeter-based access controls.
However, this was just stage one in the battle that has become a full-blown war, and as computing has developed, IT security has become a much more complex issue.
Today, the range of points from which users can access computer systems makes the question of who needs to be protected a very simple one to answer: every computer user and every computer system needs good-quality security in place. Many systems are accessible from the Internet, which makes them especially vulnerable to viruses and similar malicious attacks. Networked systems that allow access from the Internet, extranets and intranets can be vulnerable to a whole range of violations, and therefore need to be adequately protected.
Commercial systems where high levels of customer access are critical to success, such as Internet service providers (ISPs), are vulnerable to a range of security violations, with denial of service attacks proving to be the most high-profile issue.
It is still surprisingly easy to conduct a denial of service attack. The attacker simply needs to uncover and exploit a weakness in the target's armour, effectively engaging in a form of guerrilla warfare. The hacker sends so many requests to a server, in an attempt to flood it, that it becomes impossible for other users to establish a connection. The attacked server is then blocked to any other visitors because it is far too busy trying to respond to the wave of requests put forward by the attacker. With the server having no time to handle the incoming requests, it becomes vulnerable to crashes, which have the potential to bring down the system and any sites that it is hosting.
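The flooding mechanism described above can be illustrated with a toy simulation - a server modelled as a bounded request queue that an attacker fills before legitimate users arrive. All names, numbers and behaviour here are purely illustrative, not a model of any real server.

```python
# Toy model of a flood attack: a server with a bounded request queue.
# Once the attacker has filled the queue, legitimate requests are turned
# away. The queue limit and request counts are invented for illustration.
QUEUE_LIMIT = 100

def simulate_flood(attacker_requests, legitimate_requests):
    queue = []
    dropped = 0
    # The attacker floods first, monopolising the queue.
    for r in range(attacker_requests):
        if len(queue) < QUEUE_LIMIT:
            queue.append(("attacker", r))
    # Legitimate users now find the queue full and are refused.
    for r in range(legitimate_requests):
        if len(queue) < QUEUE_LIMIT:
            queue.append(("user", r))
        else:
            dropped += 1
    return len(queue), dropped

queued, dropped = simulate_flood(attacker_requests=10_000, legitimate_requests=50)
print(queued, dropped)  # queue holds 100 attacker requests; all 50 user requests dropped
```

The point of the sketch is that the attacker need not break anything directly: simply occupying the server's capacity is enough to deny service to everyone else.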
To say that the threat from virus attacks is one of the most difficult security problems facing the IT community is really to understate the problem. Virus infection cannot simply be dealt with as a single issue: it is a wide-ranging subject, and the different forms and mutations that virus attacks can take constitute a study in themselves. Generally, viruses are spread when a rogue piece of software is executed. Often this is done by accident, usually because the virus presents itself as an innocent message or e-mail attachment that says to the user "I need to be read".
Once executed, viruses can attach themselves to legitimate software or applications, behaving like a parasitic leech that replicates itself each time the legitimate software is used, enabling it to cross-infect systems and networks.
The spread of virus-infected systems and software is, of course, nothing new. Infected files have been passed between computer systems ever since it became possible to transfer information between machines. However, the use of the Internet has made the potential for sending and receiving new viruses much greater.
One positive aspect of the virus protection marketplace is the maturity and comprehensive nature of many anti-virus products, such as Norton and Dr Solomon. Such products provide good levels of protection, but are only effective if users keep their virus definitions up to date.
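Why keeping definitions up to date matters can be seen from a minimal sketch of signature-based scanning, the simplest technique such products use: hash a file's contents and compare the result against a database of known-bad signatures. The signature set below is invented for illustration; real products use far richer pattern matching, and a scanner can only ever catch viruses whose signatures it has been given.

```python
import hashlib

# Illustrative database of known-bad signatures (SHA-256 of file contents).
# A real anti-virus product ships and constantly updates a far larger set.
KNOWN_BAD_SIGNATURES = {
    hashlib.sha256(b"malicious payload example").hexdigest(),
}

def is_infected(file_bytes: bytes) -> bool:
    """Flag a file if its content hash matches a known-bad signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SIGNATURES

print(is_infected(b"malicious payload example"))  # True
print(is_infected(b"an innocent document"))       # False
```

A new virus, or a mutated one, produces a hash that is not yet in the database - which is exactly why an out-of-date scanner gives a false sense of security.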
Biometrics, with its sophisticated use of fingerprint and retina scanning, voice recognition and its links to smartcard technology, could provide an alternative solution.
However, biometrics seems to have been an emerging technology for an eternity. Perhaps because it requires physical or behavioural characteristics to confirm authenticity, there has been a perceived reluctance on the part of users to embrace the technology. Biometric systems, however, are already in place to support some IT security systems. To date most are located in high-security areas, and penetration into the mass marketplace has not yet happened. But security is heading in this direction, and with the costs of support systems and associated equipment reducing all the time, there will be further justifications for the use of biometrics within mainstream security.
When administered properly, a combination of public and private key cryptography offers one of the most secure methods of protecting information, especially when it is transferred over public networks. Be aware, though, that one of the key issues in making the best use of encryption and decryption technology is to identify clearly which information needs to be protected. Some information is sensitive - for example key company performance details, or credit card details - and would clearly benefit from the use of robust security applications. Other information, with little or no third-party value, would not benefit from high-level security, and the use of encryption and decryption processing would prove detrimental, as by its very nature the technology carries an overhead in terms of its effect on processing speeds.
Correctly utilising cryptographic techniques is a key issue, and accurately targeting their use at sensitive and premium information will help support the efficient use of systems and their throughput of data.
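The targeting decision described above can be sketched as a simple classification step: encrypt only the fields marked sensitive, and pass everything else through untouched. The field names are invented for illustration, and the encrypt() function here is an obvious placeholder, not real cryptography - a production system would call a vetted cipher (for example AES via a cryptographic library) at that point.

```python
# Sketch of targeting encryption only at sensitive fields. The field
# classification and the encrypt() stand-in are purely illustrative; a
# real system would invoke a vetted cipher, not this placeholder.
SENSITIVE_FIELDS = {"credit_card", "performance_figures"}

def encrypt(value: str) -> str:
    # Placeholder marking a value as protected; stands in for a real cipher.
    return "<encrypted>" + "*" * len(value) + "</encrypted>"

def protect_record(record: dict) -> dict:
    return {
        field: encrypt(value) if field in SENSITIVE_FIELDS else value
        for field, value in record.items()
    }

record = {"name": "ACME Ltd", "credit_card": "4111111111111111", "country": "UK"}
protected = protect_record(record)
print(protected["name"])         # left in the clear: little third-party value
print(protected["credit_card"])  # protected: sensitive
```

Only the sensitive fields pay the processing-speed overhead; the rest of the record flows through the system at full throughput.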
For the majority of hackers, the issues are not commercial. For these people there is often little opportunity for financial gain but, for some, the ability to get inside a 'secure' system and leave their calling card is sufficient.
Unfortunately, the calling cards are rarely welcomed. Computer systems are not unique in suffering from what amounts to nothing more than pure vandalism, but there is a clear need to suppress the ability of individuals to meddle in areas in which they should have no access.
Constant vigilance and a robust security policy are still the best answers. The adoption of robust IT security systems should remain the prime objective for any would-be Internet trader, whether dealing on a business-to-business or a business-to-consumer basis. Unfortunately, except in the new dotcom companies, where a greenfield development opportunity was available, security usually represents a patchwork quilt of technologies.
Although it is a well-known statistic, it's still worth pointing out that as much as 80% of all IT-related crime is committed by company insiders. Therefore, companies should place as much emphasis on internal safeguards as those employed outside the enterprise boundaries.
As part of any risk assessment there is a need to establish a value for each asset, and to use this information when determining the levels of security required. However, it should always be borne in mind that in today's connected world there is an added imperative to examine the potential for large-scale damage caused by what might previously have been regarded as a minor fault.
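The valuation step described above can be sketched as a simple scoring exercise: rate each asset by its value to the business and its exposure, and rank by the product so that security spend goes to the highest-risk assets first. The assets, scales and scores below are entirely illustrative.

```python
# Minimal sketch of asset-based risk ranking. Value and exposure are rated
# 1-10 here; the assets and numbers are invented for illustration only.
assets = {
    "customer database": {"value": 9, "exposure": 8},
    "public web pages":  {"value": 3, "exposure": 9},
    "internal wiki":     {"value": 4, "exposure": 2},
}

def risk_score(asset: dict) -> int:
    # A simple value-times-exposure product; real methodologies weight
    # likelihood, impact and knock-on effects far more carefully.
    return asset["value"] * asset["exposure"]

# Rank assets so the highest-risk ones are addressed first.
ranked = sorted(assets, key=lambda name: risk_score(assets[name]), reverse=True)
print(ranked[0])  # customer database (score 72)
```

Note how the connected-world caveat bites: the public web pages score highly despite their low standalone value, because their exposure multiplies the damage a "minor" fault can cause.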
Considering the options
Constant review is a prerequisite of any security policy; nothing can be cast in concrete. One option worthy of consideration is the use of system management tools to bolster security policies. Products used in this context can address a specific problem area, while tools such as Tivoli, or Computer Associates' Unicenter TNG, are more all-embracing.
It is worth drawing attention to the fact that many security problems are not the result of malicious intent, but of applications continuing to be used outside the context for which they were originally designed. This forces scalability beyond what was intended, causing failures and breakdowns, and as a consequence presenting illegal access opportunities.
Unlike the days of yore, when an application could be released to a known community in a controlled manner, most Web applications are released into the wild, must hit the ground running, and must handle all-comers from day one.