Making your security fit

There is no doubt that network security keeps IT directors awake at night. And it doesn't look like restful slumber is getting any closer. When the British Computer Society surveyed IT directors in May, it found security was the main concern for 61% of respondents.

In October, the Association of Technology Staffing Companies reported that security consultants' salaries were up by 21% from £37,000 to £45,000 year-on-year, although IT salaries in general had risen by a mere 3% since April.

Every enterprise needs security know-how, so there is no avoiding having to spend part of your budget on products and services. But thinking through the key issues first will help you get better value for money.

First of all, be clear about your aim, which in this case is to secure your network so intruders cannot get in. Testing is essential. "You have to beat your systems with a stick," said Jay Heiser, research vice-president at analyst firm Gartner. "And that stick can be wielded by a person, or a machine can do it."

Automated and manual methods of network testing both have pros and cons, and you need to consider which mixture of the two is right for your business. A good starting point is to carry out a systems audit to determine what is important to your organisation so that security resources can be targeted correctly.

Andrew Kellett, senior research analyst at Butler Group, said, "It is important to do the up-front work. You have to decide what is important to the organisation, know what you need to do to be protected and the levels of protection you want to apply. You have finite resources and monitoring everything will see you drown under stacks of reports. Do not try to do everything or you will end up doing nothing."

It is also important to consider the taxonomy of testing, because not all testing is done for the same reasons or in the same way. For example, when new applications are added to corporate systems, they need to be tested to prove they can withstand intrusion. This is a prime time to use a specialised penetration testing service. On the other hand, large corporate systems need constant testing for threats from outside and for malicious code working internally, so in-situ tools running to schedules are also needed.

Phil Cracknell, chief technology officer at IT security supplier Netsurity, said, "I like to think of defence in three layers: preventive, detective and monitoring. Most products or services fall into these areas. Penetration tests are very much an after-installation test routine to see if there are holes or vulnerabilities. Then the process can be automated and conducted on a regular basis, reporting only exceptions."

In other words, there is a continuum from automated to manual approaches that encompasses several products and services. There are those that are installed and work constantly or on regular schedules, such as intrusion detection system (IDS) sensors, virus scanning software and firewalls. Then there are scanning tools which, once set in motion by testing teams, work their way through the network looking for known weaknesses. Finally, there is manual penetration testing, where testers use the methods of hackers to find and fix holes before real attackers do.

Automated forms of testing such as in-situ devices and software, or on-demand scanning tools, have advantages that manual methods lack. A key advantage for automated methods is the ability to deal with the sheer scale of large systems, said Heiser.

"It is the only way to ensure something the size of a corporate body has a well-understood security posture," he said. But there are limitations. "An automated process will tell you if an attacker can get in to your web servers. It will not tell you how far in to your systems they can get. You need a manual process to do that."

In-situ products that operate constantly or to a schedule include virus scanners, IDS and intrusion prevention systems (IPS). IDS and virus scanners are similar in that both try to identify a malicious presence on corporate systems by reference to a regularly updated database of virus signatures or patterns of behaviour known to be used in attacks.

These patterns include behaviours such as repeated attempts from a suspicious source to gain access via a particular port, or attempts to export an e-mail address book. IDS tend to be outward-facing, but they are increasingly also being used to counter internal security threats in large corporations with thousands of employees.
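To make the idea concrete, here is a minimal sketch, in Python, of the kind of pattern matching such a system performs. The event format and alert threshold are illustrative assumptions, not the behaviour of any particular product.

    # Minimal sketch of IDS-style pattern matching: flag sources that
    # repeatedly probe the same port. Threshold and event format are
    # illustrative assumptions.
    from collections import Counter

    SUSPICION_THRESHOLD = 20  # repeated attempts before an alert is raised

    def scan_events(events):
        """Return (source_ip, dest_port) pairs that trip the threshold.

        `events` is an iterable of (source_ip, dest_port) tuples, as
        might be parsed from a firewall or packet-capture log.
        """
        attempts = Counter()
        alerts = []
        for source_ip, dest_port in events:
            attempts[(source_ip, dest_port)] += 1
            if attempts[(source_ip, dest_port)] == SUSPICION_THRESHOLD:
                alerts.append((source_ip, dest_port))
        return alerts

    # A source hammering port 22 trips the rule; casual traffic does not.
    sample = [("203.0.113.9", 22)] * 25 + [("198.51.100.4", 80)] * 3
    print(scan_events(sample))  # [('203.0.113.9', 22)]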

Of course, both virus scanners and IDS depend on "knowing" what to look for, so in the first few hours or days after a virus is released or a software vulnerability is discovered, they can let an intruder pass unnoticed.

An IPS is essentially a detection system with the ability to block anything found behaving in ways it considers suspicious. This means legitimate traffic can sometimes be deemed malicious and the user logged off - a feature that has given rise to a divergence of opinion, said Cracknell.

"If an IPS creates a false positive it will result in a real user being denied access to something and this soon becomes unacceptable. I think this is the reason why IDS and IPS are such contentious topics with network managers - they either love it or hate it," he said.

Virus scanning, intrusion detection and prevention systems will sit on the network working to a schedule, but there are also automated script tools you can run to check networks for known vulnerabilities. Tools such as NMap, Nessus and ISS Scanner can identify targets, scan for weaknesses according to known vulnerabilities and test which of them could be exploited.
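At their simplest, such tools begin by probing a host to see which TCP ports accept connections. The Python sketch below shows only that first principle - it is no substitute for the tools named above, and should only ever be pointed at hosts you are authorised to test.

    # Minimal sketch of the first step a network scanner automates:
    # finding which TCP ports on a host accept connections. Real tools
    # go much further (service detection, vulnerability checks).
    import socket

    def open_ports(host, ports, timeout=0.5):
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means connected
                    found.append(port)
        return found

    # Probe a handful of common ports on the local machine.
    print(open_ports("127.0.0.1", [22, 80, 443, 8080]))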

There is little doubt that automated methods are the best way to test for known vulnerabilities. When it comes to finding unknown vulnerabilities and judging how serious they are, however, manual testing comes into its own. It has its own deficiencies to be aware of - and the biggest of these is the human factor.

"There is huge debate about penetration testing," said Heiser. "It can provide useful results, but it is not proved that it is the optimum method. The argument goes that if you hire consultants to find things they will be more likely to find something to show that you have spent your money well. But it very much depends on the skills of the tester, and if a determined person wants to get into your system it is always possible to find a way in."

Where automated detection systems will go on scouring your network for as long as you want them to, humans lack that stamina. Richard Brain, technical director at automated penetration testing firm Procheckup, warned of the boredom factor in so-called "tiger teams". "Humans can get bored or are just not as rigorous as automated methods. If you find a few vulnerabilities there is the chance you will stop at that," he said.

It is clear then that manual and automated forms of testing have their strengths. The question is, how do you combine and use them to best effect?

Cracknell recommended a defence-in-depth approach. "Our industry is littered with a 'belt and braces' mentality, and it is correct. Nothing is completely accurate, and so as many viewpoints as you can provide will help build the bigger picture. Use a tool yourself, have a regular scheduled scan of your network, and every so often bring in a team to attack the job from the inside and the outside," he said.

Remote penetration testing can raise issues that internal in-house tests using a testing tool can miss, and vice versa, said Cracknell. He warned users who complement a testing service with a testing product to make sure the right things are being compared. "Testing systems from two 'sides' will inevitably show two different machines, the public-facing one and the internal one," he said.

There are further dilemmas to consider when entrusting testing to external teams. Should they work on the basis of zero-knowledge or full disclosure? In other words, do you give the plans and diagrams of your networks to the testers, or see what they can discover for themselves? Withholding this information gives a more accurate picture of the situation facing an external attacker. On the other hand, you are paying for their services, and knowledge of the network topology can point them towards possible weak points and save many hours of work.

Then there is the issue of how far your own people should be involved in the testing. Cracknell thinks it is good to keep a distance between the two. "Who does the testing? Policy should define that it is not the person who built or maintains the systems, as they are invariably too close to the issues to see them. Sometimes the testing can measure your organisation's response to an attack, and if you choose not to tell your operations team when testing will take place you get two major benefits: they do not harden the defences in advance, which is a natural response if they built it, and you get to see how they react to the incoming attacks."

At the end of the day there are problems with all testing methods. In-situ devices and software are only as good as their last update of virus signatures or attack strings. Testing teams using automated network scanners often produce false positives, reporting vulnerabilities that are not actually present. Fully manual testing is only as good as the person conducting the tests, so consistency and accuracy are difficult to achieve.

Which combination of automated and manual methods you opt for will depend on how critical your systems are and the time and budget you have to spend. But whatever you choose, remember that the weaknesses of each can be minimised by selecting the right mix.



This was first published in November 2005
