Bank security chief explains how to avoid internal threats

Interview


Ron Condon
John Meakin, group head of information security, Standard Chartered Bank
What are your main areas of interest at the moment in security?
In the malware space it has been pretty boring. During the last two years there have been no radically new external threats. What we perceived two years ago has continued - the interest of organised crime in blending together a number of potential exploits, mainly code exploits, to compromise the security of corporates and banks. Their targets are transactional systems, and they are also targeting proprietary information.

Where are the new threats?
Most of our time now is spent on securing the organisation against accidents and mistakes by users. We are a large organisation and are spread geographically. We also have more fluid boundaries, with many different agents, suppliers and vendors having access to our systems. So we have to protect against the inadvertent leakage of information. Information is flowing freely through this organisational miasma with none of the old boundaries applying any more, so it spends more time outside the traditional fortress of the application, either physically or on the network. It is no accident that vendors see data leakage prevention as a major opportunity - and it explains why the likes of Vontu, PortAuthority and Tablus were snapped up by larger companies. But none of us has cured the problem yet; it is still a voyage of exploration.

How are you tackling the problem?

At SCB, we have spent much of 2006/2007 finding out where the information goes in the organisation. Then during the second half of 2007 we made investments in some external vendors' products. We'll spend the next two years putting together a multi-part strategy trying to address this problem.

The tools start off by allowing us to discover sensitive information at the point of use across the organisation. Then we layer on top of that an evolution of the same tools that allows the user to decide how information should be handled - for instance, should it be encrypted when sent from A to B? Then the third phase will be in 2009/2010, when hopefully the vendors will have provided us with integration between these sorts of tools and a credible business-wide (including the blurry boundaries) implementation of digital rights management. Whether it is the Microsoft version or someone else's is still to be decided.

We are currently at the second stage of the DLP agenda.

It's not enough to have data encrypted on a laptop or on a USB stick. The information can be exposed in a file share somewhere embedded in your network. So you have to be prepared to recognise the sensitive parts of your information wherever they are, and apply cryptographic protection there. But it's not enough just to encrypt laptops, and not enough to say you've bought Vontu, Tablus, PortAuthority or Verdasys. It's all of those things linked together in a coherent way, with the users playing a vital role. The role of the user is critical.
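The discovery step Meakin describes - finding sensitive information wherever it sits, including file shares - can be sketched in miniature. This is purely illustrative, not SCB's actual tooling: real DLP discovery products use exact data matching, document fingerprints and classifiers, whereas this assumes sensitive content can be flagged by simple regular expressions.

```python
import re
from pathlib import Path

# Illustrative patterns only - real DLP discovery uses far richer
# fingerprinting than regexes (exact data matching, document hashes, ML).
SENSITIVE_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scan_share(root: str):
    """Walk a file share and report files containing sensitive-looking text."""
    findings = []
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings
```

The point of the sketch is the shape of the problem: discovery has to sweep every reachable store, not just managed endpoints, which is why the laptop-encryption-only approach falls short.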

The DLP vendors have had a hard time because they have tried to solve a problem for people like myself who haven't properly characterised the problem yet. So we saw Verdasys, which started off looking at the workstation, while Vontu started at the other end, looking at the network, and they have moved towards each other. They didn't know what they were doing - but to be fair, nobody on the buying side knew what was wanted either.

For instance, one large bank I know spent a lot of money with a DLP vendor and they seem to have only turned on the product in monitoring mode. They are gathering mountains of logs and sending them to the central team, who are trying to decide what should or should not have happened. I don't think that's a very fruitful exercise.

Does DLP imply a long data classification programme?
Data classification is a great idea and part of 'Security 101', but no-one actually does it. It's too boring and too complicated. That's why the end-user is so important. He doesn't want to classify everything in advance; in fact, the user doesn't want to classify at all. But if you put a piece of data in front of him, he can make a snap, gut-feel judgment on whether it is sensitive or not.

If you have a tool that spots data in a big pot called 'sensitive', it can flash up a question to the user at the point when he is about to use that data, and ask: "OK, is this data sensitive, and do you want to do X with it - send it via email, or write it to a USB stick, or print it?" [SCB uses Workshare Protect for this.]

To enable the tool to recognise the point when it should pop up with the message on the screen, you have to put in place some fairly broad classification in advance - but these are broad categories; you are not trying to be specific and tie it down to the last dollar's worth of value in that information. You are dealing with it on a very broad front. And believe me, if that gets any closer to a practical implementation of classification, or value-driven protection of information at the point of use, it will be 100% better than what we have today.

I understand risk management is a subject close to your heart. How do you do it at Standard Chartered?
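The point-of-use prompt described above can be sketched as a small decision function. This is a hypothetical mock of the pattern, not how Workshare Protect actually works: the broad categories and the callback interface are both invented for illustration.

```python
# Illustrative sketch of a point-of-use DLP prompt - not Workshare Protect's
# actual behaviour. Broad categories are set in advance; the user makes the
# final sensitivity call at the moment of use.
BROAD_CATEGORIES = {"customer_records", "deal_documents"}  # hypothetical

def check_action(document_category: str, action: str, ask_user) -> bool:
    """Return True if the action may proceed.

    ask_user is a callback (question -> bool) so a real UI, or a test,
    can supply the user's answer at the point of use.
    """
    if document_category not in BROAD_CATEGORIES:
        return True  # not in the broad 'sensitive' pot, so no prompt
    # The document is broadly flagged: ask the user at the moment of use.
    return ask_user(
        f"This data looks sensitive. Do you really want to {action} it?"
    )
```

For example, `check_action("customer_records", "write to USB", lambda q: False)` blocks the action because the user declined at the prompt, while unflagged documents pass through silently - which is exactly the division of labour between broad advance classification and the user's snap judgment.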
We have developed our own risk management methodology. We couldn't find anything outside that struck the tricky balance we wanted: not too complicated and too specific, like CRAMM, but something we could implement in an automated system and use to answer specific policy questions. We are now in the third generation of it, and use it to drive everything we do. The automated system drives all the decisions about what security to build into new or changed applications. It also provides an authoritative record of decisions about business scenarios, and it underpins everything else we do. For example, we could have taken the approach of encrypting everything - and some organisations do that - but it is hellishly expensive and sometimes gets in the way of doing good business. I'd rather have a risk-driven approach to solving the information leakage problem.

How are you handling remote and guest workers?

We used to take the view that we could impose a standard configuration that was easy to protect. But we have now come to a stage where the flexing of our business requires us to move away from a model where all our business was done on our own PCs and servers, with our own pre-defined, pre-installed standard build.

Home working and remote working have come of age, and for good business reasons. At SCB we are growing in markets such as India where it is hard to get fully-specified office space. So we use people working off our premises. If we put our own IT in there, it is more expensive per desk.

So you start to ask: why do we have to specify everything in advance? Why not change to an approach where you monitor for bad things and stop them if they start to happen? You then don't have to spend money upfront, and you can give people a more flexible computing environment - for instance, allow them to use their own home PC to carry out some of the bank's business. You can't do that and ignore the security implications, but with a monitoring approach you can achieve the same end results in security terms.

You can use NAC, which is a form of monitoring, scanning the PC at the point of handshaking, although NAC is still high-cost. But business is more fluid and flexible, so we need a model that is less about prevention and more about detection and reaction.

We are looking at three or four models that we could use, and we might end up implementing two or three of them. One model of particular interest uses virtualisation to limit the type of remote session that occurs when any PC connects to our banking systems. [SCB is using Neocleus for this.]

The fact that we have gone from the discovery of sensitive information through to information rights management is very Jericho-like - we have divorced security from the computers and networks that support the interaction our users have with the information.

How are you tackling identity and access management?

I don't spend my time trying to find a single method for proving identity, but access control is the sine qua non of system security. You can't be secure without doing this properly.

The problem is it is so complicated because of its scale. You have a problem that is as big as the number of staff accessing your systems multiplied by the number of applications they each access. So you can be talking about millions of access components.
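The multiplication Meakin describes makes the scale concrete. The figures below are purely hypothetical - the interview gives no actual staff or application counts - but they show how quickly the product reaches millions.

```python
# Hypothetical figures - the interview gives no actual numbers for SCB.
staff = 75_000            # employees plus non-employed workers
apps_per_person = 40      # average applications each person can access

# Each (person, application) pair is an access component to provision,
# review and eventually revoke.
access_components = staff * apps_per_person
print(f"{access_components:,} access components to manage")  # 3,000,000
```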

Most of the identity management tools the vendors have given us over the last nine years are very good parts of the solution, but none of them is a solution to everything. If Oracle with its identity management suite had solved all the problems, I'd be throwing money at them.

On the other hand, more progress has been made in the last five years in getting the basics of access control right than in the previous 10 to 15 years. Part of that is down to Active Directory, which demonstrates Microsoft's power to do good in the security space, and which is one of the enabling technologies allowing us to solve this very big problem.

By the way, it has been an interesting experience dragging HR away from thinking they only manage employees of the bank. We now have to include what we call 'non-employed workers' - contractors, agents and so on. They all have access to our business, so we had to persuade HR that they should be in the HR database. We had all sorts of smoke thrown up at first - what about the legal aspects of putting them on the HR database, for example? - but we got past those objections. One of the things we have had to do is build the relationship with HR and overcome those traditional mindsets about who works for you.

I believe we are close to solving the IAM problem.

Are the vendors giving you what you need? Are there any you admire?

Microsoft can be really good for security. Of all the IT vendors, they have done the most good for security in the last five years, even though they had a lot of sins to atone for.

But they are not the most innovative. Smaller companies tend to be more innovative. For instance, we are working with an Israeli company called Neocleus, which uses virtualisation to secure the endpoint.

We are also working with Worklight which provides a secure way of using Web 2.0 technologies. Using Facebook in your organisation without losing control of sensitive data - that's a good problem for a security guy to try and solve.

And for our information protection, we have gone for a company that is not one of the major vendors. We are using Workshare, which has been selling document management systems to law firms. It helps us do the information discovery and handles the interaction with the user in a particular way that we like.

One point I would make is that, because the total spend on security is still relatively small, small companies tend to get bought out quite early on. The market doesn't have the scale to allow small companies to grow before they are acquired. And once they are bought, innovation dips and sometimes good products die in the process, which doesn't serve people like me well.

