Recently in Security Management Category

They've all lost their mojo

I'm currently recruiting a Security Director to replace me as I move on to pastures new. I must admit to being wholly underwhelmed by many of the CVs that have come my way, and also rather upset by the number of applicants currently out of work. Anyone who thinks information security is a recession-proof career is wrong: around half of the CVs received are from individuals made redundant from their previous jobs.

The other disappointing thing is the number of people I'm seeing who are great at writing policy and delegating jobs to third parties but have lost the hands-on technical skills (if they ever had them). From my perspective, the ability to read and interpret a network scan, review an architecture design or read a log file, identify the important issues (as opposed to the trivial), and describe why the issues are important and the work that needs to be done to fix them is bread and butter stuff. Not only that, but it's the fun part of the job - it's the bit we should all really want to be doing! Writing a policy document is important, but it's hardly something to be proud of being able to do. Bring me candidates who still have some security mojo!

PCI at the House of Representatives

From Computerworld.

At a U.S. House of Representatives hearing yesterday, federal lawmakers and representatives of the retail industry challenged the effectiveness of the PCI rules, which are formally known as the Payment Card Industry Data Security Standard (PCI DSS). They claimed that the standard, which was created by the major credit card companies for use by all organizations that accept credit and debit card transactions, is overly complex and has done little to stop payment card data thefts and fraud.

I disagree that the standard is overly complex - in fact most of it is straightforward, common sense information security. The reason it has proved to be ineffective is because organisations focus on ticking the compliance boxes rather than taking the holistic approach to security that's needed. There's enough ranting on this subject elsewhere - the best being on Anton Chuvakin's blog - and I have little to add.

Ranum's Rant - Risk Takers and Security Disasters

An interesting rant on Information Security from Marcus Ranum online here. I picked up on the following quote:

The security team explained why it was a bad idea; in fact they wrote a brilliantly clear, incisive report that definitively framed the problem. So the executive asked the web design team, who declared it a great idea and "highly do-able" and implemented a prototype. Months later, the "whiners" in the security team were presented with a fait accompli in the form of "we're ready to go live with this, would you like to review the security?"

Sounds familiar. However, the important point is that security (or lack of it) is not, and should not be, the sole deciding factor in determining whether or not something gets done. The point is that the risks are known and reported. Management can then use them as a factor in their decision-making process. If security were the sole deciding factor then the business would have collapsed a long time ago and we'd all still be using typewriters and chinagraphs.

Good leaders take risks. We'd like assurance that they are actually balancing the risks and benefits before making a decision rather than just running on gut instinct but sometimes they will make a wrong decision and security factors might sometimes mean the project fails or suffers an incident. However, without risk takers, there would be no innovation and no business growth.

It can sometimes be infuriating to report on risks and see them being taken anyway, but so long as you have identified what those risks are and reported them appropriately then it's job done.

Security, scale and functionality - Part 3: Functionality

I love system functionality, it's a great thing. It brings a rich and dynamic user experience, or empowerment through seamless processes to get things done. Whether it be business functionality or technical functionality, we now have more system functionality at our fingertips than we've ever had.

In the 1980s BT offered a dial-in bulletin board-type service known as Prestel. It was a simple service based on Viewdata technology with a private electronic mail capability, and was my first experience of being in an online networked environment. Prestel hosted the UK's first home online banking service (Homelink). It was a basic service to say the least. You could view your account balance and pay specific utility bills, but that was about it. At the time it was revolutionary, but today limiting your service to these features would be archaic.

Let's jump forward to today, when Internet banking is the norm and the range of banking functionality is enormous (I can wire transfer money anywhere in the world, from anywhere in the world, without delay or interruption and at minimal cost). It starts to become obvious that retail banks, for a number of reasons, have dismantled their internal processes and controls, brought them out of the back office, and put them right into every living room, coffee shop and street corner (a scale issue).

There's no doubt that there are real rewards in giving you access to all this extra functionality, and I'm a real fan of Internet banking myself. However, the problem comes when, or if, your account becomes compromised. Back in 1984 I really wasn't bothered, because Homelink wouldn't let you do that much, so the damage was limited and the impact minimal. In today's environment all bets are off: your account can easily be plundered or your Visa charged before you know what's hit you.

And therein lies the rub. The more functionality we provide in information systems the more opportunity we create for nefarious or malicious activity to flourish. Add in the scale dimension and the problems start to be compounded even further. When I started this thread I argued that you could only ever have two elements from scale, functionality, and security and I still believe this to be true.

If this is the case how can we approach the security element in order to bring this vital element into the picture? If we can't provide the right amount of security into highly scaled, functionally rich systems then how can the general public trust and embrace them? People will readily adopt new systems when they see a clear reward for changing their behaviour but will not go past the point where the risk outweighs the attached rewards.

My last post in this series will look at what we really mean, and can expect of security in this environment and what we as information security professionals and our Boards of Directors need to clearly understand about these realities.

(Postscript: Thinking about my first adventures with Prestel has brought back loads of memories. For those of us in the UK this cutting-edge service significantly changed the IT security legal landscape. After Prince Philip's Prestel e-mail account was compromised the Computer Misuse Act 1990 was introduced, and I guess things have gone downhill ever since.)

Top 5 information security annoyances - #2

Few of my posts have had so much venom thrown in my direction as this one from last week. One blogger from America has gone so far as to write two very lengthy pieces in response, while the highly respected security guru and fellow blogger David Lacey referred to it as drivel. Another public commenter calls it trite.

I was well aware that my remarks about the usefulness of security awareness programs and risk models in particular would raise some eyebrows. However, I welcome the debate: we shouldn't be shy to challenge the accepted norms because there's plenty of evidence around that they frequently don't work.

Trite or drivel it might be... I actually started off with a list of ten!

Laptop with personal data stolen

Another third party vendor failing to implement decent security around sensitive data.

You've got to check out your vendors! The vendor might be at fault, but it's your data, and your liability.

BBC, BotNets and legal hacking

On Monday I remarked on the BBC Click botnet investigation. I slightly regret my post because, in fact, I think they did a great job in bringing to life the potency of botnets. Legalities aside, let's focus on the fact that it only took 60 PCs to cause a denial of service situation. That's very disturbing and we all need to sit up and consider the consequences of that.

I was chatting with the CISO of an investment bank earlier today. He was wondering whether or not we should have in place a legal framework that would allow "researchers" a better way to test system security without fear of being accused under the Computer Misuse Act. It's dangerous territory, but I take his point. If somebody discovered a gaping hole in my own organisation's network security then I'd be grateful for the information. Many of the third parties I legitimately employ to do discovery work do little more than run Nessus and then post a report, so the hacker's view would be invaluable. But where do you draw the line between "hacking" and "research", and what assurance can be gained from an unsolicited security report?

Top 5 Information Security Annoyances

I'm generally a tolerant and easy-going sort of person. There's a fairly short list of things that get my goat. For instance, our local doctor's surgery has a call queuing system with six different options. However, I know for a fact that there's only one person working in reception, and that regardless of which button you press (press 1 for a long wait, etc.) after about 12 rings you'll get through to her. How pointless is that? Our home phone number is fairly similar to a local pizza take-away place. At least a few times a week I'll take orders. Sometimes, I tell the caller that I've got no pizza left but they are welcome to have some of what I'm having...

The information security industry can be similarly infuriating at times. Here's a short paragraph on each of the five things that wind me up the most:

1. Security awareness programs

A whole cottage industry of consultants and websites has been built up around the perceived need to educate company employees about information security. It's all a waste of time and money. Certain individuals will point to a reduction in the number of lost laptops as a measure of success, or an increase in the number of people who can correctly click "a) All policies are on the Intranet" in a multiple-choice questionnaire. The fact is that security awareness programs are received within the organisation with about as much enthusiasm as a plate of sick. The key to good information security is strong governance, good communication and well managed, decent processes. Security awareness programs sap energy and resources, and have little positive effect. Drop them.

2. Compliance = security

Nothing reduces the security status of a business faster than a blind determination to achieve compliance with a policy or a new regulation. PCI is the obvious one, with whole armies of "Qualified Security Assessors" driving their company Mondeos up and down the motorways of England en route to the next company that's fallen into the trap of believing that so long as they can tick all the right boxes they are protected as if covered by some magic force-field. Having the certificate that says "compliant" tells you nothing more than that on the day the assessor came, you had the right combination of smoke and mirrors in place to pass the test. Information security is not a compliance project. It's an ongoing program without an end-date.

3. Risk modelling

Many "experts" preach the importance of working through risk models. It's a load of tosh. No matter which way you try to do it, you'll always come out with the answer you first thought of. You might as well use a crystal ball and read tarot cards. Nobody needs to work through a complex risk model to understand that if a retail website suffers a denial of service it'll have some financial consequences, or that if the internet connection is lost there won't be any access to online services. I've got better, more constructive and practical ways to spend my day than conspiring over risk models. Much more relevant is threat modelling: understand your systems and know the business so that you can make relevant risk-based decisions.

4. Where are all the analysts?

Here are two examples of what I mean. i) A vendor does a "penetration test" of a web site (I use the term loosely because these days most pen tests seem to involve little more than pressing a button labelled "scan now") and sends you a report highlighting several "High Risk" issues. On closer inspection you discover that most of them are either false positives, or irrelevant. ii) A network scan report is given to a newly CISSP-qualified security analyst and he's asked to review it as part of a job interview. He spots the obvious highlighted security holes but doesn't question why a web server has non-standard ports open. Are we becoming too reliant on auto-scan reports? Security analysts need to be inquisitive, well practised in basic technical skills, able to spot anomalies, and not afraid to question things that don't look right. The scan results never tell the full story!
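The non-standard ports question is the sort of check an analyst can automate as a first pass. The sketch below compares a scan result against an expected baseline per host role; the roles, port lists and scan data here are illustrative assumptions, not output from any real scanner.

```python
# Sketch: flag open ports on a scanned host that fall outside an
# expected baseline for that host's role. All data here is invented
# for illustration.

EXPECTED_PORTS = {
    "web": {80, 443},
    "mail": {25, 587, 993},
}

def unexpected_ports(role, open_ports):
    """Return open ports not in the baseline for this host role."""
    baseline = EXPECTED_PORTS.get(role, set())
    return sorted(set(open_ports) - baseline)

# Example scan result for a web server: 80 and 443 are expected,
# but 8443 and 3306 should prompt the analyst to ask questions.
scan = {"role": "web", "open_ports": [80, 443, 8443, 3306]}
print(unexpected_ports(scan["role"], scan["open_ports"]))  # -> [3306, 8443]
```

A script like this only raises the question; it still takes an inquisitive analyst to work out whether an open MySQL port on a web server is a misconfiguration, a false positive, or a compromise.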

5. It's not my fault

The first nine years of my working life were spent in the Royal Air Force, where I enlisted as an Assistant Air Traffic Controller. I learnt a few useful things out of that: how to make a good cup of tea, how to write backwards on a plastic board with a chinagraph, and how to operate a mu-meter being three that will stick in my mind. Most important of all, though, was the lesson that excuses don't count. Only results. There's an evolving culture of making excuses for poor results. Lack of time, lack of resource, lack of training, lack of ability to think of a decent answer. All it shows is a lack of imagination, planning and initiative on the part of the person making the excuse. If you get something wrong it's your fault. Admit it and learn from it.

Council staff face Facebook ban


Hampshire County Council is threatening to block staff from using the social networking site Facebook.

Bosses said they noticed an increase in use and during monitoring 46 employees were found to have regularly spent more than an hour on the site each day.


I find stories like this infuriating. If you don't want employees to use social networking sites then block access to them. If staff are spending "too long" then in my opinion that says more about the quality of line management and supervision than it does about the individuals using the service.

Susie Squire, from the Taxpayers' Alliance, said: "Ultimately these people work for taxpayers in Hampshire.

"There is no way they should be spending this time when they are supposed to be doing their jobs on a social networking site."

Get a life Susie. What's the difference between somebody spending a bit of time on Facebook, making a personal call, gossiping with the folks on the second floor for half an hour, or going outside every 30 minutes for a cigarette break? Employers need to grow up and stop trying to control how their staff interact with their friends.

Cllr Thornber added: "We have seen an increasing trend of the use of social networks.

"We are monitoring it carefully and if we feel there is any abuse we will block the use of that individual and they could be disciplined.

I wonder what criteria will be used to determine whether the system has been abused. The usage stats can be misleading because people will browse to the site, then leave it running in the background on their computers. Reports on usage will show hours of use and hundreds of page views when in fact there might have been very little user interaction.

The main stat I look at these days is quantity of outbound data. That one is far more interesting, from a security perspective, than the length of time somebody is logged into the service for.  
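As a sketch of what looking at outbound data might mean in practice, the snippet below totals bytes sent per user from simplified proxy log records. The record fields and figures are assumed for illustration; real proxy log schemas vary.

```python
# Sketch: total outbound bytes per user from simplified proxy log
# records. The field names and numbers are invented examples, not a
# real proxy log format.
from collections import defaultdict

def outbound_totals(records):
    """Sum bytes sent per user; unusually large totals merit a closer look."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec["user"]] += rec["bytes_out"]
    return dict(totals)

logs = [
    {"user": "alice", "bytes_out": 1_200},
    {"user": "bob", "bytes_out": 850_000},
    {"user": "alice", "bytes_out": 3_400},
]
print(outbound_totals(logs))  # -> {'alice': 4600, 'bob': 850000}
```

A long Facebook session that mostly sits idle generates little outbound data; a user pushing hundreds of megabytes out of the network is a different matter entirely.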


Google Docs accidentally shared

From SC Magazine

Users of the Google Docs application have had their information inadvertently shared.


A flaw has been identified in the system, which meant that some documents were marked down as collaborative items that allowed third parties who are also signed up to the system to access and amend them.

I've been dealing with requests within the business from people wanting to use Google Docs. Personally, I think it's secure enough for general day-to-day work. I wrote up and issued a quick bullet point list of sensible security guidelines relating to using the service, and we have a few people now using it to their benefit.

I'm much more concerned about the number of file shares within the corporate network that get set up without thought given to their contents or to whom access should be authorised. I find those all the time.