Did I miss anything?

Don't be alarmed, I am still alive! I'm back after an unofficial hiatus. The past six months have been particularly busy, but that's always the risk of writing a blog when you have a full-time job, a life and a family. So, I'm back and better than ever - well, at least as good as ever...

So what's been going on:

The Obama administration identifies cyber security as a national security priority for the US, whereupon two key personnel - Melissa Hathaway, the top White House aide for cybersecurity, and Mischel Kwon, head of the US Department of Homeland Security's Computer Emergency Readiness Team - promptly resign;

Hot on the heels of the US efforts, the UK government decides it needs its own central cyber security agency, which will be staffed by "slightly naughty boys";

The fate of Gary McKinnon is finally decided and he's extradited to the US. Who knows what he'll really face;

The Russia-Georgia conflict is blamed for Twitter and Facebook outages that were apparently launched by Russian crime gangs. The distinction between traditionally pigeon-holed threats has finally broken down.

So, anything else important happen while I've been away?

They've all lost their mojo

I'm currently recruiting a Security Director to replace me as I move on to pastures new. I must admit to being wholly underwhelmed by many of the CVs that have come my way, and also rather upset by the number of applicants currently out of work. Anyone who thinks information security is a recession-proof career is wrong: around half of the CVs received are from individuals made redundant from their previous jobs.

The other disappointing thing is the number of people I'm seeing who are great at writing policy and delegating jobs to third parties but have lost the hands-on technical skills (if they ever had them). From my perspective, the ability to read and interpret a network scan, review an architecture design or read a log file, identify the important issues (as opposed to the trivial), and describe why the issues are important and the work that needs to be done to fix them is bread and butter stuff. Not only that, but it's the fun part of the job - it's the bit we should all really want to be doing! Writing a policy document is important, but it's hardly something to be proud of being able to do. Bring me candidates who still have some security mojo!

PCI at the House of Representatives

From Computerworld.

At a U.S. House of Representatives hearing yesterday, federal lawmakers and representatives of the retail industry challenged the effectiveness of the PCI rules, which are formally known as the Payment Card Industry Data Security Standard (PCI DSS). They claimed that the standard, which was created by the major credit card companies for use by all organizations that accept credit and debit card transactions, is overly complex and has done little to stop payment card data thefts and fraud.

I disagree that the standard is overly complex - in fact most of it is straightforward, common sense information security. The reason it has proved to be ineffective is because organisations focus on ticking the compliance boxes rather than taking the holistic approach to security that's needed. There's enough ranting on this subject elsewhere - the best being on Anton Chuvakin's blog - and I have little to add.

Ranum's Rant - Risk Takers and Security Disasters

An interesting rant on Information Security from Marcus Ranum online here. I picked up on the following quote:

The security team explained why it was a bad idea; in fact they wrote a brilliantly clear, incisive report that definitively framed the problem. So the executive asked the web design team, who declared it a great idea and "highly do-able" and implemented a prototype. Months later, the "whiners" in the security team were presented with a fait accompli in the form of "we're ready to go live with this, would you like to review the security?"

Sounds familiar. However, the important point is that security (or lack of it) is not, and should not be, the sole deciding factor in determining whether or not something gets done. The point is that the risks are known and reported. Management can then use that information as a factor in their decision-making process. If security were the sole deciding factor then the business would have collapsed a long time ago and we'd all still be using typewriters and chinagraphs.

Good leaders take risks. We'd like assurance that they are actually balancing the risks and benefits before making a decision rather than just running on gut instinct but sometimes they will make a wrong decision and security factors might sometimes mean the project fails or suffers an incident. However, without risk takers, there would be no innovation and no business growth.

It can sometimes be infuriating to report on risks and see them being taken anyway, but so long as you have identified what those risks are and reported them appropriately then it's job done.


Loads of coverage of the GhostNet story at the weekend. The FT, NY Times, Sydney Morning Herald and BBC all highlight the Munk Centre for International Studies report on the cyber 'spying' network which has compromised government computer networks all around the world.

For those in the information security community it should come as no surprise that there are serious and organised individuals and groups using coordinated computer resources to deliberately and maliciously infiltrate attractive target networks. E-mail-based threats are not new and have been the modus operandi for a whole bunch of people for at least the last five years or so. Back in 2005, Israel's hi-tech business sector was stunned by a major computer espionage scandal involving targeted trojan e-mail attacks. The anatomy of attacks has changed; accept it and let's move on.

Munk's report heavily hints at Chinese state sponsorship, but there's no conclusive evidence at all; a causal relationship is drawn between the physical location of the command and control infrastructure and the perpetrators of the activity. In this case Chinese computers are implicated, but that doesn't mean that China itself is the sponsor of GhostNet.

Heaven only knows how many unprotected, unpatched, poorly configured and poorly managed computer networks running unlicensed operating systems there are in greater China. It's an easy and rich playground for international organised e-crime, taking advantage of inadequately protected computers to create multiple platforms for their attacks. Shooting fish in a barrel comes to mind.

This is a fast-moving and highly dynamic field, and pinning the blame on a nation is, IMHO, too simplistic and naive. We're unlikely ever to know who the real source of this activity is, so let's accept that and get on with more valuable ways of using our time and attention. Instead let's focus our energy on raising standards of computing through education and awareness about the dangers everyone faces from vulnerable, poorly protected or poorly managed computer networks. We're all in this together!

Far from smart phones


Does anyone know of a smart phone or mobile device that enforces account and privilege separation?

It's been long-held good practice to run user accounts with the least level of system privilege and only use admin accounts when you absolutely have to. The obvious danger is that if you're always operating with elevated admin rights and your device is compromised, then the attacker runs with your admin rights too. This is far from a perfect situation and can easily lead to security meltdown.

All the popular mobile devices and smart phones I'm aware of operate with full admin rights all the time, which seems like security madness to me. Code signing of downloaded apps will help to establish some level of trust in the source of content, but all bets are off with content-based attacks arriving via e-mail.
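The separation I'm after is easy to model. Here's a toy sketch (a hypothetical API of my own invention, not any real phone OS) of keeping everyday actions apart from administrative ones, so that a compromised everyday session gains very little:

```python
class Session:
    """Toy model of account/privilege separation on a device.

    Everyday actions work in any session; administrative actions
    require an explicitly elevated session. (Hypothetical API -
    no real mobile OS is being described here.)
    """

    def __init__(self, user, admin=False):
        self.user = user
        self.admin = admin

    def read_mail(self):
        # Safe, unprivileged action: always allowed.
        return f"{self.user}: inbox opened"

    def install_system_app(self, name):
        # Privileged action: refuse unless explicitly elevated.
        if not self.admin:
            raise PermissionError("elevation required to install system apps")
        return f"{self.user}: installed {name}"


everyday = Session("alice")             # normal use: no admin rights
print(everyday.read_mail())             # works

try:
    everyday.install_system_app("vpn")
except PermissionError as exc:
    print("blocked:", exc)              # a compromise of this session gains little

elevated = Session("alice", admin=True)  # used only when absolutely necessary
print(elevated.install_system_app("vpn"))
```

That's all least privilege really asks for: the attacker who hijacks the everyday session hits the `PermissionError`, not the system settings.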

Security, scale and functionality - Part 3: Functionality

I love system functionality - it's a great thing. It brings a rich and dynamic user experience, or empowerment through seamless processes to get things done. Whether it be business functionality or technical functionality, we now have more system functionality at our fingertips than we've ever had.

In the 1980s BT offered a dial-in bulletin-board-type service known as Prestel. It was a simple service based on Viewdata technology with a private electronic mail capability, and it was my first experience of being in an online networked environment. Prestel hosted the UK's first home online banking service (Homelink). It was a basic service to say the least: you could view your account balance and pay specific utility bills, but that was about it. At the time it was revolutionary, but today limiting your service to these features would be archaic.

Let's jump forward to today, when Internet banking is the norm and the range of banking functionality is enormous (I can wire money anywhere in the world, from anywhere in the world, without delay or interruption and at minimal cost). It starts to become obvious that retail banks, for a number of reasons, have dismantled their internal processes and controls, brought them out of the back office and put them right into every living room, coffee shop and street corner (a scale issue).

There's no doubt that there are real rewards in all this extra functionality, and I'm a real fan of Internet banking myself. However, the problem comes when, or if, your account is compromised. Back in 1984 I really wasn't bothered, as Homelink wouldn't let you do that much, so the damage was limited and the impact minimal. In today's environment all bets are off, and your account can be plundered or your Visa charged before you know what's hit you.

And therein lies the rub. The more functionality we provide in information systems the more opportunity we create for nefarious or malicious activity to flourish. Add in the scale dimension and the problems start to be compounded even further. When I started this thread I argued that you could only ever have two elements from scale, functionality, and security and I still believe this to be true.

If this is the case how can we approach the security element in order to bring this vital element into the picture? If we can't provide the right amount of security into highly scaled, functionally rich systems then how can the general public trust and embrace them? People will readily adopt new systems when they see a clear reward for changing their behaviour but will not go past the point where the risk outweighs the attached rewards.

My last post in this series will look at what we really mean, and can expect of security in this environment and what we as information security professionals and our Boards of Directors need to clearly understand about these realities.

(Postscript: Thinking about my first adventures with Prestel has brought back loads of memories. For those of us in the UK this cutting-edge service significantly changed the IT security legal landscape. After Prince Philip's Prestel e-mail account was compromised the Computer Misuse Act 1990 was introduced, and I guess things have gone downhill ever since ;-))

Top 5 information security annoyances - #2

Few of my posts have had as much venom thrown in my direction as this one from last week. One blogger from America has gone so far as to write two very lengthy pieces in response, while the highly respected security guru and fellow blogger David Lacey referred to it as drivel. Another public commenter calls it trite.

I was well aware that my remarks about the usefulness of security awareness programs and risk models in particular would raise some eyebrows. However, I welcome the debate: we shouldn't be shy to challenge the accepted norms because there's plenty of evidence around that they frequently don't work.

Trite or drivel it might be... I actually started off with a list of ten!

Laptop with personal data stolen

Another third party vendor failing to implement decent security around sensitive data.


You've got to check out your vendors! The vendor might be at fault, but it's your data, and your liability.

BBC, BotNets and legal hacking

On Monday I remarked on the BBC Click botnet investigation. I slightly regret my post because, in fact, I think they did a great job in bringing to life the potency of botnets. Legalities aside, let's focus on the fact that it only took 60 PCs to cause a denial of service situation. That's very disturbing and we all need to sit up and consider the consequences of that.

I was chatting with the CISO of an investment bank earlier today. He was wondering whether or not we should have in place a legal framework that would allow "researchers" a better way to test system security without fear of being accused under the Computer Misuse Act. It's dangerous territory but I take his point. If somebody discovered a gaping hole in my own organisation's network security then I'd be grateful for the information. Many of the third parties I legitimately employ to do discovery work do little more than run Nessus and then post a report, so the hacker's view would be invaluable. But where do you draw the line between "hacking" and "research", and what assurance can be gained from an unsolicited security report?

Top 5 Information Security Annoyances

I'm generally a tolerant and easy-going sort of person. There's a fairly short list of things that get my goat. For instance, our local doctor's surgery has a call-queuing system with six different options. However, I know for a fact that there's only one person working in reception and that, regardless of which button you press (press 1 for a long wait, etc.), after about 12 rings you'll get through to her. How pointless is that? Our home phone number is fairly similar to that of a local pizza take-away. At least a few times a week I'll take orders. Sometimes I tell the caller that I've got no pizza left but they're welcome to some of what I'm having...

The information security industry can be similarly infuriating at times. Here's a short paragraph on each of the five things that wind me up the most:

1. Security awareness programs

A whole cottage industry of consultants and websites has been built up around the perceived need to educate company employees about information security. It's all a waste of time and money. Certain individuals will point to a reduction in the number of lost laptops as a measure of success, or an increase in the number of people who can correctly click "a) All policies are on the Intranet" in a multiple-choice questionnaire. The fact is that security awareness programs are received within the organisation with about as much enthusiasm as a plate of sick. The key to good information security is strong governance, good communication and well-managed, decent processes. Security awareness programs sap energy and resources, and have little positive effect. Drop them.

2. Compliance = security

Nothing reduces the security status of a business faster than a blind determination to achieve compliance with a policy or a new regulation. PCI is the obvious one, with whole armies of "Qualified Security Assessors" driving their company Mondeos up and down the motorways of England en route to the next company that's fallen into the trap of believing that so long as they can tick all the right boxes they are protected as if covered by some magic force-field. Having the certificate that says "compliance" tells you nothing more than that, on the day the assessor came, you had the right combination of smoke and mirrors in place to pass the test. Information security is not a compliance project. It's an ongoing program without an end-date.

3. Risk modelling

Many "experts" preach the importance of working through risk models. It's a load of tosh. No matter which way you try to do it, you'll always come out with the answer you first thought of. You might as well use a crystal ball and read tarot cards. Nobody needs to work through a complex risk model to understand that if a retail website suffers a denial of service it'll have some financial consequences, or that if the Internet connection is lost there won't be access to the... er... Internet. I've got better, more constructive and practical ways to spend my day than conspiring over risk models. Much more relevant is threat modelling - understand your systems and know the business so that you can make relevant risk-based decisions.

4. Where are all the analysts?

Here are two examples of what I mean. i) A vendor does a "penetration test" of a web site (I use the term loosely because these days most pen tests seem to involve little more than pressing a button labelled "scan now") and sends you a report highlighting several "High Risk" issues. On closer inspection you discover that most of them are either false positives or irrelevant. ii) A network scan report is given to a newly CISSP-qualified security analyst and he's asked to review it as part of a job interview. He spots the obvious highlighted security holes but doesn't question why a web server has non-standard ports open. Are we becoming too reliant on auto-scan reports? Security analysts need to be inquisitive, well practised in basic technical skills, able to spot anomalies, and not afraid to question things that don't look right. The scan results never tell the full story!
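The non-standard-ports question is exactly the kind of check an inquisitive analyst can do in a few lines. A minimal sketch (the data format and the port baseline are my own assumptions for illustration):

```python
# Expected service ports for a host advertised purely as a web server.
# This baseline is an assumption - adjust it to your own environment.
WEB_BASELINE = {80, 443}

def unexpected_ports(open_ports, baseline=WEB_BASELINE):
    """Return open ports that fall outside the expected baseline -
    exactly the anomalies an auto-scan report won't flag for you."""
    return sorted(set(open_ports) - baseline)

# A "web server" with telnet, a non-standard HTTPS port and MS-RPC open:
scan_result = [80, 443, 23, 8443, 135]
print(unexpected_ports(scan_result))  # -> [23, 135, 8443]
```

The scanner will happily report all five ports as "open"; it's the analyst's job to ask why 23 and 135 are there at all.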

5. It's not my fault

The first nine years of my working life were spent in the Royal Air Force, where I enlisted as an Assistant Air Traffic Controller. I learnt a few useful things there: how to make a good cup of tea, how to write backwards on a plastic board with a chinagraph, and how to operate a Mu-meter being three that stick in my mind. Most important of all, though, was the lesson that excuses don't count. Only results. There's an evolving culture of making excuses for poor results. Lack of time, lack of resource, lack of training, lack of ability to think of a decent answer. All it shows is a lack of imagination, planning and initiative on the part of the person making the excuse. If you get something wrong it's your fault. Admit it and learn from it.

BBC violate Computer Misuse Act


Software used to control thousands of home computers has been acquired online by the BBC as part of an investigation into global cyber crime.

The technology programme Click has demonstrated just how at risk PCs are of being taken over by hackers.

Almost 22,000 computers made up Click's network of hijacked machines, which has now been disabled.

The BBC has now warned users that their PCs are infected, and advised them on how to make their systems more secure.


Surely this is blatant self-incrimination. The story seems to describe the BBC being in violation of the Computer Misuse Act. Regardless of whether or not the BotNet has now been dismantled, the fact is that this investigation is no different from somebody illicitly hacking into a computer to prove it's vulnerable...

Security, scale and functionality - Part 2: Scale


Scale, whether it is physical or logical, brings some interesting security challenges. The fundamental issues are oversight, assurance and misplaced trust.

Extended enterprises and supply chains are a contemporary case in point. With IT systems and processes integrated across traditional boundaries understanding the totality of a system becomes nigh on impossible. And if you don't have that 'helicopter view' how can you really assess the threats, exploitable vulnerabilities and most importantly the risk to your information assets that are now out of your control?

Compliance audits can go some way to help, but you'll still not know what you don't know. And can you always be sure that your partners are absolutely doing their bit to ensure you're covered?

So, would you sign off, accept and be responsible for the risk on something that you weren't 100% (or even 90%, 80%, 70%, 60%.... pick any percentage!) sure about? How lucky do you feel?

Council staff face Facebook ban


Hampshire County Council is threatening to block staff from using the social networking site Facebook.

Bosses said they noticed an increase in use and during monitoring 46 employees were found to have regularly spent more than an hour on the site each day.

See http://news.bbc.co.uk/1/hi/england/hampshire/7936076.stm

I find stories like this infuriating. If you don't want employees to use social networking sites then block access to them. If staff are spending "too long" then in my opinion that says more about the quality of line management and supervision than it does about the individuals using the service.

Susie Squire, from the Taxpayers' Alliance, said: "Ultimately these people work for taxpayers in Hampshire.

"There is no way they should be spending this time when they are supposed to be doing their jobs on a social networking site."

Get a life, Susie. What's the difference between somebody spending a bit of time on Facebook, making a personal call, gossiping with the folks on the second floor for half an hour, or going outside every 30 minutes for a cigarette break? Employers need to grow up and stop trying to control how their staff interact with their friends.

Cllr Thornber added: "We have seen an increasing trend of the use of social networks.

"We are monitoring it carefully and if we feel there is any abuse we will block the use of that individual and they could be disciplined."

I wonder what criteria will be used to determine whether the system has been abused. The usage stats can be misleading because people will browse to the site, then leave it running in the background on their computers. Reports will show hours of use and hundreds of page views when in fact there might have been very little user interaction.

The main stat I look at these days is quantity of outbound data. That one is far more interesting, from a security perspective, than the length of time somebody is logged into the service for.  
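As a sketch of what I mean (the log format and field positions here are invented for illustration; real proxy logs such as Squid's differ), totting up outbound bytes per user is trivial and tells you far more than session duration:

```python
from collections import defaultdict

# Invented three-field log format: user, destination, bytes_out.
# Adjust the split() positions to match your actual proxy log.
SAMPLE_LOG = """\
alice facebook.com 1200
bob facebook.com 48000000
alice facebook.com 900
"""

def bytes_out_per_user(log_text):
    """Sum outbound bytes per user; large totals merit a closer look."""
    totals = defaultdict(int)
    for line in log_text.strip().splitlines():
        user, _dest, nbytes = line.split()
        totals[user] += int(nbytes)
    return dict(totals)

print(bytes_out_per_user(SAMPLE_LOG))  # bob's 48 MB outbound is the interesting stat
```

Alice's hours of idle page views barely register; Bob's outbound volume is what a security team should actually be asking about.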


Google Docs accidentally shared

From SC Magazine

Users of the Google Docs application have had their information inadvertently shared.


A flaw has been identified in the system, which meant that some documents were marked down as collaborative items that allowed third parties who are also signed up to the system to access and amend them.

I've been dealing with requests within the business from people wanting to use Google Docs. Personally, I think it's secure enough for general day-to-day work. I wrote up and issued a quick bullet point list of sensible security guidelines relating to using the service, and we have a few people now using it to their benefit.

I'm much more concerned about the number of file shares within the corporate network that get set up without thought given to their contents or to whom access should be authorised. I find those all the time.

Google latitude - power to the people

In a country boasting the highest number of CCTV cameras per head of population in the world; where local council workers can act as undercover spies to root out everything from putting bins out on the wrong day to whether a family can genuinely claim to live in a school catchment zone; and where, every time a car is driven into London, number-plate recognition systems can track every turn in its journey, it's a bit rich for MPs to call Google's voluntary and free Latitude system a risk to privacy (see http://www.dailymail.co.uk/sciencetech/article-1160598/Google-phone-tracker-puts-privacy-danger-say-MPs.html).

Mr Brake, MP for Carshalton and Wallington, said: "In Britain, we have a tradition of fighting for our freedom." Hmmm. True enough if threatened by foreign invaders (although I'm not sure how much experience the 46-year-old Physics graduate Tom Brake has of being a freedom fighter), but against our own government systems the fight never started. More Citizen Smith than Che Guevara, methinks...

NYPD Data Center Theft

An interesting event reported in America, where a civilian employee allegedly stole personal information on 80,000 serving and former NYPD police officers. It's being called a "massive data breach" (see here) because it involved the theft of back-up tapes from a data center. However, it's not reported whether the thief had the equipment to extract data from the tapes or the right software to read it, whether the data on the tapes was encrypted, or whether - in the short space of time between the tapes being stolen and recovered - the data was put to any use. Probably not, unless he had help.

This is typical alarmist reporting, designed to do nothing more than sell copy. The real issue is the process failure that allowed somebody to use an expired ID card to fool a security guard.

Two minutes' worth of Googling and I came across this: "An audit of the Police Department's (NYPD) data center and computer security disclosed that there is adequate physical and computer system security in the data center and that computer operations, as well as contingency plans, have been tested in compliance with applicable Federal Information Processing Standards and City guidelines..."

They might want to revisit those guidelines!

So, while this was a data breach of sorts, the important bit from my perspective is the fact that a malicious employee was able to social engineer his way into a restricted area. It was an easily preventable incident.

Security, scale and functionality trade-offs

If decisions about design and modes of operation all involve trade-offs then security, scale and rich functionality have got to be at the top of the feature trade-off list.

I've believed for a long time that you really can't have security + scale + rich functionality in an application, network, solution, whatever, all at the same time, in the same quantity and to any kind of degree.

Instead you can only ever achieve a maximum of two out of the three at any time. For example the following combinations could be possible:

  • Security + scale, but not functionality;
  • Or how about security + functionality, but not scale;
  • And most importantly scale + functionality, but not security.

The last one is the most interesting for me, as I believe it best describes the situation that most enterprises are in at the moment. Organisations have pushed ahead with behemoth enterprise-wide systems that give end users feature-rich tools, applications and permissions to perform complex data mining and analysis more than ever before. Users have been liberated to 'get things done', but at what cost to good governance?

Over the next week or so I'm going to be breaking down this triad of system characteristics in order to better understand the problems we all face and maybe offer some insights into how to handle these tricky trade-offs.

Next instalment... Scale.

McKinnon step closer to extradition

British computer hacker Gary McKinnon has lost the latest round of his battle against extradition to the US.

See http://news.bbc.co.uk/1/hi/uk/7912538.stm

More here: http://community.zdnet.co.uk/blog/0,1000000567,10012233o-2000331828b,00.htm

This now pointless and irrelevant case drags on.

Rats coming out of the sewer

More than two years ago I mentioned on this blog the fact that large networks are likely hosting a variety of nasty things we will probably never become aware of. This is more than just speculation and there's some good supporting evidence in this latest story about another breach of a credit card payment processor which apparently only came to light when, as quoted below:

In most of the latest high-profile breaches, the threat was found only after the forensics team came into the picture. "Existing network security mechanisms remained clueless,"

However, search hard enough on any network and I'll bet you could find some speculative evidence of unauthorised access or malware that really amount to very little of interest. Is there sometimes an over analysis of forensic results when it comes to IT systems? I've seen plenty of vulnerability test reports that over-egg benign issues into something far more serious than they really are.

I'll be interested to see where this story goes.