
Compliance, change control, and firewalls


What exactly does "compliance" mean? If I'm reviewing a product and conclude that it is compliant with some particular policy or regulation, then what that really means is that it is compliant at that particular moment in time. This is a point well made on the PCI blog, and it's something that businesses should reflect on regardless of whether they are aiming for compliance with regulation or simply want to comply with an internal policy. The reason, obviously I hope, is that web products in particular present a moving target. A component deemed secure today might be completely insecure tomorrow when the next change is implemented.

It's therefore important that change control includes processes for ensuring continual compliance. Changes should be reviewed for security impact, and their risk assessed. It sounds easy, doesn't it? But that assumes there are suitably security-aware people available, and also that the potential impact of a change can really be known. For instance, a change request to open a new port on a firewall can be assessed easily; a request to deploy a new customer-facing enhancement to a web product needs a different approach. Having security baked into the SDLC is an obvious place to begin. However, playing devil's advocate here, let's say it's a change that needs to be implemented fast to resolve an earlier customer-facing (non-security) issue. There is going to be a good deal of pressure to deploy: the business needs to maximise revenue and get the product working fast, the development group will be under maximum pressure to deliver, and the developer will be focusing on the functionality rather than the security. Can a compliant product still be delivered under those circumstances?

The answer, in my opinion, is probably not. The most likely result will be after-the-event bug fixing and a period of non-compliance. It's an expensive way to do things, and while we all know it's cheaper to address problems prior to deployment, the real world is often not as we would like it to be.

So, faced with a need to be compliant on the one hand, but a star product requiring a quick yet complex change on the other, what can we do to ensure we don't attract the ire of the auditors? PCI clause 6.6 might provide some of the solution, namely "installing an application layer firewall in front of application." Yes, I know that I've previously stomped on this clause and bemoaned the fact that it isn't problem solving from a development perspective, but from a business perspective it's a different matter: while the bugs still sit on the server, at least the device protects against them being exploited.
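To make the idea concrete, here's a minimal, hypothetical sketch of the kind of request inspection an application layer firewall performs. The rule patterns and function names are my own inventions for illustration; real products ship far richer rulesets and need updating every time the application changes:

```python
import re
from urllib.parse import unquote

# Hypothetical signatures for illustration only; a real device
# maintains large, regularly updated rulesets.
RULES = [
    (re.compile(r"(?i)\bunion\b.+\bselect\b"), "SQL injection"),
    (re.compile(r"(?i)<script\b"), "cross-site scripting"),
    (re.compile(r"\.\./"), "path traversal"),
]

def inspect_request(query_string):
    """Return the label of the first matching rule, or None to allow."""
    decoded = unquote(query_string)  # normalise URL encoding first
    for pattern, label in RULES:
        if pattern.search(decoded):
            return label
    return None

print(inspect_request("id=1 UNION SELECT password FROM users"))  # SQL injection
print(inspect_request("page=about"))                             # None
```

The point, as above, is that the vulnerable code still sits on the server; the device simply screens the traffic that would exploit it.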

It's not a perfect solution, and I'd prefer to see more review and testing during the life-cycle, but as I stated earlier, the reality is frequently less than perfect, and our role at the end of the day is to adequately mitigate risk. In my mind that means ensuring that the business can still function, protect assets, and meet its security and compliance obligations in the easiest, most cost-effective manner.

So, going back to change control, which is where we came in for this blog: we need to work on improving processes to ensure that security impacts are reviewed, but we also need to be aware that this cannot always be assured. When that is the case, there are alternative solutions that help us maintain compliance. Make sense?

One last word, and sticking with the PCI theme: I wonder whether TJ Maxx, the latest company to suffer a breach of credit card data, was PCI compliant. There's a clause in the privacy policy on the TJ Maxx web site which states: "Unfortunately, no collection or transmission of information over the Internet can be guaranteed to be 100% secure, and therefore, we cannot ensure or warrant the security of any such information." That, in my opinion, is a crass clause that shows no regard for their customers. Instead of making such blunt and utterly pointless statements, surely it would be better to state that they are doing all they can to mitigate risk. However, current events show that they obviously were not, and I wonder how their own change control measures up. Anyway, it's another example of a business making the headlines for the wrong reasons, and it should serve as a reminder that compromises are still very much a danger if we leave the door open. Amen!

Unit testing software


I've been meaning to talk about unit testing software for a while. This is software that can analyse source code on the developer's desktop and identify errors and security vulnerabilities before they hit production.
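As a minimal sketch of the kind of desktop check such tools perform, here's an illustrative analyser built on Python's standard `ast` module. The list of risky calls and the function names are my own, purely for illustration; commercial tools carry thousands of rules and far deeper data-flow analysis:

```python
import ast

# A tiny, illustrative deny-list; real analysers track taint and data
# flow rather than just matching call names.
DANGEROUS_CALLS = {"eval", "exec", "os.system", "pickle.loads"}

def dotted_name(node):
    """Rebuild a dotted call name such as 'os.system' from the AST."""
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Attribute):
        base = dotted_name(node.value)
        return f"{base}.{node.attr}" if base else None
    return None

def find_dangerous_calls(source):
    """Return (line, name) pairs for calls to known-risky functions."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = dotted_name(node.func)
            if name in DANGEROUS_CALLS:
                findings.append((node.lineno, name))
    return findings

sample = "import os\nuser = input()\nos.system('ping ' + user)\n"
print(find_dangerous_calls(sample))  # [(3, 'os.system')]
```

Because the check runs against source on the developer's desktop, the finding surfaces before the code ever reaches production.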

I prefer unit testing to black-box testing and think that it's far better value for money. For a start, it encourages software quality because developers get to see errors as they work; it raises awareness, supports training initiatives, and consequently fewer errors make it into production (where, as we all know, they become more expensive and difficult to fix). It also fits right into the SDLC regardless of methodology, including Agile, and adds value to the compliance due diligence process.

Using unit testing tools throughout the lifecycle does, in my opinion, mitigate a good deal of product-related risk. Couple that with grey-box testing and you have a powerful armoury against code-related vulnerabilities.

One particular vendor I've spent some time talking to is Fortify Software. I've been very impressed by a number of things: the ease with which their solution fits into just about any development environment, the ease of use, and the quality of reporting are all excellent. There are other tools as well, such as JTest, which I've heard good things about from development groups who use it, and FxCop, a free static analysis tool from Microsoft for .NET developers.

Fortify Software maintain a blog, and it makes for a very interesting read.

Happy Thanksgiving (and more on vulnerability scanners)


Happy Thanksgiving! Many of my colleagues are American, so today should be a quiet one on the email front - although you can bet there will always be at least one of them sneaking a message out on the BlackBerry whilst on a trip to the garage for more beer.

I mentioned a couple of days ago that I was looking at a new version of a well known web product vulnerability scanner. As previously reported I'm none too enthusiastic about the whole automated testing area - I think it's lazy at best and inaccurate at worst. So, this week I've had a chance to do something I rarely get time for these days, and that's roll up my sleeves and do some fun stuff with web site pen testing.

Application Firewalls


I was re-reading the VISA CISP data security standards documentation, reminding myself firstly of what an enjoyable read it is, and secondly of some of the recent new clauses put in to entertain us. Clause 6.6 (on page 8 of the document) states that application layer firewalls are "considered best practice until June 30, 2008, after which it becomes a requirement."

Once I returned from buying shares in various application firewall vendors I re-thought the merits of this clause and whether or not it is really something that should be a requirement.

I know from experience that application firewalls have their place. Take, for example, an instance where you have a vulnerable web product and need a quick fix for multiple problems. Sure, the underlying problems still remain, but in the meantime you have defences in place that mitigate the immediate risks. So the device becomes the proverbial rug that the dust gets swept under. However, CISP requires implementation of a device regardless of risk status, which means you have to find the budget, find a person to administer and manage a device that needs updating with new rulesets each time you change the application, and then plan upgrades and replacements for the device throughout the entire lifecycle of your product.

Surely, if you have applied all of the previous mandatory clauses in the CISP documentation, then you will have, in my opinion, mitigated most of the product-related risks to a pretty substantial degree, demonstrated due diligence, and produced a secure product. I don't believe that adding the firewall buys much extra risk mitigation in this instance.

As you will learn, I'm a believer in process before technology and not trying to solve problems before you understand their causes - and I think that this requirement is overkill. What do you think?