Recently in Web product security Category

Don't lay all the blame for insecure systems on the developers

It's good to see the subject of secure development, and in particular the most serious coding issues that crop up within websites, making the mainstream news. See Dangerous coding errors revealed at http://news.bbc.co.uk/1/hi/technology/7824939.stm.

It's a good and comprehensive list, although by no means anything new. For example, I wrote about application security standards, covering much of what's in the list, back in 2004 in an article for Computers & Security Magazine (which, according to this link, will cost you $31.50 to download. Worth every cent too!), and application security has been a recurring theme of this blog. OWASP - the Open Web Application Security Project - first published its top ten security issues list back in 2001. There's nothing new here: some variants of the same issues, but it's fundamentally a list that could just as easily have been published 15 or 20 years ago.

A point I made a while ago (see here and elsewhere on this blog) is that the onus for secure online systems does not rest with the developers alone. They can't do it all: the systems are too complex, and the attacks are becoming too sophisticated. Training the developers to write secure code is merely one layer in the defences. Training up the QA guys to write and test a decent set of use and abuse cases is another. Having a secure network, patched and hardened, with an application firewall is another. It's an expensive business.

Complexity is, as they say, the enemy of security. The last system I reviewed was a mixture of newly written .NET code and some legacy ColdFusion, integrated with half a dozen third party components, pushing data out to the "cloud", pulling data in from various third party feeds, all connected up to in-house back-end databases, enterprise search systems, and a third party CMS. How many lines of code in that lot? In fact, just defining the scope for security testing is difficult enough.

Giving the developers a list of issues to look out for is fine, and I'm all for it. But it's not the solution.

AMEX and online security

The cross-site scripting (XSS) flaw discovered on the website of American Express (see the full story at http://www.darkreading.com/security/vulnerabilities/showArticle.jhtml?articleID=212501694) is typical of the sort of issue I see on a pretty regular basis.

The full disclosure is here: http://holisticinfosec.blogspot.com/2008/12/online-finance-flaw-american-express.html

At the same time, it's becoming increasingly difficult to guard against such flaws because code comes into play from so many different sources to make up increasingly complex web products: you've got your own developers writing code and downloading "useful" components to include in the build, maybe a third party developing some further controls, third party CRM systems, connections to various web services and so on. Testing of any web product needs to cover the full scope of the system, and that means the third party stuff too.

More fundamentally, wherever you find weak processes, a lack of standards, poorly planned and thought-out testing, and developers being pushed to deliver as many features as possible in as short a time as possible you will also find security flaws. It's a fact.

How to avoid cross-site scripting flaws is basic stuff. There's no excuse for it, but AMEX, as a result of somebody failing to check that some basic validation processes were used and tested, now finds this story about the quality of its online security sprayed all over the Internet.
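The "basic stuff" here is simply encoding untrusted input before it's reflected back into a page. A minimal sketch in Python, using the standard library's html.escape (the function name and greeting are illustrative, not from the AMEX case):

```python
import html

def render_greeting(user_input: str) -> str:
    """Build an HTML fragment with untrusted input safely encoded."""
    # html.escape converts <, >, & and (with quote=True) quote characters
    # into entities, so any injected markup is rendered as inert text.
    return "<p>Hello, {}</p>".format(html.escape(user_input, quote=True))

print(render_greeting('<script>alert("xss")</script>'))
# → <p>Hello, &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

The same principle applies whatever the framework: the encoding has to happen at the point of output, and it has to be tested, which is exactly the check that was missed here.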


Web security - WAFs, Secure Code and Third Party Components

Some further interesting discussion on the subject of web application firewalls here. Regular readers of this blog (hello mum) will recall that this is a subject I've raised a couple of times in the past (for instance see entries of 08/07/08 and 18/06/08).

In his blog, Rich Mogull says

If you don't have the resources for both (web application firewall and secure coding), I suggest two options. First, if you are really on the low end of resources, use hosted applications and standard platforms as much as possible to limit your custom coding. Then, make sure you have kick ass backups. Finally, absolutely minimize the kinds of information and transaction you expose to the risk of web attacks- drop those ad banners, minimize collecting private information, and validate transactions on the back end as much as possible.

I think that's great advice and wish more of the development teams I encounter would take note of it. Personally I still think it's more or less impossible to write completely secure code. Recent projects that I've reviewed have done absolutely nothing to sway that belief. What I am seeing more of is a worrying arrogance in development teams who seem to think that their code will be immune from attack.

Another weak spot I'm seeing more of is in the implementation of third party products: more of them are being used, I'm not seeing any due diligence performed before they become part of a production infrastructure, and I'm not seeing much in the way of support plans for keeping those components patched and updated.

Since the beginning of this decade around 14% of reported data breach incidents (169 out of 1148 according to the statistics reported in the OSF Data Loss Database) have been the result of website attacks. While statistics only tell us what has happened in the past, I think it's indicative enough that website security is likely to continue to be an issue.



Application security - don't forget third party components

Developers: bless 'em. They do love to download and use stuff from everywhere and anywhere in the name of functionality, time saving, or just to show how clever they are. How much control and oversight does your security group have on the integration of untested and untrusted third party components within in-house developed application code?

This week I've seen first hand evidence of what can happen when third party controls are implemented without oversight: a massive memory leak that has brought an entire system to a grinding halt. Ironically, the first thing the organisation in question discovered was that the issue with the particular component is documented and well known even on the vendor's own website.

There's some good guidance from OWASP on the subject here: https://www.owasp.org/index.php/Research_and_assess_security_posture_of_technology_solutions.

Some years ago, whilst working as a developer, I installed a component I'd downloaded into the system that I was working on. About three minutes after receiving my code, the QA manager threw it back to me: Where's this in the technical specs? What does it do? Who reviewed it? It's not in the test plan! Two minutes later, I had my tail firmly between my legs after being bollocked by the development manager and was re-writing my code minus the component. We learnt fast back in those days. I'm not sure I can remember the last time I saw a developer write a tech spec! Hell, so far as I know, has not yet frozen over.


Breaking websites without touching the application

Just as there is more than one way to skin a cat, there are many ways to break a web application. When I speak to developers and ask them if they are producing a secure system, the answer I get will usually mention validation and SQL injection and so on. Good stuff. But is it actually secure?
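The coding side the developers mention, guarding against SQL injection, usually comes down to parameterised queries. A minimal sketch in Python using the standard library's sqlite3 module (the table and data are purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user(conn, name: str):
    # The ? placeholder keeps the input as data; even a classic crafted
    # value like "' OR '1'='1" cannot alter the structure of the query.
    cur = conn.execute("SELECT email FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user(conn, "alice"))        # one matching row
print(find_user(conn, "' OR '1'='1"))  # no rows: the input is treated as a literal name
```

That's necessary, but as the rest of this post shows, it's nowhere near sufficient.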

A neat trick I used to use when web security was the main focus of my job was to trawl the Internet looking for information posted by developers working on the system I was interested in. Google Groups, for instance, can be a great place to look for postings made by developers seeking help resolving problems, and they often include code snippets (somewhere out there is a question I posted to a programming forum twelve years ago, still there for the world to see. Hopefully the application I was working on at the time isn't!). Finding snippets of code can provide you with a great insight into how the system is being developed and clues as to how to break it.


These days, of course, developers have their own blogs and belong to various online communities, a fact discussed in an article entitled "Tiger Team member attacks developers, not apps." The article makes the point that with the right amount of reconnaissance, access can be gained to a web application without ever touching it. Chris Nickerson says: "Instead of spending time going through the application first, I figure out who the developers are. If they have Twitter accounts, MySpace pages, personal email accounts, and phone numbers... I start profiling them. I can guarantee I will find code faster than those who are directly touching the code."

It's an excellent article that highlights the point that website security is more than just validating input so that the vulnerability scanner gives it a green light, and in fact, is more than just about writing secure code. You also need to consider the security of the code itself and treat it as an asset to be protected.


Cern Website Hacked


A website associated with the Large Hadron Collider (LHC) atom-smashing experiment at Cern has been hacked.

A group of hackers called the GST, or Greek Security Team, has claimed responsibility for the attack. They posted a lengthy message on the site to prove they had breached computer security.

Full story on Computer Weekly.

Some more alarmist reporting of the same incident here, with an insider apparently claiming "If they had hacked into a second computer network, they could have turned off parts of the vast detector..."

Where do we spend the money?


I was involved in a debate today where three opposing views were taken with regard to implementing a hypothetical new online application. Given a limited budget, should most of the money be directed towards network, application or data security?

Personally I believe that a more holistic view of the situation is required. We need to understand the way the organisation works, the cultures, the regulatory environment and so on. Not to mention physical security, security awareness, training and a myriad of other factors.

When looking at new systems I prefer to work out a set of security requirements based on the risks rather than breaking things down into technical categories. We need to consider the risks, describe the controls that work to mitigate them, and then consider the degree of affinity each of those controls has towards the risk.

Once we've considered which controls are most effective - and what we might presently have lacking - then we can describe the security requirements and where we're going to spend the money.

10 of the Biggest Platform Development Mistakes


Timely and interesting read online here: http://gigaom.com/2008/06/30/10-of-the-biggest-platform-development-mistakes/, listing the 10 most commonly observed platform development mistakes. A few items in the list particularly caught my attention:

- Confusing product release with product success. I'm familiar with the huge sigh of relief that goes out when development and implementation are completed. However, the measure of success should be when the system is proven to function and has been accepted by your customers.

- Not having a business continuity plan/disaster recovery plan. I'm frankly amazed that this still needs to be stated as a requirement, and more so that I still hear of people getting push-back. An acquaintance informs me that as soon as he raised this issue when taking on a new job, his management told him it was out of scope for information security.

- Relying on QA to find your mistakes. I like the point made in the article that you "cannot test quality into a system."

I'd like to add one more item to the list:

11. Failing to consider and define security requirements. We need to understand the system and its components, and know where data is intended to flow and be stored. Then we can understand the potential risks and the best controls. I like to set, and get agreement on, a list of high-level requirements at an early stage in the project.

Larry David and Web Application Firewalls


I'm a big fan of Curb Your Enthusiasm. If you've not encountered this excellent sitcom, it's about Larry David, the co-creator of Seinfeld, who plays himself as he goes about his everyday life with his wife, Cheryl. Larry has a way of saying the things most of us would like to say but that would be deemed too socially unacceptable. For instance, on one occasion he orders a drink in Starbucks:

Larry: This is very good, by the way. Thank you. Is this a cafe latte? What is that? Milk..
Starbucks employee: Milk, uh..
Larry: Milk and coffee.
Starbucks employee: Milk and coffee, yeah.
Larry: Milk and coffee! Who would've thought? Milk and coffee!
Cheryl: You know, we need to go now.
Larry: Oh my god, what a drink! It's milk and coffee mixed together! You've gotta go there! Sit down, have a doughnut! Have a bagel!

I've been accused of being a bit like Larry. I don't think so. For starters I'm not in the slightest bit wealthy, nor am I American. However, I might say things you won't necessarily agree with. For example, on web application security I said "we should take a different approach....stick a ruddy great application firewall in front of everything." Somebody responded to that one saying "It's like recommending Advil for diabetes" and called me insane. Somebody else told me I was "plain mad" (see here).

However, I am not alone in voicing reasons to be using the technology.

To address identified issues quickly Web application firewall (WAF) technology is getting a serious look. Recent technology advancements enable vulnerability assessment results to pipe straight into a WAF as virtual patches.

This approach lets us mitigate the problem now giving us breathing room to fix the code when time and budget allow.

The quote above is from an article published this week in CSO Online, written by Jeremiah Grossman. Jeremiah also talks about WAFs over on his excellent blog and goes into more detail about their purpose and limitations. For example:

WAFs don't defend against every logic flaw, or even every crazy form of SQLi or XSS. Just as white/black box scanners can't identify every vulnerability and neither can expert pen-testers or source code auditors.

Back to the CSO article, where the point is made that we are sitting on a huge legacy of insecure code and that "we can't rewrite history." So, the argument is that a web application firewall mitigates the risk - note: does not solve the problem - until the code can be replaced.
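To make the "virtual patch" idea concrete: a WAF rule is essentially a signature applied to incoming requests in front of the vulnerable code. A deliberately simplified sketch in Python (the parameter names and patterns are hypothetical examples, nothing like production-grade signatures):

```python
import re

# Hypothetical virtual-patch rules: each one blocks exploitation of a
# known flaw in a specific parameter until the code itself is fixed.
VIRTUAL_PATCHES = [
    ("q", re.compile(r"<\s*script", re.IGNORECASE)),  # reflected XSS via ?q=
    ("id", re.compile(r"['\";]|--")),                 # SQLi metacharacters via ?id=
]

def allow_request(params: dict) -> bool:
    """Return False if any request parameter matches a virtual-patch rule."""
    for name, pattern in VIRTUAL_PATCHES:
        value = params.get(name, "")
        if pattern.search(value):
            return False  # drop the request before it reaches the application
    return True

print(allow_request({"q": "holiday offers", "id": "42"}))  # benign request passes
print(allow_request({"q": "<script>alert(1)</script>"}))   # known attack pattern blocked
```

Which also illustrates the limitation Jeremiah describes: the rule only blocks the patterns someone thought to write, which is exactly why a WAF buys breathing room rather than solving the problem.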

How much of the risk is mitigated is open to debate, but there are lots of other things to consider too. For instance the cost of redeveloping code against the cost of purchasing and supporting a WAF. We also need to consider the value and risk profile of the product.

Anyway, back to Larry David and his views on being invited to a dinner party:

Larry: What is this compulsion to have people over at your house and serve them food and talk to them?

Web based email and a prediction for the future


I've been following an interesting Q&A thread on LinkedIn where the question is asked "Should business messages be allowed to flow through personal/webmail services?"

What's interesting to note is the difference in opinion between the more technical network security analyst types and the more business-oriented individuals.

Security & Systems Engineer: This should not be allowed. Security is tough enough without introducing additional systems that are not under your control

Sr Systems Architect: Business messages should not be allowed to flow through personal services, just as employees should not be doing work on the home computers.

Network and Data Security Architect: Absolutely not. It's unprofessional.

Information Security Specialist: This is a business decision not one for IS engineers

Principal Consultant: while many security researchers and practitioners would be quick to shoot down the suggestion of personal webmail, that's oversimplifying the situation

Chief Information Security Officer: The business owns the data, so they measure the risk and define the acceptable use for that information

This comes back to the point I made a few days ago about not allowing the IT department to set policies. Decisions such as this must come from the business and I wholly agree with the response quoted above from the CISO. If the business decides that it needs to use webmail services for whatever reason then it's up to us to ensure that the risks in doing so are adequately mitigated, communicated, agreed etc. Of course, I might want to recommend a different service from the one being proposed and I would hope that my views on risk would be taken into account (and don't forget to review the terms and conditions too - you want to make sure that you still own your own documentation!).

In this particular question of webmail, there is a much bigger picture to take into account too. "In the cloud" services (PaaS, SaaS) such as Google Apps, web-based email, SFDC and so on will, in my opinion, one day very soon be just as normal in the workplace as Microsoft Word and Exchange-based email are today. We need to adjust our thinking accordingly.
