I don’t harbour any illusion that the misfortunes of others, frequently
commented on within this blog, couldn’t also occur within my own
organisation. It would be foolish to think otherwise.
Some of the incidents listed on resources such as The Breach Blog strike pretty close to home, and serve as lessons fortunately learnt the hard way by others so that the rest of us can take note.
None of the incidents experienced within the organisation I work for
over the past couple of years have resulted in data breaches, but there
have been some near misses and one or two outright clangers. One that
will stick in my mind happened very soon after I took up my current
job. A web hosting environment was becoming unreliable, needing
frequent restarts of services to recover from outages. I suggested
running a number of tests to look for the presence of malware, and lo,
it appeared that the web servers themselves had been compromised. A
third-party consultant was called in to do some further investigation,
and the full extent of the mess was revealed: rootkits and strange port
numbers being utilised by goodness only knows what.
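As an aside, even a crude first pass can surface the "strange port numbers" symptom. The sketch below is purely illustrative, not what was actually run on our kit; the psutil library and the hand-maintained baseline of expected ports are both my own inventions for the example.

```python
# Illustrative sketch only: flag listening sockets that aren't on an
# expected baseline. Assumes psutil is installed (pip install psutil)
# and that the script runs with enough privilege to see every socket.
import psutil

# Assumed baseline for a typical web server; adjust for your environment.
EXPECTED_PORTS = {22, 80, 443}

def unexpected_listeners():
    """Return (port, pid, process name) for listeners not in the baseline."""
    findings = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr.port not in EXPECTED_PORTS:
            name = "?"
            if conn.pid:
                try:
                    name = psutil.Process(conn.pid).name()
                except psutil.NoSuchProcess:
                    pass
            findings.append((conn.laddr.port, conn.pid, name))
    return findings

if __name__ == "__main__":
    for port, pid, name in unexpected_listeners():
        print(f"Unexpected listener on port {port} (pid={pid}, process={name})")
```

Of course, a decent kernel rootkit will hide its sockets from exactly this sort of userland check, which is why the consultant's deeper analysis was needed, but as a first pass it costs nothing.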
The servers were taken away to be destroyed in the fiery pits of hell and replaced,
but clearly we needed to know how they had been so marvellously hacked.
Being ever the bright spark, I asked for the IPS logs to be reviewed.
Nothing in them. And I don’t mean that they simply revealed no useful
information. There were no logs. Tumbleweed rolled through the empty
space where the logs should have been. A journey to the dark, dank
bowels of the data centre revealed why. The device was plugged in,
switched on, and running normally, but it was connected to the wrong
interface and thus offered our now distraught and broken web server all
the protection of an umbrella against a nuclear blast.
Lessons learnt #1: change control procedures had not been followed and
nobody had checked. Assumptions had been made that the devices were
correctly configured, but without any resource available to review the
logs on a regular basis the errors were not noticed, and consequently
the attacks were not blocked. Lessons learnt #2: leave a vulnerable web
server online and it will be hacked. The total cost of the incident:
two web servers, the fees of an external consultant, and a good deal of
grief from groups within the organisation reliant upon the servers in
question.
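Lessons learnt #1 is at least partly automatable: if nobody has time to review the logs, you can still alert when they stop arriving altogether. The sketch below is hypothetical, with the log path and the six-hour threshold both assumed for illustration rather than taken from any real setup; the point is simply that a silent sensor should announce itself.

```python
# Illustrative sketch only: alert when a log file goes quiet, as our
# misplugged IPS did. Path and staleness threshold are assumptions;
# point it at wherever your device actually writes its events.
import os
import sys
import time

IPS_LOG = "/var/log/ips/events.log"   # assumed path, not any real product default
MAX_SILENCE = 6 * 60 * 60             # assumed threshold: six hours, in seconds

def log_is_stale(path: str, max_silence: int) -> bool:
    """True if the file is missing or hasn't been written to within the window."""
    try:
        age = time.time() - os.path.getmtime(path)
    except OSError:
        return True  # no file at all is the worst case: no logs whatsoever
    return age > max_silence

if __name__ == "__main__":
    if log_is_stale(IPS_LOG, MAX_SILENCE):
        print(f"ALERT: nothing logged to {IPS_LOG} for over "
              f"{MAX_SILENCE // 3600} hours", file=sys.stderr)
        sys.exit(1)
```

Run from cron, something like this turns months of silence into an alert within hours.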
Small beer, maybe, but such incidents
serve as valuable reminders and a good kick up the arse. I'll
excuse myself from this particular one because I’d only recently
started in the job, but it highlighted deficiencies in general IT
processes and helped me to prioritise where to focus my efforts over
the next few months.
There’s little to be gained from feeling
smug about incidents happening to other organisations. Interestingly,
three times as many people read my blog last Friday, when I commented
rather sarcastically about the latest mishap at EDS, as I would expect
on any other day of the week. There but for the grace of god go the
rest of us too…