TMI, or Too Much Information, is a phrase popularised in recent years that isn’t normally related to IT (well, at least not the sort of IT I’m involved in), but it is very much a truism.
Big data, data analytics, yada yada – all relating to the reality that we are (and have been) collecting Petashedloads of data for years, without necessarily contemplating what we might actually do with it (other than store it and back it up). It took the likes of Google to wake up the data admin guys and say, “well, we’re gonna do stuff with this data, even if you’re not”.
Of course, the retail and consumer markets in general have been analysis bonkers for decades, but what about IT security, cyber or otherwise? It’s not a case of collecting info from a single security device – be it syslog from some kind of firewall (of which many companies have very many), IDS, IPS, SIEM, UTM, ATM, ITN, BBC… – but all of these and more, administering and collating said info and then, er, what? In the event of trying to glean info on a possible cyber attack, diving into said eMountain of data – even using a variety of tools – will possibly result in a positive outcome, six months after the effects of said attack have already rendered the company bankrupt :-)
So what’s the answer (other than “42”)? Well, there are start-ups aplenty working on a solution to the aforementioned multitude of solutions that were designed to solve the original problem (which we can’t now remember what it was), most of which involve machine learning, the automation of “manual, error-strewn” tasks, the acceleration of forensics and the search for the holy cyberattack grail.

One such candidate is JASK (if you want to know what that stands for, Just ASK), with whom I had a jolly excellent conversation this week. Their understanding of the problem is right on the money, and not a million bytes away from that described above (which is very close in modern data terms). In short, it’s a platform (cloud + agents) designed to automate the collection and correlation of threat alerts from all manner of sources – it is completely open in this respect – and then analyse said alerts, providing prioritisation and – theoretically – acceleration in getting to the epicentre of the problem. So, kind of Splunk take two? Maybe, but key to its potential success lies in working with these existing products, so not alienating the vendors, nor – vitally – the end-user customers who have already invested gazillions in said solutions: “just one more wafer-thin mint, sir?”
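To make the “collect, correlate, prioritise” idea a touch more concrete, here’s a minimal, purely illustrative Python sketch – every name, schema and scoring rule below is my own invention for the sake of the example, not JASK’s actual API or method:

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical normalised alert record; in reality each product
# (firewall, IDS, SIEM...) emits its own schema, and the hard work
# is normalising them into something comparable like this.
@dataclass
class Alert:
    source: str      # e.g. "firewall", "ids", "siem"
    host: str        # the asset the alert concerns
    severity: int    # normalised 1 (low) .. 5 (critical)

def prioritise(alerts):
    """Correlate alerts by host and rank hosts by aggregate severity.

    A toy stand-in for the correlation/prioritisation step: many
    low-grade alerts against one host can outrank a single high one.
    """
    scores = defaultdict(int)
    for a in alerts:
        scores[a.host] += a.severity
    # Highest aggregate score first = investigate this host first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

alerts = [
    Alert("firewall", "10.0.0.5", 2),
    Alert("ids",      "10.0.0.5", 3),
    Alert("siem",     "10.0.0.5", 2),
    Alert("ids",      "10.0.0.9", 4),
]
print(prioritise(alerts))  # 10.0.0.5 (score 7) ahead of 10.0.0.9 (score 4)
```

The point of the toy scoring is the openness argument: as long as each vendor’s alerts can be normalised into a common record, the analyst gets one ranked queue instead of five consoles.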
Of course, the proof is, not in the pudding – too late by that stage of the meal – but in justifying its existence (and cost, which is sensibly priced by company size, not some archaic CPU count, or active users, or random number generator), which can only come with actually using the product and seeing what it spits out, what time it saves, and what businesses it saves. As ever, it’s a case of watch this space…