Tailored services and personalised advertising are great, aren’t they? People and organisations are no longer bombarded by things they are not interested in or cannot respond to, only by those that are relevant.
In theory, that is what all the cookies, search histories and other smart stuff on the internet deliver, especially with all the recent attention on the Internet of Things (IoT). The reality is patchier. Getting the right data to tune, tailor or personalise effectively is not straightforward, as many online advertising attempts and email blasts indicate.
IoT presents both a ‘big data’ opportunity and a ‘big data’ problem. However, a focus on the volume of data over and above the other aspects – velocity, variety, veracity, value – will create more problems than opportunities, and these will be less easy to dismiss than misdirected adverts.
First, veracity. Sure, you cannot check everything, but when big assumptions are made on flimsy or uncorroborated evidence, it is like putting two and two together and getting pink bananas. Even simple attributes are more complex than they appear. Take the use of a Gmail account as a profiling indicator: JoBlo@gmail.com might be several people sharing one account, or one of several email addresses used by JoBlo. So are you sure that what appears online to be JoBlo is actually the person in Exeter currently searching for a cement mixer?
Verify data before acting upon it. Look to corroborate it against other data points, and take tentative steps when using partially verified data: “Can you confirm you are interested in cement mixers? We might have something you would find useful.” Above all, don’t assume; check, verify and corroborate, especially with data from sensors. Would an aircraft rely on a single indicator of the fuel level in each tank?
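The aircraft analogy can be made concrete. Below is a minimal sketch, in Python, of corroborating independent readings before acting on any of them; the sensor values, tolerance and function name are all illustrative assumptions, not drawn from a real avionics system.

```python
# Corroborate independent readings before acting on any one of them.
# Values and the 5% tolerance are illustrative, not from a real system.

def corroborated_reading(readings, tolerance=0.05):
    """Return the mean reading if all sensors agree within a tolerance,
    otherwise None, signalling that the data needs checking first."""
    if len(readings) < 2:
        return None  # a single source cannot corroborate itself
    mean = sum(readings) / len(readings)
    if all(abs(r - mean) <= tolerance * mean for r in readings):
        return mean
    return None  # sensors disagree: verify before acting

# Three independent fuel-level sensors on one tank (illustrative figures):
print(corroborated_reading([812.0, 815.5, 810.2]))  # close agreement: the mean
print(corroborated_reading([812.0, 815.5, 460.0]))  # one outlier: None
```

The point is not the arithmetic but the shape of the decision: the action (trusting the number) is only taken once more than one source agrees.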
Next, variety – taking diverse information from a mix of sources to build a bigger picture. Interest in cement mixers plus membership of a trade body for builders might indicate a professional rather than a casual requirement. Then again, it might not. Is this a recurring theme or something new? Are there other lines of interest that gel?
How wide does the search for relevant information need to be? That is difficult to know, but since many people carry an array of mobile sensors on their body, and industrial as well as consumer devices are increasingly imbued with open connectivity and a range of sensors (the Internet of Things), there is plenty of data to choose from. Despite reportedly growing numbers of data scientists in the UK, too much data will swamp them or cost too much to analyse; collect too little, however, and business opportunities may be missed.
The best way to decide is to step outside the current silos of management structures and departmental agendas and have someone, or a team, take an independent operational perspective. Assess each primary process: what data is available, what is unknown, what could be done differently if the unknowns were known, and what the impact might be on the overall process. Then, and only then, look into what IT might be available to help produce the data.
Then, velocity. After-the-event insights are great for looking back, saying “I told you so” and apportioning blame, but they do little for making worthwhile improvements to a business process or meeting customer needs. With so much ‘real-time’ data gathering and analytics available, some might feel the problem can be solved by one all-encompassing deployment, probably codenamed something like “Rolling Thunder”.
This would be a mistake.
Finally, value. Why collect, analyse and report on all this data if there is no real value at the end? You may have identified that JoBlo is a builder with an interest in cement mixers, but if he has already bought one, it is unlikely that he is immediately in the market for another. Ensure that you understand what the desired end result is, whether it is likely, and what value it brings to the business.
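The JoBlo example boils down to a value check before acting: even a well-corroborated interest signal is worthless if the outcome has no value. A minimal sketch, assuming a hypothetical profile record and a one-year repeat-purchase window chosen purely for illustration:

```python
# A value check before acting on an interest signal.
# The profile fields and the 365-day window are illustrative assumptions.

from datetime import date, timedelta

def worth_contacting(profile, today):
    """Skip prospects who recently bought the product we would offer."""
    last = profile.get("last_purchase_date")
    if last and (today - last) < timedelta(days=365):
        return False  # already bought one recently: no value in the offer
    return profile.get("interested", False)

# A builder interested in cement mixers, but who bought one three months ago:
builder = {"interested": True, "last_purchase_date": date(2015, 3, 1)}
print(worth_contacting(builder, today=date(2015, 6, 1)))  # False
```

The interest data is accurate and corroborated; the check simply asks whether acting on it would actually deliver value.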
IoT deployments could have hugely beneficial consequences, but in most cases these are honestly too difficult to predict in advance. Take an incremental approach similar to the DevOps mindset: start small, trial the concept, check the results, refine, repeat and scale. It is a bit like planning in fog. A leap into the unknown may work out OK, but it most likely will not, and that means a lot of budget wasted either on shiny tech that does not work, or on changes to the business that are not commercially beneficial.
It is worth remembering that, with all the masses of data, powerful analytics and clouds of compute power and storage, even the biggest and best on the internet still deliver mismatched, untimely or irrelevant adverts to browsers. Focus on, and attention to, the data that matters to the recipient – whether a consumer or a business process – is key. Understanding this is much harder than it first appears; perhaps we need more business process scientists as well as data scientists?