IBM Analytics GM: data is the new cargo

There are many magic rings in this world, Frodo, just as there are many job titles spanning the ever-growing realm of software application development, programming and all related disciplines that fall under the wider banner of software engineering and data management.

Of all the new job titles the industry now wants to coin (think DevSecOpsDBA and so on), it is perhaps the coming together of the programmer/developer and the data analytics engineer that is drawing the most interest. This new role is the data developer, so how does their world look?

Rob Thomas, general manager for IBM Analytics, says that it’s all about containerisation – the shipping kind and the software kind.

Thomas cites the invention and introduction of the world’s first intermodal shipping containers and the simplicity they offered: loading and unloading with no unique assembly required.

This is a form of standardisation for efficiency that the software world has also sought to emulate. We can point to standards such as TCP/IP and Linux, and now the new era of Kubernetes.

“The benefit of standardisation in this realm is flexibility and portability: engineers build on a standard and in the case of Kubernetes, their work is fully flexible, with no need to understand the underlying infrastructure,” wrote IBM’s Thomas, in a recent blog post.
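To make that portability point concrete, here is a minimal sketch using the official Kubernetes Python client (the deployment name and container image are placeholder assumptions). The developer declares what should run; the same spec can be submitted unchanged to any conformant cluster, with no knowledge of the machines underneath.

    # Minimal portability sketch using the official Kubernetes Python
    # client (pip install kubernetes). Names and image are placeholders.
    from kubernetes import client, config

    # Credentials come from ~/.kube/config -- the code is identical
    # whether the cluster runs on-premises or in any cloud.
    config.load_kube_config()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="data-service"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "data-service"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "data-service"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="data-service",
                                       image="example.com/data-service:1.0"),
                ]),
            ),
        ),
    )

    # The developer never says which machines run the pods; the
    # scheduler deals with the underlying infrastructure.
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment)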

Data is cargo

Continuing the IT-to-shipping containerisation analogy, we can suggest that data is the cargo… and the data developer is the stevedore of the system.

NOTE: A stevedore, longshoreman, or dockworker is a waterfront manual labourer who is involved in loading and unloading ships, trucks, trains or airplanes.

But there’s a problem: stevedoring is tough, gritty, backbreaking work. Surely, in this day and age, we can automate and computerise a good deal of the old grunt work (in real stevedoring and in data development).

For IBM’s Thomas, there is an issue to highlight here. He says that most of the advances in IT over the past few years have been focused on making life easier for application developers.

But, no one has unleashed the data developer.

Every enterprise is on the road to AI, says Thomas. But AI requires machine learning, which requires analytics, which in turn requires the right data/information architecture.

He asks, “When enterprise intelligence is enhanced, productivity increases and standards can emerge. The only drawback is the assembly required: all systems need to talk to each other and data architecture must be normalised. What if an organisation could establish the building blocks of AI, with no assembly required?”

IBM claims to have some form of answer in the shape of its new IBM Cloud Private for Data engineering product.

Now, please wash your hands

The product is pitched at data science, data engineering and application building, with no assembly required (and no need to get your hands dirty).

“As an aspiring data scientist, anyone can find relevant data, do ad-hoc analysis, build models and deploy them into production, within a single integrated experience. Cloud Private for Data provides: access to data across on-premises and all clouds; a cloud-native data architecture, behind the firewall; and data ingestion rates of up to 250 billion events per day,” said Thomas.
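Stripped of product specifics, that loop is familiar to anyone who has worked with data. The sketch below uses pandas and scikit-learn as stand-ins (the file name and column names are hypothetical, and this is not the Cloud Private for Data API itself): find the data, poke at it, build a model, and hand it off for serving.

    # Generic find-analyse-model-deploy sketch; pandas and scikit-learn
    # stand in for the integrated experience Thomas describes. The CSV
    # file and its columns are hypothetical.
    import pandas as pd
    import joblib
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # 1. Find relevant data (here, a local extract).
    df = pd.read_csv("customer_events.csv")

    # 2. Ad-hoc analysis: a quick statistical summary.
    print(df.describe())

    # 3. Build a model on a train/test split.
    X = df[["events_per_day", "tenure_months"]]
    y = df["churned"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = LogisticRegression().fit(X_train, y_train)
    print("holdout accuracy:", model.score(X_test, y_test))

    # 4. Deploy: serialise the model for a serving layer to pick up.
    joblib.dump(model, "churn_model.joblib")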

The big claim from Big Blue here is that what Kubernetes solved for application developers (dependency management, portability and so on), IBM Cloud Private for Data will solve for data developers attempting to build AI products, without them needing to get their hands completely greasy down at the docks.
