The future impact of open source on our information infrastructure

Much of what we read about the future impact of open source, cloud computing, or any significant ‘still-nascent’ technology paradigm tends to focus on the user/consumer end.

With more “BYOD trends will impact our future use of cloud” headlines than many readers can stomach, the (arguably) more analytical approach right now is to look at the underlying information infrastructure level.

Alternative and non-traditional

Gartner has spoken of the change that is happening at this information infrastructure ‘stratum’ and suggests that by 2015, 25% of new DBMSs (DataBase Management Systems) deployed will be of technologies supporting:

a. alternative data types, and

b. non-traditional data structures.

The big truth here is that we will now need to engineer for integration across new architectures running in parallel, and these conjoined architectures will also need to be:

a. joined in parallel, as stated

b. constructed and composed from less costly servers

c. built using “multiple threads of execution”

d. distributed across multiple networks, some of them cloud based

e. positioned to be able to perform data analytics effectively
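Point (c) in the list above, “multiple threads of execution”, can be made concrete with a minimal sketch: a simple analytic split across worker threads, mirroring how slices of a job would be distributed across the less costly servers in points (b) and (d). All function names here are illustrative, not drawn from any Gartner material.

```python
# Minimal sketch: splitting one analytic job across multiple threads of
# execution, mirroring how the work would be partitioned across low-cost
# servers in a scale-out cluster. Names are illustrative only.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """One 'worker' computes its share of an aggregate analytic."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Partition the data set: one slice per thread here, or one slice
    # per server at cluster scale.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum_of_squares(list(range(10))))  # 285
```

The pattern is the same whether the workers are threads on one box or processes on many: partition, compute locally, then combine the partial results.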

Gartner’s Merv Adrian says that the products needed to perform this work will need to be “purpose-built alternatives” but that they are, as yet, immature.

“The Apache Hadoop stack, a platform for the MapReduce Java programming framework, has garnered early support in the marketplace and substitute components are already appearing at different layers,” said Gartner’s Adrian.
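The MapReduce programming model that the Hadoop stack implements can be sketched in a few lines of pure Python: a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. This illustrates the pattern only, not Hadoop’s actual Java API.

```python
# Pure-Python sketch of the MapReduce pattern behind Hadoop: map emits
# (key, value) pairs, shuffle groups them by key, reduce aggregates
# each group. A word count is the canonical example.
from collections import defaultdict

def map_phase(document):
    # Emit (word, 1) for every word, as a Hadoop mapper would.
    return [(word, 1) for word in document.lower().split()]

def shuffle(pairs):
    # Group emitted values by key; Hadoop performs this step between
    # the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Aggregate each key's values; here, summing the counts.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["open source big data", "big data big architectures"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(shuffle(pairs)))  # {'open': 1, ..., 'big': 3, 'data': 2}
```

At Hadoop scale the documents, pairs and groups are spread across many machines, but the map/shuffle/reduce division of labour is exactly this.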

He continues…

“Familiar paradigms are being reshaped as new NoSQL data stores reshape expectations for persistence strategies for transactions, observations and interactions. Lower-cost alternatives are gaining traction relative to traditional RDBMSs, which may not be well-aligned to newer architectures, languages and processing requirements.”
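The reshaped “persistence strategies” Adrian mentions can be illustrated with a toy document store: unlike rows in an RDBMS table bound to one fixed schema, each record is a free-form document, so transactions, observations and interactions can sit in the same store with different shapes. This is a hypothetical sketch, not modelled on any particular NoSQL product.

```python
# Toy document store illustrating the schemaless persistence model of
# NoSQL document stores: records in the same key space need not share
# a schema, unlike rows in a fixed RDBMS table.
# A hypothetical sketch, not modelled on any specific product.
import json

class TinyDocStore:
    def __init__(self):
        self._docs = {}

    def put(self, key, document):
        # Any JSON-serialisable structure is accepted; no schema is
        # enforced (the round-trip just guarantees serialisability).
        self._docs[key] = json.loads(json.dumps(document))

    def get(self, key):
        return self._docs.get(key)

store = TinyDocStore()
store.put("t1", {"type": "transaction", "amount": 9.99})
store.put("o1", {"type": "observation", "sensor": "temp", "readings": [21, 22]})
print(store.get("o1")["readings"])  # [21, 22]
```

Note that the two documents share a key space but not a schema; in an RDBMS the differing fields would force either separate tables or a sparse, null-heavy one.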

“Data processing has always represented a spectrum of use cases from transaction/interaction processing to analytics, and many combinations of the two. Big data processing is no different, and new use cases have emerged that leverage both new and existing data not well-served by legacy solutions that were built decades ago in different computing environments.”

“This was before massive scale-out architectures were commonplace and the variety of data types now being deployed existed. New product categories have sprung up, designed to fit new needs at lower cost and with better alignment to new development tools and practices. Products and vendors are continually emerging to offer more purpose-built alternatives; these are immature, but often worth assessing.”