Don’t just say Pentaho — now say Pentaho, a Hitachi Data Systems company.
The newly divisionalised (Ed: is that even a word?) firm appears to be solidly hanging on to its own brand name under Hitachi, which will please those who have regarded its open source credentials, in place since the firm's inception, as among the more genuinely open in the industry.
Pentaho (version 5.4) this month talks up its support for (and integration with) Amazon Elastic MapReduce (EMR) and SAP HANA.
The firm has branded its 'Pentaho Big Data Blueprint' use case designs as a means of guiding customers deploying its technology at this level.
In Pentaho 5.4, customers can now use Amazon EMR to natively transform and orchestrate data, as well as design and run Hadoop MapReduce jobs in-cluster on EMR.
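For readers new to the model Pentaho is targeting here: MapReduce itself is a simple three-phase pattern (map, shuffle, reduce). A minimal single-process sketch in plain Python, nothing Pentaho- or EMR-specific, illustrates the idea:

```python
from itertools import groupby

def mapreduce(records, mapper, reducer):
    """Toy single-process MapReduce; Hadoop distributes these same phases."""
    # Map phase: emit (key, value) pairs for each input record.
    pairs = [kv for rec in records for kv in mapper(rec)]
    # Shuffle phase: group pairs by key (a sort stands in for the shuffle).
    pairs.sort(key=lambda kv: kv[0])
    # Reduce phase: fold each key's values into a single result.
    return {
        key: reducer(key, [v for _, v in group])
        for key, group in groupby(pairs, key=lambda kv: kv[0])
    }

# Classic word count: the mapper emits (word, 1), the reducer sums.
lines = ["big data", "big cluster"]
counts = mapreduce(
    lines,
    mapper=lambda line: [(w, 1) for w in line.split()],
    reducer=lambda word, values: sum(values),
)
print(counts)  # {'big': 2, 'cluster': 1, 'data': 1}
```

In a real deployment the map and reduce functions run in parallel across the cluster's nodes, which is what "in-cluster on EMR" refers to above.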
The technology play here is all about giving data developer shops ways to ‘operationalise’ a cloud-based data refinery architecture (more blueprints) for on-demand (obviously, it’s cloud) governed (that’s always important) delivery of data sets.
Users here can also plug into SAP HANA’s capabilities on a wider variety of data — the firm says that Pentaho 5.4’s integration with SAP HANA enables governed data delivery across multiple structured and unstructured sources.
Enterprises running Hadoop find that data variety and volumes increase over time, making reliable performance and scalability mission-critical priorities.
Pentaho recently ran a controlled study demonstrating sustained processing performance for Pentaho MapReduce running at scale on a 129-node Hadoop cluster. The results, the firm says, underline the platform's ability to deliver high-performance processing at enterprise scale in big data deployments.
“We continue to deliver on our vision to help organizations get value out of any data in any environment with Pentaho 5.4,” said Christopher Dziekan, chief product officer, Pentaho. “Our open and adaptable approach means customers choose the best technology for their businesses today without the worry of being locked-out in the future.”