No poop, Datadog loops in Hadoop

Cloud applications are built (or at least they should be) for scalability, agility and constant change.

The big scoop: why cloud applications need monitoring

As such, cloud applications require monitoring tools so that we can look at their workflow metrics, external dependencies, stress levels, idle periods, bottlenecks and resource allocation inefficiencies.

Datadog is one such tool.

The firm has now announced support for the Hadoop framework for distributed processing of large data sets across clusters of computers.

No 'dissing your dog' please, he might be ready to go fetch and look after your cloud data metrics for you


Hadoop users can now benefit from Datadog’s dashboards, full-stack visibility (and correlation), targeted alerts, collaborative tools and integrations.

“Integrations can be turned on immediately, adding to the list of technologies DevOps teams can monitor with Datadog,” said Amit Agarwal, chief product officer at Datadog.

“With most distributed systems running on many machines, when things go wrong it can be quite difficult to pinpoint exactly what happened or why. This is especially true in a team setting where everyone is running simultaneous, siloed investigations. This is why we integrated Hadoop with the Datadog platform. DevOps teams now have the ability to turn data produced by Hadoop into actionable insight.”

By adding the power of Datadog to Hadoop, users can now see hundreds of Hadoop metrics alongside their hosts’ system-level metrics, correlating what is happening within Hadoop with what is happening throughout their stack. Users can also avoid problems by setting alerts when critical jobs don’t finish on time, on outliers or any other problematic scenarios.
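As an illustration of the kind of alert described above, the hypothetical sketch below builds a Datadog monitor definition that fires when a Hadoop job runs longer than expected. The metric name and threshold are illustrative assumptions, not taken from Datadog's actual Hadoop integration; with credentials configured, a definition like this could be submitted via the official `datadog` Python client.

```python
# Hypothetical sketch: an alert for a Hadoop job that doesn't finish on time.
# The metric name `hadoop.mapreduce.job.elapsed_time` and the threshold are
# illustrative assumptions -- check Datadog's integration docs for real names.

def late_job_monitor(job_metric, threshold_ms):
    """Build a Datadog monitor definition (a plain dict) that triggers
    when the job's elapsed time exceeds the given threshold."""
    return {
        "type": "metric alert",
        "query": f"max(last_15m):max:{job_metric}{{*}} > {threshold_ms}",
        "name": "Hadoop job running too long",
        "message": "A MapReduce job has exceeded its expected runtime.",
    }

monitor = late_job_monitor("hadoop.mapreduce.job.elapsed_time", 3600000)

# With API keys in place, this could then be submitted, e.g.:
#   from datadog import initialize, api
#   initialize(api_key="...", app_key="...")
#   api.Monitor.create(**monitor)
```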

The product update also supports HDFS, MapReduce, YARN and Spark integration.
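For a sense of what enabling one of these integrations involves, here is a minimal sketch of a Datadog Agent configuration for the YARN check. The file path follows the Agent's `conf.d` convention, but the field names and default ResourceManager port are assumptions to verify against the official integration documentation.

```yaml
# conf.d/yarn.d/conf.yaml -- illustrative sketch, not a verified config
init_config:

instances:
    # URI of the YARN ResourceManager (8088 is the usual default port)
  - resourcemanager_uri: http://localhost:8088
    # A label so metrics from this cluster can be told apart in dashboards
    cluster_name: example-cluster
```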

Cirba: cloud analytics is a front-end responsibility, first

Ayman Gabarin, Cirba’s senior vice president for Europe, Middle East and Africa, followed up with Computer Weekly on this story and added, “Monitoring is undoubtedly important to ensuring performance, but ensuring it goes into the right cloud venue in the first place is what will make or break whether or not an app does well in the cloud.”

Gabarin insists that careful strategic planning, placement and positioning are key to creating a beautiful cloud architecture.

“Using analytics on the front end to evaluate each workload according to business, operational, resource and technical requirements, including considerations like proximity to key application components, enables workloads to be matched to clouds that meet all the criteria for performance, compliance and cost. A lot of issues are avoided and costs are reduced with the right placements,” he added.

Cirba software automatically optimises routing, reservation, placement and infrastructure decisions to ensure efficient, fit-for-purpose infrastructure.