Facebook 'likes' (other firm's datacentre construction techniques)

bridgwatera | No Comments
| More

Facebook's outstanding international reputation for information ownership and privacy develops one stage further this month.


The social networking behemoth is being sued by BladeRoom Group, a British engineering company that claims Facebook has stolen its datacentre construction techniques.

BladeRoom claims to have championed modular datacentre design plans that employ prefabricated component parts.

The firm says it communicated with Facebook about its energy-efficient practices as far back as 2011.

The lawsuit points to Facebook's subsequent construction of its facility in the icy northernmost Swedish city of Lulea and says that the facility was built, at least in part, to design plans inspired by the BladeRoom template.

Multiple news sources point to Facebook's explanation of its so-called 'Rapid Deployment Data Center' concept, based upon a repeatable component design philosophy. It is this precept that BladeRoom appears to have taken particular umbrage with.

BladeRoom further contends that Facebook initiated the Open Compute Project as a means of propagating and sharing this design theory, while also taking the credit for the original intellectual property innovation and the ensuing community philanthropy.

According to IDG News Service, lawsuit details include the note that, "Facebook's misdeeds might never have come to light had it decided that simply stealing BRG's (BladeRoom Group) intellectual property was enough. Instead, Facebook went further when it decided to encourage and induce others to use BRG's intellectual property through an initiative created by Facebook called the 'Open Compute Project'."

BladeRoom is now seeking financial damages and an injunction to stop Facebook building with modular techniques. The firm will need to prove that it owns the rights to the concept of 'adding rooms on and building extensions' if it is to win its case. Meanwhile, Microsoft along with HP, Google, Amazon and others are also known to make extensive use of modular datacentre design techniques in a space that is becoming increasingly muddied.

If BladeRoom's claims hold any water (and Facebook has any real culpability to be called out upon), the crux may hinge on the finer details.

BladeRoom specifies that its construction concepts started out in relation to the construction of hospital buildings - and, crucially, IDG reports that a Facebook engineer presenting at the OCP Summit last January said that, "Lean construction approaches [of this kind] are often used in hospital buildings."

Something fishy may be going on and it's not just the herring in Sweden.

Images courtesy of https://www.facebook.com/LuleaDataCenter


Do SmartBears shhh (sponsor Swagger), in the woods?


The world's most obvious questions are of course:

1. Does the Pope wear a pointy hat?
2. Does Lady Gaga wear a telephone on her head?
3. Do bears shhh in the woods?

Question 3 relates to the query at hand: does SmartBear, the open source API testing and development tools company, now assume sponsorship of the Swagger API project in the tortuous woods and forests of the modern open source jungle? Obviously.

Bear Grylls (Photo courtesy of Bear Grylls).jpg

Mercifully, the answer is yes, SmartBear has acquired the Swagger API open source project from Reverb Technologies.

Swagger is the leading API description format used by developers in almost every modern programming language and deployment environment to design and deliver APIs that fuel IoT, microservices and mobile applications in the connected world.

SoapUI and Swagger

With this acquisition, SmartBear is now the company behind the two most widely adopted API open source initiatives, SoapUI and Swagger.

"Swagger has been the clear leader of the API description format discussion for several years - its ecosystem and passionate community is unsurpassed in the field," said Ole Lensmar, CTO at SmartBear.

"We look forward to working with Tony Tam, Swagger's creator, to give Swagger the dedicated backing and support it needs for growth, primarily to ensure the open source project's evolution but also to ease its adoption into enterprise scenarios."

Swagger is a representation of RESTful APIs - with it, API developers can deliver interactive documentation, client SDKs and discoverable APIs.

With its code generation capabilities and open source tools, Swagger makes it easier for developers to go from design to implementation.
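To make the "API description format" idea concrete, here is a minimal sketch in Python of the kind of machine-readable document Swagger defines. The endpoint, title and parameters are invented for illustration; the field names follow the Swagger 2.0 specification:

```python
import json

# A minimal, hypothetical Swagger 2.0 description of a single endpoint.
# The API itself ("Pet API") is invented; the structure is what matters.
spec = {
    "swagger": "2.0",
    "info": {"title": "Pet API", "version": "1.0.0"},
    "paths": {
        "/pets/{id}": {
            "get": {
                "parameters": [
                    {"name": "id", "in": "path", "required": True, "type": "integer"}
                ],
                "responses": {"200": {"description": "A single pet"}},
            }
        }
    },
}

# Tools in the Swagger ecosystem consume exactly this kind of JSON to
# generate interactive documentation and client SDKs.
print(json.dumps(spec, indent=2))
```

It is this shared, declarative description that lets tooling generate documentation and client code without ever seeing the server's source.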

The API commitments

SmartBear says it is committed to keeping the Swagger specification and code open and driven by the community, and encourages contributions through evangelism, documentation and tooling.

The company is engaging industry leaders to create an open governance model that supports the evolution of the Swagger specification in a vendor-neutral and collaborative manner.

As part of its commitment to Swagger, SmartBear will be investing in development to evolve the specification and toolset, as well as providing commercial support offerings for enterprises using Swagger.

The company will also be developing and providing resources to help developers adopt and use Swagger and the Swagger tools.


MongoDB on Couchbase agility: nice idea, now let's talk about real flexibility


Couchbase went all bullish this week and said that its Multi-Dimensional Scaling capabilities in release 4.0 of Couchbase Server are the best thing since sliced bananas.

The firm was all technical and polite to start with and said that multi-dimensional database scaling provides the option to isolate database query, index and data services.

Yup, we're with you so far, what's the problem?

No, wait, there are more guts to this first: Couchbase also explained that a multi-dimensional approach means that hardware resources can be independently assigned and optimised on a per node basis, as application requirements change.

That's still really nice and interesting, what's the problem?

Ah well then, Couchbase went all points-scoring PR-centric and issued a press release saying that, "Unlike MongoDB, Oracle, Cassandra and other databases that have a limiting 'one size fits all' approach to scaling, Couchbase is..."

Couchbase is what?

Oh sorry, Couchbase is enabling organisations to precisely provision hardware to meet application performance requirements.

CEO Bob Wiederhold rammed it home and said that with Multi-Dimensional Scaling, enterprises can independently assign and scale the index, query and data services to specific servers.

Yeah we got that bit, what else?

"This improves performance, reduces hardware costs, and enables enterprises to support a much broader set of applications with a single database: Couchbase Server," said Wiederhold.

That's quite up front to shout out your open source buddies in this way isn't it?

Ah well not so much, this is commercial open source and all is fair in love and unpublished code documentation, as they say.

What did MongoDB have to say about it?

Mat Keep, principal product manager at MongoDB, agreed that different applications have different requirements.

He also said that being able to deliver the widest range of applications from a single database has been a hallmark for MongoDB since its inception -- and key to its broad adoption.

"Regarding this launch, it's far too early to know much about the product, given the lack of details -- and that no code or documentation has been published," said Keep.

"For many years now, MongoDB has provided the industry's most flexible scaling and workload isolation. With replica sets and location-aware sharding, users can maximise their resource utilisation, precisely allocate specific hardware to specific data and processing, and adapt to evolving demands in their deployments, without taking their applications offline. As for any specific performance claims, we'll just have to wait and see."

Gosh, it's all getting a bit heated in the open source database wars, isn't it? Well, it is a bit, and Oracle hasn't even stepped in here yet.

Let's go with a final word from a level-headed dimension-agnostic analyst of good repute.

"Enterprises are faced with a broad range of data processing requirements, for which they have traditionally relied on extending the relational model and, more recently, combined a variety of specialist NoSQL databases," commented Matt Aslett, research director, data platforms and analytics at the 451 Group.

Aslett says that his firm's research suggests that enterprises are making strategic investments in more agile, multi-model databases that serve a variety of needs.

The Computer Weekly Open Source Insider blog says that multi-dimensional, multi-model, multi-modal (note both model and modal) technology is compellingly attractive and needs more discussion.

Let's just all play nice together shall we?

Telco sector OPNFV project champions open network services


The Open Platform for NFV (OPNFV) project is gaining momentum.


This group maintains a community-led, industry-supported open source reference platform for Network Functions Virtualisation (NFV).

TechTarget defines NFV as an initiative to virtualise the network services that are (or were previously) being carried out by proprietary, dedicated hardware -- NFV is part of the wider industry shift towards network and application virtualisation.

Open Platform for NFV is a carrier-grade, integrated, open source reference platform intended to accelerate the introduction of new products and services using NFV.

It brings together service providers, vendors and users to collaborate in an open forum on advancing the state-of-the-art in NFV.

OPNFV director Heather Kirksey recently blogged to note that Mobile World Congress was OPNFV's first big marketing event.

The OPNFV project was launched in September 2014 with an intention of developing an open source reference platform for NFV.

The community, gearing up for its first code release, has been growing steadily with widespread support from service providers, vendors and open source communities.

If NFV is as successfully embraced as proponents like OPNFV want it to be, then it could decrease the amount of proprietary hardware needed to launch and operate network services.

Well-executed NFV will see network functions effectively decoupled from traditional hardware devices, so that services previously delivered by routers, firewalls, load balancers and a range of other dedicated appliances are instead executed on virtual machines (VMs).

Tutanota: open source encrypted email


Tutanota is a German open source encrypted email startup lauded as a direct alternative to Google Gmail.


The product comes out of beta this month, after a beta programme that has seen nearly 100,000 users test it over a full year.

Tutanota automatically encrypts all your data on your device.

A user's emails and contacts stay private.

Even subject lines and attachments are encrypted -- Tutanota is licensed under GPL v3.
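As a purely conceptual sketch of what "automatically encrypts all your data on your device" means, here is a one-time-pad style XOR round trip in Python. This is an illustration of client-side symmetric encryption only, not Tutanota's actual scheme (which is AES/RSA-based) and not production cryptography:

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"Subject: secret plans"
key = os.urandom(len(message))      # random key, as long as the message

ciphertext = xor_bytes(message, key)    # encrypt on the device
recovered = xor_bytes(ciphertext, key)  # decrypt with the same key

assert recovered == message
```

The point the sketch makes is that only ciphertext ever needs to leave the device; without the key, the server stores noise.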

For businesses using Outlook, there is a neat plugin that integrates seamlessly.

Google has previously offered software called End-to-End to encrypt Gmail messages.

According to a Google blog post from last year, "Gmail has always supported encryption in transit by using Transport Layer Security (TLS), and will automatically encrypt your incoming and outgoing emails if it can. The important thing is that both sides of an email exchange need to support encryption for it to work; Gmail can't do it alone."







Trillian Mobile: RoboVM zaps Java apps to iOS & Android


Trillian Mobile has pushed out the first commercial release of RoboVM.

Trillian (yes, obviously a Hitch Hiker's reference) Mobile's RoboVM was created in 2010 as an open source project to turn our planet's 10 million Java developers into business and consumer mobile app developers for both iOS and Android devices -- quite a claim & challenge.


More specifically, RoboVM's goal is to turn the world's 10 million Java developers into cross-platform mobile developers by making Java 8 work on both iOS and Android devices.

With RoboVM, developers can use the tools and JVM programming languages they are familiar with and can share code between Android and iOS apps without compromising the user experience.

The firm claims that "hundreds of apps" built with RoboVM have already been published to the App Store, and their numbers are growing.

"In the past six months, thanks to hard work, successful recruiting and angel investments, we accelerated the commercial release of RoboVM, which developers can download and use today", Henric Müller says, CEO at Trillian Mobile.

"RoboVM has great performance, comprehensive coverage of iOS APIs, and integrates smoothly with our build system. It's a Java-loving iOS developer's dream, come true!", said Michael Bayne, CTO of Three Rings, a San Francisco game development company.


The RoboVM 1.0 release includes core open source technology, allowing full access from within Java to iOS APIs and third party libraries, unrestricted for both non­-commercial and commercial use.

Developers can write apps entirely in Java or alternative JVM languages, including the UI layer, which can be created using Java bindings to native iOS APIs or JavaFX. By using JavaFX, apps can share 100% of their code across both Android and iOS.

Image credit: Comedy.co.uk

Microsoft Developer: free cloud & web app monitoring on Java SDK for Application Insights


Microsoft is big on technology shows this season; seats at its much-loved 'Build' software application developer conference will soon be filled by the devoted MSDN cognoscenti.

The event itself is already sold out and only the lucky few get to attend.

Not content with this forthcoming developer-fest, the firm used its appearance at EclipseCon this week to release the Java SDK for Application Insights.


Application Insights is part of Microsoft's Azure universe and exists to detect issues, solve problems and continuously improve web applications.

Its users say that it can provide a comprehensive view of service behaviour, reliability & performance.

Microsoft says that this release comes as part of its "commitment to making the app telemetry and analytics capabilities of Application Insights available to a broader set of cloud and web app developers" - no less.

As such, the new SDK, which enables Java developers to gain insights into their production Java web apps, is available as a free download.

Microsoft Application Insights offers developers the following application usage, health and monitoring capabilities:

Application Performance Monitoring - Application performance and failure information is retrieved automatically when Application Insights is added to a web project.

Usage Pattern Analysis - Adoption, interaction and engagement trends become apparent by simply inserting a JavaScript snippet into a web page.

HTTP Request Tracking - Track requested resources, the number of unique users initiating requests and number of failed requests.

Custom Event, Metric and Exception Tracking - The telemetry API supports tracking customisation.

Application Insights Portal for Trace Log Exploration - Simply add the appender to the logging framework to explore trace logs using the Application Insights portal.
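The actual telemetry API here is Microsoft's Java SDK, but the "custom event and metric tracking" idea in the list above can be sketched in a few language-neutral lines. The class and method names below are invented for illustration, not the real Application Insights API:

```python
import time

class TelemetryClient:
    """Hypothetical sketch of an Application-Insights-style event buffer."""

    def __init__(self):
        self.events = []  # buffered telemetry, flushed to the service later

    def track_event(self, name, **properties):
        # Each event is timestamped so the backend can chart trends over time.
        self.events.append({"name": name, "time": time.time(), "props": properties})

    def track_metric(self, name, value):
        # Metrics are just events carrying a numeric payload.
        self.track_event("metric", metric=name, value=value)

client = TelemetryClient()
client.track_event("page_view", page="/home", user="anonymous")
client.track_metric("response_ms", 42.0)
print(len(client.events))  # 2 events buffered for upload
```

The real SDK does the same conceptual job: buffer named, timestamped events and metrics in the app, then ship them to the portal for analysis.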

According to Microsoft's Harel Broitman, when you publish a Java web application, you want a clear view of what users are doing with it and how it's performing.

"Your most effective plan for future work comes from a deep understanding of how people use what you've already provided: which features they like, what patterns they follow, and what they find difficult," writes Harel Broitman.

Broitman insists that developers will want to know that an application is performing well in terms of how quickly it responds and how its performance varies under load.

"If performance drops or exceptions are thrown, you'd like to be notified quickly and to diagnose the issue you'll want powerful filter and search facilities to investigate the event traces," he said.

Computer Weekly Developer Network hopes to report extensively on Microsoft Build when the event arrives, if Microsoft would like that ;-)


Interoperable DevOps Logs-as-a-Service (IDOLaaS)


Log management and analytics services company Logentries is integrating with Elasticsearch's Logstash, an open source solution for managing logs and event data.


What is a log?

A log is a time-stamped documented record of a data event or transaction (often recording user access requests) produced automatically by software applications and the wider systems that they reside within.
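That definition can be made concrete with a single (invented) log line, pulled apart into its timestamp and fields using standard-library Python, which is exactly the kind of structure services like Logentries and Logstash index and search:

```python
from datetime import datetime

# An invented log line in a common "timestamp level key=value..." shape.
line = "2015-03-10T14:22:05 INFO user=alice action=login status=ok"

timestamp_str, level, payload = line.split(" ", 2)
timestamp = datetime.strptime(timestamp_str, "%Y-%m-%dT%H:%M:%S")

# Key=value pairs become searchable fields once parsed.
fields = dict(pair.split("=", 1) for pair in payload.split())

print(timestamp.year, level, fields["user"])  # 2015 INFO alice
```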

The integration of the two firms' technologies means users can forward logs collected via Logstash into the cloud-based Logentries service for analysis and visualisation.

The Logentries Open API allows Logstash users to extract data from the cloud service for interoperability.

User opinion

"It's important for our ops and support teams to be able to integrate as many tools as we can to improve performance and time to resolution for our users," said Stephen Keeler, IT Manager at Fusebill.com.

"The Logentries and Elasticsearch integration brings together two great logging services and enables me to collect, normalise and perform transformations on these events using Logstash in my on premise environment and then do real-time analysis and troubleshooting using Logentries cloud service," he added.

The Logentries and Elasticsearch integration offers automated configuration for sending Logstash log data into the Logentries service, where users can immediately start to tag, search and visualise in real time.

For existing Logstash users, the plug-in provides a way to take advantage of both solutions for storage and analysis; forwarding key data to Logentries when real time analysis and notification is required, but also taking advantage of the open source model and the Logstash data transformation pipeline.

Cloud interoperability, is there any hope?


Database clustering technology company Codership has been admitted to Canonical's Ubuntu OpenStack Interoperability Lab (OIL).


The firms say that Codership Galera provides the high system uptime and scalability without data loss necessary for OpenStack components such as Keystone, Nova and Neutron as well as Trove, the database as service for OpenStack.

Compatibility and interoperability

Currently working on more than 3,000 Ubuntu OpenStack combinations per month, OIL tests and validates the interoperability of hardware and software in a purpose-built lab giving all Ubuntu OpenStack users certified technology.

Codership's place on the OIL programme is an opportunity for the Finnish company to share ideas with some of the world's biggest technology companies, such as HP and IBM.

Joining OIL means Galera Cluster for MySQL, Codership's flagship product, will be tested for compatibility with thousands of Ubuntu OpenStack implementations.

Alongside its participation in the OpenStack Interoperability Lab programme, Codership will maximise the benefit of Juju (Canonical's orchestration tool) by developing Charms.

These are sharable and reusable expressions of DevOps best practices, enabling rapid deployment of Galera Cluster in a variety of environments.

Juicy Juju

Codership aims to speed up configuration, deployment and scaling out of Juju for OpenStack and possibly to other clouds such as Amazon EC2 and Microsoft Azure.

John Zannos, veep of cloud alliances at Canonical, has said that databases are one of the primary application workloads he sees clients using on Ubuntu OpenStack.

Microsoft & Google developer connection on TypeScript Angular 2


With Microsoft's Build 2015 conference for software application developers almost within touching distance now, the firm is clearly not saving all its programmer enrichment for the main show itself.


During the opening keynote at ng-conf in Salt Lake City, members of the Microsoft TypeScript team joined Angular's Brad Green to announce that the Angular 2 framework will be written in Microsoft's JavaScript superset, TypeScript.

What it means

A free and open source language developed by Microsoft, TypeScript is a typed 'superset' of JavaScript with classes, modules and interfaces -- it compiles to plain JavaScript on any browser on any host and on any operating system.

Developed by Google, Angular (or AngularJS to some) is an open source web application framework and JavaScript library for building websites and web applications.

According to Google, HTML is great for declaring static documents, but it falters when we try to use it for declaring dynamic views in web-applications. "AngularJS lets you extend HTML vocabulary for your application. The resulting environment is extraordinarily expressive, readable, and quick to develop."

TypeScript evolution

Microsoft insists that working closely with a rich library like Angular has helped TypeScript to evolve additional language features that simplify end-to-end application development, including annotations, a way to add metadata to class declarations for use by dependency injection or compilation directives.

Previous versions of Angular were written in JavaScript, but TypeScript will now offer a new set of tools in Angular 2, supporting a range of platforms, allowing developers to build cleaner code when working with Angular 2's dynamic libraries.

Developers can still choose to write Angular 2 applications in traditional JavaScript, but they will also enjoy the new set of TypeScript capabilities available to them, which adopt the key features proposed in Google's AtScript language.

As part of a cooperative effort to improve the developer experience, the Angular and TypeScript teams are working effectively to both converge AtScript with TypeScript and to improve Angular 2 and TypeScript 1.5. They are also working jointly to propose ECMAScript 7 extensions for optional static types and decorators.
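"Annotations" and "decorators" here mean metadata attached to a class declaration that a framework (say, a dependency injector) reads later. Purely as an analogy (Angular's actual syntax differs), Python's decorators do the same job:

```python
def injectable(cls):
    # Attach metadata that a framework's dependency injector could read later.
    cls.__injectable__ = True
    return cls

@injectable
class HeroService:
    """A stand-in service class; the name is invented for this sketch."""
    def heroes(self):
        return ["Trillian", "Zaphod"]

# A framework would inspect the flag before wiring the class up.
print(getattr(HeroService, "__injectable__", False))  # True
```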

Soma speaks...

S. Somasegar is the corporate vice president of the Developer Division at Microsoft. He has explained that three years ago now his team introduced TypeScript to offer compile-time type checking and richer tooling integration.

According to Somasegar, "In addition to the work on the language, we've continued to improve Visual Studio's powerful environment for building TypeScript apps with type-supplemented IntelliSense, go to definition, refactor/rename, project templates to get you started, and integrated build support. If you have Visual Studio 2013 Update 2 or beyond, you have TypeScript already. It's great to see the continued growth in the TypeScript ecosystem, and I'm particularly excited to be partnering with Google's Angular team to align our work on TypeScript and Angular 2."







Red Hat OpenShift Commons: a friendly way to PaaS time


Red Hat wants to deepen engagement with OpenShift, its own open source Platform-as-a-Service (PaaS) offering.


The firm has created OpenShift Commons, an initiative for users, contributors, operators, customers, partners and service providers.

OpenShift Commons is said to be open enough to "embrace other open source technology communities, organisations and ecosystem partners" from a variety of disciplines that intersect with PaaS.

But all must be committed to the open source model and PaaS innovation.


Red Hat launches OpenShift Commons with participation from Amadeus, AppDirect, Cisco, Dell, Docker, GetUp Cloud and Shippable -- among others.

OpenShift by Red Hat incorporates OpenShift Origin, Docker, Google Kubernetes, Project Atomic -- among others.

The OpenShift ecosystem is intended to enable collaboration on the dependencies that can best advance open source PaaS.

"OpenShift Commons operates under a shared goal to move conversations beyond code contribution and explore best practices, use cases, and patterns that work in today's continuous delivery and agile software environments," said Red Hat, in a press statement.

There is no Contributor License Agreement, code contribution requirement or fees to join, just a commitment to collaborate on the new PaaS stack.

Google: native C in Hadoop with MapReduce for C


Google's open source elves have released MapReduce for C (MR4C), an open source framework to run native C-language code in Hadoop.

What?


MR4C is an open source implementation framework designed to allow software programmers to run their native C and C++ code within the Hadoop execution framework, tapping its big data analytics capabilities.

Pairing the performance and flexibility of natively developed algorithms with the "unfettered scalability and throughput" inherent in Hadoop, MR4C enables large-scale deployment of advanced data processing applications.

MapReduce itself can be described as a "programming model" with a parallel distributed algorithm for creating software code that is capable of processing and performing calculations upon (and then ultimately also generating) what are very large data sets.
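The programming model described above boils down to three steps: a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. Word count is the classic example; Hadoop's contribution is distributing these same steps across a cluster. A single-machine sketch:

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the document.
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(pairs):
    # Shuffle + reduce: group pairs by key and sum each group's values.
    groups = defaultdict(int)
    for word, count in pairs:
        groups[word] += count
    return dict(groups)

documents = ["Hadoop runs Java", "native code meets Hadoop"]
pairs = (pair for doc in documents for pair in map_phase(doc))
counts = reduce_phase(pairs)
print(counts["hadoop"])  # 2
```

In a real Hadoop job the map and reduce functions run on different machines against splits of a very large dataset; the logic, however, is exactly this.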

Why?

Being able to run native code means developers can avoid having to construct additional libraries -- and Hadoop is of course written in Java.

Still ... why?

What kind of workload would be so demanding that this set of algorithms and libraries would need to be developed?

Examples include:

• High performance scientific computing
• Satellite image processing
• Industrial data clusters serving the Internet of Things
• Geospatial data science

... and finally, why?

What Google wants to do is abstract the details of MapReduce (as a programming model and framework) and thereby allow developers to create purer algorithms, which (in theory) will always perform with more power, flexibility and speed.

Google explains that it was attracted by the job tracking and cluster management capabilities of Hadoop for scalable data handling, but wanted to leverage the image processing libraries that have been developed in C and C++.

"While many software companies that deal with large datasets have built proprietary systems to execute native code in MapReduce frameworks, MR4C represents a flexible solution in this space for use and development by the open source community," said Google, on its open source blog.

Blogger Ty Kennedy-Bowdoin continues, "MR4C is developed around a few simple concepts that facilitate moving your native code to Hadoop. Algorithms are stored in native shared objects that access data from the local filesystem or any uniform resource identifier (URI), while input/output datasets, runtime parameters, and any external libraries are configured using JavaScript Object Notation (JSON) files."
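The JSON-driven configuration Kennedy-Bowdoin describes can be pictured roughly like this. The field names and file paths below are invented for the sketch, not MR4C's actual schema:

```python
import json

# A hypothetical MR4C-style job configuration: the native shared object to
# load, its input/output URIs and runtime parameters, all expressed as JSON.
config_text = """
{
  "algorithm": "libedge_detect.so",
  "input": "file:///data/tiles",
  "output": "hdfs:///results/tiles",
  "parameters": {"threshold": 0.8}
}
"""

config = json.loads(config_text)
assert config["parameters"]["threshold"] == 0.8
print(config["algorithm"])  # libedge_detect.so
```

The design point is that the native algorithm stays a compiled shared object; everything that varies per job lives in configuration rather than code.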

Mirantis & Google on Kubernetes & OpenStack


Mirantis is a firm that calls itself a "pure-play" OpenStack company.


It has a new initiative that integrates Kubernetes with OpenStack.

As reported on Computer Weekly -- Kubernetes works in conjunction with Docker.

While Docker provides the lifecycle management of containers, Kubernetes takes it to the next level by providing orchestration and managing clusters of containers.

Mirantis says it is letting developers deploy containers on OpenStack pretty fast with this technology.

How?

The integration gives developers immediate access to Kubernetes clusters with Docker containers, without needing to set up infrastructure.

Developers will be able to move entire environments between OpenStack private clouds and public clouds that support Kubernetes, such as Google Cloud Platform.

"Our development work with Google to combine the power of Kubernetes with Mirantis OpenStack makes it easy for developers to manage Docker containers at scale," said Mirantis CEO, Adrian Ionel.

"Kubernetes automates the management of Docker containers, while OpenStack automates the configuration and deployment of infrastructure resources on which those containers run. By using the two technologies together, developers can focus on creating software because their underlying infrastructure just works."

"Integrating Kubernetes with OpenStack gives developers more choice and flexibility over how they want to build and run applications," said OpenStack Foundation COO, Mark Collier.

"OpenStack's pluggable design allows users to integrate and leverage emerging technologies like containers, while relying on the proven automation engine in OpenStack to handle compute, storage, and networking."

The integration uses Murano, the application catalog project in the OpenStack ecosystem.

Murano automatically configures the compute, storage and networking resources for Kubernetes clusters, and provides integration for OpenStack infrastructure components such as load balancers and firewalls.

Murano can then deploy the Docker application onto the Kubernetes cluster and manage the application's lifecycle. To provide production-grade Kubernetes clusters, Murano adds integration with monitoring and log collection services. This delivers Kubernetes on OpenStack out of the box.

Chef: a recipe for cloud data automation & migration with Microsoft


Chef and Microsoft's Azure team have partnered to provide automation platform technology and DevOps expertise.


The firms are combining on a software application developer proposition that they hope will help automate workloads across on-premises datacentres and Microsoft Azure.

Both Windows and Linux workloads can be moved to Azure more easily under this new agreement.

Chef will provide tools to automate both compute resources and applications.

"IT is shifting from being an infrastructure provider to becoming the innovation engine for the new software economy. Key elements of the new, high-velocity IT include automation, cloud, and DevOps," said Barry Crist, CEO, Chef.

Microsoft developer guru and corp veep for Microsoft Azure Jason Zander says he is excited to extend his team's work with Chef to help customers move their workloads into the Azure cloud.

"Chef and Microsoft will enhance native automation experiences for Azure, Visual Studio and Windows PowerShell DSC users. Microsoft Open Technologies has its own collection of Chef Cookbooks, providing solid code for automating the provisioning and management of compute and storage instances in Azure. 2015 will bring additional deliverables across Windows, Azure, and Visual Studio with a focus on empowering customers to automate heterogeneous workloads and easily migrate them to Azure," said the firm's in a press statement.

Chef says it will deliver "hundreds of hours" of DevOps education in Microsoft's ecosystem across industry events, digital channels and community meetups.

Image credit: http://www.partybell.com/

Google Perfkit: a 'living benchmark' for evaluating cloud


Google has launched PerfKit (perfect software development kit - geddit?), an open source cloud-benchmarking tool.


What is it?

The company says it turns out to be surprisingly difficult to evaluate cloud offerings beyond just looking at price or feature charts.

As a result, Google pitches PerfKit as a way to define a canonical set of benchmarks for measuring and comparing cloud offerings.

NOTE: Canonical in the sense of it being a ruling standard, obviously.

This software toolset supports:

  • Google's Compute Engine,
  • Amazon's AWS and,
  • Microsoft's Azure clouds.

PerfKit is described as a "living benchmark framework" -- big terms indeed, but then this is Google, isn't it?

What makes a living benchmark?

PerfKit is designed to evolve as cloud technology changes, always measuring the latest workloads so users can make decisions about what's best for their infrastructure needs.

"As new design patterns, tools, and providers emerge, we'll adapt PerfKit to keep it current. It already includes several well-known benchmarks, and covers common cloud workloads that can be executed across multiple cloud providers," said Google, in a statement.
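
To give a flavour of how such a cross-provider run might be assembled, here is a minimal sketch; the `pkb.py` entry point and the `--cloud`/`--benchmarks` flag names are assumptions based on the project's public GitHub repository, not verified here.

```python
# Hypothetical helper that assembles a PerfKit Benchmarker command line.
# The pkb.py entry point and flag names are assumptions for illustration.

def pkb_command(cloud, benchmarks):
    """Build the argument list for a benchmark run against one provider."""
    return [
        "./pkb.py",
        "--cloud=%s" % cloud,                      # e.g. GCP, AWS or Azure
        "--benchmarks=%s" % ",".join(benchmarks),  # the canonical benchmark set
    ]

# The same benchmark set can then be pointed at each supported cloud in turn,
# which is the whole point of a canonical, comparable suite.
for provider in ("GCP", "AWS", "Azure"):
    print(pkb_command(provider, ["iperf", "ping"]))
```

The useful property here is that the benchmark list is defined once and reused unchanged across providers, so the resulting numbers are comparable.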

1Perfkit.png

Google has included a set of pre-built dashboards, along with data from its own internal network performance tests.

"This way, you'll be able to play with the PerfKit Explorer without having to first input your data," said the firm.

The source code is released under the Apache License v2 (ASLv2) to make it easy to contribute, collaborate and maintain a balanced set of benchmarks.

Google welcomes user participation through GitHub.

Microsoft: big data analytics for everyone

bridgwatera | No Comments
| More

Microsoft has its Build 2015 software application development conference almost within its sights now -- as such, its programmer portals are currently gleaming like a new START button.

ubuntu_msft_2_18_new.jpg

In related data centric news, the firm is this week announcing enhanced Microsoft data services for Hadoop alongside some related machine learning technologies.

This news comes just 24 hours after HP announced its Haven Predictive Analytics software was fit for operationalising large-scale machine learning.

These new services from Redmond reaffirm that "Microsoft is embracing open source" (the team is saying that a lot) and aim to simplify Hadoop (everybody wants to do that; Hadoop is hard) for ease of use.

Updates to Azure HDInsight include a public preview of HDInsight on Linux and general availability of Apache Storm for HDInsight.

What is Azure HDInsight?

HDInsight is a cloud distribution of Hadoop that has been architected to handle any amount of data, scaling from terabytes to petabytes on demand.

Microsoft says, "You can spin up any number of nodes at any time -- we only charge for the compute and storage you actually use."
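
To make the pay-for-use idea concrete, here is a toy cost model; the rates below are entirely invented for illustration and bear no relation to real HDInsight pricing.

```python
# Toy pay-for-use model. The rates are hypothetical, chosen only to show
# that charges scale with the compute hours and storage actually consumed.

NODE_HOUR_RATE = 0.32    # invented cost per node per hour
STORAGE_GB_RATE = 0.05   # invented cost per GB stored per month

def monthly_charge(nodes, hours, storage_gb):
    """Charge only for the compute hours and storage actually used."""
    return nodes * hours * NODE_HOUR_RATE + storage_gb * STORAGE_GB_RATE

# A cluster scaled up to 16 nodes for one heavy 48-hour job, then shrunk
# back down -- you pay for those 48 hours, not for an always-on cluster.
print(round(monthly_charge(nodes=16, hours=48, storage_gb=500), 2))  # → 270.76
```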

What else is Microsoft announcing?

Hadoop 2.6 support in HDInsight, new virtual machine sizes, the ability to grow/shrink running HDInsight clusters, and a Hadoop connector for DocumentDB.

Microsoft also says it is simplifying machine learning for business with the general availability of Azure Machine Learning.

Why is machine learning a big deal?

As you will know, machine learning is in some senses a cousin of data mining, i.e. it allows programs to detect patterns in data and adjust application actions accordingly. Given the growth of web services (you could call them cloud services if you wanted to), the use of machine learning in what are increasingly real-time processing environments is on the rise.
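
As a toy illustration of that idea (entirely hypothetical, and not tied to any Microsoft service): a program can learn a baseline from past data and then adjust its behaviour when new values deviate from it.

```python
# Toy sketch: "learn" a baseline from historical readings, then flag
# new readings that stray from the learned pattern.

def learn_baseline(samples):
    """The simplest possible model: the mean of past observations."""
    return sum(samples) / len(samples)

def is_anomaly(value, baseline, tolerance=0.5):
    """True when a reading deviates enough to warrant a reaction."""
    return abs(value - baseline) > tolerance

history = [2.1, 1.9, 2.0, 2.2]   # e.g. past response times in seconds
baseline = learn_baseline(history)

print(is_anomaly(2.1, baseline))   # a normal reading → False
print(is_anomaly(5.0, baseline))   # an outlier worth reacting to → True
```

Real machine learning services replace the mean with far richer models, but the loop is the same: fit on past data, then act on what new data looks like.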

Real-time & 'push' analytics

The company also announced a public preview of Azure Mobile Engagement, which is intended to offer 'real-time user' and 'push' analytics.

Why is Microsoft doing this?

Microsoft's goal is to make big data technology simpler and more accessible to the greatest number of people possible i.e. not just big data engineers, data scientists and software application developers, but also IT managers and everyday businesspeople.

Whether that is too much power in the wrong hands will come down to individual customers, but there is an argument to suggest that we need to be careful here.

Microsoft's T. K. "Ranga" Rengarajan, corporate vice president, data platform, and Joseph Sirosh, corporate vice president of machine learning, tell us not to worry and that, "[These new services] can help businesses dramatically improve their performance, enable governments to better serve their citizenry, or accelerate new advancements in science."

Ed -- did he just say "citizenry", that's a bit Fox News isn't it? We'll let it go. What else did he say?

According to Ranga and Sirosh, "Storm for Azure HDInsight, generally available today, is another example of making big data simpler and more accessible. Storm is an open source stream analytics platform that can process millions of data "events" in real time as they are generated by sensors and devices. Using Storm with HDInsight, customers can deploy and manage applications for real-time analytics and Internet-of-Things scenarios in a few minutes with just a few clicks."

Should big data analytics be democratised for everyone?

Well yes, but let's take it slowly please... nobody wants this bubble to burst, not even Microsoft.


Concurrent: 2015 is "show me the money year" for Hadoop

bridgwatera | No Comments
| More

Supreet Oberoi, field engineering veep at Concurrent, Inc., says that "2015 is the 'show me the money year' for Hadoop" -- his firm offers an application development platform for big data applications.

The following text is all attributed to Oberoi as a guest post on Computer Weekly's Open Source Inside blog.

Hadoop has been rapidly adopted by enterprises as "the way" to execute big data strategy. However, enterprises must now show ROI by making these data applications operational in order for businesses to make decisions. They can do so by selecting the right platform to develop on Hadoop.

04e120e.jpg

When building applications on Hadoop, enterprises have struggled with choosing between easy-to-use tools and necessary frameworks, but the easy way almost always falls short.

Additionally, ease of development is only a small part of the overall effort to operationalise applications on Hadoop - enterprises need tools to debug, tune, deploy, monitor, govern and provide compliance.

When setting the foundation of a new platform architecture with Hadoop, enterprises must choose an architecture that scales with data, absorbs the complexity in applications, integrates with legacy systems and operationalises developments with minimal effort - all while ensuring the compliance and the governance needs are met. In addition, best practices must be shared and components reused across teams.

Selecting the right platform is not just a technical decision - technology leaders need to develop a path to train existing organisations.

Leveraging only existing skillsets may limit options, and selecting easy-to-use technology may result in an overly simple approach to application development that won't meet the needs of the enterprise. As a result, showing ROI requires a platform that leverages existing development skill sets, such as Java, while scaling to meet the demands of the enterprise.

This is why 2015 will be the "show me the money" year for Hadoop.

NOTE: Concurrent builds application infrastructure products that are designed to help create, deploy, run and manage data applications at scale on Apache Hadoop.

A new open source big data framework

bridgwatera | No Comments
| More

MapR and Mesosphere are announcing a new open source big data framework (called Myriad) that allows Apache YARN jobs to run alongside other applications and services in enterprise and cloud datacentres.

What is Apache YARN?

Apache Hadoop YARN (Yet Another Resource Negotiator) is a cluster management technology said to fall into the 'second-generation' Hadoop family. YARN has also been called a large-scale enterprise-level distributed operating system for big data applications.

The MapR and Mesosphere initiative was kicked off by a developer at eBay and turned into a collaborative effort between multiple companies -- the Myriad project now unifies Apache YARN and Apache Mesos resource management.

NOTE: Apache Mesos is a distributed systems kernel that abstracts CPU, memory, storage and other compute resources, allowing developers to program against the datacentre as if it were a single pool of resources.

Mesosphere itself is the creator of the Mesosphere Datacenter Operating System (DCOS) for managing datacentre and cloud resources.

MapR Technologies, Inc. is a provider of a well-ranked distribution for Apache Hadoop.

A single pool of resources

Myriad (available on GitHub) is an open source project built on the vision of consolidating big data with other workloads in the datacentre into a single pool of resources for greater utilisation and operational efficiency.

Concurrently, there are plans to submit Myriad as an Apache Incubator project with the Apache Software Foundation in the first quarter of 2015.

Where Hadoop is hard work

To date, Hadoop developers are said to have been "forced to run" big data jobs on dedicated clusters, leaving those resources isolated from other applications and services in production and typically (say the firms) resulting in poor server utilisation rates.

How Myriad works

Myriad uses both Apache YARN and Apache Mesos, allowing big data workloads to run alongside other applications including long-running Web services, streaming applications (like Storm), build systems, continuous integration tools (like Jenkins), HPC jobs (like MPI), Docker containers, as well as custom scripts and applications.

"Big data developers no longer have to choose between YARN and Mesos for managing clusters," said Florian Leibert, CEO and co-founder of Mesosphere.

"Myriad allows you to run both, and to run all of your big data workloads and distributed applications and systems on a single pool of resources. Big data developers get the best of YARN's power for Hadoop-driven workloads, and Mesos' ability to run any other kind of workload, including non-Hadoop applications like Web applications and other long-running services."

"Myriad enables businesses to tear down the walls between isolated clusters just as Hadoop enables businesses to tear down the walls between data silos," said Jim Scott, director, enterprise strategy and architecture, MapR Technologies. "Developers can now focus on the data and applications which the business depends on, while IT operations can manage compute resources to maximize business agility and minimise operating expenses."


Nice kitty: MongoDB 3.0 (with Tiger Inside)

bridgwatera | No Comments
| More

The open source cross-platform document-oriented database MongoDB has reached version 3.0 this month.

The new iteration sees significant changes in its storage layer performance and scalability.

800px-Panthera_tigris_-Franklin_Park_Zoo,_Massachusetts,_USA-8a_(2).jpg

This comes as no major surprise: the firm acquired WiredTiger last December for its database storage engine technology.

According to Eliot Horowitz, CTO and Co-founder of MongoDB, the technology provides an easy to use, high-level layer for application development, durability, and horizontal scale, while allowing lower-level storage engines to offer solutions engineered for specialised use cases.

Nice kitty, there there

The WiredTiger storage engine was built with "latch-free non-blocking algorithms" and this is what makes it so nice.

The freedom of latch-free non-blocking algorithms gives a database technology like MongoDB the ability to take advantage of hardware developments such as large on-chip caches and heavily threaded architectures.

Programming purist Kay Ewbank writes on I-Programmer that the new storage engine has enabled the developers to introduce document-level concurrency control, which they (MongoDB) say means performance remains fast and predictable under concurrent, write-intensive workloads.

"Transparent on-disk compression has also been added, reducing storage requirements by up to 80%, and a choice of compression algorithms means you can choose one that offers the best performance/space trade-off to suit the needs of particular components in your applications," wrote Ewbank.
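
For flavour, here is a minimal sketch of what that per-collection compression choice might look like. It only constructs the `create` command document a driver would send for a WiredTiger collection -- the option names follow MongoDB 3.0's documented `storageEngine` syntax, but no server is contacted and nothing here is verified against a live deployment.

```python
# Build the command document for creating a collection with an explicit
# WiredTiger block compressor (snappy is the default; zlib trades extra
# CPU for a smaller on-disk footprint). No database connection is made.

def create_collection_command(name, compressor="snappy"):
    return {
        "create": name,
        "storageEngine": {
            "wiredTiger": {"configString": "block_compressor=%s" % compressor},
        },
    }

cmd = create_collection_command("events", compressor="zlib")
print(cmd["storageEngine"]["wiredTiger"]["configString"])  # → block_compressor=zlib
```

This is the mechanism behind Ewbank's "choice of compression algorithms" point: the trade-off is made collection by collection rather than once for the whole database.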

Free under GNU

Available for free under the GNU Affero General Public License, MongoDB is claimed to be among the world's fastest growing databases.

The company (MongoDB) sells additional services on top of the free software including: advanced software elements, production and development support, certifications, MongoDB Management Service (MMS) for cloud environments, as well as consulting and training.

Microsoft open sources .NET 'execution engine'

bridgwatera | No Comments
| More

Microsoft has released the open source code for CoreCLR to GitHub.

CoreCLR denotes the Core Common Language Runtime.

6746.image_3B14CB15.png

This is the "execution engine" for .NET Core -- a fork of (and modular implementation of) the .NET framework itself.

An execution engine is used for performing functions such as garbage collection and compilation to machine code.

Microsoft lauds this as "another big step" on the path to open sourcing the full .NET Core server-side stack.

This release adds to the .NET CoreFX (released on November 12th) and is now available through a .NET CoreCLR repository on GitHub, where Microsoft will be watching the Issue and Pull Request queues for input from the community.

The execution engine here includes RyuJIT, the .NET GC, native interop and many other .NET runtime components.

When you run ASP.NET 5 apps on top of .NET Core, CoreCLR is the component that is responsible for executing your code, in addition to the CoreFX/BCL libraries that you depend on.

According to Microsoft, "At the moment, the .NET Core console app type is a useful byproduct of our engineering process. Over the next few months, we will be shaping it into a fully supported app type, including Visual Studio templates and debugging. We'll also make sure that there is good OmniSharp support for console apps. We believe that many of you will build console tools for Windows, Linux and Mac. You'll also be able to build tools that are cross-platform, that run on all 3 OSes, with just one binary."

1538.Repo.png-550x0.png

Microsoft also announced a virtual .NET Conference on March 18 & 19 where developers can expect to hear more updates on the progress with .NET Core -- especially useful for those normally excluded from other Microsoft developer events.
