NodeConf EU all set for blarney in 'Nodeland'

bridgwatera

It's NodeConf EU time again -- the third annual gathering of what is hoped to be 400 of the top influencers in Node.js at Waterford Castle from September 6th to 9th.

NOTE: Node.js is a platform built on Chrome's JavaScript runtime for building network applications and is very popular in the areas of robotics, embedded applications and (hence, logically) the Internet of Things.
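
For anyone who hasn't touched the platform, here is a minimal sketch of the kind of network application Node.js is built for -- a few lines of JavaScript stand up an HTTP server (the port number is purely illustrative):

    // A minimal Node.js HTTP server -- the sort of lightweight network
    // application the platform is designed for.
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from Nodeland\n');
    }).listen(3000, function () {
      console.log('Listening on port 3000');
    });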

Cheesy, but nice

For the duration of the event, the private island that is home to the historic Waterford Castle will be renamed 'Nodeland'.

The event features speakers from the Node Foundation for the first time since the merger of Node.js and io.js under the foundation last May.

Node Foundation members and community leaders such as Todd Moore, IBM director for Open Technology and Partnerships; Danese Cooper, head of open source for PayPal; Gianugo Rabellino, senior director of Open Source Communities at Microsoft; and Brian McCallister, CTO of Platform at Groupon, will be speaking at the event. Mikeal Rogers, the originator of NodeConf in the U.S., will also be present, along with a number of other top speakers.

The audience will be comprised of C-level executives, Node.js developers at all skill levels, tech managers with Node.js teams and technology thought-leaders and visionaries.

As in previous years NodeConf EU will be curated by nearForm, now the world's largest Node.js consultancy, whose CEO Cian O'Maidin is the only European member of the Node Foundation.

Ireland's oldest city

Waterford is Ireland's oldest city, founded by Viking raiders in AD 914. It has played a pivotal role in the economic, political and cultural life of Ireland and is now developing into a Node.js centre of excellence.

According to the organisation, "NodeConf this year will be bigger and better than ever with delegates treated to their own Node.js powered robotic bartender that will prepare a cocktail in two minutes."

Other features at the conference will be a lavish opening ceremony with a flagbearer on horseback, a Spiegeltent, live music, traditional whiskey tasting, duelling singing waiters, archery and falconry.

Hortonworks: how to pin down data inside the 'Internet of Anything'

bridgwatera

News this week sees Hortonworks finalise an agreement to acquire Onyara, Inc.

The rationale here: Hortonworks, an open enterprise Hadoop company, is scooping up the creator of and key contributor to Apache NiFi.

NOTE: Apache NiFi supports scalable directed graphs of data routing, transformation and system mediation logic i.e. core 'highway control' for dataflow management inside IT systems.

Collect, conduct and curate

Hortonworks is hoping to make it easier to automate and secure data flows and to collect, conduct and curate real-time business insights and actions derived from data in motion.

As a result of the acquisition, Hortonworks is introducing Hortonworks DataFlow powered by Apache NiFi which is complementary to its own Open enterprise Hadoop platform, Hortonworks Data Platform (HDP).

What is the Internet of Anything?

The term describes a suggested 'new data paradigm' that includes data from machines, sensors, geo-location devices, social feeds, clickstreams, server logs and more.

Many IoAT applications need two-way connections and security from the edge to the datacentre -- this results in a 'jagged edge' that increases the need not only for security but also for data protection, governance and provenance.

These applications also need access to both data in-motion and data at-rest.

While the majority of today's solutions are custom-built, loosely secured, difficult to manage and not integrated, Hortonworks DataFlow powered by Apache NiFi will simplify and accelerate the flow of data in motion into HDP for full fidelity analytics. Combined with HDP, these complementary offerings will give customers a holistic set of secure solutions to manage and find value in the increasing volume of streaming IoAT data.

What is NiFi?

Apache NiFi was made available through the NSA Technology Transfer Program in the fall of 2014.

In July 2015, NiFi became a Top-Level Project, signifying that its community and technology have been successfully governed under the Apache Software Foundation.

"Nearly a decade ago when IoAT began to emerge, we saw an opportunity to harness the massive new data types from people, places and things, and deliver it to businesses in a uniquely secure and simple way," said Joe Witt, chief technology officer at Onyara. "We look forward to joining the Hortonworks team and continuing to work with the Apache community to advance NiFi."


Apache Twill: real abstraction is a decoupled algorithm

bridgwatera

Cloud computing is a 'decoupled' thing.


To be clearer, this term decoupling arises time & time again in relation to the cloud computing model of service-based processing and storage power.

Two senses of mobile

Decoupling is a good emotive term that transcends previous pre-cloud notions of mere networking to provide us with a new notion of a computing layer where applications and their dependent resources can be set free for a more mobile (in the interchangeable sense AND in the smartphone sense) existence.

But this is superficial decoupling (actually it's not, but we're making a point here... so go with it for now), deeper decoupling occurs when we start to look down into the substrate.

Deeper decoupling involves disconnecting individual management layers, computing platforms and processing engines from their core algorithmic kin.

Apache Twill is an abstraction layer that sits over Apache Hadoop YARN (the cluster and resource manager) and reduces the complexity of developing distributed applications -- it does this by decoupling Hadoop itself from the MapReduce algorithm.

This is designed to allow developers to focus more on their application logic.

Hadoop is then decoupled so that it can run with other processing engines, such as Spark.

It's like threads

The Apache Twill project team explains that this technology allows programmers to use YARN's distributed capabilities with a programming model that is similar to running 'threads' i.e. separated-out, independent streams of logic that can exist on their own.

While YARN is extremely complex technology, Twill aims to make this easier to pick up programmatically.

According to the development team, Apache Twill dramatically simplifies and reduces development efforts, enabling you to quickly and easily develop and manage distributed applications through its simple abstraction layer on top of YARN.

It's like: distributed is good, decoupled distributed is really good -- but abstracted decoupled distributed is even better.

http://twill.incubator.apache.org/

How software acceleration works inside DevOps

bridgwatera

Software application development can be accelerated.

More specifically, software acceleration technologies exist to pump extra optimisation and performance out of existing tools -- although in this category we often primarily talk about GPU-based acceleration, it has to be said.


The immodestly titled IncrediBuild is a provider of software development acceleration technology.

Its tool for Linux and Android is described as a suite of out-of-the-box acceleration solutions enabling developers to speed up their software development by up to 30x.

"IncrediBuild for Linux and Android also allows developers to visualise their build process with no vendor lock-in or need to change their development toolchain or workflow," says the firm.

What it ACTUALLY does

Essentially this software is supposed to save time usually spent waiting for builds, testing, packaging (or other development processes) as part of the Continuous Integration and Continuous Delivery process.

This type of software is intended to streamline development cycles by running development processes in a distributed fashion -- that's what the firm really means by software acceleration.

IncrediBuild uses a 'Docker-like' proprietary distributed container technology to enable fast processing of development tasks in parallel, allowing developers to turn their computer into a virtual supercomputer by harnessing idle cores from remote machines across the network and in the cloud, increasing performance, speeding build time, and improving developer productivity.

IncrediBuild for Linux and Android supports the most popular Linux distributions, such as Ubuntu, Red Hat Enterprise Linux, and CentOS.

It can accelerate most build tools and various development tools without any integration, changes to source code, or changes to the build tools environment.

The DevOps factor

"Being able to directly visually audit the build process to look for bottlenecks while reducing execution time with IncrediBuild significantly speeds up our ability to deliver innovative solutions to customers," said Richard Trotter, DevOps support engineer at GeoTeric.

IncrediBuild allows a developer to record and replay the entire build process execution and provides intuitive real-time monitoring of build execution in an easy-to-use graphical representation with statistics and reporting.

It allows a developer to drill into low-level data, monitor build health, identify key errors and detect build dependencies, anomalies and inefficiencies. The firm claims it is the only solution that provides a centralised area to inspect previous builds, replay and analyse them.

This helps enable and optimise Continuous Delivery as well as aid regulatory compliance. Until now, the only way Linux developers could analyse what was going on as they compiled their software was to rely on the command line and long textual output.

Microsoft still gives an F#

bridgwatera

Microsoft Research was founded in 1991 and is the company's division dedicated to conducting both basic and applied research in computer science and software engineering.


The 'group' has this month increased its development efforts directed towards F#.

F# (the hash symbol denoting 'sharp', obviously) is an open source, cross-platform, functional-first programming language.

Where the F#?

F# runs on Linux, Mac OS X, Android, iOS, Windows, GPUs and browsers.

The language now stands to benefit from a second generation of tools specifically developed for software application development pros to use in conjunction with the Visual Studio IDE.

Visual F# Power Tools

"The goal of the extension is to complement the standard Visual Studio F# tooling by adding missing features, such as semantic highlighting, rename refactoring, find all references [capabilities], metadata-as-source [functionality] etc.," said Anh-Dung Phan and Vasily Kirichenko -- both of whom are F# community developers.

The pair also state that what's particularly special about this project is that it's a collective effort of the F# open source community.

They explain that they work alongside the Visual F# Team at Microsoft in order to provide a complete toolset for F# users in Visual Studio.

You can read more technical details on the .NET blog here.







IBM LinuxONE is a Linux-only mainframe

bridgwatera

IBM has introduced two Linux mainframe servers under the brand name LinuxONE.


The machines can perform 30 billion RESTful web interactions per day with Dockerized Node.js and MongoDB, driving over 470K database reads and writes per second.
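
As a rough illustration of that kind of workload (and only an illustration -- this is not IBM's benchmark code), the sketch below shows a Node.js process writing to and reading from MongoDB with the official Node.js driver; the connection string, database and collection names are invented for the example:

    // Minimal sketch: a Node.js read/write round trip against MongoDB.
    // Connection string, database and collection names are illustrative.
    const { MongoClient } = require('mongodb'); // official MongoDB Node.js driver

    async function main() {
      const client = new MongoClient('mongodb://localhost:27017');
      await client.connect();

      const events = client.db('demo').collection('events');
      await events.insertOne({ user: 'alice', action: 'like', at: new Date() }); // write
      const latest = await events.findOne({ user: 'alice' });                    // read
      console.log(latest);

      await client.close();
    }

    main().catch(console.error);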

The company says it will now also enable open source and industry tools and software including Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL, Chef and Docker on its z Systems.

SUSE (which provides a Linux distribution for the mainframe) will now support KVM, thereby providing a new hypervisor option.

Canonical and IBM also announced plans to create an Ubuntu distribution for LinuxONE and z Systems. The collaboration with Canonical brings Ubuntu's scale-out and cloud expertise to the IBM z Systems platform.

"Fifteen years ago IBM surprised the industry by putting Linux on the mainframe, and today more than a third of IBM mainframe clients are running Linux," said Tom Rosamilia, senior vice president, IBM Systems.

"We are deepening our commitment to the open source community by combining the best of the open world with the most advanced system in the world in order to help clients embrace new mobile and hybrid cloud workloads. Building on the success of Linux on the mainframe, we continue to push the limits beyond the capabilities of commodity servers that are not designed for security and performance at extreme scale."



Fraud detection -- in real time

The system is capable of analysing transactions in "real time" and can be used to help prevent fraud as it is occurring.

A key part of IBM's latest mainframe code contributions are IT predictive analytics that constantly monitor for unusual system behaviour.

The code can be used by developers to build similar sense and respond resiliency capabilities on other systems.

The contributions will help fuel the new "Open Mainframe Project," formed by the Linux Foundation.

These latest products from IBM can scale up to 8,000 virtual machines or hundreds of thousands of containers - currently the most of any single Linux system.

The humongous MongoDB factor

In line with this news, MongoDB says it has deepened its partnership with IBM, announcing plans to offer support for its own products on IBM z Systems mainframe.

"MongoDB has become the world's fastest growing database by enabling organisations to effectively capitalise on the power of modern applications and data to gain a competitive advantage," said Dev Ittycheria, president and CEO, MongoDB.

"For years, the world's largest companies have run critical applications on IBM mainframes. Our move to support IBM z Systems is a testament to our commitment to our users and customers to make MongoDB available on all major platforms. With this announcement, organisations can now build and run modern, mission-critical applications on proven mainframe technologies."

MongoDB confirms that it is working closely with IBM to engineer MongoDB Enterprise Server to be optimised for Linux on z Systems and the new LinuxONE Systems.

As part of the agreement, MongoDB's global support and engineering organisation will continue to collaborate with IBM to ensure business continuity for joint customers running MongoDB on IBM z Systems.

IBM's X-Force saves Earth from Android('s vulnerabilities)

bridgwatera

Researchers from IBM's X-Force security division say they have discovered a number of high-severity vulnerabilities affecting more than 55% of Android devices.


These vulnerabilities, both in the Android platform itself and in third-party Android Software Development Kits (SDKs), can potentially be exploited by hackers to give a malicious app with no privileges the ability to gain unauthorised access to information and other functionalities on the device.

Ponemon -- gotta survey 'em all

Those who give credence to Ponemon research studies may find some interest in suggestions from the organisation that firms spend an average of $34 million annually on mobile app development, but only 5.5% of this spend is dedicated to 'in app' security.

It is claimed that 50% of those companies devoted no budget at all to securing the apps they developed.

The vulnerabilities revealed by IBM centre on the Android platform's OpenSSLX509Certificate class, which is one of many classes developers leverage to add functionality to apps such as network access and the phone's camera - much like the news from last week's Black Hat conference, which underlined webcams as highly vulnerable.

What can happen?

By introducing malware into the communication channel between the apps and phone functionalities, attackers are able to:

· Take over an application on a user's device and perform actions on behalf of the victim. (i.e. take photos, share content, send messages, etc - depending on the app)

· Replace real apps with fake ones filled with malware that can collect personal information. (i.e. replace Facebook with a fake version that collects your information on the social network)

· Steal sensitive information from the attacked app. (i.e. steal confidential banking information from a banking app or login credentials for different accounts)

Google has released patches and the vulnerable SDKs have also been patched; however, IBM Security recommends that all users make sure they have downloaded the latest version of Android and have updated SDKs.

Pixar open sources Finding Nemo... (digital content software)

bridgwatera

Animation house Pixar will now open source its Universal Scene Description software.


The company behind Finding Nemo, Toy Story, Monsters Inc., Cars and The Incredibles has made this move to embrace the more open methods by which animation data is processed in the current age.

Universal Scene Description (USD) software helps handle the creation and ongoing maintenance of extremely big graphics-intensive scenes.

Pixar, founded in 1986, has been working with this software technique for more than 20 years.

"One of the key aspects of Pixar's pipeline is the ability for hundreds of artists to operate simultaneously on the same collections of assets in different contexts, using separate 'layers' of data that are composited together at various production stages," commented Guido Quaroni, VP of software R&D at Pixar.

ACRONYM NOTE: DCC stands for Digital Content Creation.

"USD generalises these concepts in an attempt to make them available to any DCC application," he added.

Couchbase CEO: How to understand benchmarks, from BS to belief

bridgwatera

This is a guest post for the Computer Weekly Developer Network blog by Bob Wiederhold, CEO Couchbase.

Couchbase is a company known for its open source distributed NoSQL document-oriented database, which is optimised for interactive applications.

Wiederhold writes in light of recent NoSQL industry benchmarks comparing flagship products, which it has to be said... have been met with contrasting opinions.


So how do we know what to believe?

What benchmarks should be like

Benchmark tests may raise questions; however, it's essential that each report is open, reproducible and not over-engineered to favour one solution over another.

Under these circumstances, competitive benchmarks are designed to provide valuable information to developers and ops engineers who are evaluating various tools and solutions.

More NoSQL usage necessitates more testing

The release of an increasing number of benchmarks isn't surprising. During early phases of NoSQL adoption, benchmarks were somewhat less important because most users were experimenting with NoSQL or using it on lightweight applications that operated at small scale.

Since 2013, we've entered a different phase of NoSQL adoption, where appetite has grown, and organisations are deploying NoSQL for mission-critical applications operating at significant scale.

The use of benchmarks is increasing because performance at scale is critical for most of these applications.

Developers and ops engineers need to know which products perform best for their specific use cases and workloads.

Different benchmarks: different use cases

It's entirely legitimate for benchmarks to focus on use cases and workloads that align with the target market and 'sweet spots' of the vendor's products.

(CWDN Ed -- this is Wiederhold's 'money shot' killer line playing for validation isn't it? The point is (relatively) impartially made and at least he is being candid enough to say it out loud. Doesn't (quite) make it alright, but nearly. Let's allow the gentleman to finish...)

That doesn't make them invalid, it just points out the importance of highlighting what those use cases and workloads are so developers and ops engineers can assess whether the benchmark is applicable to their specific situations.

Keeping it fair

To be useful, however, benchmarks need to be fair, transparent and open. Otherwise, they're of little value to anyone, let alone the developers and engineers who depend on them to make an informed decision.

Vendors may complain that a benchmark isn't fair because it's focused on a use case and workload that's not a sweet spot for them.

Those aren't valid complaints. On the other hand, benchmarks need to make every effort to achieve an apples-to-apples comparison and, for example, use the most recent software versions.

These comparisons can be difficult, because the architectures and operational setups of each product are so different, but significant effort should be made to achieve this. Using the right version of software should be very easy to achieve and should promptly be fixed when it isn't.

Keeping it transparent

Transparency implies at least two things:

(1) Clearly communicating the use cases and workloads that are being measured, and
(2) making the benchmarks open so others can reproduce them, verify the results, and modify them to align more closely with the specific use cases they care about.

A sign of NoSQL growth and adoption?

Vendors will continue to sponsor and publish benchmarks, and they'll continue to gear them toward the use cases the vendor supports best.

All of this is just another indicator of the rising importance of NoSQL, which is growing fast. According to a recent report from Allied Market Research, the global NoSQL market is expected to reach $4.2 billion by 2020 - an annual growth rate of 35.1% from 2014 through 2020. When done fairly and transparently, competitive benchmarks can help enterprises choose the right product for their particular set of requirements.

Couchbase is very focused on supporting enterprise-class mission-critical applications that operate at significant scale with mixed read/write workloads. As a result, our benchmarks run on clusters with many servers and reflect those workloads.

We have recently seen some benchmarks focused on supporting applications that operate at much smaller scale and therefore tested with a small amount of data running on a single server.

Both are valid, but for completely different situations and users.

All software is inadequate, who knew?

bridgwatera

If you find yourself in need of some open platform inspiration on any given day, then opensource.com itself is always worth a look.


The site itself is 'supported by' Red Hat, but (as they say) the opinions expressed on the website are those of each author, not of the author's employer or of Red Hat.

In a recent story by Jason van Gumster entitled 'Eating your own dog food in open source', the graphics & animation specialist talks about his challenges working with various pieces of software.

He then comes up with a 'sucker punch' line that really sums up how we should feel about software.

A killer line

"[So], really, all software is inadequate," writes van Gumster.

He continues by saying that he doesn't think he knows a single artist or designer who's 100% satisfied with the way his or her tools work.

"You pick the inadequacies that you're willing to live with or work around. The great thing about open source software is that we have recourse. The source is there. Shortcomings can be overcome with time and effort... and perhaps sweet-talking a developer or two for sufficiently complex issues," he writes.

Zen & the art of software maintenance


Much has been said in very recent times about the relative worth of open tools -- and indeed, the security and efficacy of the software itself.

As van Gumster points out, the knee-jerk response is that open source tools simply aren't as capable as their proprietary counterparts -- but if that is the case, then we should work to fix them...

... isn't that the whole point once we accept that all software is inadequate anyway?







Facebook 'LIKES' Google FlatBuffers, ups performance on Android

bridgwatera

Facebook has turned to a Google open source project to try and tune its performance on Android devices.
The social networking company has plugged into FlatBuffers.

Cross platform 'serialization'

FlatBuffers is a cross-platform 'serialization' library for C++ with support for Java, C# and Go.

Serialization is important because it is the process through which data structures are flattened into a format that can be stored or transmitted, then parsed and reconstituted so the data can be more easily used in other locations.
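
To make the term concrete, here is a deliberately simple sketch using plain JSON (illustration only -- FlatBuffers itself uses a compact binary format rather than JSON, but the round trip is the same idea):

    // Serialization/deserialization illustrated with plain JSON.
    const post = { id: 42, author: 'alice', comments: [{ who: 'bob', text: 'nice' }] };

    const wireForm = JSON.stringify(post); // serialize: object -> text/bytes to store or send
    const restored = JSON.parse(wireForm); // deserialize: parse the whole thing back into an object

    console.log(restored.comments[0].text); // 'nice'

Note that JSON.parse has to reconstruct the entire tree before a single field can be read -- which is precisely the up-front cost FlatBuffers is designed to avoid, as the Facebook quote below explains.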

FlatBuffers itself was created at Google specifically for game development and other performance-critical applications.

The technology in this case sets out to try and bridge and connect mismatches that exist in the space between compute processors and the memory subsystem of the device in use.


What Facebook says

Facebook reminds us that people can keep up with their family and friends through reading status updates and viewing photos -- okay, we knew that part.

... and, says the firm, its backend stores all the data that makes up the social graph of these connections.

But says Facebook, on mobile clients, "We can't download the entire graph, so we download a node and some of its connections as a local tree structure."

"In our exploration of alternate formats [for the challenges in hand here], we came across FlatBuffers, an open source project from Google. FlatBuffers is an evolution of protocol buffers that includes object metadata, allowing direct access to individual subcomponents of the data without having to deserialize the entire object (in this case, a tree) up front."

IBM pumps extra DBaaS juice into Bluemix

bridgwatera

IBM buys other companies, it's just the way things are.
This month we see the firm scoop up a private outfit known as 'Compose' -- a database-as-a-service (DBaaS) company.

What is DBaaS?

DBaaS provides a flexible (cloud scalable) database platform orientated toward self-service and management, particularly in terms of provisioning a business' own environment.

DBaaS products typically provide enough monitoring capabilities to track performance and usage and to alert users to potential issues -- the products can also generate at least some degree of data analytics.

IBM has said that the purchase of Compose goes in line with its plans to commit to:

• developer needs
• open source
• cloud services
• above all... managed production-ready on-demand database technologies

NOTE: IBM predicts the cloud database space will be worth $14 billion by 2019.

The firm itself (Compose, not IBM) specialises in auto-scaling technologies for database and data services operations -- it supports five open source databases:

• Redis
• MongoDB
• PostgreSQL
• Elasticsearch
• RethinkDB

"Compose's breadth of database offerings will expand IBM's Bluemix platform for the many app developers seeking production-ready databases built on open source," said Derek Schoettle, general manager of IBM Cloud Data Services.

"Compose furthers IBM's commitment to ensuring developers have access to the right tools for the job by offering the broadest set of DBaaS service and the flexibility of hybrid cloud deployment."


Intel with Rackspace: 'Cloud for All' developers, developers, OpenStack developers

bridgwatera

Intel has announced its Cloud for All initiative, a programme designed to drive all types of cloud adoption through easier deployments.


The firm hopes to 'unleash' tens of thousands of new cloud deployments carrying new digital services.

The cloud gap

Failure to adopt cloud computing has been variously mooted to be down to reasons including:

• complexity,
• security concerns,
• (bizarrely enough) a perceived lack of scalability,
• gaps in open source enterprise-grade features.

"The cloud has been critical to the digital services economy and has enabled tremendous innovation and business growth, but broad enterprise adoption is not happening fast enough," said Diane Bryant, senior vice president and general manager of Intel's Data Center Group.

Intel says that consumer services from major cloud service providers have driven the first wave of cloud adoption, accounting for 75 percent of current cloud usage.

Cloud's next growth targets

Intel thinks that the next wave of cloud growth will come from:

• Internet of Things
• New big data analytics solutions

The Intel Cloud for All initiative will focus on ecosystem investments to accelerate software defined infrastructure (SDI) solutions; the optimisation of these SDI solutions to deliver highly efficient clouds across a range of workloads; and aligning the industry and engaging the community through open industry standards.

Intel Inside

Crucially, Intel says it will help make these things happen by virtue of (and taking full advantage of) Intel platform capabilities.

As a key part of this initiative, Intel is also collaborating with Rackspace in its capacity as the co-founder and leading operator of OpenStack.

Intel and Rackspace will establish the OpenStack Innovation Center to focus on driving enterprise features and scale optimisations into the OpenStack source code.

Rackspace: 99.99% four-nines availability

"We are excited to collaborate with Intel and look forward to working with the OpenStack community to make the world's leading open-source cloud operating system even stronger," said Scott Crenshaw, senior vice president of product and strategy at Rackspace.

"We don't believe in creating proprietary OpenStack distributions. Rackspace delivers its customers [with what we call] four-nines availability using entirely upstream trunk code. All of the Innovation Center's contributions will be made available freely, to everyone."

Developers, developers, OpenStack developers

The OpenStack Innovation Center will house the world's largest OpenStack developer cloud consisting of two 1,000-node clusters that will be available to the OpenStack community-at-large to support advanced, large-scale testing of OpenStack performance, code and new features. These testing clusters are expected to be available within the next six months.

The companies will also focus on the delivery of new enterprise features and optimisations that are aligned with the OpenStack Enterprise Working Group and community priorities.

New modules of courseware will also be offered to onboard and increase the number of open source developers actively contributing to the success of the community.

Fanatical elasticity: ObjectRocket by Rackspace adds managed Elasticsearch

bridgwatera

The company known for its 'fanatical' approach to managed cloud services support, Rackspace, has added managed Elasticsearch technology to its portfolio.

More specifically, Rackspace's managed database platform ObjectRocket is expanding its database service portfolio to include fully-managed instances of Elasticsearch.


Elasticsearch itself is an open source distributed real-time full-text search engine based on Apache Lucene.

Rackspace insists that it is able to provide what it calls a "performant, highly-available and scalable platform" for Elasticsearch.

For developers...

What this means is that software application developers can potentially deploy full-text search capabilities (within minutes) for new (and existing) applications supported by MongoDB, Hadoop, MySQL and... other databases are available.
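
As a flavour of what that looks like in practice, here is a hedged sketch of a full-text query sent to Elasticsearch's REST search API from Node.js; the host, index name and field are invented for the example, and a recent Node.js runtime with built-in fetch is assumed:

    // Minimal full-text search against an Elasticsearch index over its REST API.
    // Host, index ('articles') and field ('body') are illustrative.
    async function search(term) {
      const res = await fetch('http://localhost:9200/articles/_search', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          query: { match: { body: term } } // full-text match on the 'body' field
        })
      });
      const result = await res.json();
      return result.hits.hits; // matching documents, ranked by relevance
    }

    search('mission critical').then(hits => console.log(hits.length));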

"Adding ObjectRocket for Elasticsearch allows Rackspace customers to search across massive amounts of data and extract key insights in real-time," said Kenny Gorman, chief technologist, data at Rackspace.

"Our team of database experts is now fully trained and able to quickly set-up Elasticsearch so customers can run powerful searches in minutes to inform mission-critical business decisions," he said.

It is true that businesses today require the ability to store, access and analyse petabytes of data from a variety of structured and unstructured sources...

...this in and of itself requires search capabilities for quick ad-hoc data discovery.

According to Rackspace, ObjectRocket for Elasticsearch combines the enterprise-grade performance and scalability of ObjectRocket with management and support from Rackspace specialists for Elasticsearch.

The integrated approach facilitates rapid full-text search and analytics with pre-existing data from MongoDB and other databases such as Hadoop, MySQL and Postgres or new data sources.

Try before you buy

To give DBAs and developers hands-on experience pairing the capabilities of Elasticsearch with their databases and data platforms, ObjectRocket is offering a free service for 30 days for a two data node, 256MB RAM and 2GB Disk instance.

"Most businesses utilse multiple types of databases to meet the specific needs of modern applications, but this diversity can bring complexity," said Nik Rouda, senior analyst at Enterprise Strategy Group.

"Standardising on a bullet-proof, cloud-based infrastructure can simplify delivery without compromising quality. Rackspace has built a versatile yet tailored 'polyglot' platform to satisfy the most demanding requirements."

Rackspace is telling us that with the addition of Elasticsearch, the firm continues to expand the breadth and capability of its portfolio of managed databases, including Elasticsearch, Hadoop, Spark, MongoDB, Redis, Oracle, Microsoft SQL Server, MySQL, Percona, and MariaDB.

Customers have the flexibility to deploy across private, public, bare metal and hybrid clouds with options to automate and reduce the time and money needed to scale, manage and help ensure the availability of production database applications.

CloudBees Jenkins plugs in (deeper) to Google Kubernetes

bridgwatera

OSCON is staged this week in Portland, Oregon (as if there were another Portland), USA.

The event is one of the highest profile gatherings of open source software, architecture, frameworks and tools for software application development engineers.


This year sees enterprise Jenkins company CloudBees announce delivery of three Kubernetes plugins to assist in the continuous delivery of containerised applications with Jenkins.

Jenkins CI is a piece of software (written in Java) that exists as an open-source 'continuous integration' server.

As a Google technology, Kubernetes is a software tool to manage Google-scale containerised application workloads in a cloud/clustered environment.

koo-ber-nay'-tace -- Definition: steersman, helmsman, sailing master

Kubernetes handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the users' declared intentions. Using the concepts of "labels" and "pods", it groups the containers (which make up an application) into logical units for easy management and discovery.

Related, but distributed

Essentially, Kubernetes aims to provide orchestration and support services for teams to work with related (but distributed) components across varied infrastructures.

The new Kubernetes plugins are supposed to allow DevOps teams operating in massively distributed environments to deliver faster with Jenkins and continuous delivery practices.

The ability to handle containerized applications so easily also (in theory) accelerates the adoption of Docker for the next generation of microservices-based applications.

"The strong integration that the Jenkins community previously delivered for Docker is not only useful for users but also provides a powerful foundation for other container technologies; that is how we delivered Kubernetes support quickly. The easy extensibility that Jenkins offers puts Jenkins ahead of the curve when it comes to support for the overall container ecosystem," said Kohsuke Kawaguchi, Jenkins founder and CTO at CloudBees.

The specific plugins announced by the Jenkins CI community include the following:

● Kubernetes Plugin - Run Jenkins slaves elastically within a Kubernetes cluster of Docker containers.

● Docker Build and Publish Plugin - Prepare Docker images and push them to a Docker registry. With help from the Google Container Registry Auth plugin, it can be used to push to the Google Container Registry in a Kubernetes cluster.

● Docker Custom Build Environment - Allow developers to define custom build environments with Docker containers running within a Kubernetes cluster. The plugin can now pull in Docker containers from the Google Container Registry.







RealVNC: more open remote access protocols will increase security

bridgwatera

RealVNC, the pioneer of VNC, has opened up its technology for hackers and developers to scrutinise -- but what is VNC?


VNC remote access and control software enables users to remotely access and control devices from anywhere -- VNC technology facilitates remote access from one device to another over a local area network, VPN or the Internet.

The company has now published some info on its RFB 5 protocol.

What is RFB 5?

This is the technology behind VNC i.e. it's the stuff that enables remote access.

I prefer their first album.

Yes, but RFB 5 is new... and until now it has been a closed, secret, unpublished protocol (unlike the earlier RFB 3.x versions).

Hmm, still doesn't sound very secure.

Security in remote access solutions will always be a concern for some, it's true.

"There have recently been issues caused by errors such as people not passwording their VNC connections (something that is impossible with RealVNC's own products). As a result, RealVNC wants to make sure its tech is as secure as possible, so are opening their trademark product up for scrutiny," said the company, in a press statement.

The firm's own blog on the news reads as follows:

By the time we released the first commercial version of VNC the internet had become a much less trustworthy place, and its users far more cautious. The RFB protocol evolved accordingly, and RFB 4.x brought secure connections, improved authentication and various other security measures designed for making direct connections across an increasingly hostile Internet.

More recently, the rise of cloud computing has introduced its own security challenges, and RFB has evolved again. The new RFB 5 protocol has been designed alongside VNC Cloud to be our most secure version yet. It builds in features such as Perfect Forward Secrecy, and makes it easier to check you're connected to the right person. It's used for every connection made using the VNC SDK, and will be making its way into our other products soon.

CenturyLink shows some 'guts', open source gifts for Docker, Chef & vSphere

bridgwatera

CenturyLink has contributed three of its own technology projects to the open-source community with the intention of improving the way developers use Docker, Chef and vSphere technologies.


The firm is a communications, hosting, cloud and IT services company.

The projects are:

Chef provisioning driver for vSphere simplifies the process of provisioning Chef nodes on VMware vSphere infrastructure.

Lorry.io, a tool for creating, composing and validating Docker Compose YAML files -- this makes it easier to share and deploy entire applications composed of Docker containers.

ImageLayers.io enables developers to visualise Docker images and the layers that compose them, see how each command in the Dockerfile contributes to the final image and compare multiple Docker images side-by-side.

"The embrace of open-source technologies within the enterprise continues to rise, and we are proud to be huge open-source advocates and contributors at CenturyLink," said Jared Wray, senior vice president of platforms at CenturyLink.

The firm's other open-source contributions include Panamax, a Docker management platform; Dray, used to manage Docker workflows; a Cloud Total Cost of Ownership tool; and a Cloud Cost Estimator tool for CenturyLink Cloud.

Linux SWAT team busts a multi-scanning move on malware nasties

bridgwatera

Infrastructure security company OpSWAT this month releases a new version of its Metascan product for Linux deployments.


The software is a 'multi scanning malware detection' tool.

OpSWAT (Operations + Special Weapons And Tactics, get it?) confirms Metascan for Linux 64-bit supports:

  • Debian,
  • Red Hat Enterprise Linux,
  • CentOS and
  • Ubuntu

NOTE: The product also works in Windows environments.

The product also provides load balancing for high-volume scanning and can be used in high-availability deployments.

Scan workflows allow different scan options by file type and source to offer increased security and threat prevention.

From Yun-Fong Loh to Stange

"After trying the Metascan for Linux technology preview, I have to say I love it," commented Yun-Fong Loh, senior engineering manager at Edgewater Networks.

"We are excited about the Metascan for Linux release," said Szilard Stange, director of product management at OpSWAT.

How it works

Metascan can scan files with multiple anti-malware engines to detect and block advanced threats.

The software uses multiple anti-malware engines from vendors like Bitdefender, ESET, Threat Track etc. -- in doing so, the company claims that it increases detection rates for all types of malware without the hassle of licensing and maintaining multiple antivirus engines.
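
To be clear about what 'multi-scanning' means mechanically, here is a conceptual sketch (not OPSWAT's actual API): one file is fanned out to several engines in parallel and flagged if any engine reports a detection; the engine names and scan functions here are placeholders:

    // Conceptual multi-scanning: fan one file out to several anti-malware
    // engines and block it if any engine reports a detection.
    async function multiScan(filePath, engines) {
      // engines: array of { name, scan(filePath) -> Promise<boolean> }
      const verdicts = await Promise.all(
        engines.map(async (e) => ({ engine: e.name, infected: await e.scan(filePath) }))
      );
      return {
        infected: verdicts.some((v) => v.infected), // block if any engine objects
        verdicts                                    // per-engine results for reporting
      };
    }

    // Tiny demo with fake engines:
    const fakeEngine = (name) => ({ name, scan: async () => name === 'EngineB' });
    multiScan('/tmp/sample.bin', [fakeEngine('EngineA'), fakeEngine('EngineB')])
      .then((r) => console.log(r.infected)); // true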

Metascan's document sanitisation technology allows for the removal of unknown threats that may be missed by anti-malware engines -- an evaluation version of Metascan is available from the OpSWAT portal.


Appery to developers: happy xmas, the HTML5 vs. native (war is over), if you want it

bridgwatera

Appery is a company known for Appery.io, a 'low-code' platform for cross-device software application development.


The firm has worked with open source SDK maker Ionic.

The two are claiming to have 'put an end' to the native platform vs. web-centric HTML5/hybrid app debate.

It is true to say that for years, developers had to make a choice when creating enterprise mobile apps -- and for that matter many consumer-level apps:

• OPTION 1: Build a web-facing HTML5/hybrid app that users will essentially need to access through their browser so that it will work across all platforms (iOS, Android, Windows) and therefore on all device types also, but compromise on user experience.

• OPTION 2: Or be willing to spend the money and time it takes to invest in native apps, one for each platform to get the most dynamic experience.

But we've heard too many 'write once, run anywhere' claims before; surely these firms haven't stumbled upon the real Holy Grail of code?

-- and in a so-called 'low code' format that lowers the skill barrier, enabling a broader base of developers to create mobile apps?

Appery announced the integration of the Ionic SDK to enable developers to build hybrid mobile apps with the same user experience of a native application.

This connection will "obviate the need" to build fully native apps in the enterprise, they say.

Native apps generally provide a more dynamic experience unique to the device and operating system, but are expensive and take more time to build for each platform.

"Appery.io, with its support for Apache Cordova (i.e. PhoneGap), Bootstrap, AngularJS, and jQuery Mobile was already great for building hybrid and responsive web apps quickly and easily," said the company, in a press statement.

Ionic is an open source SDK that supports a broad range of common mobile components, smooth animations and designs.

CEO of Appery Fima Katz claims that Appery.io simplifies integration with cloud services and enterprise systems, combining the simplicity of visual development with the power of JavaScript, to create cross-platform enterprise apps.

CWDN opinion

While 'low code' platforms are often thought to enjoy a lower level of serious interest from the hardcore developer community (yes, even in the face of popularised visual-based coding practices), Appery.io's ability to use Apache Cordova and provide access to more native device capabilities is good news, as is the firm's total combination of software application development tools, backend services (there are plug-ins too) and its ability to bring forward template (or you could say 'predefined') integration options with other third-party services. Did we mention visual data binding too? No -- ok, well that's in there too.

The SDK integration here with Ionic may not quite be the one-for-all programming panacea that it is being positioned to be, but Salesforce is a fan and this company is growing in stature. If it can rein in the big claims and keep its code base stable, things could stay interesting.


Linux Foundation 'census' to assess planet's project population & health

bridgwatera

The Linux Foundation's Core Infrastructure Initiative (CII) has launched The Census Project.

The Census Project is a new programme to analyse popular open source projects to identify which ones are:

a) critical to Internet infrastructure
b) most in need of additional support
c) most in need of additional funding.

A working example


The Heartbleed vulnerability in the open source software (OSS) program OpenSSL had widespread impact and serious ramifications.

It led to the formation of the multi-million dollar Core Infrastructure Initiative backed by The Linux Foundation and industry leaders like Amazon Web Services, Facebook, Google, IBM and Microsoft.

The Census Project expands on the CII's efforts to collaboratively identify and fund critical open source projects in need of assistance.

Project risk score analysis

It automates the collection and analysis of data on different open source projects, ultimately creating a risk score for each project based on the results.

Projects with a higher ranking are especially in need of reinforcements and funding; and, as a result, CII will consider such projects priority candidates for funding. A high score means that the project may not be getting the attention that it deserves and that it merits further investigation.
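
For a sense of how such a score might be put together, here is an illustrative sketch only -- the real metrics and formula live in the Census Project's own repository; the metric names and weights below are invented for the example:

    // Illustrative only: folding per-project metrics into a single risk score.
    // The Census Project defines its own metrics and formula on GitHub.
    function riskScore(project) {
      let score = 0;
      if (project.contributorCount < 3) score += 2;  // few maintainers -> higher risk
      if (project.recentCveCount > 0) score += 1;    // known vulnerabilities
      if (project.yearsSinceRelease > 2) score += 1; // stale releases
      score += Math.min(project.popularity, 3);      // widely used code matters more
      return score;
    }

    console.log(riskScore({ contributorCount: 1, recentCveCount: 2, yearsSinceRelease: 3, popularity: 5 }));
    // -> 7: a high score, so a priority candidate for attention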

"Measuring software security is an ongoing struggle that's notoriously difficult given missing or messy data," said Jim Zemlin, executive director at The Linux Foundation.

"There's no perfect set of metrics to guarantee that software is secure or not. The Census Project brings the power of the open source collaboration to help fill this massive gap, which will provide a useful barometer for assessing software from a security point of view. We look forward to feedback on the effort in order to improve the census itself and subsequently the software that we all depend on for our privacy and security," he added.

With full source and data available on GitHub, developers and security experts are invited to participate in The Census Project by experimenting with different metrics, providing corrected data, proposing new projects to include in the evaluation and suggesting alternative formulas for combining the data.

Anyone can issue a pull request with suggested changes from the most successful alternatives.
