What is upstream & downstream software?

bridgwatera

It's a simple question, but one that we don't ask out loud enough, perhaps?

What is upstream software?


This question came up during conversations with Red Hat's Chris Wright, a Linux kernel developer and a principal software engineer with the company.

Of course, in non-tech business speak, upstream tends to refer to production processes that involve searching for (and extracting) raw materials -- in software, this is not the case.

In software application development and programming, upstream refers to source code that has been contributed to a project's original repository, the one maintained by the software's authors.

Upstream code can take the form of complete code blocks, patches or bug fixes.

Shamefully, we find a good comment on Wikipedia here, "For example, a patch sent upstream is offered to the original authors or maintainers of the software. If accepted, the authors or maintainers will include the patch in their software, either immediately or in a future release. If rejected, the person who submitted the patch will have to maintain his or her own distribution of the author's software."
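In practice, a patch of the kind that quote describes is often just a unified diff of the contributor's tree against the maintainers' tree. A quick Python sketch of that idea (file names and content are invented purely for illustration):

```python
import difflib

# "Upstream" source as held by the original maintainers.
upstream = [
    "def greet(name):\n",
    "    print('Hello ' + name)\n",
]

# A downstream copy with a local bug fix applied.
downstream = [
    "def greet(name):\n",
    "    print('Hello, ' + name)\n",
]

# The patch a contributor would send upstream: a unified diff
# of their change against the maintainers' current code.
patch = list(difflib.unified_diff(
    upstream, downstream,
    fromfile="a/greet.py", tofile="b/greet.py",
))

print("".join(patch))
```

If the maintainers accept the diff, it lands upstream for everyone; if not, the downstream copy diverges and its owner carries the patch forward alone.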

What is downstream?

Downstream, in contrast, is code that has been deployed - and so, with reference to the above quote, downstream may be diverging away as some forked tributary (Ed - nice, I see what you did there with the river analogy) which needs to be separately considered and maintained.

Augmentations that will be available for all coders should be performed on the upstream code, the water-source (Ed - nice, another one) if you like.

Microsoft open sources Worldwide Telescope (WWT)


Microsoft "hearts" Linux and open source, remember?


Well yes it does, even if the long term goals are commercial -- enough already, it's a business just like the one you work for, so it is allowed to make money unless it reinvents itself as a non-profit overnight, which it doesn't need to.

Can we just look at the Microsoft open source news? Okay, sorry.

The firm has just opened up the code for its Worldwide Telescope (WWT) software and released it on GitHub for full open access.

The code is now open source under the MIT license and has become an independent project as part of the .NET Foundation.

The Worldwide Telescope (WWT) research project launched in 2007 as a collaboration between Microsoft and various academic institutions including NASA and Caltech.

The project is intended to give us what has been called a "unified contextual visualisation of the universe".

The software works to search outer space in the following modes:

  • Earth,
  • Sky,
  • Planets,
  • Panoramas,
  • Solar System

Microsoft explains its plans for open sourcing the project below:

First, we have placed an initial codebase in GitHub, the Worldwide Telescope Web Client. We view this as an important milestone because it demonstrates our commitment to this effort, and more importantly, it allows the community to begin to explore the code. An important note: This was a bit of a 'trial run', and we are not actively accepting commits just yet, and there is no developer support at this time. This repository contains the HTML5 SDK which is the rendering engine for the web client and the embeddable web control. It also contains the full web client code, which is buildable with the free Community Edition of Visual Studio.

Second, we have continued to work the astronomy community to improve the readiness and capacity to successfully move forward with OpenWWT. We have also continued to add content to this Web site for the community: 'WWT Stories' and documentation on 'Building on the Current Capabilities of WWT'. Finally, we are also in ongoing communications with the American Astronomical Society regarding the leadership role they can play in the future.

What Agile really means in BPM tools: changeable 'living' process apps


BPM company Bonitasoft has proffered Bonita BPM 7.


This is an 'end-to-end' business process management (BPM)-based application platform for developers to create personalised 'process-based' applications.

The kind of apps that can be adapted to business changes in real time.

Available now at all good chemists

After nine months of development, the open source version of the platform is available for download as of now.

"We've embarked on a new mission to create solutions that empower developers to build applications that support continuous change," claims Bonitasoft CEO and co-founder Miguel Valdes Faura.

Decoupled business logic

Bonitasoft's approach of decoupling business logic, data and user interfaces is what allows this adaptability and ability to change in real time.

Maintenance and updates to user interfaces can be done independently from business workflow updates and without taking the application offline.
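The decoupling idea itself can be sketched in a few lines. This is illustrative Python, not Bonita BPM's actual API, and every name here is invented:

```python
# A minimal sketch of decoupling business logic from the UI layer.
# None of these names come from Bonita BPM; they are illustrative only.

def approve_expense(amount, limit=500):
    """Pure business rule: no knowledge of how the result is shown."""
    return "approved" if amount <= limit else "needs manager sign-off"

# Two interchangeable 'user interfaces' over the same rule. Either can
# be replaced or updated without touching approve_expense() itself.
def render_plain(amount):
    return f"Expense {amount}: {approve_expense(amount)}"

def render_html(amount):
    return f"<p>Expense {amount}: <b>{approve_expense(amount)}</b></p>"

print(render_plain(120))
print(render_html(900))
```

Because the rule knows nothing about its presentation, the UI layer can be redesigned (or a new mobile front end added) without taking the workflow offline.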

For user interface design, Bonita BPM 7 offers a graphical web-based, drag-and-drop UI designer to create personalised user interfaces.

This designer is extensible, allowing developers to create their own widgets -- and it offers data binding, preview and design across mobile and desktop devices.

Application designers can go far beyond generic portals to create highly customised web portals, pages and forms, says the firm.

"Bonita BPM 7 represents a real convergence between the world of the 'BPM app' and traditional enterprise software," Valdes Faura said.

Native features limit

"It gives developers the freedom to easily code and extend the system whenever they hit the limit with its native features. Plus, the platform is built for change, so developers and business users can apply changes, updates and improvements over time - creating what we call 'living applications' that live and breathe and grow along with your business."

Could this be the real meaning of Agile that we've been looking for?

"The ability to (continuously) extend beyond the limit of native features with extensible widgets and decoupled business logic without taking applications offline..."


Red Hat: the Internet of (integrated connected usable hybrid) Things


Red Hat has used its 2015 'Summit' event in Boston to take the wraps off JBoss Fuse 6.2 and Red Hat JBoss A-MQ 6.2 - with both products introducing new capabilities for developers working on enterprise application and messaging initiatives.

The latest versions of these offerings are designed to enhance developer productivity along three 'critical' planes:

• connectivity,
• usability, and
• Internet of Things (IoT).

The ability to connect applications, data and services spread throughout complex hybrid IT environments has helped many organisations differentiate themselves in the market and gain an edge over their competitors, argues Mike Piech, Red Hat's vice president for middleware.

What are these products?

JBoss Fuse is a lightweight integration platform based on Apache Camel, an implementation of many of the most commonly used enterprise integration patterns (EIPs).
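One of the best-known EIPs is the content-based router. As a rough sketch of the pattern in Python (this is not Camel's actual DSL; the message shapes and channel names are invented):

```python
# A toy content-based router -- one of the enterprise integration
# patterns (EIPs) that platforms like Apache Camel implement.
# Plain Python for illustration; not Camel code.

def route(message):
    """Send a message to a channel chosen by inspecting its content."""
    if message.get("type") == "order":
        return "orders-queue"
    if message.get("type") == "invoice":
        return "billing-queue"
    return "dead-letter-queue"   # nothing matched: park it for inspection

print(route({"type": "order", "id": 1}))
print(route({"type": "mystery"}))
```

The pattern's value is that routing decisions live in one place, rather than being scattered through every producing application.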

JBoss A-MQ is a lightweight messaging platform based on Apache ActiveMQ that supports standards such as MQTT and AMQP.
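MQTT, for instance, routes messages by hierarchical topic, with `+` matching exactly one level and `#` matching everything that remains. A small Python sketch of that matching rule (an illustration of the standard's semantics, not A-MQ or ActiveMQ code):

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT-style topic filter matches a topic.

    '+' matches exactly one topic level; '#' matches all remaining
    levels and must be the last element of the filter.
    """
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True                  # multi-level wildcard: done
        if i >= len(t_parts):
            return False                 # filter is longer than topic
        if f != "+" and f != t_parts[i]:
            return False                 # literal level mismatch
    return len(f_parts) == len(t_parts)  # no trailing topic levels left

print(topic_matches("sensors/+/temperature", "sensors/kitchen/temperature"))
print(topic_matches("sensors/#", "sensors/kitchen/humidity"))
```

It is exactly this kind of lightweight topic-based routing that makes MQTT popular for constrained IoT devices.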

The technology proposition here is to make it possible for development teams to reliably connect systems and devices across the Internet and so enable the integrated Internet-of-Things.

Hybrid cloud architecture creates new challenges when integrating applications and services across distributed and diverse IT infrastructure, so these products are designed to address those difficulties.


IMAGE CAPTION: Connected coffee machines, it's an Internet of Thing

According to Red Hat, the "allure" of greater agility, ease of procurement and lower cost is contributing to an explosion of new Software-as-a-Service (SaaS) assets such as business and social applications, while the hybrid cloud - where workloads span both private and public clouds - is now a reality for many organisations.

"JBoss Fuse 6.2 and JBoss A-MQ 6.2 deliver the advanced connectivity capabilities needed in these complex environments, offering more than 150 out-of-the box connectors and the ability to connect SaaS applications like Salesforce, Box, DropBox, and Google Drive with on-premise applications. The latest version also provides foundational capabilities for creating RESTful APIs as well as connecting to supplier, partner, or customer APIs. The expanded connectivity and API foundation enables customers to easily create connected solutions for modern business," said the company, in a press statement.

To enable greater developer collaboration and faster time-to-market, JBoss Fuse 6.2 and JBoss A-MQ 6.2 feature enhanced tooling and usability capabilities.

The tooling here is designed to help developers create Enterprise Integration Pattern (EIP)-based services, connect applications and APIs and transform data using the included transformers and a graphical mapper available as a technical preview.

Your middleware is beautiful, Atomic


Infrastructure and middleware are sometimes hard to get excited about.

All the more reason, then, to label it with a sexy name.


Red Hat hasn't shirked with its latest product release in this vein and has labelled its most recent release the Red Hat Atomic Enterprise Platform.

Not actually a product specifically designed for those firms operating in the atomic energy business, this 'platform' (Ed - it's actually just software isn't it?) is an infrastructure platform for running multi-container-based applications and services.

NOTE: There is an early access programme for those that want to try the product.

Based, of course, on Red Hat's own (RHEL) Enterprise Linux, this software is intended to provide a foundation for 'production-scale' container deployments (Ed - that just means ones that work, we think), utilising the same core enabling technologies as Red Hat OpenShift Enterprise 3, including Docker as a Linux container format, and Kubernetes for container orchestration.

Integrated family

What Red Hat is doing (or at least trying to do) with the Atomic platform is to tell us that it can now offer an integrated family of open source container-enabling platforms, from bare-metal Red Hat Enterprise Linux, to a scale out container orchestration platform, to full Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) solutions.

But, crucially, all these elements will utilise the same core technologies so this will help, in theory, make container-based applications fully portable across a hybrid cloud fabric.

"The true value of Linux containers does not lie with one or two containerised applications deployed into production; rather, it's Linux containers at scales of hundreds or thousands that deliver the promised innovation of flexible, composable applications," said Paul Cormier, president for products and technologies, Red Hat.

"Red Hat Atomic Enterprise Platform provides the enterprise-ready foundation for these scale-out deployments, built on the backbone of the world's leading enterprise Linux platform and backed by Red Hat's certification and support programmes," added Cormier.

Red Hat Atomic Enterprise Platform will offer:

• A managed, scale-out cluster of Red Hat Enterprise Linux 7 or Red Hat Enterprise Linux Atomic Host instances.
• The Docker container runtime and packaging format - to simplify the creation and deployment of Linux containers.
• Container orchestration with Kubernetes - which enables enterprises to deliver applications composed of multiple containers orchestrated automatically across a cluster of hosts.
• Enhanced container security - inherited from Red Hat Enterprise Linux's military-grade security and the inclusion of powerful security tools like SELinux.
• Cluster-wide infrastructure services - including log aggregation and APIs for scaling applications and services.
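The orchestration bullet above can be illustrated with a deliberately naive placement loop. Real Kubernetes scheduling weighs resources, constraints and affinity, so treat this purely as a sketch of the 'orchestrated across a cluster of hosts' idea; all names are invented:

```python
from itertools import cycle

# A toy placement loop in the spirit of container orchestration:
# spread container replicas across a cluster of hosts, round-robin.

def place(replicas, hosts):
    assignment = {h: [] for h in hosts}
    for replica, host in zip(replicas, cycle(hosts)):
        assignment[host].append(replica)   # next replica, next host
    return assignment

replicas = [f"web-{i}" for i in range(5)]
print(place(replicas, ["host-a", "host-b", "host-c"]))
```

The point Red Hat is making is that at hundreds or thousands of containers, this placement (and replacement on failure) has to be automated rather than done by hand.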

Red Hat nibbled FeedHenry, now dines out on new mobile app platform... Samsung also eats


First the Earth cooled, then the dinosaurs died out and after that Red Hat got hungry.

The company clearly identified the need for more mobile platform intelligence within its IT stack and so back in Sept 2014 the firm acquired FeedHenry.

FeedHenry provides a cloud-based enterprise mobile application platform to design, develop, deploy and manage applications for mobile devices.


The specialism within FeedHenry is that it 'extends enterprise systems to mobile devices' while incorporating the scalability of cloud technology and, crucially the security of integrating apps with multiple backend systems.

Gastrointestinal integration

After the FeedHenry feeding frenzy, the firm has clearly had some time to ruminate, digest and process (Ed - can we keep this away from the gastrointestinal end of the spectrum please?) the technology it brought in.

This week sees the company introduce the Red Hat Mobile Application Platform, which (obviously) incorporates technology from FeedHenry.

But also... now inside the Red Hat stack, the sum is greater than the constituent parts, because the new platform also incorporates intelligence from products in Red Hat's JBoss Middleware and OpenShift PaaS portfolios.

The platform claims to offer enterprises 'a full stack' for mobile-centric workloads capable of being integrated with existing IT infrastructures.

The word from the VP

Cathal McGloin, VP of mobile platforms at Red Hat, claims that Red Hat is "one of the only companies" that can deliver and support the components needed to run the highly scalable workloads required by today's digital business - a big claim, indeed.

"The Red Hat Mobile Application Platform delivers vital mobile capabilities and secure, manageable integration with enterprise systems from a single, trusted provider of enterprise middleware, cloud, and mobile solutions [i.e. us, Red Hat]," said McGloin.

In more specific terms, McGloin explains that the Red Hat Mobile Application Platform is intended to accelerate the development (and integration, deployment and management) of mobile solutions by allowing collaboration across development teams including:

  • front-end application developers,
  • back-end IT integration,
  • and DevOps teams.

Red Hat also announced the availability of a developer offering in its public cloud application development and hosting environment, OpenShift Online.

Helwa! It's Hilwa

"This is an interesting development to see Red Hat rebrand and update its initial FeedHenry platform technology as it now continues to turn it fully into open source and integrate it with OpenShift. A strategic alliance with Samsung will push Red Hat's mobile platform and potentially other software into the enterprise in a more integrated fashion," said Al Hilwa, IDC analyst and program director for software development research.

Full support for the Red Hat Mobile Application Platform in production environments via OpenShift Enterprise is planned for the coming year.

The Red Hat Mobile Application Platform provides mobile capabilities that include security, reusable connections to back-end systems and collaborative/agile app development.

"In extending highly complex and sophisticated applications to mobile devices, Red Hat believes that the defining factors of the next generation of PaaS capabilities will be a rich set of middleware services. In bringing the Mobile Application Platform to OpenShift, Red Hat aims to position itself for this and further advance the Red Hat xPaaS services vision," said the company, in a press statement.

We care a lot

Staying true to its commitment to the spirit of open source, Red Hat also announced plans to help establish an open source upstream project that will carry the FeedHenry name and focus on the development of open source mobile technologies, which will include FeedHenry mobile application platform technology, open sourced by Red Hat.

That Samsung reference?

The firms have just formed an alliance to further push mobile.


Samsung wallpaper girls from samsung-wallpapers.com -- they really really like Samsung, a lot, no, honest.

"We are excited to collaborate with Red Hat to deliver the next generation of mobile enterprise applications and solutions and are committed to shaping the future of innovation," said Robin Bienfait, executive vice president, chief enterprise innovation officer, Samsung.

Bienfait wants us to know that Samsung "firmly believes" that strategic alliances with organisations such as Red Hat will help businesses more readily adopt a mobile-first environment.

Samsung Business Services and Red Hat plan to deliver:

Business applications: A series of enterprise-ready industry-specific mobile applications that will run on the Red Hat Mobile Application Platform and address key workforce management and business tasks, such as business intelligence, field and customer service, inventory management and sales catalog, pricing, ordering, and invoicing.

A developer ecosystem: Tools and resources to build and nurture a new ecosystem of enterprise partners and developers.

Support services: Integrated support for customers and partners, Enterprise Mobility Management (EMM), and global delivery and support services for the Red Hat Mobile Application Platform.

Business collaboration: Red Hat and Samsung Business Services plan to actively engage in joint go-to-market activities for the solutions developed through the alliance.

Red Hat: women in open source, anything less would be akin to proprietary


This week sees Red Hat host its 11th annual 'Summit' conference, exhibition, symposium, developer hackfest, analyst & press outreach session and all round communications to partners and customers smorgasbord.

Restroom barometer


Attend any major tech industry event these days and you can always sense the imbalance once you need to use the restroom, bathroom, comfort facilities or toilet (Ed - same thing) i.e. there's a male-female imbalance.

Computer Weekly does a good job of investigating and reporting upon women in IT issues, but there is still much work to be done in many respects.

Anything less would be akin to proprietary

Red Hat thankfully devotes some of its event efforts into a programme dedicated to highlighting the need for women in modern development teams that MUST be structured with collaborative input from 'people' across the full spectrum of sexes, genders, race, religion, creed and hairstyles etc.

This spectrum is only complete with the inclusion of a more equal balance of female programmers, obviously.

... and the winners are

Sarah Sharp, embedded software architect at Intel, and Kesha Shah, a student at Dhirubhai Ambani Institute of Information and Communication Technology, are the winners of the first Red Hat Women in Open Source Awards.

NOTE: The Women in Open Source Awards recognise women who make important contributions to open source projects and communities or who promote open source methodologies. The awards recognise open source contributors in two categories: Community and Academic.

Sharp won in the Community category for her tireless efforts in improving communications and women's roles in the open source community. Sharp is a co-coordinator for Outreachy (formerly the Outreach Program for Women), which helps underrepresented groups get involved in open source software projects.

She is also an outspoken proponent of improving communications among Linux kernel developers and making open source communities more civil, collaborative, and welcoming. Sharp was the author and former maintainer of the Linux USB 3.0 host controller driver, and developer of amateur rocket software and hardware for the Portland State Aerospace Society.

Shah, a full-time student, won in the Academic category for her outstanding coding and mentoring work while studying information and communication technology. Being part of Google Summer of Code program multiple times, Shah contributed to three open source organizations, Systers - an Anita Borg Institute, BRL-CAD and STEPcode. She also mentored at Season Of KDE, Learn IT Girls! and Google Code-In, helping pre-university students from across the globe develop their first open source contributions, and is currently director for Women Who Code in Gujarat, India.

Shah was a recipient of the prestigious Google Anita Borg Memorial Asia-Pacific Scholarship, and Anita Borg Pass It On winner for teaching basic computer and smartphone technologies to middle-aged women, especially mothers in her province. Shah has mentored many students on their initial open source development contributions and guided many of them toward becoming regular contributors.

As part of their awards, both Sharp and Shah will each receive a $2,500 stipend and be featured in articles on opensource.com. Sharp also received complimentary registration, flight, and hotel accommodations to attend Red Hat Summit, and will speak at a future Red Hat Women's Leadership Community event.

Ten finalists for the Women in Open Source Awards were chosen by a panel of nine judges. The winners were determined by members of the open source community, who cast their votes over a period of several months. Complete criteria can be found at the Women in Open Source Awards site.

What the winners say

Sarah Sharp, embedded software architect, Intel
"Most of my career has been dedicated to encouraging women to become involved in open source software development and fostering greater collaboration among the open source community as a whole. I'm very honored my peers have chosen to recognize my efforts in this area, and proud to be among the first Women in Open Source Award winners."

Kesha Shah, student, Dhirubhai Ambani Institute of Information and Communication Technology
"As someone who's passionate about open source software - and even more passionate about helping other women break into the field - this recognition means a great deal to me. I'd like to thank Red Hat and my peers for honoring my efforts in open source development, and I look forward to continuing to mentor new open source contributors."

DeLisa Alexander, executive vice president and chief people officer, Red Hat
"We're thrilled to announce Sarah and Kesha as the first recipients of the Women in Open Source Awards. Sarah and Kesha epitomize the passion and talents that women bring to open source communities. Red Hat is proud to recognize their contributions and will continue to do our part to bring more women into open source."

Pulp Friction: SourceForge brings out too much GIMP


Free and open source software hub SourceForge has blotted its copybook.

The web-based source code repository, download mirroring site, collaboration hub and bug tracking service has been giving users more than they would normally have expected.


The site is reported to have been 'inserting' advertisements and other forms of third-party offers into downloads for projects that are no longer currently actively maintained.

While some would argue that this is fairly inoffensive and comparatively legitimate monetisation of what is still essentially free software, the community has not been happy with the process.

GIMP debacle?

As reported on PC World, SourceForge archived the popular photo editing tool GIMP-Win on one of its mirror sites after its author decided to stop using SourceForge for distribution.

"It was then wrapped with third-party ads, and SourceForge was accused of hijacking GIMP-Win," writes Jeremy Kirk.

According to an official SourceForge statement, "In an effort to address a number of concerns we have been hearing from the media and community at large, we at SourceForge would like to note that we have stopped presenting third party offers for unmaintained SourceForge projects."

Easy-to-decline?

SourceForge claims that it has recently been testing presenting what it describes as "easy-to-decline third party offers" with a very small number of unmaintained SourceForge projects.

"We discontinued this practice promptly based on negative community feedback. At this time, we present third party offers only with a few projects where it is explicitly approved by the project developer, or if the project is already bundling third party offers."

As a wider reaction to this story, SourceForge is said to be generally losing ground to GitHub and to other sites that perform code repository and download functions, such as FossHub.

Computer Weekly has previously reported news of SourceForge taking down an Ubuntu Linux OS project purportedly affiliated with online hacktivist group Anonymous, after a review by security experts.

Free image above: Wikimedia Commons

Come together: spatting Node.js forks unify in 'neutral' Linux Foundation forum


The Node.js and io.js developer communities will now collaborate to merge their respective code bases.


This open source union means the two communities will now continue their work in a neutral forum, the Node.js Foundation, hosted by The Linux Foundation.

Node.js is a platform built on Chrome's JavaScript runtime for building network applications - it is very popular in the areas of robotics, embedded applications and (hence, logically) the Internet of Things.

Node.js uses an event-driven non-blocking I/O model that makes it lightweight for data-intensive real-time applications that run across distributed devices.
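Node.js itself is JavaScript, but the event-driven non-blocking model can be sketched with Python's asyncio to show the idea: while one task waits on I/O, the single-threaded loop services the others. The function names and delays below are invented for illustration:

```python
import asyncio

# An event-driven, non-blocking sketch in the spirit of the Node.js
# model: three simulated I/O waits overlap on one event loop instead
# of blocking one after another.

async def fetch(name, delay):
    await asyncio.sleep(delay)   # stands in for a non-blocking I/O wait
    return f"{name} done"

async def main():
    # All three 'requests' run concurrently, so total wall time is
    # roughly the longest single delay, not the sum of all three.
    return await asyncio.gather(
        fetch("a", 0.03), fetch("b", 0.02), fetch("c", 0.01),
    )

print(asyncio.run(main()))
```

That overlap of many cheap waits is why the model suits the large fan-out of connections typical of IoT and distributed devices.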

Further then, io.js is an npm-compatible platform originally based on Node.js; it was created as a fork after a relationship 'spat' in the community.

npm is the package manager for Node.js -- it was created as an open source project to help JavaScript developers share packaged modules of code.

Founding and new members include Platinum members Famous, IBM, Intel, Joyent, Microsoft and PayPal.

Gold members include NodeSource and Progress Software, and Silver members include Codefresh, DigitalOcean, Fidelity, Groupon, nearForm, npm, Sauce Labs, SAP, StrongLoop and YLD!.

Node.js is the "runtime of choice" (says the team) for high-performance, low latency applications, powering everything from enterprise applications to robots to API engines to cloud stacks, IoT, and mobile websites.

"The Node.js Foundation provides a neutral structure to balance the needs of all constituents in the community: the users, vendors and contributors. As projects grow to the size of Node.js, they benefit from the neutrality, open governance and community that only a foundation can provide," said Jim Zemlin, executive director at The Linux Foundation.

The Node.js Foundation today is also announcing its ratified open governance structure. The open, technical governance model that will guide the project was determined with input from the public, including contributors from both the Node.js and io.js communities.

"An independent Node.js Foundation built on open governance is a major industry wide event as it ensures the continued adoption and growth of one of the world's most ubiquitous programming languages. The Node.js foundation will provide developers with a top development platform that when combined with the power of IBM cloud and mobile will accelerate time to application concept, deployment, and refinement," said Angel Diaz, vice president, cloud architecture & technology, IBM.

Image credit: 45cat

Big data headache relief for web-scale 'big' in-memory apps


A name you may not know, Hazelcast, has announced the general availability of Hazelcast 3.5 this week.


The company is a provider of operational in-memory computing.

What is operational in-memory computing?

The 'operational' part just means that it is working - but the 'in-memory computing' part refers to a combination of hardware and (middleware) software that lets us store data in RAM (often shared across a cluster of computers) and then process it in a parallel, distributed manner.

Near cache is nice cache

In this regard then, Hazelcast's High-Density Memory Store is now available to the Hazelcast 'client' (software) as a 'near cache' providing access to hundreds of gigabytes of in-memory data on a single application server.

All that data available in local application memory means an instant, massive increase in data access speed on fewer application server instances to power the same total throughput.
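The near-cache idea reduces to a small local map sitting in front of a slower remote store. A toy Python sketch of the concept (this is not Hazelcast's actual client API; the class and key names are invented):

```python
# A toy 'near cache': a local dict in front of a slower remote store.
# Repeat reads are served from local application memory instead of
# crossing the network to the cluster.

class NearCache:
    def __init__(self, remote):
        self.remote = remote       # stands in for the remote cluster
        self.local = {}            # the near cache itself
        self.remote_reads = 0      # count simulated network round trips

    def get(self, key):
        if key not in self.local:
            self.remote_reads += 1
            self.local[key] = self.remote[key]   # populate on first read
        return self.local[key]

store = NearCache({"user:1": "alice", "user:2": "bob"})
store.get("user:1")
store.get("user:1")                # second read never leaves the process
print(store.remote_reads)
```

The real product's trick is doing this at hundreds-of-gigabytes scale per application server; the access pattern, though, is the same.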

The firm insists that this 'vastly' increases application performance while reducing hardware footprint and management complexity.

Big apps, big headaches

"Big applications often mean big headaches for operations teams. The 3.5 release introduces a host of new tools and features to make running an operational in-memory computing platform more manageable," said the company, in a press statement.

New push-button deployment options make short work (says Hazelcast) of provisioning a new cluster, reducing it to an easily reproducible process that completes in minutes.

Predictable latency and expanded monitoring capabilities make Hazelcast more stable and make it easier to visualise out-of-tolerance events that may require action.

Cutting through Hazelcast CEO Greg Luck's arguably somewhat self-serving "we're the best and we're compelling" commentary, the company chief does also say that his firm offers a "proven business case" for companies who are looking to upgrade to in-memory for breakthrough application speed and scale.

Hazelcast 3.5 open source is available today.

JavaScript.com domain sold to create new JavaScript learning resource


Code School is not a school, not as such.


It is a Pluralsight company and online learning website for both aspiring (and existing) software developers.

This month then... Code School has announced JavaScript.com as a new online learning resource created for the JavaScript community.

JavaScript is an object-oriented programming language with prototypal inheritance used to make web pages interactive.
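Prototypal inheritance means an object delegates missing property lookups to another object at runtime; in JavaScript you would write `Object.create(proto)`. The Python sketch below only mimics that delegation chain for illustration (the class and object names are invented):

```python
# JavaScript objects delegate missing property lookups to a prototype
# object at runtime. This sketch mimics that delegation chain in
# Python; it is an illustration, not how JavaScript engines work.

class ProtoObject:
    def __init__(self, prototype=None, **props):
        self.props = dict(props)
        self.prototype = prototype

    def lookup(self, name):
        if name in self.props:
            return self.props[name]           # found on the object itself
        if self.prototype is not None:
            return self.prototype.lookup(name)  # walk up the chain
        raise AttributeError(name)

animal = ProtoObject(sound="...")
dog = ProtoObject(prototype=animal, sound="woof")
puppy = ProtoObject(prototype=dog)            # no own 'sound' property

print(puppy.lookup("sound"))   # delegated to the dog prototype
```

No classes are needed in the JavaScript version: objects inherit directly from other objects, which is the distinguishing feature of the prototypal model.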

Founder and CEO of Code School Gregg Pollack insists that JavaScript has become one of the most important coding languages and pervades nearly everything in the tech world.

"But because tech moves fast, developers need a way to stay up to date," he said.

JavaScript.com is a free community resource, geared toward helping aspiring programmers begin learning the language, and helping existing developers stay up to speed with the latest news, frameworks, and libraries.

The site features an introductory JavaScript course, resources and a news feed where stories and articles can be submitted by users.

"When the option to purchase the JavaScript domain came up, we were really excited at the potential of creating a home for JavaScript and its users," Pollack said.

He promises that what has been built so far is just the first iteration of what's to come.

"We're ready to listen to the community and see what additional resources we can provide," he said.

NOTE: Developer-aware analyst house RedMonk's most recent Programming Language Rankings listed JavaScript as the most popular coding language for programmers, noting that JavaScript has shown 'sustained growth and traction' in the coding community.


DreamFactory REST API services now on Docker


DreamFactory Software says its open source REST API services platform is now available as a Docker container.


The concept here is... easier install and management of custom deployments of DreamFactory across a wide range of platforms and environments.

The firm reminds us that Docker containers are a simple and efficient way to install DreamFactory: they are inherently cross-platform and can be run on any machine that can run Docker.

They also provide a streamlined deployment mechanism since they require inclusion of only the components necessary for the DreamFactory application.

"Docker has emerged as an extremely popular deployment model for many modern web apps, and has particular significance for the rapid development of mobile apps in production environments," said Bill Appleton, co-founder and CEO of DreamFactory.

"We fully support this modular approach as it enables developers to quickly scale the DreamFactory platform without the need to set up a complete virtual machine. Our Docker version makes it even easier to simplify and accelerate the deployment of mobile apps that leverage backend enterprise services."

Pentaho: a Blueprint™ use case for big data

bridgwatera | No Comments
| More

Don't just say Pentaho -- now say Pentaho, a Hitachi Data Systems company.

The newly divisionalised (Ed -- is that even a word?) firm appears to be solidly hanging on to its own brand name under Hitachi, which will please those who have regarded its open source credentials as among the more genuine since the firm's inception.

Pentaho (version 5.4) this month talks up its support for (and integration with) Amazon Elastic MapReduce (EMR) and SAP HANA.

Branded Blueprint

2e2d2wf23f2.JPG

The firm has branded its 'Pentaho Big Data Blueprint use case designs' as a means of guiding customers with its technology at this level.

In Pentaho 5.4 customers can now use Amazon EMR to natively transform and orchestrate data as well as design and run Hadoop MapReduce in-cluster on EMR.

The technology play here is all about giving data developer shops ways to 'operationalise' a cloud-based data refinery architecture (more blueprints) for on-demand (obviously, it's cloud) governed (that's always important) delivery of data sets.

Users here can also plug into SAP HANA's capabilities on a wider variety of data -- the firm says that Pentaho 5.4's integration with SAP HANA enables governed data delivery across multiple structured and unstructured sources.

Scaling Hadoop

Enterprises running Hadoop find that data variety and volumes increase over time, making reliable performance and scalability mission-critical priorities.

Pentaho recently executed a controlled study that demonstrates sustained processing performance of Pentaho MapReduce running at scale on a 129-node Hadoop cluster. The results build on the value of the Pentaho platform, delivering high performance processing at enterprise scale in big data deployments.

"We continue to deliver on our vision to help organizations get value out of any data in any environment with Pentaho 5.4," said Christopher Dziekan, chief product officer, Pentaho. "Our open and adaptable approach means customers choose the best technology for their businesses today without the worry of being locked-out in the future."

Esperanto openness in Red Hat Software Collections 2

bridgwatera | No Comments
| More

With just a couple of weeks to go before Red Hat Summit in Boston, the firm is clearly not holding back on news to save it for the event itself.

This week sees the firm come forward with new web development tools, extensions to dynamic languages and databases -- and support for multiple language versions.

Flag_of_Esperanto.svg.png

What Red Hat wants you to know

The central message Red Hat is trying to get across here is -- hey, we provide stable (open) tools for both traditional and container-based software application development.

The latest updates come packaged in Red Hat Software Collections 2.

The release of open products is delivered on a separate lifecycle from Red Hat Enterprise Linux itself.

This more frequent release cadence for Red Hat Software Collections 2 is intended to expedite the creation of production-ready applications... including those built with Linux container deployments in mind.

Support for multiple language versions

With significant differences between seemingly minor point releases of open source languages, developers can now select and standardise development on a specific language version that best meets their current needs while remaining confident that it is backed by Red Hat support.

For example, Red Hat Software Collections 2 includes updates to Python 2.7, continues to support Python 3.3 and also adds Python 3.4 - providing a fully-supported language library and blending developer agility with production stability.

As the benefits of Linux containers based on the Docker format continue to take centre stage in the developer world, Red Hat Software Collections 2 continues Red Hat's commitment to bridging the Linux container lifecycle with enterprise requirements, from development to deployment to maintenance --- says the firm.

This new release makes Dockerfiles available for many of the most popular collections, including Perl, PHP, Python and Ruby.

"As developer requirements grow across the application ecosystem, especially with the rise of cloud-native and composable applications, simply having access to the latest tools is not enough," said Jim Totton, vice president and general manager for the platforms business unit at Red Hat.

"These tools must also be supported so that the resulting applications can be deployed into production with confidence; Red Hat Software Collections provides this confidence through our enterprise-grade support while still enabling developers to pick and choose the tools that best fit their respective projects."

Stream processing, for dummies

bridgwatera | No Comments
| More

DataTorrent will be making its RTS core engine available under the Apache 2.0 open source license.

The firm is a player in the real-time big data analytics market.

It is also the creator of a unified 'stream and batch processing' platform.

datatorrent-banner1.png

What is stream processing?

Stream processing is the in-memory, record-by-record analysis of data in motion.

Typical examples of streaming data include data from transactions, mobile devices, web clicks and event sensors.

Stream processing promises to reduce processing time and the time to take business action based on that insight.

Users can apply the results of streaming analysis to create real-time dashboards, detect critical business situations and take action.
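The record-by-record model can be sketched in a few lines of JavaScript -- a toy simulation (the event data and alert threshold are invented), not DataTorrent's API:

```javascript
// Simulated stream of click events; in production these would arrive
// continuously from a message bus rather than an in-memory array.
function* clickStream() {
  yield { user: 'a', ms: 120 };
  yield { user: 'b', ms: 4800 };
  yield { user: 'c', ms: 95 };
}

// Process each record as it arrives, keeping only running state in
// memory -- no batch is accumulated before acting.
const alerts = [];
let count = 0;
let totalMs = 0;
for (const event of clickStream()) {
  count += 1;
  totalMs += event.ms;
  if (event.ms > 1000) {
    alerts.push(event.user); // act on the record immediately
  }
}
const averageMs = totalMs / count;
```

The point is the shape of the loop: state is updated and action is taken per record, which is what shrinks the gap between data arriving and a business decision.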

Stream i.e. lots of flow - oh, they called the company DataTorrent, I get it.

I have adoption 'issues' though!

So I'd love to use stream processing (now that I know what it is), but I have adoption issues based upon my personal hang-ups around GUI application assembly.

Gosh, that's tough - but hey, DataTorrent RTS 3 offers no-coding-required GUI application assembly, self-service real-time and historical data visualisation, and a simple data ingestion and distribution application for Hadoop.

Phew - that was lucky.

Released as Project Apex, the open source DataTorrent RTS core engine forms the foundation of DataTorrent RTS 3, now available in three editions: Community Edition, Standard Edition and Enterprise Edition.

"Big data projects are often delayed or remain stuck in the proof-of-concept phase as Hadoop can be unfamiliar and difficult to use for many enterprises", said Nik Rouda, senior analyst, ESG, "DataTorrent RTS 3 addressed common challenges with graphical tools for developers, operations teams, data scientists and business users."

The Community Edition is designed to enable developers and innovation groups in enterprises to quickly prove out their big data streaming and batch use cases and establish a business case for an enterprise grade big data project.


MongoDB: the modern application's polymorphic data journey

bridgwatera | No Comments
| More

Question: What is a technology conference without an appearance from R 'Ray' Wang (王瑞光, @rwang0) of Constellation Research?

Answer: Not much, he seems to be at every one of them of note.

But before Mr Wang and the 'fireside chat' experience at MongoDB World 2015, let's go to the keynote.

CGe-QTZXIAAqD-r.jpg

CTO and co-founder of MongoDB Eliot Horowitz reminded us that with MongoDB 3.0 we saw "document level concurrency" introduced -- this technology has now been enriched through the acquisition of the WiredTiger storage engine.

NOTE: WiredTiger will be the default storage engine from MongoDB 3.2 onward.

Hang on, did you say document level concurrency?

Yes indeed. MongoDB allows multiple clients to read and write the same data; to ensure consistency, it uses locking and other concurrency control measures to prevent multiple clients from modifying the same piece of data simultaneously.

"Together, these mechanisms guarantee that all writes to a single document occur either in full or not at all and that clients never see an inconsistent view of the data," said the firm.

The news on day two of this show hinges around the following statement:

For the First Time, Modern Application Data from MongoDB Can be Easily Analysed with Industry-standard Business Intelligence and Visual Analytics

What does that mean?

The company announced a new connector that links MongoDB to industry-standard business intelligence (BI) and data visualisation tools.

Designed to work with every SQL-compliant data analysis tool on the market, including Tableau, SAP Business Objects, Qlik and IBM Cognos Business Intelligence, the connector is currently in preview release and expected to become generally available in the fourth quarter of 2015.

Users will be able to analyse the new data being managed in MongoDB for what the company calls their 'modern applications', along with the traditional data in their SQL databases and spreadsheets using BI and visualisation tools deployed on millions of enterprise devices.
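Conceptually, such a connector has to map SQL predicates onto MongoDB query documents. Here is a hypothetical sketch of that translation for a tiny subset of SQL (the function name and operator table are invented for illustration; the real connector is far more sophisticated):

```javascript
// Map SQL comparison operators to their MongoDB query-operator spellings.
const OPS = { '=': '$eq', '>': '$gt', '<': '$lt', '>=': '$gte', '<=': '$lte' };

// Translate a single "field OP value" predicate into a MongoDB-style
// filter document, e.g. "age >= 21" becomes { age: { $gte: 21 } }.
function whereToFilter(where) {
  const match = where.match(/^(\w+)\s*(>=|<=|=|>|<)\s*(.+)$/);
  if (!match) throw new Error(`unsupported predicate: ${where}`);
  const [, field, op, raw] = match;
  // Numeric literals become numbers; quoted literals lose their quotes.
  const value = isNaN(Number(raw)) ? raw.replace(/^'|'$/g, '') : Number(raw);
  return { [field]: { [OPS[op]]: value } };
}

const filter = whereToFilter('age >= 21');
```

The "time to insight" saving mentioned below comes from doing this translation live, instead of exporting MongoDB data into a relational store first.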

NOTE: Previously, organisations had to move data from MongoDB into relational databases for analysis and visualization, resulting in added time to insight, cost and complexity.

With the emergence of new data sources such as social media, mobile applications and sensor-equipped "Internet of Things" networks, organisations can extend BI to deliver real-time insight and discovery into such areas as operational performance, customer satisfaction and competitor behaviour.

"For many years now, MongoDB has provided extensive aggregation and analytical capabilities native to the database. However, users have been unable to capitalise on the rich ecosystem of SQL-based data analysis and visualization tools. With this new connector, MongoDB opens up a huge new realm of possibilities for everyone from executives to business analysts to data scientists to line of business staff," said Horowitz, Co-Founder and CTO of MongoDB.

"This is a lightweight connector with heavyweight capabilities. The possibilities are endless," he added.

Initially, MongoDB has been working with joint customers of Tableau Software to define the critical feature set and to validate the performance and scalability of the integration.

Schema good, polymorphic schema better

Horowitz reminded us that we often think of MongoDB as a schema-less database, but this isn't really true: every database has a schema. MongoDB simply has dynamic schemas with the ability to change and morph - and this is why we call them polymorphic schemas.

Remember... use of NoSQL dynamic polymorphic schemas means that dissimilar data sets can be stored together and this is good for the new world of unstructured big data.
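A small illustration of what a polymorphic schema buys you: dissimilar records side by side in one collection, with consumers branching on the fields actually present rather than on a fixed table layout. All the example data here is invented:

```javascript
// Two quite different record shapes stored together, as a dynamic
// (polymorphic) schema allows.
const products = [
  { sku: 'cd-001', type: 'album', artist: 'Blondie', tracks: 12 },
  { sku: 'bk-001', type: 'book', author: 'P. Mudd', pages: 304 },
];

// Code inspects each record's own fields instead of assuming one schema.
function label(product) {
  return product.type === 'album'
    ? `${product.artist} (${product.tracks} tracks)`
    : `${product.author} (${product.pages} pages)`;
}

const labels = products.map(label);
```

In a rigid relational table, the second record's fields would force either NULL-riddled columns or a second table; here the variation is simply data.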

It's like there's this whole modern 'polymorphic application data' journey that we are traveling on... we need to deconstruct this statement and analyse it further.

That's Numberwang... Ray Wang

Back to where we started and our fireside chat hosted by Ray Wang, who had to juggle the presence of the highly entertaining Philip Mudd.

Mudd is a published author and former deputy director of the CIA's counterterrorist center and FBI National Security Branch. His comments on the way his team was using the kinds of data analysis we are talking about here in relation to MongoDB left the audience well entertained and somewhat stunned.

He covered a good deal of information on the NSA... and how Al Qaeda had run its funding operation.

The real secret to big data?

"We don't need to summarise all the crap we are collecting, we need to summarise what we can do with this crap so that we start with the decisions and work our way backwards," said Mudd.

"If you are a great analyst you realise that 90% of what you know if of no use because it doesn't lead to a point of 'decision advantage' now," added Mudd.

Insight is context plus decision plus predictability, said Wang... there's a whole additional story to tell here.


Yahoo! cloud benchmark 'scalability leadership' for MongoDB

bridgwatera | 1 Comment
| More

The Yahoo! Cloud Serving Benchmark (YCSB) has 'determined' (by its own measure) that MongoDB provides greater scalability than Cassandra and Couchbase in all tests, by as much as 13x.

This suggestion, or determination if you give this benchmark full credence, was provided by independent benchmarking and performance testing organisation United Software Associates.

The firm now suggests that MongoDB provides "more predictable scaling" than what has been somewhat disparagingly called 'niche NoSQL alternatives', including those named above.

No need for modesty?

The report proposes that MongoDB "overwhelmingly outperforms" Cassandra and Couchbase in the following kinds of deployments:

• where the data size exceeds that of RAM,
• where data is partitioned across multiple servers, and
• where data is replicated for high availability.

"While performance is important, it must be considered along with many different criteria when evaluating database technology," warns Sam Bhat, CEO of United Software Associates.

"The goal of this report is to take a closer look at scalability, another critical factor used to determine the right database technology for a project. MongoDB proved to have the best and most predictable scalability, better than either of the niche NoSQL products," he said.

The following results are based on evaluation of two workloads using YCSB: Workload A, an equal mix of reads and updates, and Workload B, which consists of 95% reads and 5% updates.

All tests were performed with 400M records distributed across three servers, which represents a data set larger than RAM. Each test performs 100M operations and records throughput and latencies at the 95th and 99th percentiles for reads and updates separately.
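Those 95th and 99th percentile figures can be computed with the common nearest-rank convention. A sketch with invented latency samples follows (YCSB's own reporting may use a different method):

```javascript
// Nearest-rank percentile over a sample of latencies: sort, take the
// value at the ceil(p% * n)-th position (1-based).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank, 1) - 1];
}

// 20 made-up read latencies in milliseconds, mostly fast with two outliers.
const latenciesMs = [2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 12, 25, 90];
const p95 = percentile(latenciesMs, 95); // rank ceil(19) = 19 → 25
const p99 = percentile(latenciesMs, 99); // rank ceil(19.8) = 20 → 90
```

This is why benchmarks quote tail percentiles rather than averages: the mean here hides the 90ms outlier that the 99th percentile exposes.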

According to a press statement focused on this news, "In the 50/50 workload, MongoDB provides over 1.8x greater throughput than Cassandra, and nearly 13x greater throughput than Couchbase."

Don't just take our word for it

"We encourage readers to use our findings as one of many inputs to their own evaluations and to thoroughly test a range of database functionality that fits their application," said Bhat.

NOTE: The latest versions of each product were used: Couchbase 3.0.2, Cassandra 2.12 and MongoDB 3.0.1 with WiredTiger. In all tests, the best practices for each product were followed and multiple rounds of tests were run to determine the optimal number of threads and best performance for each product.

1 Mongoqwpdjoqowihf.png

ObjectRocket by Rackspace: this one's dedicated to you, MongoDB

bridgwatera | No Comments
| More

This week sees MongoDB World 2015 (#mongodbworld ) held in New York.

In case you need reminding, MongoDB is the company behind the open source NoSQL cross-platform document-oriented database of the same name, favouring dynamic schemas.

Also in attendance, and second only to Teradata in its sponsorship of this event, is Rackspace.

In case you need reminding, Rackspace is a managed cloud provider with an arguably good pedigree in open source technologies (it was a founder of the OpenStack cloud operating system) -- the firm has differentiated its market positioning and technology proposition to hinge around its cloud service excellence.

So what happened in New York?

Rackspace announced a dedicated option of ObjectRocket for MongoDB databases.

In case you need reminding, ObjectRocket is a MongoDB database as a service (DBaaS) provider that was acquired by Rackspace in 2013 - with ObjectRocket's open source-based MongoDB solution, Rackspace has broadened its OpenStack-based open cloud platform to offer users a NoSQL DBaaS.

objectrocket.png

The proposition here is a solution that combines the speed and scalability of the ObjectRocket purpose-built platform with fully dedicated hardware and networking for compliance-dependent use cases.

Who cares?

Actually, users probably should care: this combination could help data-centric development shops meet the performance, security and compliance requirements of high-risk industries such as healthcare, financial services and insurance.

I still don't get why this is special?

The justification hinges partly on technology built on a custom-designed dedicated architecture.

The other part of it is that ObjectRocket for MongoDB can potentially power more demanding database workloads by providing full utilisation of resources for host machines.

Isolation, in a nice way

There's a bit more, in fact: it also provides a level of isolation that data-driven developers will want.

Co-founder of ObjectRocket Chris Lalonde insists that 'dedicated ObjectRocket' here lets applications scale to millions of MongoDB operations per second while providing higher levels of security.

Why?

Because it gives users a fully managed, dedicated rack of MongoDB.

"This offering brings together all the features customers have come to expect from ObjectRocket such as a fully isolated environment, encryption at rest and dedicated networking components. On top of all of that, we have a team of experts for MongoDB staffed 24x7, so customers can focus on building their business," said Lalonde.

Features of ObjectRocket for MongoDB now include a suite of developer and automation tools to speed up time-to-market and help reduce downtime, with both backend (automated deployment) and customer-facing tools making the service easier to operate.

There's also data portability: it uses open source Community Edition MongoDB, so customer data remains portable.

Application portability

This technology runs applications anywhere, including in Rackspace or AWS data centers via AWS Direct Connect with integrated Access Control List (ACL) sync.

Rackspace says it is committed to supporting fully managed versions of the most popular NoSQL, big data and relational databases including MongoDB, Redis, Hadoop, Apache Spark, Oracle Database, Microsoft SQL Server, MySQL, Percona and MariaDB.

Open source + big data = Apache: Big Data

bridgwatera | No Comments
| More

The Linux Foundation, in conjunction with The Apache Software Foundation, has announced Apache: Big Data, an inaugural conference to take place on September 28-30 in Budapest, Hungary.

1 apgdwiudgiw.png

The event will take the place of ApacheCon Europe.

It will be followed immediately by another new event, ApacheCon: Core, which will bring together developers and code committers to host community-driven sessions focused on the range of non-Big Data ASF projects.

Sources at the Foundation(s) plural insist that "virtually all" of the leading big data projects including Bigtop, Crunch, Falcon, Flink, Hadoop, Kafka, Parquet, Phoenix, Samza, Spark, Storm, Tajo and others are developed under the auspices of The Apache Software Foundation (ASF).

Collaboration is (of course) essential to success in open source, which means developers need a neutral venue in which to come together to advance their work.

This is why sessions at the co-located ApacheCon: Core will be project-driven, with developers and code committers able to organise sessions around their specific projects, enabling a larger variety of projects to be represented than any prior Apache event.

Both events are accepting speaking proposals at the time of writing.

Contain(er-ize) yourself, it's Intel Clear Linux

bridgwatera | No Comments
| More

The Clear Linux Project for Intel Architecture is a new distribution of Linux (not the normal just another Linux distro kind, keep reading) that has been built specifically for 'various' cloud use cases.

The distro is container-centric, like Red Hat Atomic Host.

NOTE: Red Hat Enterprise Linux 7 Atomic Host is an operating system optimised for running the next generation of applications with Linux containers.

clearlinux_b3.png

Intel brings its container centricity in here (Ed - not an actual expression, surely?) with its Clear Containers technology and says it is working to improve container security by using the VT-x technology of Intel processors.

Intel: lean-and-fast is better than big-and-universal

Intel explains that its aim was NOT to make 'yet another general-purpose Linux distribution'; so, while it has included 'many' software components from the OpenStack Foundation, it chose (among other culls) not to include a GUI or printing support.

Intel's strategic aim here appears to be creating a place where we can use BOTH the advantages of containers and the advantages of kernel-native Linux features, together.

Intel open source software engineer Arjan van de Ven has said that his firm wanted to create a technology for containers that uses the isolation of virtual-machine technology along with the deployment benefits of containers.

"As part of this, we let go of the "machine" notion traditionally associated with virtual machines; we're not going to pretend to be a standard PC that is compatible with just about any OS on the planet," wrote van de Ven on LWN.net, the website formerly known as Linux Weekly News.

Also in this zone of technology is CoreOS.

NOTE: CoreOS uses Linux containers to manage services at a higher level of abstraction than normally seen -- a single service's code and all dependencies are packaged within a container that can be run on one or many CoreOS machines.

Intel underlines the goal of Clear Linux OS as an initiative to 'showcase the best of Intel Architecture technology' from low level kernel features to more complex items that span across the entire operating system stack.
