Intel unifies with Unity to extend Android support


Intel has announced a strategic collaboration (Ed - is there any other kind?) with Unity Technologies.

INDUSTRY NOTE: Unity is a high-performance end-to-end development platform for creating interactive 3D and 2D experiences.

The firms will come together now with the aim of advancing the development of Android-based applications on Intel architecture.

The agreement is engineered to help Intel's mobility push (say the firms) as millions of developers using the Unity development platform can now bring native Android games and other apps to Intel-based mobile devices.

PRODUCT NOTE: Unity adds support for Android across all of Intel's current and future processors including both the Intel Core and Intel Atom processor families.

Unity will ensure Intel product enhancements, including both graphics and CPU performance improvements and features, will be integrated into future releases of the Unity 4 and Unity 5 product lines.

DEVELOPER NOTE: In addition, developers using Unity can now add support for Intel architecture in their applications or produce native applications for Intel architecture only with minimal extra effort.


"We've set a goal to ship 40 million Intel-based tablets this year and expect more than 100 Android tablet designs on Intel in the market by the end of this year," said Doug Fisher, Intel corporate vice president and general manager of the software and services group.

"Our collaboration with Unity will give its nearly three million developers the necessary software tools and support to build amazing Android experiences on Intel architecture," added Fisher.

"Unity is used by half of all mobile game developers, and many of them have been asking for increased support for Intel-based devices running Android," said David Helgason, CEO, Unity Technologies. "We are proud to be working with Intel to ensure that Unity provides the smoothest and highest performing experience possible on Intel platforms."

Mini case study
Intel and Unity have given a few software developers access to a very early version of the Unity code base that supports Intel graphics and CPU technology. Early indications are that this announcement will cause tremendous excitement in the game developer community. SEGA is one of the companies jumping on this opportunity quickly, having already added x86 support to its Unity-based Sonic Dash title.

Chris Southall, Studio Head of Hardlight, has stated, "SEGA's Hardlight is one of the very first mobile studios to utilize the x86-enabled version of Unity in one of its games. We've seen impressive performance gains by 'going native' - it's been great working with Unity and Intel on this."

In-the-field debugger, when offsite is a bugger


Undo Software has released UndoDB Out-and-About.


Described as a "new way of licensing" its reversible debugging software, this product is aimed at vendors whose software is deployed on:

• customer sites,
• in the field,
• on a customer machine,
• in real life (OK, we get it -- Ed)

The software is used by engineers to investigate, find and (hopefully) then fix customer-critical bugs.

UndoDB Out-and-About helps track down software failures, such as intermittent issues and memory corruption errors.

Offsite is a bugger

It is particularly useful (says the firm) when customers do not want to send their highly confidential, mission-critical data off-site to the vendor for them to reproduce the bug -- oh ok, we get the "in the field" repetition now.

UndoDB Out-and-About provides exactly the same functionality as Undo Software's flagship product, UndoDB, but is licensed for use on a machine not owned by the licensee (OK we get it, not offsite -- Ed).

Available on Linux and Android, UndoDB allows developers to record their program's execution and then rewind and replay their C/C++ code in real-time to find bugs more quickly, increasing productivity and helping to meet development deadlines.
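
For the conceptually curious, here is a hedged toy sketch of the record-then-rewind idea -- written in Python purely as an illustration of the concept, not of UndoDB itself or its C/C++ tooling: snapshot the state at every step of a buggy routine, then walk the recording backwards to the first step where things went wrong.

# Toy illustration of the record/rewind concept behind reversible debugging
# (NOT UndoDB's implementation -- the names and the bug are invented).

def buggy_accumulate(values):
    """Run the computation, recording a snapshot after every step."""
    history = []                        # the "recording"
    total = 0
    for i, v in enumerate(values):
        total += v if i != 3 else -v    # deliberate bug at step 3
        history.append((i, total))      # snapshot of state after step i
    return total, history

def rewind_to_first_bad_step(history, expected):
    """Replay the recording backwards until the state last looked correct."""
    for i, total in reversed(history):
        if total == expected[i]:
            return i + 1                # the step after the last good one
    return 0

values = [1, 2, 3, 4, 5]
expected = [1, 3, 6, 10, 15]            # what a correct run would record
total, history = buggy_accumulate(values)
print("final total:", total)            # 7 rather than 15
print("first bad step:", rewind_to_first_bad_step(history, expected))  # 3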

CEO insight, always nice

"Software vendors operate in a complex development ecosystem, and it is imperative that they work closely with their customers to ensure software is meeting the highest quality standards in the field," said Greg Law, CEO and co-founder, Undo Software.

"By extending the power of reversible debugging to customer sites with UndoDB Out-and-About, vendors can find and fix customer-critical bugs faster, protecting client relationships while increasing productivity."

Did we mention this software can be used in-the-field? Just checking.

Hadoop is only a 'landing zone', THEN you start big data analytics


Pentaho thinks its new Streamlined Data Refinery solution architecture (optimised for the HP Vertica Analytics Platform) could provide a better (more "refined and sophisticated") route to big data analytics.

This software is designed to help create analytic datasets within Hadoop and immediately push that data to HP Vertica.

The firms' vision is one of analytics in Hadoop where users can:

• blend,
• explore,
• analyse and,
• visualise all of their data.

Landing zone

Users are saying that while Hadoop is a "great landing zone" for big data, the real analysis of granular data needs an additional, custom-designed layer on top.

"We're using Pentaho and HP Vertica to quickly slice and dice terabytes of data and millions of daily records, plus record click stream data and pixel logs," said Jaiesh Khavani, senior manager, BI & data warehousing at Santa Monica-based e-commerce fashion company Beachmint.
"While Hadoop is a great landing zone for our big data requirements, to truly engineer data sets for predictive analytics, we need purpose-built platforms like HP Vertica and Pentaho to provide the data integration, reporting, and visualization capabilities to drive meaningful insights," added Khavani.
Through this collaboration, analytics-ready datasets are blended using HP Vertica for analytics against minimally modeled data.


According to Pentaho's Christopher Dziekan, as one of the four big data blueprints built by his firm, the Streamlined Data Refinery for HP Vertica offers data developers in-cluster data integration capabilities to power scalable and interactive analytics for end users.


Could fundamental open cloud freedom die?



The Open-Xchange Summit will be held in late September in the German city of Munich.

Its organisers describe the gathering as a four-day event for technologists committed to developing open standards for the formation and continuation of a trusted cloud ecosystem.

The event includes a guest keynote address entitled 'Free is a Lie' from Aral Balkan, a designer & social entrepreneur.

Fundamental freedoms

Balkan claims he is working to create independent technologies that protect our fundamental freedoms & democracy.

Trust in the cloud forms the cornerstone of the Summit agenda with topics covered including:

• the surveillance state,
• the encryption economy,
• honest business models and,
• keeping trust amongst customers.

"There is a very real demand for secure cloud services and service providers are moving to cloud packages that offer their customers that extra peace of mind," said Rafael Laguna, CEO of Open-Xchange.

"Open source protocols and tools can ensure transparency as well as offering end users choice and full flexibility. At OX Summit we aim to support those providers as well as spur on the debate around the future of a free and open internet".

Ya ya ya, das Muffatwerk ist gut ya?

The conference takes place at the Muffatwerk, a former hydroelectric power station built on the banks of the Isar river in the heart of Munich.

The Summit itself, scheduled for Thursday 18th, will be officially opened with a keynote address entitled 'Switch to Open: Why an Honest Business Model is so Important' from Rafael Laguna, CEO Open-Xchange.

Compuware & the 'horizontal' Agile LoB DevOps dream


Compuware has been vocal recently on the subject of open source and its impact upon DevOps (the confluence of software application development and 'operations').


Creepy open source

The firm suggests that open source can often creep into organisations without central IT knowing i.e. as side projects outside of its purview.

These projects are often driven by line of business (many big data projects are actually spearheaded by LoB), which means that where many different projects are running, there is a lack of visibility and standardisation is almost impossible.

QED: many inefficiencies can appear.

Compuware believes there is a trend towards breaking the vertical functional silos (Dev, Ops, LoB, ...) within organisations, as most (large) organisations are having to re-structure with 'horizontal' teams in order to work on a given business objective with stakeholders from each of the old silos.

In a way, this is 'Agile' for the enterprise, including Dev and Ops around a LoB value.

Unacceptable for applications to fail

"It is unacceptable for applications to fail now, they are customer-facing and people need to be able to use them to be productive. This means there have been a lot of additional reporting requirements put in and quality gates that developers need to pass through -- which is good in terms of improving code and reducing the amount of fixes that have to be done in the field, but does slow things down and means developers are actually spending just a fifth of their time actually coding -- this needs to be changed so that developer time is more productive," said Wolfgang Gottesheim, technology strategist for Compuware.

The challenge now (suggests Compuware) is integration in the face of complexity i.e. developers need to be able to ensure that the code they are writing is optimised across the application delivery chain, not just for their environment at hand.

For example, if we have a distributed application that needs to call on the mainframe to complete transactions, we need to ensure that that application is written in a way that does not use up extra MIPS.

The firm insists that developers need end-to-end visibility so that they can understand how change and updates will impact across the application environment.

Is the 'horizontal' Agile LoB DevOps dream real yet? This debate is not over.







Android Lapse It photo app: puts pro cameramen out of a job?


The rise in prominence of LibreOffice with what is now claimed to be 80 million active users is fuelling discussion around those Free & Open Source Software (FOSS) applications that do as good a job (if not better) as their proprietary counterparts.


We don't usually review apps as such on the Computer Weekly Open Source Insider blog, but Lapse It for Android (it's native for iOS too) is worth calling out.

Available on the Google Play store, Lapse It is a free time lapse photography app for your smartphone or tablet.

There is a paid-for "Pro" version too, with more features, fewer ads and a higher capture resolution of up to 1080p.

In the lite version, the resolution is limited to 240p.

The app is promoted with the following blurb, "Events such as movement of clouds, the rising and setting of the sun, a party with your friends, even individual activities or anything else that you can imagine. You will see them in a new way and otherwise undetectable patterns will emerge."

You can upload directly to YouTube, Facebook and other social/video sites.

Lapse It has a native C++ render engine and a fully featured range of settings to apply.

The app renders videos in Android's native format, so you can watch them on your phone, share them across the web and play them on other devices.

Why dairy fresh openSUSE Linux is even creamier


The openSUSE Project is moving the development version of openSUSE (known to family and friends as Factory) to a "rolling release" development model.


Huh?

What is rolling release?

A rolling release is a continually developing software system where software is recurrently updated to be (what openSUSE hopes can stand as) a tested and stable "daily (dairy) fresh" bleeding-edge distribution.

As Matt Hartley explained on Datamation earlier in 2014, with any Linux distribution you have the ability to roll back a problematic update should something go horribly wrong.

"How easy or complex this is, really depends on the Linux distribution," he said.

Hartley writes further, "With a rolling Linux distribution, on the other hand, you're able to tackle "upgrades" in nice bite-sized pieces. This means if you're updating your package management daily, odds are good your package cache is going to keep the older version of any newly upgraded package."

Back at openSUSE, the team says that the rolling release model will "shorten the stabilisation process" in openSUSE releases and eliminate the need for pre-releases or what are known as 'milestones'.


NOTE: A milestone release is regarded as a pre-alpha software version released to the community before complete (or even extensive) testing has been carried out -- milestones feature heavily in open source development as they refer to points in development where specific functionality has been incorporated into the software project.

What used to happen

In the old openSUSE development model, an army of packagers would shoot new packages and updates to a playground called Factory, with a relatively small team taking care of the integration process of all those packages, which sometimes took a long time to stabilise and release.

What happens now

The new Factory model balances responsibility among packagers, testers and end users while putting more emphasis on automated quality assurance.

"With this new openSUSE development model, users get the latest free software packages without waiting for the next release," said Richard Brown, openSUSE board chair.

"With a daily fresh Factory distribution making it easier for those who want to preview and test, we hope to see more users and contributors, leading to faster fixes and even higher quality. Factory is critical as it provides the base technology for openSUSE and SUSE Linux Enterprise, which is used by tens of thousands of organizations around the world."


Will SAP HANA developer tools actually now code 'for the cloud, in the cloud'?


Like a lot of companies that appear to have experienced an 'open epiphany' somewhat akin to the sudden legitimisation of green politics in the heady days of Thatcherism... SAP has also plighted its troth at the holy altar of openness in recent years.


July finds the Walldorf, Baden-Württemberg based firm detailing its new "sponsorship" (Ed - we used to call it support didn't we?) of two key open source communities:

• Cloud Foundry, the open Platform-as-a-Service (PaaS) and,
• OpenStack Foundation, the open Infrastructure-as-a-Service (IaaS).

"That's very nice, thank you," said the community -- but of more immediately tangible substance perhaps is news of SAP's new cloud-based developer tools:

• SAP HANA Answers and,
• SAP River Rapid Development Environment.

"The developer and open source community are key to breakthrough technology innovation," said Bjoern Goerke, executive vice president, products and innovation technology, SAP SE.

Thanks Bjoern, we agree -- but can we actually talk about "coding for the cloud in the cloud" now?

"Through the CloudFoundry and OpenStack initiatives, as well as new developer tools, SAP deepens its commitment to the developer community and enables them to innovate and code in the cloud."

Oh okay thanks, that's what we wanted to know.

The Computer Weekly Developer Network spoke personally to Microsoft's Scott Guthrie on this point at the end of last year and (as fabulously splendid a code guru as Guthrie unquestionably is) he was more sketchy about this subject at the time.


Systemanalyse und Programmentwicklung Gesellschaft mit beschränkter Haftung (or SAP to you and me) meanwhile says that in recent months SAP announced the code contribution and availability of a Cloud Foundry service broker for SAP HANA.

NOTE: Developed in close association with Pivotal and available as open source on GitHub, the service broker will allow any Cloud Foundry application to connect to and leverage the in-memory capabilities of SAP HANA.

SAP also recently announced an agreement with Databricks -- the company founded by the creators of Apache Spark -- to deliver a Spark distribution for integration with SAP HANA platform that is based on Apache Spark 1.0.

The firm also contributed key portions of the SAPUI5 framework as open source code on the GitHub site under an Apache Version 2.0 license.

"SAP HANA Answers is a knowledge-hub website for -- via the SAP HANA Answers plugin, the site is directly accessible from the SAP HANA Studio, an Eclipse-based integrated developer environment (IDE) for administration and end-to-end application and content development for SAP HANA," said the company, in a press statement.

As part of what it claims to be its "rich developer experience" (Ed - thanks for that PR team), in June the company announced the beta release of SAP River RDE as part of SAP HANA Cloud Platform.

Is SAP's cloud developer message squeaky clean and better than most?

No -- none of the major vendors has a flawless argument to put forward, but this announcement looks, for the most part, "tangible with real tools and resources" ... and that is the best barometric gauge we can use at this time.


Is Microsoft openness a wolf in sheep's clothing?


Higher profile members of the key software application developer press were invited to the Microsoft Build conference this year to listen to the company's vision for where its ecosystem will develop over the next decade.

Listening from afar, it has been fascinating to see how Microsoft's conceptualising under the new grand fromage Satya Nadella has been analysed by the press and the programmer community.

Reporting for Time Magazine, Harry McCracken said that it felt like a 21st-century take on "Windows Everywhere" ...

Microsoft Build attendees were able to learn how the firm is aiming to proffer forth Windows development channels for not just desktops but tablets, phones and the Xbox One extending onwards into Internet of Things technologies.

McCracken's analysis is positively sceptical (as, arguably, most Microsoft analysis should surely be today), but more cutting perhaps was Jim Lynch this month on IT World.

Lynch highlights the fact that Nadella is doing what the bulky showboater ex-CEO Steve Ballmer could never do i.e. let go of Windows as the centre of Microsoft's universe.


"Instead, Microsoft will be satisfied if people are using services like Office 365, Skype, OneDrive and Bing, whether they're on an iPhone, Android device or Windows PC," writes Lynch.

Is Microsoft really that willing to change?

Is Microsoft (as a huge aircraft carrier of a business) really that nimble and agile?

Is Microsoft really that open in the face of a long pedigree and product line hinged around proprietary technologies?

Is Microsoft really clear in its message sets delivered to the software application developer press?

Is Microsoft strategising to toe the openness party line and talk about wider product and platform development streams while all the while consolidating upon its core business model all the way to the bank?

We could not possibly say.

Open Interconnect Consortium: can your fridge talk to your toaster yet?


Apologies for the deliberately tabloid headline, but here's the point: a group of industry vendors has formed the Open Interconnect Consortium with the aim of advancing interoperability for the Internet of Things (IoT).


So... that would be SMEG for fridges, Dualit for toasters, Bosch for fridges and boilers and perhaps Fitbit for wearables would it?

Nothing quite so consumer-tangible at this stage we're afraid.

No matter though, the companies involved here are Intel, Atmel, Broadcom, Dell, Samsung and Wind River.

The new consortium will seek to define connectivity requirements to ensure the interoperability of billions of devices projected to come online by 2020 -- from PCs, smartphones and tablets to home and industrial appliances and new wearable form factors.

http://www.openinterconnect.org/

The Open Interconnect Consortium (OIC) intends to deliver:
• a specification,
• an open source implementation,
• plus also a certification program for wirelessly connecting devices.

The first open source code will target the specific requirements for smart home and office solutions, with more use case scenarios to follow.


"The rise and ultimate success of the Internet of Things depends on the ability for devices and systems to securely and reliably interconnect and share information," said Doug Fisher, Intel corporate vice president and general manager of the software and services group.

"This requires common frameworks, based on truly open, industry standards. Our goal in founding this new consortium is to solve the challenge of interoperable connectivity for the Internet of Things without tying the ecosystem to one company's solution."

What do the non-partners think?

On the news, Steve Nice, CTO of Reconnix, an open source software specialist, made the following comment:

"The Internet of Things is the next frontier for the technology industry, but there is still a lot of uncertainty around how it will work in practice. Two rival groups working on a set of standards now would suggest that we are still some years away from mass adoption. History has taught us that consumers rarely win if they are to back early in a 'format war,' just ask anyone with a Betamax recorder in the attic.

"It's important that both The Allseen Alliance and Open Interconnect Consortium projects are open source. The Internet was built on free and open principles, and trying to build a proprietary framework would simply inhibit innovation. The future of the Internet and the cloud is open source, you only have to look at Cisco's recent massive investment in open cloud infrastructure for evidence of that."







DevOps is inherently open source, discuss


DevOps has (arguably) a lot of guff, fluff and puff attached to it right now.

We're still not sure if this portmanteau-propelled "coming together" of two core technology disciplines is really one new perfectly formed beast.

Is it Ops that have gotten good at Dev... and so progressed onwards (Ed - that never happens surely?) or Devs that can handle a bit of Ops?

Is it really one person?

Or is DevOps actually a movement, a cultural approach and a method?

So DevOps is actually 2, 4, 8, 16, 32 or 64 people and so on.

While we're ranting... shouldn't we also argue that DevOps has true open source roots?

The argument goes as follows....

DevOps (developer-operations) was born out of the FOSS (Free and Open Source Software) movement by its very nature, because it aims to address the "incongruous nature of integrating traditional LOB applications" with other applications.


This is the view of Paul Greer, chief architect and co-founder at RedPixie -- a British technology firm, which specialises in transforming IT environments.

"DevOps has a lot to do with automating and repeating, a practice that grew significantly with the widespread adoption of free tooling and frameworks that were built by the open source community," said Greer.

"This became popular in the early 2000s with the automation of software builds but now encompasses platform provisioning as well as software deployment," he added.

Greer goes on to argue that proprietary tooling may provide some benefits in organisations that have standardised on a single vendor stack, but even these tools would be short-lived if they prevented DevOps from controlling other vendors' stacks.

"Line of Business application vendors products are changing through open interfaces and cloud based hosting which negate the customer from having to concern themselves with platform provisioning," he concludes.

DevOps is open source, or the pure bits are at least -- the debate continues.

No silver bullets in virtualisation & containerisation (even with Docker)


Docker isn't actually everywhere, but the open source software designed to allow a Linux application (and its dependencies) to be packaged as a container has enjoyed massive success recently.

Search AWS recently reported that, "AWS Elastic Beanstalk has updated its support for a Linux container that experts say could grow into a new standard for application portability among Linux servers."

That Linux container is, logically then, Docker.


Software application developers can package applications using Docker version 1.0 "on their own" so-to-speak -- or, equally, they have the option to provide a text-based Docker file with instructions on how to create an image.

Container-based virtualisation techniques employed with Docker work to isolate software applications from each other on a shared operating system.

Containers are portable across different Linux distributions and so, logically then, the software applications themselves are able to run in any Linux environment.

What does Docker compete with?

So you would naturally expect Docker to compete with some pre-existing technologies and it does -- it lines up against proprietary application containers such as VMware vApp technology and infrastructure abstraction tools like Chef.

Principal consultant at Cigital is Paco Hope.

Hope reminds us that Docker is cross-platform, allowing developers to target Mac, Windows, and Linux easily.

He asserts that this allows developers to package up all the various libraries, bits and pieces that are necessary (without requiring a user to download and install them all) -- and that it also "should have" the security benefit of being a sandbox that can't be escaped.

But an exploit was released recently that allows code that is supposed to be contained inside a Docker container to access files in the operating system where it is running.

A developer who sends you a Docker-based application could actually get files off your PC, even though that's supposed to be prevented by Docker's technology.

Hope explains the situation in full below:

"Much in the way that mobile devices can be jailbroken, virtualisation containers of all kinds can be susceptible to malicious code. Multi-tenant computer systems resemble multi-tenant buildings in real life: Often the defences that protect one tenant from another are much weaker than the defences that protect all tenants from random outsiders. Any part of the application that is virtualised this way is immediately less trustworthy than it would be running on a company's own servers. Software designers must consider malicious containers when designing security controls, despite the fact that virtualisation might be improving security in many other ways. This is a bug we should expect to be fixed quickly, it's not a flaw in virtualisation. Virtualisation and containerisation are generally good things. No technology is a security silver bullet, however."

Transactional in-memory analytics & grilled cheese sandwiches


There's a lot of Spark around this week.

Well, it is the Spark Summit 2014 after all -- Apache Spark is a Hadoop-compatible computing system for big data analysis through in-memory computation with "simple coding through easy APIs" in Java, Scala and Python.
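
For anyone who has not yet played with it, here is a minimal hedged sketch of what those "easy APIs" look like in Python (PySpark): a word count whose result is cached in memory so repeated queries do not re-read the data. The input path and application name are placeholders, not a real dataset.

# Minimal PySpark sketch: an in-memory word count (illustrative names/paths).
from pyspark import SparkContext

sc = SparkContext(appName="wordcount-sketch")

lines = sc.textFile("hdfs:///data/logs.txt")      # placeholder input path

counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b)
               .cache())                           # keep the result in memory for reuse

print(counts.take(10))                             # first ten (word, count) pairs
sc.stop()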

Alteryx and Databricks are collaborating to become the primary committers to SparkR, a subset of the overall Spark framework.


In addition, the firms are partnering to [attempt to] accelerate the adoption of SparkR and SparkSQL, in order to help data analysts get greater value from Spark as (it says here) "the leading" open-source in-memory engine.

Apache Spark, an open source data analytics framework, has quickly been gaining traction for its fast and scalable in-memory analytic processing capabilities inside and independent of Hadoop.

SparkR is an R package that enables the R programming language to run inside of the Spark framework in order to manipulate the data for analytics.

"The collaboration between Alteryx and Databricks will foster faster delivery of a market leading in-memory engine for R-based analytics within Hadoop that is available for the Spark community," said the companies, in a joint press statement.

DataStax is also present -- the company behind the distributed database management platform built on Apache Cassandra has announced its Enterprise 4.5 edition.

"Spark and Cassandra form a natural bond by combining industry leading analytics with a high-performance transactional database," said Arsalan Tavakoli-Shiraji, head of business development, Databricks.

Tavakoli-Shiraji (Ed - was double-barrelled a good idea?) insists that today we need a unified platform for in-memory transactional and analytical tasks with:

• enterprise search,
• security,
• grilled cheese sandwiches,
• in-memory and,
• analytics.

NON-TECHNICAL NOTE: Please do not mix grilled cheese recipes with transactional or analytical workloads, we just threw that in to see if you were listening.

DataStax Enterprise 4.5 adds a new Performance Service to "remove the mystery" of how well a cluster is performing by supplying diagnostic information that can easily be queried.

Also of interest here is the integration of Cassandra data alongside Hadoop - so developers can run queries across both transactional data that has just been created and historical data held in Hadoop.

Plus also ... there are more visual management tools for developers, particularly around the diagnostics side of things - this opens up Cassandra for more testing and understanding of app performance, rather than being a "black box".

BYO-LHC: Bring Your Own Large Hadron Collider


Rackspace's involvement with OpenStack and CERN at the Large Hadron Collider surfaced again late last month when the cloud hosting provider staged a London-based gathering to discuss what, when and where its cloud hosting intelligence is being deployed.


Computer Weekly has already detailed the following case study explaining the work that has been undertaken here: CERN adopts OpenStack private cloud to solve big data challenges.

Embarrassingly parallel

Group leader of the OIS group within the IT department at CERN is Tim Bell -- he's basically the guy that looks after the tech infrastructure for CERN.

Talking about the software application development work that is carried out at CERN, Bell explains that the systems today "suffer" from being embarrassingly parallel.

In parallel computing, an embarrassingly parallel workload, or embarrassingly parallel problem, is one for which little or no effort is required to separate the problem into a number of parallel tasks.

"This is essentially where High Throughput Computing (HTPC) comes into play as opposed to High Performance Computing (HPC)," explains Bell.

"That is to say, a lot of compute tasks have to be carried out, but they are all executed independently," he added.

So is CERN worth learning from?

From a developer perspective, CERN uses OpenStack and codes some of the open IP into its own IT stack ... the end result is that some of the resultant IP is contributed back to the community but some of it isn't.

"Everything [code wise] that we identify that is of interest to the community we contribute back -- that which we deem to not be of interest we make available, but do not actually contribute back," explains Bell.

The code that runs the Large Hadron Collider is therefore available and this gives rise to our acronym of the day...

BYO-LHC: Bring Your Own Large Hadron Collider

The Organisation européenne pour la recherche nucléaire is known as CERN.

Microsoft gets it so right... and so wrong


The programming-specific press is in something of a maelstrom over the highs and lows of what Microsoft does so (arguably) well... and what the company still gets so (arguably, arguably), well, just a bit wrong.

The firm's "oh alright then we like open source after all if everyone else does" stance was perhaps embarrassingly slipshod to start with.

But initial cheesiness has arguably been all but eradicated by:

a) The leadership of the (arguably) very excellent Jean Paoli as president of the Microsoft Open Technologies initiative.
b) Microsoft's open embrace of open cloud.
c) Microsoft's serious approach to Hadoop, Drupal, Python and Node.js.
d) Microsoft open sourcing more of its .Net developer framework and a wider open sourcing across its programming languages overall.

On point d) in particular -- the crème de la crème of the planet's software application development journalist community were invited to Redmond in April for the Microsoft Build 2014 conference to hear news of the company partnering with Xamarin, a move set to create a new .Net Foundation with a more open source outlook overall.

Yay for Redmond

Dr Dobb's Journal meanwhile was full of plaudits for Microsoft this month with an editorial leader entitled Redmond's Remarkable Reversal.

Editor Andrew Binstock writes, "Many factors have contributed to Redmond's surprising success, but two in particular stand out: Microsoft embraced the cloud early and vocally, and it began delivering new software releases much more quickly."

He continues, still positive and upbeat, "In Visual Studio for the cloud, Microsoft is putting itself on the cutting edge of development by inviting programmers to explore a completely new way of coding and product delivery."

... and yet so wrong at the same time

So Microsoft is wonderful after all then?

Even Windows 8 is going to get a start button back (another treat the lucky Build press got to hear about), a process that Tim Anderson called part of a "painful transition" ...

... although this (as Anderson points out) will still not fix the drought and famine the world currently experiences for full 'Metro'-style applications.

Boo hiss, nasty Redmond

Mike James on i-programmer isn't happy either.

James bemoans the reticence, caginess and ok then downright old stubbornness Microsoft has exhibited over its refusal to open source VB6.

VB6 (Visual Basic 6) is a programming language and IDE (Integrated Development Environment) whose lineage dates back to the heady CD-ROM-centric days of 1991.

Today the Visual Basic 6.0 Resource Center is more focused on selling migration and "upgrades from" than on championing that which was once much loved.

James bemoans the fact that Microsoft "killed" VB6 but now refuses to open source the language despite the firm's "warmth" for open source.

"Now that they no longer have any interest in it one way or another, and with a new commitment to open source, why not let the community have VB6?"

He continues, "You could say that it occupied the position that JavaScript does now - misunderstood, misused and commonly thought to be ugly and inadequate. However, used correctly it could be simple, clean and elegant. After all it was the driving force behind VB .NET which took the language in a different direction while trying to maintain its easy-to-use aspects."

Do programmers still really love Microsoft then?

It's hard to say -- your erstwhile reporter last attended a Microsoft developer convention in 2005 and the crowd went wild for Vista.

Who knows?

Maybe they were still whooping over the Bill Gates & Napoleon Dynamite video that was shown on the day.

Bill Gates Goes to School with Napoleon Dynamite from Angela Marie Baxley Glass on Vimeo.








How websites are smarter in the background than you thought


Basle-based open source web content management system (WCMS) company Magnolia International has released the 5.3 version of its core product with functionality now delivered through a series of task-focused apps.

For software developers, this latest version opens up the firm's overall Magnolia App framework to integrate third-party software, devices and enterprise data sources.

More targeted web experiences


Magnolia Co-founder Boris Kraft claims that Magnolia 5.3 was inspired by the changing needs of customers to provide more targeted web experiences that use existing repositories of enterprise data.

"The innovation in this release has been driven by our customers and their need to track, enhance and organise every online user interaction through a simple-to-use content management system," he said.

Personalised and targeted

The personalisation tools included with Magnolia CMS 5.3 allow users to segment their online audience so that content can be personalised and targeted to the needs of (and driven by the behaviour of) each individual site visitor.

Personalised experiences are created and managed with a suite of apps -- there are individual apps for creating content variations, segmenting visitor groups for marketing, developing personas and previewing content for different personas.

For developers, the modular system simplifies integration of external software and services, allowing them to hook into different stages of the personalisation process.

Social listening tools


Magnolia Tag Manager allows marketing teams to add, manage and remove tags for web analytics, marketing automation and social listening tools within a single interface.

Magnolia 5.3 also introduces an improved DAM (digital asset management) API, making it possible to plug in external asset providers such as Flickr, YouTube or a file system.

Magnolia now provides a centralised repository for Magnolia approved applications, modules, source code and partner contributions -- and the Magnolia AppFinder is available to all customers and community contributors.

Smartphone simplicity

Magnolia CEO Pascal Mangold explains that Magnolia is an open Java CMS that delivers (what he calls) "smartphone simplicity" on an enterprise-scale.

"Magnolia CMS allows organisations to orchestrate online services, sales and marketing across all digital channels, maximising the impact of every touchpoint," he said.

Defence-grade fingerprint security on KNOX for Android mobiles


Samsung Electronics and Google have teamed up to confirm that part of the Samsung KNOX technology will be integrated into the next version of Android.

The firm's KNOX Workspace aims to provide "hardware and software integrated" security for mobile devices.


KNOX concentrates on multi-layered protection (that means from the device down to the kernel) with two-factor biometric authentication (that means numeric passwords plus fingerprint detection) for device access.

An enhanced element of the KNOX framework and Microsoft Workplace now join to provide users with a secure channel to corporate resources from mobile devices.

IT administrators will be able to use a separate container to manage and secure business data.

Samsung says that developers can extend their potential target market to a broader Android community with minimal implementation effort.

"Samsung has been pioneering to bring Android to the enterprise. We are grateful for their contribution to the Android open source project," said Hiroshi Lockheimer, VP of engineering, Android. "Jointly we are bringing enterprise-grade security and management capabilities to all manufacturers participating in the Android ecosystem."

Samsung KNOX is currently the only Android provider of defence-grade and government-certified mobile security complying with key US Government and Department of Defence (DoD) initiatives and other standards for mobile device security.

Samsung also offers a comprehensive KNOX management and application store service. In addition to the Samsung KNOX components found in this next generation Android platform, Samsung will keep developing specialised proprietary services such as KNOX EMM and KNOX Marketplace.

Linus Torvalds' open truths for developers (video)


In an interview with the IEEE Computer Society, Linus Torvalds explains where he sits today in terms of his thoughts on Linux.

Torvalds is as humble and genuine as you might expect.

He explains that Linux "just did it differently" and explains how happy he is about "leaving something behind" that could change computing for everyone forever.

"Linux made it clear how well open source works, not just from a technical standpoint, but also from a business, commercial, and community standpoint," says Torvalds.

He also goes on to explain how happy he is about the success of the Git source control system.

Torvalds has some ideas for what is going to happen in the future and he is a quietly inspirational man.

It's 8:47 long and worth the investment.

Cisco open sources cloud-centric block ciphers


Cisco is open sourcing block cipher technology to, the company hopes, better protect and control traffic privacy in cloud computing systems.

What is block cipher technology?


A block cipher is a method of encrypting text (to produce ciphertext) in which a cryptographic key and algorithm are applied to a block of data (for example, 64 contiguous bits) at once as a group rather than to one bit at a time.
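
To make that definition concrete, here is a hedged Python sketch using AES (16-byte blocks) from the third-party 'cryptography' package -- this illustrates the general block cipher idea only, not the Cisco cipher described below -- and it also shows why a deterministic mode such as ECB produces the same ciphertext for the same block every time.

# Illustration of the block-cipher definition above, using AES rather than Cisco's cipher.
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                  # a random 256-bit key
block = b"exactly16bytes!!"           # one 128-bit (16-byte) block of data

def encrypt_block(key, block):
    encryptor = Cipher(algorithms.AES(key), modes.ECB(), default_backend()).encryptor()
    return encryptor.update(block) + encryptor.finalize()

print(encrypt_block(key, block).hex())
# Same key + same block always gives the same ciphertext in ECB mode --
# deterministic, which is exactly why it offers no semantic security.
print(encrypt_block(key, block) == encrypt_block(key, block))   # True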

Flexible Naor & Reingold

Cisco is releasing the Flexible Naor and Reingold (FNR) encryption scheme under the open source LGPLv2 licence.

Cisco software engineer Sashank Dara has said that FNR is an experimental small domain block cipher for encrypting objects (< 128 bits) like IPv4 addresses, MAC addresses, arbitrary strings, etc. while preserving their input lengths.

"The demo application written is for encryption of IPv4 addresses (the cipher preserves their formats as well if needed). When FNR is used in ECB mode, it realizes a deterministic encryption scheme. Like all deterministic encryption methods, this does not provide semantic security, but determinism is needed in situations where anonymizing telemetry and log data (especially in cloud based network monitoring scenarios) is necessary," he said, in a Cisco blog post.

Importantly this is still an experimental block cipher, not ready for production yet.

Google open sources PDF rendering


Google has taken its PDFium software library forward into open source project status.


PDFium is an open-source PDF "rendering engine" that will be folded into the Chrome browser.

"For contributing code, we will follow Chromium's process as much as possible," says Google.

Chromium is...

The Chromium projects include Chromium and Chromium OS, the open-source projects behind the Google Chrome browser and Google Chrome OS, respectively.

PDF rendering is the term used to describe the translation and transport of (usually) web-based pages into PDF format directly on-screen and (usually) for onward use as a saved file, for despatch to a mobile device or for printing.


Foxit Software

Google has drawn on the PDF rendering expertise at Foxit Software to develop its own rendering engine in this case.

Chromium project evangelist Francois Beaufort has said that, "By open-sourcing Foxit's PDF technology, the Chromium team gives to developers a robust and reliable PDF library to view, search, print, and form fill PDF files."

The code for this PDF tooling was (in parts) previously closed source and proprietary, so this open sourcing brings the total pedigree of the Chrome project into cleaner space.

There is no known monetary benefit to Google for taking this action.

Chrome senior software engineer Peter Kasting has explained that Google has long tried to ensure as much of Chrome as possible is available openly as Chromium.

Flash ---- ahhhh!

A number of elements (like the Flash and PDF plugins) stood out here as Google did not have a license to release them.

But now, with PDFium, says Kasting, one of those major moving parts is open as well.

This is great for a lot of reasons in Google's view.

"It reduces the number of closed pieces of Chrome, and thus the surface area for which people can be suspicious that we're doing something shady. It makes a high-quality PDF plugin available to users who only want an open-source product and were using Chromium as a result. It is almost certainly the highest-quality PDF engine available in the open-source world, and can now serve as a reference for other projects, or be included in other browsers based on Chromium or other open-source projects entirely," said Kasting.
