TIBCO EMEA CTO: where (fast data) advanced analytics goes next

bridgwatera | No Comments
| More

This is a guest post for the Computer Weekly Developer Network blog written by Maurizio Canton, CTO EMEA, TIBCO Software.


This post is focused on content from the TIBCO Fast Data Platform.

The team asserts that firms need to integrate so-called 'fast data' (i.e. high velocity data payloads being generated by applications, systems, processes, customers, partners and now the Internet of Things) into modern data systems to make the right information available at the right time, powering services and APIs, as well as automating processes.

The Fast Data platform claims to be able to 'empower a business' by identifying situations of interest as they occur -- opportunities like customer interactions and process optimisation (and threats like software exceptions and security breaches) -- providing real-time awareness.
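A "situation of interest" detector over an event stream can be sketched as a simple rule loop. The event shapes, thresholds and alert labels below are invented for illustration only and bear no relation to TIBCO's actual APIs.

```python
# Minimal sketch: scan a stream of events and flag opportunities and threats.
# All event field names and thresholds here are illustrative assumptions.

def detect_situations(events, error_threshold=3):
    """Return (kind, reason) alerts raised while scanning the stream."""
    alerts = []
    consecutive_errors = 0
    for event in events:
        if event["type"] == "software_exception":
            consecutive_errors += 1
            if consecutive_errors >= error_threshold:
                alerts.append(("threat", "repeated exceptions"))
                consecutive_errors = 0
        else:
            consecutive_errors = 0
        if event["type"] == "customer_interaction" and event.get("value", 0) > 1000:
            alerts.append(("opportunity", "high-value interaction"))
    return alerts

stream = [
    {"type": "customer_interaction", "value": 1500},
    {"type": "software_exception"},
    {"type": "software_exception"},
    {"type": "software_exception"},
]
print(detect_situations(stream))
# → [('opportunity', 'high-value interaction'), ('threat', 'repeated exceptions')]
```

In a real fast-data deployment this loop would run continuously against a message bus rather than a Python list, but the shape of the logic -- stateful rules evaluated per event -- is the same.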

Driving us over the edge

Blanket assumptions around driving etiquette have long been a sore point for those who feel penalised by high car premiums that tar a whole demographic with the same brush.

While young drivers have traditionally borne the brunt of the excess, examples are just as prevalent at the other end of the age spectrum.

Hancock's half hoodwink

One recent and high profile case concerned 82-year-old actress Sheila Hancock and the £1,400 hike in her car premium in spite of an exemplary driving record, which led to renewed calls to address the entrenched ageism in the insurance industry that leads to the discrepancies in charges.

As such, it isn't surprising that technology capable of producing an integrated and accurate picture of a driver's performance to separate the fact from the fiction is gaining significant traction, signalling major repercussions for both the driver and insurance industry.

In much the same way that Hawk-Eye technology has ended the ambiguity around close calls on the courts at Wimbledon, sensors that track speed, braking, steering and mileage -- and collate the data into one definitive bundle -- have made a similar impact.


Usage-based insight

It's a level of insight based on habits, history, and degree of risk that can inform usage-based insurance and lead to more competitive and fairer premiums.
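To make that concrete, here is a hedged sketch of how telematics readings might be folded into a usage-based premium. Every weight and band below is invented for illustration; real insurers calibrate these models actuarially.

```python
# Illustrative only: fold telematics habits into a risk score and a premium.
# The weights, thresholds and scaling are assumptions, not any insurer's model.

def risk_score(avg_speed_kmh, harsh_brakes_per_100km, annual_mileage_km):
    """Combine driving habits into a 0-100 risk score (higher = riskier)."""
    speed_risk = max(0, avg_speed_kmh - 60) * 0.5    # sustained high speed
    braking_risk = harsh_brakes_per_100km * 2.0      # harsh braking events
    exposure_risk = annual_mileage_km / 1000 * 0.5   # more miles, more exposure
    return min(100.0, speed_risk + braking_risk + exposure_risk)

def premium(base_premium, score):
    """Scale a base premium around the risk score: 50% to 150% of base."""
    return base_premium * (0.5 + score / 100)

careful = risk_score(55, 1, 8000)    # → 6.0
erratic = risk_score(85, 10, 30000)  # → 47.5
print(premium(1000, careful), premium(1000, erratic))
```

The point of usage-based insurance is exactly this substitution: a score computed from observed behaviour replaces the demographic proxy, so the careful 82-year-old and the careful 22-year-old get the same answer.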

The technology that underpins it is evolving at a rapid rate. Not so long ago, black boxes usually only entered our collective consciousness in the aftermath of a plane crash, as the first port of call for investigators trying to establish the cause.

Now, fuelled by a European directive which is calling for a statutory black box style device in every new car, the technology is filtering down into more mainstream use to become part of a much broader conversation.

Shifting from a consideration that used to be a manufacturer's prerogative to something that will need to be fitted as standard by 2018, the latest solutions will feature even broader capabilities with the ability to automatically contact the emergency services in the event of a crash.


The next logical analytics progression

Indeed this type of forward-thinking is the next logical progression in advanced analytics, where predictive capabilities are increasingly taking centre stage.

Enabling the driver and insurer to better forecast certain occurrences to pre-empt breakdowns and reduce the risk of accidents is a core benefit. This level of insight and intelligence significantly powers and adds value to the traditional offering from insurers -- a crucial intervention for an industry that has had to up its game in response to a competitive climate in which drivers, with a far greater number of insurance options at their disposal, need to be enticed.

Real world examples

A major consequence of this digital makeover has seen insurers working increasingly in close partnership with technology vendors, to drive innovation. It's an approach in evidence at TIBCO through its relationship with four of the five largest insurers that deploy the company's technologies to increase revenue, mitigate risks and improve operational efficiencies.

Working together to harness the full potential of sensor technology and embrace ever more predictive capabilities brings the potential to anticipate the type of incident a vehicle is most likely to be involved in. This leads to the kind of interaction that will replace a flashing red warning light on your dashboard with intelligence fed straight through to the driver's dealer, alerting them to an issue that needs to be remedied.

Thanks to Fast Data, the road ahead has never been so clear.

Image credit: (woman with car) Confused.com

Developer antidote for Microsoft .NET -- SAP integration headaches


In 2015, it's okay to start your company name with a lower-case letter and end with capitals.


Taking this message to heart, enosiX (pron: EN-OH-SIX) has this month come forth with the 2.0 version of its own software framework.

The product is a means of integrating with enterprise resource planning (ERP) software -- and the team has just achieved SAP certification for the SAP NetWeaver technology platform.

SAP's NetWeaver enables the composition, provisioning and management of SAP (and non-SAP) applications across a heterogeneous software environment.

Microsoft .NET connection

Through integration with the SAP NetWeaver Application Server component, the enosiX software is supposed to enable Microsoft .NET developers to create mobile applications that access back-end systems running SAP software.

The technology is rooted in both the Microsoft and SAP platforms, allowing the framework to manage integration from a .NET solution into SAP software.

enosiX CEO Charles Evans insists that his software helps simplify how companies integrate SAP software into their mobility and integration projects.

80% of integration built in

"IT departments across industries are being inundated with business requests for mobile apps, and the resources available to fulfill these requests are limited. By tackling the SAP software integration process, with up to 80% of integration built in, enosiX Framework 2.0 enables these departments to take full advantage of the more plentiful .NET resources while expanding the bandwidth of developers highly specialised in the ABAP programming language," he said.

The argument here is... with less time spent on integration, experienced front-end developers can focus on product "experience" excellence for SAP customers.


When software encryption fails, use a PIN number


Malware, phishing, hacking, BYOD risks and security vulnerabilities of all kinds are becoming more sophisticated every day -- this we know to be true.


Equally, of course, the strength, robustness and resilience of encryption controls are increasing every day.

Yet still, software-based protection often fails us.

Fundamental finger-power

As a blog devoted to software application development and the mechanics of software engineering, we have to concede to being impressed with a piece of technology that relies upon a hardware-extension (if we can use that term for a PIN-entry number pad) for its power.

The ultra-secure portable datAshur SSD flash drive is a nice thing.

Military grade

When software encryption controls come into question and hackers still find their way in, doesn't a physical PIN number and military grade full-disk XTS AES 256-bit hardware encryption sound like a good idea?

CEO of the product's manufacturer iStorage is John Michael -- he explains that businesses and individual users are increasingly becoming targeted by threatening attacks that can have significant consequences and we continue to see new threats surfacing globally.

"The rise and proliferation of malware and other forms of cyberattacks is a growing concern for both consumers and organisations of all sizes, and leaves a question mark over certain data protection methods," said Michael.


So is there a case for software-free portable data storage?

"When we look at the ever-evolving threat landscape that lies ahead, there is a strong case for software-free portable data storage technologies that combine military grade AES 256-bit hardware encryption with on-board PIN activation such as the diskAshur Pro ultra-secure portable hard drive or the datAshur SSD flash drive that we have developed to ensure robust data protection," argues Michael.

He asserts that the need for high-level hardware encryption and cross-platform compatible portable data storage devices has never been greater, and that iStorage delivers products that are ultra-secure -- packed with security features, easy to use and able to work with just about any USB-equipped device.

The product also has a 'Brute Force' hack defence feature, capacities of 30GB, 60GB, 120GB and 240GB, plus crypto-parameters protected with SHA-256 hashing.
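The two ideas in play here -- checking a PIN against a SHA-256 digest rather than a stored plaintext, and locking out after too many wrong attempts -- can be sketched in software. The datAshur does this in hardware; the sketch below is only an illustration of the logic, not iStorage's implementation, and the attempt limit is an assumption.

```python
# Toy model of PIN-hash checking plus a brute-force lockout.
# Not iStorage's implementation; MAX_ATTEMPTS is an illustrative assumption.
import hashlib

class PinLock:
    MAX_ATTEMPTS = 10  # illustrative; real devices choose their own limit

    def __init__(self, pin):
        # Store only a SHA-256 digest of the PIN, never the PIN itself
        self._digest = hashlib.sha256(pin.encode()).hexdigest()
        self._failures = 0
        self.wiped = False

    def unlock(self, attempt):
        if self.wiped:
            return False
        if hashlib.sha256(attempt.encode()).hexdigest() == self._digest:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self.wiped = True  # brute-force defence: destroy the key material
        return False

lock = PinLock("493205")
print(lock.unlock("000000"))  # → False
print(lock.unlock("493205"))  # → True
```

Doing this in dedicated hardware, as the datAshur does, means the check and the lockout cannot be bypassed by tampering with software on the host machine -- which is precisely the article's point.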

A DevOps periodic table of elements


Oh goodness, not more DevOps spin is it?

A DevOps periodic table of elements?

Surely this can't be anything more than another case of DevOps-washing i.e. contrived puff and fluff from a 'tangential vendor' sitting not that close to the core tasks of DevOps who wants to sneak up to the general level of industry comment.

In this case the 'table' as it is comes from XebiaLabs, a company that specialises in Continuous Delivery and DevOps tools.

Unlike other players hurriedly jumping onto the DevOps bandwagon, XebiaLabs claims to have been an active participant in the DevOps community since the 'early stages'.

According to the firm, "XebiaLabs' solutions include open, integrated tooling all the way across development, QA and Operations, helping globally distributed teams build a shared, accurate picture of all the systems they are building and running. Replace handover moments with shared visibility and responsibility for the entire system lifecycle, and empower team members through simple self-service options."

In fairness, XebiaLabs does indeed do real DevOps.

You know what they say: if your DevOps capability toolset doesn't include quantifiable task metrics and call stack analysis technology, then it's probably spin, puff and fluff.

The table


What XebiaLabs has done is kind of interesting: it has grouped 'elements of DevOps' into categories and then provides colour-coded links to descriptive web pages which explain where they fall in the total DevOps process.

Categories INSIDE DevOps as noted here include:

  • Database
  • Continuous Integration
  • Deployment
  • Cloud / IaaS / PaaS
  • BI/Monitoring
  • Software Change Management
  • Repository Management
  • Configuration and Processing
  • Release Management
  • Logging (log management)
  • Build
  • Testing
  • Containerisation
  • Collaboration
  • Security

A link to a fully interactive, full-page version of the table is included here for your enjoyment.

Sexy cloud apps? Mendix makes aPaaS at it


Mendix has updated its application Platform-as-a-Service (aPaaS) software to make it, well, sexier.

Sexier how?


The immodestly named Mendix Digital Experience (DX) release includes extensive pre-crafted UI templates for creating cloud applications faster, which is a good looking feature.

There's also OData support -- meaning 'open data'.

OData defines an abstract data model and a protocol that let any client access information exposed by any data source, so that's definitely attractive.

There are also enhancements to the firm's online developer community; and an expanded Free Edition with full production capability.

New UI, what's inside?

The new UI Framework delivers a comprehensive set of UI patterns, themes, navigation layouts, and page templates. Using this framework, developers can create "pixel-perfect" fully responsive applications out of the box... says the firm.

According to a press statement, "Mendix now supports OData, an open protocol that enables simple creation and consumption of query-able and interoperable RESTful APIs for data. One click turns data into information by pulling live data from Mendix applications into BI and analytics tools, such as Tableau, SAS, R and Excel. A new streaming query mechanism features high performance and low memory usage."
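As a flavour of what 'query-able RESTful APIs for data' means in practice: an OData request is just a URL assembled from standard $-prefixed query options. The endpoint below is hypothetical -- Mendix's actual published resource paths may differ -- and the encoding is deliberately minimal.

```python
# Sketch of OData's URL conventions: $filter, $top and $orderby are the
# standard system query options. The endpoint is a hypothetical example.

def odata_url(base, entity, filter_expr=None, top=None, orderby=None):
    """Build an OData query URL from the standard $-prefixed options."""
    options = {}
    if filter_expr:
        options["$filter"] = filter_expr
    if top:
        options["$top"] = str(top)
    if orderby:
        options["$orderby"] = orderby
    # Percent-encode spaces only; a real client would encode more thoroughly
    query = "&".join(f"{k}={v.replace(' ', '%20')}" for k, v in options.items())
    return f"{base}/{entity}" + (f"?{query}" if query else "")

url = odata_url("https://example.mendixcloud.com/odata", "Orders",
                filter_expr="Total gt 100", top=10)
print(url)
# → https://example.mendixcloud.com/odata/Orders?$filter=Total%20gt%20100&$top=10
```

This is why BI tools like Tableau or Excel can consume the same feed with "one click": the whole query surface is expressible as a plain URL.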

The release offers an improved developer experience through upgraded sign-up flows, project wizards, instructional videos, how-to guides and an enhanced developer website.

These resources, collected in the new "Mendix Cookbook," help remove complexity, speed ramp-up time, and enable developers to focus on building applications that make a difference.

The appliance of converged infrastructure appliance, science


OpenStack 'pure-player' Mirantis has launched Mirantis Unlocked Appliances, a portfolio of 'converged infrastructure appliances' built with its own version of OpenStack.

What are these things?


They are single (or multi-rack) converged infrastructure appliances delivered in a pre-validated, pre-integrated and pre-certified form.

But that still doesn't tell me what converged infrastructure appliances are.

Okay sorry, converged infrastructure appliances are pre-integrated bundles of hardware and software (sold as so-called 'appliances') that provide the Virtual Machine (VM) technologies used in cloud computing for the deployment, configuration and management of the systems they serve.

What do they look after?

As we said -- deployment, configuration and management -- but if you want more colour there, it's tasks like patches, upgrades, scale-out movements (when the cloud instance has to grow) and all the way back to the initial instantiation of the operating system for the cloud deployment and decisions about security options etc.

In other words, everything that pertains to the compute, networking, storage and management concerns of a piece of cloud.

NOTE: Just once again in harmony this time, there is no ACTUAL cloud, it's servers located in datacentres that we have started calling cloud computing, remember.

Anyway, back to Mirantis... about 20 percent of infrastructure is "consumed through the appliance form factor" in this case, because it is easy to set up and operate.

This is the claim made by Alex Freedland, Mirantis president and co-founder -- or at least it's the one that the PR agency convinced him to put his name to in a press statement.

"Mirantis Unlocked Appliances combines ease of use with the openness and flexibility of OpenStack, delivered as a cloud-in-a-box. Our first appliance focuses on the most common OpenStack use case - developing cloud-native applications - and will be built and shipped by Certified Rack Partners across the ecosystem," said Freedland.

Mirantis Unlocked Appliance for Cloud Native Applications is claimed to speed development and production deployments of cloud-native applications at scale.

The first iteration is powered by Dell and Juniper Networks, enabling agile development of cloud native applications and production deployments of container-based services.

Keegan shoots, he scores

In relation to the appliances announced here, senior analyst at ESG Colm Keegan says that many organisations are opting to deploy pre-integrated computing solutions, like appliances and converged infrastructure, as a way to speed up deployments, accelerate time-to-value and simplify operational management.

"By offering a pre-integrated and fully certified Open Stack appliance, Mirantis is enabling businesses of all sizes to eliminate much of the cost and time typically required to integrate Open Stack into a datacentre environment. Furthermore, by coupling the Mirantis Unlocked Appliance with the OpenStack Community Application Catalog, businesses can accelerate the development and deployment of their next generation cloud applications," said Keegan.

Cloud architecture means testing 'in the cloud' too, who knew?


It couldn't be that logical, could it?

Cloud-centric software application development and the migration of IT shop operations to cloud-based environments demand that software engineering teams also embrace the idea of using cloud-based testing tools.

(Ed - don't be ludicrous, no wait, really?)


Mike Cooper is a quality-focused IT testing guru now working with QASymphony in an advisory role.

Cooper explains that he recently began advising one of the world's largest employers on its QA and software testing strategy.


The CIO was planning to move the entire ops platform to the cloud in a multi-year, multi-million pound effort involving dozens of people and hundreds of man hours.

"The company's hotshot dev team was using an Agile dev methodology and DevOps approach with continuous integration. Unfortunately, the company's current testing team was primarily comprised of old school testers using Word and Excel to manage test plans and test cases," said Cooper.

Given the massive scope and scale of this business-critical initiative, Cooper recommended an Agile testing approach involving highly skilled testers embedded in Scrum teams.

He also recommended the use of cloud-based tools for testing mobile apps, security, performance and localisation.

"The exec team was in shock for 10 minutes as they wrapped their heads around what I was saying. It hadn't occurred to them that cloud and Agile also demanded a new, futuristic platform for testing. Meanwhile, the dev team was celebrating," he said.

"To me the future of test is very clear: Agile methodologies combined with sophisticated cloud-based tools," added Cooper.

NOTE: QASymphony offers qTest eXplorer (among other core products) as a test case management documentation tool that supports exploratory testing and saves time when performing traditional manual testing.

A word from the CEO, via his PR team

QASymphony CEO Dave Keil insists that today, testing is seen as a cost centre in many organisations.

"Companies don't necessarily see the value of testing until something breaks in production and the business is impacted as a result. In the future, we believe testing will get much smarter. At QAS, we're doing a lot of work on the ability to transform the historical data we collect during testing and turn that into actionable insights for the company. So rather than reacting when something breaks, IT leaders will be able to identify potential issues before they happen. Instead of a cost centre, testing will become a value provider."


How 'social' peer benchmarking between applications makes better software


Software analysis and measurement company CAST -- which capitalises its name in an attempt to gain extra kudos, even though it isn't actually a valid acronym and so (arguably) just looks silly -- has updated its product set.


The firm's new version of Highlight bids to analyse 'complex portfolios' of enterprise applications to identify areas of concern.

The software itself uses a benchmarking system to assess software risk and complexity.

Key new enhancements include:

• Benchmarking against peers -- Application key risk indicators can be benchmarked against a repository of 650+ anonymised custom enterprise applications, pulled globally from across all Highlight instances.

• Faster, more in-depth analysis -- Highlight's new agent scans deeper, wider and with more configuration flexibility, making risk profiling and cost saving easier than ever thanks to a "bubble diagram" user interface.

• Better technical debt estimates -- Highlight now delivers more pragmatic, quantifiable technical debt estimates than ever before. As a result, these estimates are more reliable, delivering actionable analytics to make fact-based decisions on which applications are most effective from a cost/benefit perspective.

• More accurate cost and effort calculations -- Using the industry-standard COCOMO model, Highlight provides maintenance effort estimates in terms of Full Time Equivalent (FTE) employees.
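The COCOMO model mentioned in that last bullet has a well-known "basic" form: effort in person-months is a * KLOC^b. The sketch below uses the classic published coefficients for an "organic" (small, familiar-domain) project; CAST's own calibration is not public, so treat the numbers as illustrative.

```python
# Basic COCOMO, organic-mode coefficients (a=2.4, b=1.05; c=2.5, d=0.38).
# Highlight's actual calibration is proprietary; this shows the model's shape.

def cocomo_organic(kloc):
    """Return (effort person-months, duration months, average FTE)."""
    effort = 2.4 * kloc ** 1.05      # person-months of work
    duration = 2.5 * effort ** 0.38  # calendar months
    fte = effort / duration          # average full-time staff needed
    return effort, duration, fte

effort, duration, fte = cocomo_organic(50)  # a 50 KLOC application
print(f"{effort:.0f} person-months over {duration:.0f} months, about {fte:.1f} FTEs")
```

So a 50 KLOC application works out at roughly nine full-time maintainers on this model -- which is the kind of FTE figure Highlight reports for portfolio-ranking purposes.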

Prioritise and rank

The result, theoretically, is a situation where developer teams can prioritise and rank projects and programs, based on tangible data, more accurately.

"Previously, strategic IT initiatives were notoriously difficult to rank; a lack of visibility on costs and technical risks, together with competing demands on budget from within IT and other departments, hampered decision-making," said the company, in a press statement.

This software is ISO 27001-certified, meaning it complies with the industry-wide standard on information security management systems.

"As organisations engage in public-facing transformation initiatives, gaining visibility into and measuring the quality, risk and complexity of their application portfolio become more vital than ever," said IDC analyst Melinda Ballou, program director for IDC's Application Lifecycle Management and Executive Strategies Service.

Automatic for the WP Engine people


WP Engine wants web developers to use its product, really, honest.


As such, the SaaS content management platform (for websites and applications built on WordPress) company (Ed - phew! long intro) has announced a new automated migration product.

The software itself is intended to be used for the migration of WordPress websites to WP Engine's managed WordPress hosting platform.

WP Engine Automated Migration is available now as a 'plugin' piece of web software.

The tool claims to "cut out" the most technical steps it (typically) takes to fully migrate a site to WP Engine.

NOTE: The time it takes to complete a migration can be as little as 30 minutes.

"The tool reduces the costs typically associated with a full site migration and eliminates the need to pay an additional vendor to move your site from one platform to another," said the company, in a press statement.

What is application retirement?


Application retirement is a thing.

Of course it is, software applications don't live forever, not even legacy ones.


It is sometimes also known as application decommissioning, application sunsetting, application neutering, application big-banging or application euthanasia.

Circle of life

It's just one of the facts of life inside the so-called Application Development Lifecycle.

As IBM teaches us...over time, applications can outlast their value to the business, eventually costing more to maintain than they are worth.

"But companies are reluctant to retire obsolete, legacy, or redundant applications for fear they may someday need the underlying data. As a recommended best practice, organisations must evaluate application portfolios regularly to determine whether their investments are delivering maximum business value," says Big Blue.

Sexy time!

Who talks about application retirement?

Well Gartner has formulated what could possibly be its sexiest Magic Quadrant yet to celebrate this aspect of technology -- the Gartner Magic Quadrant for Structured Data Archiving and Application Retirement.

(Ed - ouch, sizzle!)

Celebrating its newly endowed status inside this holiest of holy quadrangles this month is Informatica.

The firm's Amit Walia, senior vice president and general manager for data integration and security is clearly enthused.

"As organisations 'clear their decks' of legacy applications, they need unwavering confidence that the data remains readily available and is archived in a secure and cost-effective fashion," said Walia.

He continues, "Informatica Data Archive provides customers with an unrivalled range of advanced capabilities for performance optimization, application retirement, big data analytics, data security, retention management and compliance."

According to the Gartner report on this topic, structured data archiving addresses storage optimization, governance, cost optimization and data scalability.

"It (storage optimization) can reduce the volume of data in production and maintain seamless data access. The benefits of using this technology include reduced capital and operating expenditures, improved information governance, improved recoverability, lower risk of regulatory compliance violations, and access to secondary data for reporting and analysis," says Gartner.

The report projects that, "by 2017, archiving in support of big data analytics will surpass archiving for compliance as the primary use case for structured data archiving." It also predicts that, "by 2016, 75 percent of structured data archiving applications will incorporate support for big data analytics."

The report's authors also note that, "the desire to leverage archives as a secondary data store for big data analytics is driving the growth of the structured data archiving market." They continue to say that, "the growing use of Apache Hadoop, increasing data warehouse volume sizes and the accumulation of legacy systems in organizations are fostering structured data growth. These factors are leading enterprises to understand how to reuse, repurpose and gain critical insight from this data."

Retirement comes to us all, let's be respectful -- ok?

Asigra: how real cloud backup works


The Asigra Cloud Backup Partner Summit 2015 runs this week in Toronto.

As a name, Asigra comes from the Spanish infinitive asegurar, meaning to assure.


Asigra's branded 'Cloud Backup' architecture combines a scale-out architecture, a cloud backup and recovery software platform and a cloud API and management system.

Scale factor

Essentially we're talking about providing, managing and (crucially) scaling data protection services.

Where do we need cloud backup?

This event was populated by Asigra partners, who can typically be described as specialists in cloud backup and recovery, obviously -- but specifically these are firms who work to protect data in the datacentre on physical or virtual servers, enterprise databases and applications...

... but the protection factor we need to think about here goes further, i.e. beyond the datacentre onto:

• desktops,
• laptops,
• smartphones,
• tablets,
• in SaaS-based applications like Microsoft Office 365, Google Apps, Salesforce.com,
• and in IaaS-based platforms like AWS or Microsoft Azure.

Keynote commentary

How do you get more cloud backup into the hands of your targeted users then?

The company ran a schedule of sessions with titles such as: how to competitively position your cloud backup service with (i.e. against) traditional enterprise datacentre solutions.

CEO and founder of Asigra David Farajun explained that we should remember how cloud backup is in fact the second element of the cloud data spectrum (and indeed of his firm's total Asigra recovery product spectrum):

1. VM Replication (DAS, NAS & SAN)
2. Backup / snapshot technology (file and object storage)
3. Archive (disk and tape)

A Bloomberg report from 2011 details the origins of Asigra as follows:

Founder David Farajun started Asigra in 1986 after a hard-drive failure doomed his previous company, which was building an operating system. With few options to save and recover his own files, Farajun founded Asigra to fill a market need and help other companies avoid the same fate. His early clients would transmit their data, using 300 baud modems, to a secure vault, where Farajun had stacks of foot-long hard discs that could store 10 megabytes -- a huge amount of data in the 1980s, now about the equivalent of a few MP3 song files.
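A quick sanity check on those Bloomberg numbers: at roughly 300 bits per second, filling one of those 10 MB discs over the wire was a multi-day affair (ignoring protocol overhead, which would only make it slower).

```python
# Back-of-the-envelope: how long does 10 MB take at 300 baud?
# Treating 300 baud as roughly 300 bit/s and ignoring protocol overhead.

BITS_PER_SECOND = 300          # a 1980s 300 baud modem
DISC_BYTES = 10 * 1024 * 1024  # one "foot-long" 10 MB hard disc

seconds = DISC_BYTES * 8 / BITS_PER_SECOND
days = seconds / 86400
print(f"about {days:.1f} days per full disc")  # a little over three days
```

Which puts the scale of early remote backup into perspective: in 1986, "cloud backup" of a single full disc meant keeping a modem busy for the better part of a working week.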

A justification for cloud backup?

Gartner analysts spoke at this event to comment on suggestions that firms are trying to now "renovate the core of the business to cloud but still keep the lights on" today.

Analyst Tiffani Bova spoke at this event to suggest that when the Internet of Things works at full pace, we get to a point where products and services start to order themselves (at both a consumer and business level) automatically based upon user preferences...

... and this justifies the need for more cloud backup going forward as we start to become more reliant upon the cloud (and cloud-driven mobile devices) in our lives.


Editorial disclosure: Asigra paid for a proportion of Adrian Bridgwater's travel expenses.

Checking (cloud) backups with BackupChecks


We all love the cloud computing model of service-based IT delivery, obviously... but what about backup and failure scenarios?

There is a palpable sense of the cloud market now laying down more cloud backup technologies.


Indeed, this week sees the Asigra Cloud Backup Partner Summit 2015 in Toronto.

If you weren't familiar with the term DRaaS (disaster recovery as a service), then now is the time to fix that.

TechTarget defines DRaaS as the replication and hosting of physical or virtual servers by a third-party to provide failover in the event of a man-made or natural catastrophe.

At the Asigra event itself today we find the launch of BackupChecks from Databarracks.

Peter Groucutt, managing director at Databarracks, says his team has been developing this software for the last seven years.

"The features of the software are three-fold. A service desk, a reseller management portal and a customer management portal. The most obvious benefits are seen in the Service Desk, which allows technicians to manage all of their backup accounts through a single portal. The dashboard shows engineers all of the errors to be checked across all accounts at a glance, any recurring issues, the amount of data stored by each account and information to help services providers manage their DS-Systems," said Groucutt.

BackupChecks automates the daily management of backups so, in theory, backup engineers have more time to provide more support to their customers.

They have the tools to have helpful data reviews with customers and give advice on best practice from other customers and their internal knowledge base.

"Through the portal, customers have access to all their vital backup stats, such as how many backups have been successful, how many restores have been made and how much data they are storing. Customers can log in to a simple portal on their mobile on the way into the office to check on their overall backup health and then log in on their desktops for a more comprehensive view to really drill down into the details and their historical backup and recovery trends," added Groucutt.

Nutanix offers visibility into invisible infrastructure


Cloud storage and operating system software company Nutanix has used its inaugural user event to launch its Xtreme Computing Platform (XCP).

Nice name, sure... but what does it do?


The firm has concocted a 'message set' hinging around the suggestion that we call this technology something called 'invisible infrastructure', no less.

The concept here is... cloud infrastructure that you (being that "you" could be a whole development team) don't need to worry about.

What lies beneath

Consequently, the team can focus on applications and services -- not what lies beneath.

Inside this technology proposition we find two product families:

• Nutanix Acropolis
• Nutanix Prism

The XCP product set is intended to extend the firm's hyperconverged approach to enable application independence from infrastructure with an advanced app mobility feature.

Also here we find native virtualisation and consumer-grade search capability.

Dheeraj Pandey, CEO and founder of Nutanix, argues that today many business applications run on traditional storage and virtualisation products that are time consuming to deploy, expensive to manage, difficult to scale and challenging to migrate from.


"Nutanix XCP makes the entire infrastructure lifecycle invisible and diminishes the innovation and financial burden borne by users of existing datacenter solutions. The most transformative technologies are the ones we don't even think about," he said.

What could this mean?

Applications and infrastructure that work all the time, scale on demand and self-heal. In other words, they are invisible -- this is the Nutanix proposition.

"Building on our foundations of web-scale engineering and consumer-grade design, we will make virtualisation as invisible as we've made storage and elevate enterprise IT expectations yet again," said Pandey.

"With a 52% market share in the hyperconverged infrastructure market, Nutanix has demonstrated its ability to radically simplify data storage for enterprises of all sizes," said Matt Eastwood, senior vice president at IDC. "Its next big opportunity is to tackle the inherent cost and complexity of legacy virtualisation stacks, and elevate IT teams so they can focus on driving the business."


Nutanix Acropolis builds on the core capabilities of the company's hyperconverged product to incorporate an open platform for virtualisation and application mobility.

This product offers teams the flexibility to choose the best application platform technology for their organisation - whether it is traditional hypervisors, emerging hypervisors or containers.

So, in turn, under Acropolis, infrastructure decisions can be made based on application performance, scalability and economic considerations, while workloads are allowed to move without penalty should requirements change.

Nutanix Acropolis comprises three foundational components:

1. Distributed Storage Fabric -- Building on the Nutanix Distributed File System, the Acropolis Distributed Storage Fabric will enable common web-scale services across multiple storage protocols.

2. App Mobility Fabric -- This is a newly-designed open environment capable of delivering virtual machine (VM) placement, VM migration, and VM conversion, as well as cross-hypervisor high availability and integrated disaster recovery.

3. Acropolis Hypervisor -- While the Distributed Storage Fabric fully supports traditional hypervisors such as VMware vSphere and Microsoft Hyper-V, Acropolis also includes a native hypervisor based on the proven Linux KVM hypervisor.
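The App Mobility Fabric's placement decision can be pictured with a toy sketch. To be clear, this is the CWDN's own illustration in Python, not Nutanix code -- every class, function and host name here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    hypervisor: str    # e.g. "vsphere", "hyperv" or "ahv"
    free_cpu: int      # free vCPUs
    free_ram_gb: int   # free RAM in GB

def place_vm(hosts, cpu, ram_gb, hypervisor=None):
    """Pick the host with the most free RAM that can satisfy the VM's
    CPU and RAM needs, optionally pinned to one hypervisor type."""
    candidates = [
        h for h in hosts
        if h.free_cpu >= cpu and h.free_ram_gb >= ram_gb
        and (hypervisor is None or h.hypervisor == hypervisor)
    ]
    return max(candidates, key=lambda h: h.free_ram_gb, default=None)

hosts = [
    Host("node-a", "vsphere", free_cpu=4, free_ram_gb=32),
    Host("node-b", "ahv", free_cpu=16, free_ram_gb=128),
    Host("node-c", "hyperv", free_cpu=8, free_ram_gb=64),
]
print(place_vm(hosts, cpu=8, ram_gb=48).name)  # node-b
```

Real placement engines weigh far more than this (data locality, licensing, affinity rules), but the shape of the decision -- match the workload to the best-fitting host, regardless of hypervisor -- is the same.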

Nutanix Prism

Prism features One-Click technology that streamlines time-consuming IT tasks: one-click software upgrades for more efficient maintenance, one-click insight for detailed capacity trend analysis and planning, and one-click troubleshooting for rapid issue identification and resolution.
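That 'capacity trend analysis and planning' idea boils down to extrapolating usage samples forward. Here is a rough sketch of the arithmetic -- ours, not Prism's actual maths, and the sample numbers are invented:

```python
def forecast_days_until_full(daily_usage_gb, capacity_gb):
    """Fit a least-squares line to daily usage samples and estimate how
    many days remain until capacity is reached. Illustrative only."""
    n = len(daily_usage_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_usage_gb) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_usage_gb))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # usage flat or shrinking: no fill-up date
    intercept = mean_y - slope * mean_x
    # Solve capacity = slope * day + intercept, relative to today (day n-1)
    return (capacity_gb - intercept) / slope - (n - 1)

usage = [100, 110, 120, 130, 140]   # GB consumed per day over five days
print(round(forecast_days_until_full(usage, capacity_gb=240)))  # 10
```

Production tooling would smooth out seasonality and burstiness rather than fit one straight line, but the planning question being answered is the same.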

Nutanix Prism delivers better value to IT administrators through the convergence of storage, compute and virtualisation resources; advanced machine learning technology with built-in heuristics and business intelligence; and what the firm calls a true consumer-grade user experience with sophisticated search technology.

Nutanix Acropolis and Prism are available now.

Nutanix partner Veeam, a lean mean DRaaS dream?

bridgwatera | 1 Comment

Datacentres need availability, obviously.

Veeam Software is a company that directly positions itself as a provider of solutions that deliver availability for datacentres.

The firm is serious about this -- so much so that it has bothered to trademark the term Modern Data Center™, spin, puff and fluff notwithstanding... somebody in marketing thought it was a good idea.

This week Veeam (pronounced: veeeeeeeem, not really, just kidding) has appeared at the Nutanix .NEXT conference in Miami to announce news of its Veeam Cloud Connect product now being extended to include advanced image-based VM (virtual machine) replication capabilities as a part of the new Veeam Availability Suite v9.

It's a teaser story - agh!

Disappointingly for users, the product won't be generally available until later in the year - but the company has promised not to seek further press coverage at its full point of release.

Interestingly for users, this kind of extended functionality is the type of technology that helps give service providers the ability to provide cloud-based disaster recovery-as-a-service (DRaaS) -- this is achieved through the Veeam Cloud Connect Replication for Service Providers offering.

You mean you don't know what RTO means?

This is all about software tools designed to create replicas in the cloud.

Which, in turn, leads us to protection for mission-critical applications.

Which, in turn, leads us to better recovery time objectives (RTOs).
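For the uninitiated, the arithmetic behind RTO and its sibling RPO (recovery point objective) is simple enough to sketch. The helper names and minute counts below are our own illustration, not Veeam's:

```python
def worst_case_rpo_minutes(replication_interval_min):
    """Worst-case data loss window: a failure just before the next
    replication cycle loses one full interval of changes."""
    return replication_interval_min

def estimated_rto_minutes(boot_min, network_reconfig_min, app_start_min):
    """Rough recovery time: boot the replica VM, re-point the network,
    start the application. Real RTOs also include detection and
    decision time, which we ignore here."""
    return boot_min + network_reconfig_min + app_start_min

# Replicating every 15 minutes to a service provider's cloud:
print(worst_case_rpo_minutes(15))       # 15 -- up to 15 minutes of data at risk
print(estimated_rto_minutes(5, 3, 7))   # 15 -- roughly 15 minutes to recover
```

The point of image-based replication is that the 'boot the replica' term stays small: the standby VM already exists at the provider, so recovery is a failover rather than a restore.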

The company's technology proposition hinges on building a secure bridge between the Veeam customer and the service provider.

So, essentially, Veeam removes the requirement for customers to build and maintain a disaster recovery site for offsite protection, thereby theoretically offloading an amount of cost and complexity from their IT infrastructure.

Advanced image-based VM replication through Veeam Cloud Connect includes built-in multi-tenant support to securely share host or cluster CPU, RAM, storage and networking resource allocation between different tenants.

"It is critical to keep standby copies of data both on and off-site," said Ratmir Timashev, CEO of Veeam. "Veeam Cloud Connect not only enables our users to fulfill the offsite requirement without having to invest in offsite infrastructure or management, but also presents new opportunities for service providers to build recurring revenue from their existing customer base, expand their presence in the DRaaS market, and establish relationships with new customers."


Nutanix .NEXT day zero: what is web-scale, anyway?

bridgwatera | 1 Comment

Nutanix invites the great and the good to Miami this week for its .NEXT user, partner, customer (and all round cloud storage and cloud OS cognoscenti) conference.

How is the cloud growing now?


Company CEO Dheeraj Pandey says that we should be considering 'hyperconverged' today as a notion of making storage consumption (and the wider notion of its infrastructure) invisible to the IT operation.

Invisible cloud storage is easier to consume, obviously - and this leads us towards what Nutanix is very keen to label as so-called 'web-scale' technology.

What is web-scale?

We need to be careful with this term -- it's easily overused or wrongly presented.

Web-scale is NOT simply web-centric apps that can run on new cloud services -- as in:

Developer dude #1: Hey man, I just built a new rate my hot dog app and it's cool enough to put it out over the wires.

Developer dude #2: Cool bananas, so you're really going to push this thing web-scale then.

WRONG -- DO NOT PASS GO -- this is not web-scale... well, not as such.

Web-scale is a global-class of enterprise computing.

Web-scale is, architecturally, the level of computing infrastructure (with the Nutanix notion of invisible cloud storage intelligence) that you would expect a) a large-scale cloud provider or b) a large enterprise to offer.

Web-scale is scale beyond size, web-scale is scale in terms of service flexibility and compute agility.

Web-scale is a mechanical base of compute, storage and transport interconnectivity where firms concentrate on their runtime intelligence, their process rules and data model.

Web-scale is mighty damn big yet pretty nimble - get it?

NOTE: The above definitions are presented by the Computer Weekly Developer Network blog and are inspired and derived from the technology proposition that Nutanix is today putting forward.

Gartner has released an article saying that by 2017, web-scale IT will be an architectural approach found operating in 50 percent of global enterprises.

Nutanix staff author Andre Leibovici has the following to say on the terminology.

Web-scale IT is more than just a buzzword; it is the way datacentres and software architectures are designed to incorporate multi-dimensional concepts such as scalability, consistency, tolerance and versioning.

Web-scale describes the tendency of modern architectures to grow at (far-) greater-than-linear rates. Systems that claim to be web-scale are able to handle rapid growth efficiently and do not have bottlenecks that require re-architecting at critical moments.

Web-scale architectures and properties are not new; they have been used systematically by large web companies like Google, Facebook and Amazon. The major difference is that the same technologies that allowed those companies to scale to massive compute environments are now being introduced into mainstream enterprises, with purpose-built virtualisation properties.
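One classic ingredient of that kind of bottleneck-free growth is consistent hashing: add a node and only a fraction of keys move, so the system scales without re-architecting. The minimal ring below is this blog's generic illustration of the technique, not anything Nutanix ships:

```python
import hashlib
from bisect import bisect

class HashRing:
    """Minimal consistent-hash ring: adding a node remaps only roughly
    1/N of the keys, so the cluster grows without re-architecting."""
    def __init__(self, nodes, vnodes=64):
        # Each node gets many virtual points on the ring to spread load.
        self.ring = sorted(
            (self._hash(f"{node}#{v}"), node)
            for node in nodes for v in range(vnodes)
        )
        self._points = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # A key belongs to the first ring point at or after its hash.
        i = bisect(self._points, self._hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["node-1", "node-2", "node-3"])
print(ring.node_for("vm-disk-42"))  # deterministically one of the three nodes
```

Contrast this with a naive `hash(key) % N` scheme, where adding a node reshuffles nearly every key -- exactly the 'critical moment' re-architecting that web-scale designs avoid.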

One final thought (although this story is really only just beginning): web-scale is not exclusively applicable to SDS (software-defined storage); rather, it is an architecture model for very large distributed systems.

The (self-service) revolution will be analysed

bridgwatera | No Comments

This is a guest post for the Computer Weekly Developer Network blog written by Brian Gentile, senior VP & general manager for TIBCO Analytics.

The revolution cometh


We're in the middle of a self-service revolution across all aspects of our lives, but... fortunately... those distinctly user-unfriendly self-service systems that pervade our supermarkets are being countered by an altogether more positive type of self-service development.

Applied in the right sphere and powered by innovative technology, self-service techniques have quite simply come into their own and are now able to achieve significant business benefits while having a truly transformative impact on how we work.

BI plugs in, turns on

Independence, accessibility and greater productivity all conspire to achieve a far more efficient proposition and nowhere is this better evidenced than by the evolution of business intelligence (BI) in the workplace.

As information comes in thick and fast from multiple sources, so demand has grown for an alternative to the stand-alone, costly and complex BI reporting and analytics tools of old.

Self-service analytics

In short, knowledge workers need a more agile and available approach and are increasingly doing it for themselves by embracing self-service analytics techniques.

Further, progress has snowballed as a new breed of data analytics is woven and embedded inside nearly any application, seamlessly integrated into our daily business processes. Driven in part by the explosion of the cloud, embedded reporting and analytics have changed the way that business intelligence tools are accessed, all of which has made analytics cheaper, more accessible and straightforward.

And, for the first time, we are seeing the benefits of having insightful data at everyone's fingertips, enhancing performance and informing key decisions with timely insight, which in turn can only create a more self-sufficient and informed workforce.

Democratized analytics

Business analytics traditionally managed by IT specialists at larger enterprises are now accessible to a far wider audience; for example, at smaller businesses, where cost and complexity would have previously proven to be prohibitive.


Consequently, we are seeing a shift: analytic applications that were purely the responsibility of IT are increasingly shifting to business functions, which must take the lead to ensure their needs are best accommodated.

Crucially, the latest analytics software not only offers the mechanism, but the direction to steer users towards the most appropriate options. It's part of our ongoing commitment to build and maintain a broader analytics dialogue rather than simply providing the tools -- an approach that will drive much broader consumption, ultimately reaching everyone with the right amount of analytic insight to make an improved decision and drive a superior outcome.


By entirely reimagining business analytics, we will enable anyone to have a personalised or tailored experience that is exactly fit for purpose. It's an exciting vision and, when we discuss it directly with customers, it never fails to capture their imagination, too.

Our approach reflects the fluidity of a less-structured flow of data coming through the cloud with a commitment to continually evolve and improve the platforms. From boosting the embedding of reporting to creating a more powerful and visual use of data, an ongoing priority is to ensure that the data is as meaningful as possible.

It's a commitment to delivering accelerated insight for everyone, not just a chosen few. It's also a quest to take analytics out of the tool and put them into the conversations that are had each day within an organisation. That will be one big step closer to an ideal analytic experience.

Your suggestions and comments are welcome here so this analytic conversation can continue.

About TIBCO Analytics

At the heart of the firm's vision are two primary platforms, Spotfire and Jaspersoft, both designed to address the analytic problems that our customers face and to give them the answers they need through the best analytic recommendations.

Invstr: Facebook-style stock market trading app

bridgwatera | No Comments

Deutsche Bank executive Kerim Derhalli quit his job -- the next thing he did was launch invstr.


Short on vowels, big on the markets?

This mobile application is designed to give 'novice traders' a chance to compare their market predictions with other stock market players and also share these thoughts on social sites such as Facebook and Twitter.

It's all about 'the wisdom of the crowds', as they say.


The idea is that users can use the app to learn the fundamentals of trading without risking any real money.

Although a free download, users can pay for extra functionality options.

According to the iTunes promo text, whether you are an aspiring investor or a financial professional, invstr brings you all the financial information you need in a fun and easy-to-use app.

Users can explore invstr as a guest or join the community to access crowd-sourced predictions, free and on-demand live market data, high-quality news, research reports and more.

invstr features a crowdsourced game that predicts the prices of individual stocks, bonds, currencies and commodities, enabling investors to weigh the strength of market convictions immediately when making real-world investment decisions.
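How might crowd predictions actually be weighed? One robust, generic approach -- and we stress this is a textbook illustration, not necessarily invstr's algorithm -- is to take the median, which shrugs off a few wild guesses:

```python
from statistics import median

def crowd_prediction(predictions):
    """Aggregate individual price predictions with the median, which is
    robust to outliers -- the core of the 'wisdom of the crowds' idea."""
    return median(predictions)

# Five users predict where a stock will close; one guess is wild:
print(crowd_prediction([101.5, 99.0, 102.0, 250.0, 100.5]))  # 101.5
```

A simple mean would be dragged to 130.6 by the 250.0 outlier; the median stays with the consensus, which is why robust aggregation matters when casual investors are contributing guesses.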

"We created invstr to give even the most casual investors fast, mobile access to in-depth market data, news and analysis," said invstr CEO Kerim Derhalli, adding, "now we're expanding invstr into a true financial social network by streamlining the app, making it simple to take part in financial conversations."

This month a 'major' iOS app update also adds a new 'future' chart allowing users to compare their predictions with those of other users, plus the ability to 'analyse' detailed charts, news, calendar information and discussions for any financial instrument.

What to expect from Nutanix .NEXT 2015

bridgwatera | No Comments

Nutanix is the 'web-scale converged infrastructure' company, or at least that's what the firm uses as its opening gambit.

The company now hosts its .NEXT user conference in the US -- the event runs from Monday June 8 in Miami, Florida... so what should we expect?

Firstly, .NEXT is truly a "user" conference insists the firm.

Most technical sessions feature experienced IT professionals from enterprises who will share what works (and what doesn't work) in their datacentres.


What does Nutanix do?

The firm has developed a hyperconverged solution intended to simplify the creation of enterprise datacentre infrastructures by integrating server and storage resources into a turnkey platform.

As we have said before on the CWDN blog, basically this technology makes building clouds and datacentre resources a whole lot easier.

The industry momentum toward building web-scale datacentres has further validated its vision, says the company.

According to Nutanix, now is the perfect time to bring together the community of passionate builders and shapers of this historic technology shift.

So what of the conference?

Founder and CEO Dheeraj Pandey insists that this event is designed for those of us who believe in melding web-scale engineering and consumer-grade design to build beautiful and scalable datacentres.

(Ed -- that's all of us, isn't it?)

"We promise an experience that will be nothing short of transformational. Prepare to learn, share, and unite in a common vision of building software-defined infrastructure that will be a joy to interact with and administer," said Pandey.

Hands-on labs

For the coders and engineers, the firm has provided hands-on labs where users can learn about Nutanix capabilities they can implement in their own environments.

nuExperience Lab

The company asks... are you passionate about consumer-grade management for enterprise IT? Let us show you the latest from our development labs, and provide feedback directly to our engineers. This is your chance to help shape the user experience for the next generation of datacentre infrastructure.


Image caption: Fontainebleau Miami Beach, slightly upmarket from the local 'Econolodge' we think.

The event website is found here.

Cowboy 'wranglers' & (big) data preparation

bridgwatera | No Comments

So let's get this straight from the start: you enjoy tracking the rise of big data and the analytics that we now impress upon it to derive new insights in everything from retail to the Internet of Things -- but you're not familiar with the term data preparation?

It's a crying shame, but this piece of terminology does not get the kudos it deserves.

Data preparation is sometimes called data pre-processing -- still no clues?

It is the manipulation and transformation of data from its raw state into a form suitable for analysis and processing.

Closely connected to (and often found within) the field of data mining, data preparation persists because its processes CANNOT be completely automated -- hence its very existence.
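To make the definition concrete, here is a toy Python example of the unglamorous trimming, normalising and de-duplicating that data preparation involves. This is our own sketch with invented sample data -- nothing to do with any vendor's product:

```python
import csv
import io

# Raw export with inconsistent whitespace, casing, thousands
# separators and a duplicate row -- typical pre-analysis mess.
RAW = """name,signup,spend
 Alice ,2015-06-01,"1,200"
alice,2015-06-01,"1,200"
BOB,2015-06-03,450
"""

def prepare(raw_csv):
    """Trim whitespace, normalise name casing, parse numbers with
    thousands separators and drop exact duplicates."""
    rows, seen = [], set()
    for row in csv.DictReader(io.StringIO(raw_csv)):
        clean = (row["name"].strip().title(),
                 row["signup"].strip(),
                 float(row["spend"].replace(",", "")))
        if clean not in seen:
            seen.add(clean)
            rows.append(clean)
    return rows

print(prepare(RAW))
# [('Alice', '2015-06-01', 1200.0), ('Bob', '2015-06-03', 450.0)]
```

The self-service pitch is precisely that business analysts can express steps like these through a point-and-click interface, rather than waiting on IT to write the equivalent code.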

"The key ingredient of data preparation platforms is their ability to provide self-service capabilities that allow knowledgeable users, who are not IT experts, to combine, transform and cleanse relevant data prior to analysis," said Philip Howard, research director for data management at Bloor Research.


Howard explains that data preparation is provided by a field of vendors that includes veterans and relatively new start-ups - and that a company called Paxata has attained the 'champion' position today.

Paxata itself offers a purpose-built Adaptive Data Preparation application and platform.

The four kinds of big data tools

1. Tools designed to be used by end users (such as dashboards).
2. Tools for data scientists and developers (such as big data analytics engines).
3. Tools for big data orchestration and management (such as those used by DBAs).
4. Tools for data 'wrangling' (such as data preparation tools).

NOTE: Wrangling here is meant in the cowboy horse-handling sense.

Paxata was developed from the ground up to be an enterprise-class data preparation tool set and is currently being used by over 45 on-premise and cloud customers with stringent data quality and security requirements.

For further clarification:

• Adaptive, self-service data preparation solutions simplify, automate and reduce the manual steps of getting the data into a useable form. This is accomplished without risking loss of control over who uses the data, for what analytics, and how users prepare it for their own consumption.

• Self-service data preparation toolsets enable analysts within the business to collaborate and dynamically govern the data integration, data quality and enrichment processes at scale from their Hadoop-based data lake store.

• Self-service data preparation solutions can also offer a data library, which is a secure environment where business analysts and IT can share data sets with the business, as well as become the one-stop shop for all completed and in-process data prep projects.

What to expect from the Asigra Cloud Backup Partner Summit 2015

bridgwatera | No Comments

If you hadn't guessed it already, June is the last push in the annual tech calendar's conference season before the summer slowdown.

Among those hosting the 'party faithful' during the warmer months is Asigra, a name you will know if you have looked at the 'increasingly preferred' data protection option of cloud backup.


Canadian, eh?

The company itself is headquartered in Toronto and so this year hosts its Asigra Cloud Backup Partner Summit 2015 in the city.

Today Asigra says that many firms have already made the move (or are carefully considering) moving "data protection" to a private, public, or hybrid cloud i.e. don't just think cloud for SaaS and platform...

... but consider cloud resources as your go-to backup route solution -- or, at least, that's the Asigra theory anyway.

Asigra's Cloud Backup architecture combines a cloud-optimised scale-out architecture, a cloud backup and recovery software platform and a cloud API and management system to manage, scale and deliver data protection services.

The firm's cloud backup technology is described as an 'end-to-end' solution with built-in mobility support -- Asigra insists that it meets security standards and aligns the value of data with the costs of protecting it.

Don't look down

There's a welcome reception with local Toronto craft brews at the CN Tower, one of the tallest free-standing structures in the world at 553.33 metres (1,815.4 ft) in height.

The firm anticipates that 200+ cloud backup professionals (members of the Asigra partner ecosystem) will attend to share best practices and experiences, ranging from go-to-market strategies and sales compensation plans to infrastructure best practices and strategies for efficiently onboarding new customers.

The main message here will centre on new revenue-generating business opportunities for Asigra partners -- and what's next for Asigra Cloud Backup.


In total it is a 2½-day event that includes three tracks, 36 sessions, 10 keynotes, one hands-on lab and an Asigra Genius Bar.

Asigra today claims to have more than 1,000,000 installations -- plus we should note that Asigra V13 was released at the beginning of this year with support for Microsoft Office 365, Docker containers, VM replication and more.

According to an official statement, this summit sets the stage for two days of networking with like-minded, innovative IT professionals who are looking for ways to generate more monthly recurring revenue, gain market share and edge out the competition.

Partner commitment, executed openly

"Asigra is so committed to the success of its partners that it offers them an opportunity to network with their peers, and get a first hand glimpse into product enhancements, the direction of the company, the product roadmap and what marketing and sales tools will be available to partners at no cost," said the firm.

Asigra partners are specialists in cloud backup and recovery who work with organisations to manage their entire backup needs, or to partially manage them, depending on how the customer wants to work.

Partners can protect data in the datacentre on physical or virtual servers, enterprise databases and applications; and beyond the datacentre on desktops, laptops, smartphones, tablets, in SaaS-based applications like Microsoft Office 365, Google Apps, Salesforce.com and in IaaS-based platforms like AWS or Microsoft Azure.


CWDN opinion: This looks like an interesting event. It's not a behemoth 5,000-attendee vendor megalodon; it is a more focused, partner-centric event with workshops to analyse real-world implementation issues. What is often most interesting about these kinds of summits is how the CEO presents himself or herself -- i.e. irrespective of company size and industry specialism, we can expect CEO David Farajun to talk in big terms.

Food: expect something out of the ordinary? Canadians love poutine, beaver tails, maple syrup and Tim Hortons... at least that's what we've been told!

There is zero conference registration fee.
