Car ownership - a dying thing?

Clive Longbottom

At a recent BMC event, CEO and Chairman Bob Beauchamp stood on stage and gave a view on how the rise of the autonomous car could result in major changes in many different areas.

The argument went something along these lines: as individuals start to use autonomous cars, they see less value in the vehicle itself.  The "driving experience" disappears, and the vehicle is seen far more as a tool than as a desirable object.  By using autonomous vehicles, congestion can be avoided - through the vehicles adapting to driving conditions, accidents being avoided, areas where non-autonomous vehicles are causing problems being bypassed, and so on. The experience becomes an analogue to software-defined networking (SDN): the car itself is the data plane (it gets from point A to point B), its actions are decided by the control plane (a set of commands deciding what should happen), and those commands are issued under the direction of the management plane (what is the best way to get from point A to point B?).
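
To make the SDN analogy concrete, here is a deliberately toy sketch of the three planes as separate pieces of code; the class names, routes and traffic figures are invented for illustration and bear no relation to any real vehicle or SDN API.

```python
# Illustrative sketch only: the SDN analogy from the paragraph above, with
# hypothetical class, route and traffic names.

class ManagementPlane:
    """Decides the best way to get from A to B (policy and route selection)."""
    def plan_route(self, origin, destination, traffic):
        # Pick the least congested of some candidate routes.
        candidates = {
            "motorway": traffic.get("motorway", 0),
            "ring_road": traffic.get("ring_road", 0),
        }
        return min(candidates, key=candidates.get)

class ControlPlane:
    """Turns the chosen route into concrete driving commands."""
    def commands_for(self, route):
        return [f"join {route}", "hold safe distance", "exit at destination"]

class DataPlane:
    """The vehicle itself: it simply executes the commands it is given."""
    def execute(self, commands):
        for command in commands:
            print(f"vehicle: {command}")

if __name__ == "__main__":
    traffic = {"motorway": 0.8, "ring_road": 0.3}   # 0 = clear, 1 = gridlock
    route = ManagementPlane().plan_route("A", "B", traffic)
    DataPlane().execute(ControlPlane().commands_for(route))
```

The point of the separation is the same as in SDN: the vehicle (data plane) never needs to know why a route was chosen; it just carries out the instructions handed down to it.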

It then becomes apparent that the tool is not used all that much - for long periods of time it sits in the garage, on the drive or at the roadside doing nothing.  It needs to be insured; it needs to be maintained - it becomes a liability, rather than a "must have".

Far better to just rent a vehicle as and when you need it - a "car as a service" approach means that you don't need to maintain the vehicle.  Insurance largely becomes a moot point - you aren't driving the vehicle anyway; it is the multiple computer "brains" that are doing so, watching a full 360 degrees at computer speed, never getting tired, never failing to notice and extrapolate events going on around them.  Insurance is cheaper and only has to cover damage caused by, for example, vandalism and fire: theft is out, as the vehicle is autonomous anyway and can be tied in to a central controller.

Insurance companies struggle; car manufacturers have to move away from marketing based on fast cars driving on deserted roads to selling to large centralised fleet managers who are only interested in overall lifetime cost of ownership.  Houses can change - there is no need for a garage or a drive - and cities can change, with less need for parking spaces.  More living space can be put in the same area - or more properties on the same plot of land. Autonomous driving means less time spent commuting, less frustration and less fuel being used up in stop-start traffic.

When Bob first said this, my immediate response was "it will never happen".  I like my car; I like the sense of personal ownership and the driving experience that I get - on an open road.

However, I then took more of an outside view of it.  Already, I have friends in large cities such as London who do not own a car.  They use public transport for a lot of their day-to-day needs, and where they need a vehicle, they hire one for a short period of time.  Whereas this may have been on a daily basis via Hertz or Avis in the past, newer companies such as City Car Club allow you to rent a vehicle by the hour, picking it up from a designated parking bay close to you and dropping it off in the same way wherever you want.  The rise of Uber as a callable taxi company also shows how more people want the ease of using a car without owning the vehicle themselves.  These friends have no requirement for a flashy car badge or for the capability to get in "their" car and drive it at any time - in fact, the majority do not like driving at all, and would jump at the chance of using an autonomous vehicle, removing this last issue for them.

As tech companies like Google rapidly improve their autonomous vehicles, manufacturers such as Mercedes-Benz, Ford and GM are having to respond.  Already, more than fifty 500-tonne Caterpillar and Komatsu trucks are being used in Australia to move mining material, running truly autonomously in convoys across private roads in the outback, allowing 24x7 operations with fewer safety issues.

Just as the car manufacturers are coming out of a very bad period, they now stand a chance of being hit by new players in the market.  Elon Musk, of Tesla electric car fame, is a strong proponent of autonomous vehicles.  Amazon would like to take on Google, and it is likely that other high-tech companies will look to the Far East for help in building simple vehicles that can be used in urban situations via a central subscription model.

Sure, such a move to a predominantly autonomous vehicle model will take some time.  There will be dinosaurs such as myself who will fight to maintain ownership of a car that has to be manually driven.  There will be the need to show that the vehicle is truly autonomous; that it does not require continuous connectivity to a network to maintain a safe environment.  More companies such as City Car Club will need to emerge, and suitable long-term business and technology models put in place to manage large car fleets and get them to customers rapidly and effectively without the need for massive acreage of space to store cars not being used.  Superfast recharging systems need to become more commonplace; these vehicles need to be able to recharge in minutes rather than hours, or to use replaceable battery packs.

Certainly, moving to the use of autonomous electric vehicles whose overall utilisation rates can be pushed above 60% would result in far less congestion in city centres, and so in less pollution, less impact on citizens' health and less time wasted in the morning and evening rush hours. Indeed, Helsinki has set itself a target of zero private car ownership by 2025.

At the current rate of innovation and improvement in autonomous vehicles, it is becoming more a question of "when" than "if" we will see a major change in car ownership.  The impact on existing companies involved in the car industry should not be underestimated.  The need for improved technology, and for technology vendors to work together to ensure that an autonomous future can and will happen, is showing signs of being met.

The problem of buggy software components

Bob Tarzey

What do Heartbleed, Shellshock and Poodle all have in common? Well, apart from being software vulnerabilities discovered in 2014, they were all found in pre-built software components, used by developers to speed up the development of their own bespoke programs. Heartbleed was in OpenSSL (an open source toolkit for implementing secure access to web sites), Shellshock was in the UNIX Bash shell (which enables the running of UNIX operating system commands from programs), whilst Poodle was another SSL vulnerability.

 

Also common to all three is that they were given fancy names and well publicised. This is not a bad thing; it gives the press something to hang its hat on and gets the message out to software developers that a bug needs fixing. The time lag between zero day, when a vulnerability is first identified, and the bug being patched is the window of opportunity for hackers to exploit it. With Heartbleed in particular, there was also advice for the general public, to change their passwords for certain web sites that used the vulnerable version of OpenSSL.

 

However, these widely publicised bugs are just the tip of the iceberg, as data from HP's Security Research (HPSR) team reveals. HPSR uncovers software security flaws on behalf of its customers and the broader community. Unlike the discoverers of Heartbleed, Shellshock and Poodle, HPSR does not seek publicity for all the flaws it hunts down via its Zero Day Initiative (ZDI) programme; not least because there are so many of them.

 

HPSR has a number of ways of seeking vulnerabilities out. Some it simply buys from white hat hackers (those who look for ways to hack software code, but not to exploit the flaws they find). It also sponsors an annual competition to find flaws called Pwn2Own; the 2014 event uncovered 33 in software from Adobe, Apple, Google, Microsoft and Mozilla. On top of this, HPSR does its own research. In total, ZDI uncovered over 500 bugs in 2014, two thirds of which have been patched; HPSR estimates that 50-75% of these were in software components. HPSR claims ZDI is the number one finder of bugs in deployed versions of Microsoft software.

 

As an HPSR rep points out, 'these days most software is composed, not written', meaning that software is largely built from pre-constructed components. In fact, not using components would be highly inefficient, as it would mean constantly re-inventing the wheel, especially when many components are cheap or free via open source. However, the number of bugs in software components means that users need more effective ways to monitor their use and fix problems that arise. This is especially true of open source components, as anyone can contribute to them. HPSR contends that commercial software vendors could strengthen the open source movement by investing more resources to ensure open source components are well-tested and secure.
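
A minimal sketch of the kind of component monitoring this argues for is shown below: compare an application's declared components against a list of known-vulnerable versions. The manifest format and the advisory lookup here are hypothetical, not any vendor's actual tooling; the two example advisories are the ones named at the top of this piece.

```python
# Hypothetical component audit: flag declared components whose versions appear
# in a (hand-maintained, illustrative) list of known-vulnerable releases.

KNOWN_VULNERABLE = {
    # (component, version) -> advisory name
    ("openssl", "1.0.1f"): "Heartbleed (CVE-2014-0160)",
    ("bash", "4.2"): "Shellshock (CVE-2014-6271)",
}

def audit_components(manifest):
    """Return (component, version, advisory) tuples for flagged entries."""
    findings = []
    for name, version in manifest.items():
        advisory = KNOWN_VULNERABLE.get((name.lower(), version))
        if advisory:
            findings.append((name, version, advisory))
    return findings

if __name__ == "__main__":
    app_manifest = {"openssl": "1.0.1f", "zlib": "1.2.8", "bash": "4.2"}
    for name, version, advisory in audit_components(app_manifest):
        print(f"{name} {version}: affected by {advisory}")
```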

 

Of course, the broader HP has an interest in all this for two reasons. First, as a builder and supplier of software, HP is a big user of components. Second, it also helps its customers build and deploy safer software through its Fortify product range. In February 2014 HP announced its Fortify Open Review Project to identify and report on security vulnerabilities in widely used open-source software components. HP also announced improved component checking support for its on-demand scanning service by partnering with Sonatype to use its Component Lifecycle Management analysis technology.

 

HP is not alone in recognising the need for safer component use. Veracode, another software security vendor, estimates that components constitute up to 90% of the code in some in-house developed applications. In September 2014 Veracode added a 'software composition analysis' into its static software scanning service to protect customers more rapidly from zero day vulnerabilities discovered in components.

With the introduction of software composition analysis Veracode can now create an inventory of all the components used by a given customer, detailing the programs in which each is embedded. When a new vulnerability is identified in a component, Veracode can take rapid and pervasive action; either applying fixes immediately or isolating already deployed applications until patches are available.
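
The inventory idea can be illustrated with a few lines of code - this is a sketch of the general concept described above, not Veracode's actual implementation: map each component to the applications that embed it, so a newly disclosed vulnerability can be traced to every affected program at once. The application and component names are invented.

```python
# Sketch of a component inventory: component -> set of applications embedding it.

from collections import defaultdict

def build_inventory(apps):
    """apps: {app_name: [component names]} -> {component: {app names}}"""
    inventory = defaultdict(set)
    for app, components in apps.items():
        for component in components:
            inventory[component].add(app)
    return inventory

def affected_apps(inventory, vulnerable_component):
    """Which deployed applications embed the newly flagged component?"""
    return sorted(inventory.get(vulnerable_component, set()))

if __name__ == "__main__":
    apps = {
        "billing-portal": ["openssl", "struts"],
        "hr-intranet": ["openssl", "jquery"],
        "reporting-api": ["jackson"],
    }
    inventory = build_inventory(apps)
    print(affected_apps(inventory, "openssl"))  # ['billing-portal', 'hr-intranet']
```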

 

This further enhances its ability to protect customers from newly discovered vulnerabilities. Its dynamic scanning service, which tests deployed executables, would pick many of these up too. However, dynamic scanning focusses on common paths through applications and may miss obscure parts that are rarely or never used - exactly the areas a hacker may target once a vulnerability becomes public knowledge.

 

As Veracode points out, most IT departments are managing software code that was largely not built in-house. The only control security teams have over such software is to maintain effective scanning capabilities, with an awareness of components, to help understand inherited risk. Software components are not going to disappear; their value to business is too great. Security teams need to learn how to live with them.

Google Glass - seeing is believing

Bob Tarzey

I must admit to being sceptical about the whole 'wearables' thing. However, I was intrigued at a recent Google event to be given an opportunity to try out a pair of Google Glass glasses. Glasses have been part of my life for as long as I can remember, and herein lay a problem. Google Glass assumes reasonable distance vision, so if you already wear glasses to correct for this, then the only way to try out Google's device proved to be wearing it on top of your normal specs. Still, it was only a demo, so style could be set aside!

The Google Glass equivalent of a screen is a translucent rectangle hanging in the upper right of your vision (think of walking down a street and reading a hanging pub sign). You might not want to read a book or watch a movie using such a display, but it was obvious it would be great for following directions or displaying information about museum exhibits or landscapes.

Apparently you can control the Google Glass menu by jolting your head; however, I did not master this. It conjures a future of people walking along the street making involuntary head movements (I suppose we have got used to the idea that people who are seemingly talking to themselves are no longer all mad, but usually using a Bluetooth mobile phone mic). You can also control Google Glass by swiping the arm of the glasses with your finger, or by speaking certain prefaced voice commands.

So, if you have perfect 20/20 vision and are prepared to enter the bespectacled world to take advantage of Google Glass, what style choice do you have? You can choose from five different frames from the fashion retailer Net-a-Porter, which is not quite the range you might find in the local opticians, but it's a start. And, if you need your distance vision correcting, you can have prescription lenses fitted (the lenses are nothing to do with the device; indeed, you can wear the device lens-free with just the frame).

In fact, as the Google rep demoing the device pointed out, Google Glass is little more than a face-mounted smartphone. So, when it comes to IT security, the considerations are pretty much the same as for any personal device. Data can be stored and the internet accessed on Google Glass and therefore, in certain circumstances, its use may need to be controlled. You could argue that taking pictures or making videos would be more surreptitious with Google Glass than with a standard smartphone; however, stylish as Google has tried to make its specs, it would still be pretty obvious you were wearing them, unless efforts had been made to conceal them with a hat or veil.

Privacy objections seem more likely. Google Glass, and the similar devices that will surely follow if the form factor takes off, may revolutionise certain job roles. Employees working in warehouses or hospitals, or inspecting infrastructure in the field, may really benefit from being able to see and record their activity whilst having both hands free. However, an employer with constant insight into what an employee is doing and seeing may be too much for some regulators. Time will tell.

Cloud & mobile security - take aim, save the data

Rob Bamforth

In all the hubbub around mobile users increasingly making their own choices of operating systems and hardware, something has been lost sight of: it doesn't really matter if you bring your own device (BYOD); a more pressing matter for businesses should be 'where is our data accessed?' (WODA).


This issue extends beyond the choice of the mobile endpoint as increasingly 'mobile' doesn't simply mean a single mobile touchscreen tablet alternative to a fixed desktop PC, but multiple points or modes of access with users flitting between them to use whichever is most appropriate (or to hand) at any moment in time. What has become mobile is the point of access to the business process, not just the hardware.


This multiplicity of points of mobile access - some corporate owned, some not - means that when IT services are required on the move they are often best delivered 'as a service' from the network, so it is no wonder that the growth in acceptance of cloud seems to have symbiotically mirrored the growth of mobile.


Both pose a similar challenge to the embattled IT manager. A significant element of control has been taken away - essentially the steady operating platform 'rug' has been pulled from under their feet.


So how do they retain some balance and control?


The first thing is to accept that things have changed. BYOD is more than a short-lived fad; most people have embraced their inner nerd and now have an opinion about what technology they like to use, and what they don't like. They buy it and use it as a fundamental part of their personal life from making social connections to paying utility bills. Most people are more productive if comfortable with familiar technology, so why force them to use something else?


However, enterprise data needs to be under enterprise control. Concerns about data are generally much higher than those surrounding applications and the devices themselves. This is a sensible, if accidental, prioritisation of how to deal with BYOD - focus on corporate data first. Unfortunately, few organisations have either a full document classification system or an approach that stores mobile data in encrypted containers, separated from the rest of the data and apps that reside on BYO devices.
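
As a rough illustration of the 'encrypted container' idea - corporate documents held under a key the enterprise controls, kept apart from personal data on the device - here is a minimal sketch. It assumes the third-party Python 'cryptography' package; the class, file names and content are invented, and a real mobile container product would do far more (policy, wipe, key escrow and so on).

```python
# Toy corporate container: corporate files are stored encrypted under an
# enterprise-held key; personal data on the device is untouched.

from cryptography.fernet import Fernet

class CorporateContainer:
    def __init__(self, enterprise_key: bytes):
        self._cipher = Fernet(enterprise_key)
        self._store = {}  # filename -> ciphertext

    def put(self, name: str, content: bytes):
        self._store[name] = self._cipher.encrypt(content)

    def get(self, name: str) -> bytes:
        return self._cipher.decrypt(self._store[name])

if __name__ == "__main__":
    key = Fernet.generate_key()          # in practice, issued and revocable by IT
    container = CorporateContainer(key)
    container.put("forecast.xlsx", b"Q3 revenue forecast ...")
    print(container.get("forecast.xlsx"))
```

If IT revokes the key, the corporate data becomes unreadable without touching anything personal on the device - which is the point of separating the container in the first place.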


Both are worthy goals, if rarely reached at present, but at least the first steps have been taken in recognising the problem. Organisations now need to understand their data a little better, and apply measured control of valuable data in the BYOD world - which doesn't look like diminishing any time soon.


In the core infrastructure, things have changed significantly too. Service provision has evolved from the convergence (or, one could say, collision) of the IT industry with telecoms to deliver services on demand. IT might have been fragile with regard to interoperability and resilience standards, but some of the positive side of telecoms has spilled over. And telecoms providers are finally starting to understand the power of supporting a portfolio of applications, and that there is more to communications than voice. Cloud, or the delivery of elements of IT as a service, is the active offspring of the coupling of IT and telecoms.


For businesses, struggling to do more IT with smaller budgets and fewer resources, the incremental outsourcing of some IT demands into the cloud makes sense.


However, cloud is still exhibiting some traits of the rebellious teenager. While there are some regions in Europe that appear more resistant to cloud (notably, Italy, Spain and to a lesser extent France), overall acceptance is positive, although this is across a mix of hybrid, private and public cloud approaches. There are also significant concerns about the location of data centres and the location of registration or ownership of cloud storage companies.


These are understandable in the light of recent revelations, but to enforce heavy security on all data 'just in case' would be excessive and counterproductive. Thankfully, most companies seem to realise this, and there is a pragmatic mix of opinions as to how to best store and secure data held in the cloud.


This needs to be an informed decision, however, and just as with mobile, all organisations need to take a more forensic approach to their digital assets. IT needs to work hand in hand with the business to identify the assets and data that are most precious, assess their vulnerability and apply appropriate controls, differentiating them from other things that are neither valuable nor private as far as the organisation is concerned. The days of blanket approaches to data security are over.
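
A sketch of what such a differentiated approach might look like in practice is shown below: classify each data asset, then apply controls proportionate to its value. The classification labels and control names here are invented purely for illustration.

```python
# Illustrative classification-driven controls: the more valuable or private the
# asset, the heavier the controls applied to it.

CONTROLS = {
    "public":       [],
    "internal":     ["access logging"],
    "confidential": ["access logging", "encryption at rest"],
    "restricted":   ["access logging", "encryption at rest", "MFA", "DLP monitoring"],
}

def controls_for(asset):
    """Return the control set for an asset based on its classification."""
    return CONTROLS.get(asset.get("classification", "internal"), CONTROLS["internal"])

if __name__ == "__main__":
    assets = [
        {"name": "marketing brochure", "classification": "public"},
        {"name": "customer card data", "classification": "restricted"},
    ]
    for asset in assets:
        print(asset["name"], "->", controls_for(asset) or ["no special controls"])
```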


For more information and recent research into cloud and mobile security, download this free Quocirca report, "Neither here nor there".


BMC - turnaround or more of the same?

Clive Longbottom

A little over a year ago, BMC was not looking good.  It had a portfolio of good but tired technology and was failing to move with the times.  Internal problems at various levels in the company were leading to high levels of employee churn.  The outlook was poor.

Led by CEO Bob Beauchamp, BMC was taken off the stock market and into private ownership. Investors were chosen based on their long-term vision: what Beauchamp did not want was an approach of driving revenues and then cashing in rapidly.

This has freed up BMC to take a new marketing approach.  New hires have been brought in.  The portfolio is being rationalised.  The focus is now on the user experience, with an understanding that mobility, hybrid private/public cloud systems and the business user are all important links in the new sales process. Substantially more money has been freed up to be invested in sales & marketing and research & development than was the case in its last year as a public company.

BMC's first new offering aimed at showing an understanding of these issues was MyIT - an end-user self-service system that provides consumer-style front-end systems with enterprise-grade back-end capabilities.  MyIT has proved popular - and has galvanised BMC to take a similar approach across the rest of its product portfolio.

The help desk (or service desk, as BMC prefers to call it) has been a mainstay of BMC over the years.  Its enterprise Remedy offering is the tool of choice in the Global 2000, but it was looking increasingly old-style in its over-dependence on screens of text; it was far too process-bound; and help desk agents and end users alike were beginning to question its overall efficacy in the light of new SaaS-based competition such as ServiceNow.  At its recent BMC Engage event in Orlando, BMC launched Remedy with Smart IT, a far more modern approach to service desk operation. Enabling better reach at the front end through mobile devices and better integration at the back end through to hybrid cloud services, Remedy with Smart IT offers a far more intuitive and usable experience than was previously available from BMC, and is available as both an on-premise and a cloud-based offering.

BMC believes that it already has a strong counter-offer to ServiceNow in the mid-maturity market with its Remedyforce product (a service desk offering that runs on Salesforce's Salesforce1 cloud platform). The cloud-based version of Remedy with Smart IT, combined with MyIT, will provide a much more complete offering with a better experience for users, service desk staff and IT alike across the total service desk market.

Workload automation is another major area for BMC.  Its Control-M suite of products has enabled automation of batch and other workloads from the mainframe through to distributed systems.  However, this has been a set of highly technical products requiring IT staff with technical and scripting skills.  Now, the aim is to enable greater usage by end users themselves, enabling business value to be more easily created.

All this is a journey for BMC - identifying and dealing with the needs of end users, and how automation can help, is something that changes with the underlying platform.  For example, a hybrid platform requires more intelligence to identify where a workload should reside at any time (for example, on private or public cloud), and the promise of cloud in breaking down monolithic applications into composite applications built dynamically from required functions needs contextual knowledge of how the various functions can work together.

This needs deep integration with BMC's products in its performance and availability group.  Being able to identify where problems are and dig down rapidly to root cause and remediate issues requires systems that can work with the service desk systems and with workload automation to ensure that business continuity is well managed.  Here BMC's TrueSight Operations Management provides probable cause analysis based on advanced pattern matching and analytics, enabling far more proactive approaches to be taken to running an IT environment.

TrueSight also offers further value in that it is moving from being an IT tool to a business one.  By tying the analytics capabilities of TrueSight into business processes and issues, dashboards can be created that show the direct business impact in cash terms of any existing or future problems, enabling the business to prioritise which issues should be focused on.

BMC has to work to manage IT platforms both vertically at the stack level and horizontally at the hybrid cloud level.  It has taken a little time for BMC to move effectively from being a physical IT management systems vendor to a hybrid physical/virtual one; now, via its Cloud and Data Centre Automation team, BMC is positioning itself to provide systems to both end-user and service provider organisations that are independent of any tie-in to hardware vendors, differentiating itself from the likes of IBM, HP and Dell (Dell is a long-term BMC partner anyway, although its acquisition of Quest and other management vendors has provided Dell with enough capability to go its own way should it so choose). At the same time, BMC still works closely with its data centre automation customers; it has recently published what it calls the Automation Passport, a best-practices methodology for using automation to transform the business value of IT.

BMC still has a strong mainframe capability, which differentiates it from many of the new SaaS-based players.  Sure, not all organisations have a mainframe, but the capability to manage the mainframe as a peer system within the overall IT platform means that those with one have only BMC, CA and IBM to look to for such an all-embracing management system.  IBM's strength is in its high-touch capability of putting together a system once it is on the customer's site.  BMC and CA have both been moving towards simpler messaging and portfolios, along with providing on-premise and cloud-based systems to give customers greater flexibility in how they deal with their IT platforms.

Overall, BMC seems to be turning itself around.  The lack of financially driven quarterly targets has freed up Beauchamp and his team to take a far more strategic view of where the company needs to go.  Product sales volumes are up, and customer satisfaction is solid. However, BMC has to continue along this new journey at a suitable speed - and also has to ensure that it gets its message out there far more forcefully than it is doing at the moment.


Quocirca - security vendor to watch - Pwnie Express

Bob Tarzey

Branches are where the rubber still hits the road for many organisations; where retailers still do most of their selling, where much banking is still carried out and where health care is often dispensed. However, for IT managers, branches are outliers, where rogue activity is hard to curb; this means branches can become security and compliance black spots.

 

Branch employees may see fit to make their lives easier by informally adding to the local IT infrastructure, for example installing wireless access points purchased from the computer store next door. Whilst such activity could also happen at HQ, controls are likely to be more rigorous. What is needed is an ability to extend such controls to branches, monitoring network activity, scanning for security issues and detecting non-compliant activity before it has an impact.

 

A proposition from Boston, USA-based vendor Pwnie Express should improve branch network and security visibility. Founded in 2010, Pwnie Express has so far received $5.1 million in Series-A venture capital financing from Fairhaven Capital and the Vermont Seed Capital Fund. The name is a play on both the Pony Express, the 19th-century US mail service, and the Pwnie Awards, a competition run each year at the Black Hat conference to recognise the best discoverers of exploitable software bugs.

 

Pwnie Express's core offering is to monitor IT activity in branches through the installation of plug-and-play in-branch network sensor hardware. These enable branch-level vulnerability management, asset discovery and penetration testing. As such the sensors can also scan for wireless access points, which may have been installed by branch employees for convenience or even by a malicious outsider, and monitor the use of employee/visitor-owned personal devices.

 

To date, Pwnie monitoring has been on a one-to-one basis and so hard to scale. That has changed with the release of a new software-as-a-service (SaaS) based management platform called Pwn Pulse. This increases the number of locations that can be covered from a single console, allowing HQ-based IT management teams to extend full security testing to branches. Pwn Pulse also improves back-end integration with other security management tools and security information and event management (SIEM) systems, improving an organisation's overall understanding of its IT security and compliance issues.

 

Currently 25 percent of Pwnie Express's sales are via an expanding European reseller network, mainly in the UK. With data protection laws only likely to tighten in Europe in the coming years, Pwnie Express should provide visibility into the remote locations other security tools simply cannot reach.

Do I Concur with the SAP deal?

Clive Longbottom

SAP's recent $8.3bn deal to acquire online travel and expense management vendor Concur can be read a few ways.  The first, and most positive one, is that it shows that SAP is continuing to try to broaden its appeal, diversifying from being "the ERP company".

Another view is that SAP has had a few bites at the cloud cherry and mostly failed.  Concur brings a massive cloud infrastructure with it, and SAP can make use of this in other ways.

A third, less charitable view is just that SAP has a large amount of money that it needs to be seen to do something with - and Concur was around at the right time and place.

Which one is most likely?  I would plump for diversification, with a bit of cloud thrown in.  SAP acquired SaaS-based human capital management vendor, SuccessFactors, in 2012.  It can be argued that Concur fits quite nicely into this vein - both are SaaS; both deal with managing employees.  This takes SAP from being the ERP solution for a few to a provider of functions for everyone; becoming a far stickier and embedded supplier that is even harder for an organisation to extricate itself from.

However, such a simplistic view hides many problems that could now face SAP as it integrates Concur.  Travel and expense management is a complex area that only a few software vendors have managed to deal with.  It is not a simple replacement for employees using Excel spreadsheets to log their expenses - it requires deep domain expertise in areas such as multi-national tax laws, per diem rules, how travel management companies (TMCs) operate, how to interact with financial institutions on a broad scale to manage company and personal credit cards in a secure and effective manner, and so on. Concur understands this in spades - but what impact will SAP have on this?

Sure, SAP understands the first part of this: ERP has had to deal with multi-national currencies and tax laws for some time.  The rest, though, is new territory for SAP.

Not only are the basics of expense management a difficult area, but Concur has been pushing the boundaries of what it does.  In the US, for example, it has deals for integrated taxi cab expense management, where employees use their mobile phones to identify a nearby cab and hail it electronically, then pay the cab driver via the phone, with the expense directly integrated into their expense claims.  Other ongoing work has been looking at how travellers can have their whole trip automated from booking through travel and stay, with capabilities such as the use of near field communication (NFC) as a means of checking into hotels without a need to go to the check-in desk, and for mobile phones to act as electronic keys to unlock the hotel room door.  Such work requires a certain mindset and understanding of the travel and entertainment expense world - and the investment of large amounts of money.

Also, with Concur's 2011 acquisition of travel details management vendor TripIt, SAP finds itself with a more consumer-oriented product: taking it well out of its comfort zone.

It leaves SAP with a couple of choices - the first is to pretty much leave Concur as a separate entity, trying to keep all its existing staff and domain expertise to continue focusing on what Concur has been calling "the perfect trip" experience.  SAP can provide Concur with the deeper pockets to continue work in achieving the perfect trip - but is SAP up to understanding this and achieving any pay back on such investment?

Customers now find themselves with the unfortunate impact of moving from dealing with a small but fleet-of-foot and interesting supplier to a rather staid and enterprise-focused behemoth.  I believe that this will raise flags for many customers: those who have been dealing with Concur in the past (travel and expense management professionals) are unlikely to be the ones in a company who have been dealing with SAP, and many companies will have ruled out SAP for other functions such as ERP and CRM and gone for others, such as Oracle or Microsoft. Dealing with SAP may then be seen as the thin end of the wedge, with rapacious SAP salespeople trying to usurp the incumbent ERP and CRM vendors.

As with most acquisitions, the SAP/Concur deal will raise worries in many existing customers' minds, and will open up opportunities for Concur's competitors.  As stated earlier, the market is not exactly flush with companies that understand travel and expense management well and have software that addresses all requirements.  For companies such as KDS and Infor, the SAP/Concur deal must be seen as opening up opportunities.

For Concur's existing customers, I would advise caution.  The two companies' views of the world are not the same - watch to see how SAP manages the acquisition; watch how many staff start to move on from Concur to join its competitors.  If it becomes apparent that SAP is trying to force Concur into the SAP mould, maybe it will be time to look elsewhere.

Ricoh's plans for transformation

Louella Fernandes

Ricoh recently held its first industry analyst summit in Tokyo. The event centred on communicating Ricoh's focus on its services-led business transformation through its 18th Mid-Term Plan.

Ricoh is in the midst of transformation, actively streamlining its company structure to accelerate growth across a number of markets. Like many traditional print hardware companies, it is shifting its focus to services. Its primary focus is on what it calls "workstyle innovation". Over the past few years, Ricoh has repositioned itself as a services-led organisation - and has greatly enhanced its marketing communications and web presence to shift perceptions towards Ricoh being a company that can support a business's transformation in today's evolving and mobile workplace. Ricoh's services target is to gain 30% growth in revenue globally in three years. It plans to achieve this by enhancing its core business as well as expanding its presence in new markets.

Core business enhancement

Ricoh's core business revolves around office printing, where it has carved out a strong strategy around managed document services (MDS). This established approach has enabled enterprises to tackle the escalating costs associated with an unmanaged print infrastructure. Ricoh has extended this model to encompass all document-centric processes and is effectively increasing its presence in the market on a global basis. In Quocirca's recent review of the MPS landscape, it is positioned as a global market leader - testament to its global scale, unified service and delivery infrastructure and effective approach to business process automation.

Service expansion

Ricoh's 18th mid-term plan relates to five key business areas. Its primary business, the office business market, encompasses both hardware technology and services such as MDS, business process services (BPS), IT services and Visual Communication.  Ricoh also operates in the consumer market (as seen in its new THETA 360 camera, a range of projectors and an electronic white board product); the industrial business market (optic devices, thermal media and inkjet heads), commercial printing (production printers) and new business, which includes additive manufacturing.  Ricoh plans a full-scale entry into commercial printing and intends to expand its growth in the industrial market by 50% in the next three years.

Ricoh announced eight new service lines:

  • Managed Document Services - leveraging Ricoh's 5 step adaptive model to help organisations optimise document-centric processes.
  • Production Printing Services - portfolio of integrated services to complement Ricoh's hardware and solution portfolio for in-house corporate printing or graphic arts and commercial Printing.
  • Business Process Services - streamlining business processes such as human resources, finance and accounting, and front office outsourcing services such as contact center services.
  • Application Services - Integration of applications such as insurance claims processing services
  • Sustainability Management Services - Services to reduce environmental impact such as electricity and paper for Ricoh and non-Ricoh devices.
  • Communication Services - Development, deployment and integration of unified communication solutions including communication/collaboration solutions (such as Video Conferencing, Interactive White Board, Digital Signage, Virtual Help Desk)
  • Workplace Services - Services to maximise efficiency of workplace and effectiveness of workforce, including optimised use of space, smart use of technology and automation of certain office functions.
  • IT Infrastructure Services - Consulting, designing, supplying and implementation of IT infrastructure as well as support and management of full IT Infrastructure by remote and on-site support.

Perhaps the greatest focus was given to Ricoh's IT services portfolio, which varies by region. Ricoh has made a number of IT services acquisitions across several regions and is seeing strong success in Asia Pacific, Europe and the US. In the US, the acquisition of MindSHIFT is enabling Ricoh to target small and medium-sized businesses. If Ricoh can articulate a strong proposition around IT services, this could be a key differentiator against its traditional competitors over the coming year. However, Ricoh is now operating in a wider IT services market, and perhaps its penetration will be limited to its existing customer base looking to extend existing MDS engagements to the IT infrastructure.

Innovation

Ricoh is working on a range of technologies around what it calls "the infinite network" (TIN), where all people and things will be connected all the time. This is Ricoh's view of the internet of things (IoT), and also embraces Ricoh's vision of the need to connect to a rapidly increasing set of sensors in the environment.

Ricoh R&D discussed a range of differentiated technology platforms which aim to address multiple markets, enabling the business units and operating companies to go to market with highly differentiated solutions for the office and for specific large verticals. This includes communication and collaboration, visual search and recognition, digital signage and hetero-integration photonics (optics and image processing).

Perhaps the most relevant to the print industry is its mobile visual search technology which provides an interactive dimension to the printed page. A simple snap of an image can provide access to digital content such as text, video, purchase options and social networks.  Ricoh has commercialised this through its Clickable Paper product. Based on digital layers, this enables consumers to hover their mobile phone over a magazine advert, for example, and it could generate video or a link to a web site. Ricoh demonstrated an example used by Mazda, which is using the technology in its brochures.

This technology promises to breathe new life into print by connecting it to the digital world.  The market is rapidly evolving, and Ricoh is competing with a range of interactive print/augmented reality vendors in this space. The only other printer vendor to offer something similar is HP, with its Aurasma technology, which has been available for a number of years.

Quocirca opinion

Ricoh, like its traditional print competitors, needs to drive a dramatic shift to a services business model - its long-term relevance depends on this. While Ricoh has developed a cohesive set of new service offerings, it also already has a relatively mature set of business process services across areas such as e-invoicing, healthcare, loan applications and so on.  Quocirca believes that these should be a priority as Ricoh takes its services strategy forward.

Indeed, Ricoh has already made strong inroads with its MDS strategy. To drive deeper engagements with larger enterprises, it needs to further articulate a strong vision around business process automation. Ricoh faces strong competition from Lexmark and Xerox in this space.

Ricoh illustrated that it is innovating across a number of markets and this shows commitment to expanding its presence in non-core markets. Overall, Ricoh is taking the right direction to change perceptions of its brand and develop broader services capabilities.  Ricoh certainly has a broad array of services, but it is now competing in many new markets and should focus on building its credibility in a few core areas and partnering with best of breed providers in others.

Some of the less conventional products, such as Clickable Paper, need to be positioned carefully. Ricoh will need either to ensure that it moves with improvements in the technology and with the increasing use of wearable technology, or to recognise when such ephemeral approaches have run their course and pull out of providing any offerings in the space.

IBM - A new behemoth, or a wounded beast?

Clive Longbottom
IBM recently held its first major event for industry analysts since its announcement of the divestment of its x86 server product line to Lenovo.  At the event, Tom Rosamilia (SVP, IBM Systems and Technology Group) and Steve Mills (SVP, Software and Systems) and their teams provided an upbeat view of where IBM currently is and where it is going.

The discussion unsurprisingly centred around IBM's own Power 8 microprocessor technology, with forays into the mainframe and the need for cloud, analytics and mobile-first viewpoints.  Storage was another area of discussion - with IBM having acquired Texas Memory Systems (TMS) back in 2012, flash storage is pretty solidly on the roadmap.

Power 8 was presented as a major engine for Linux workloads - which it certainly is.  Speeds and feeds were bandied around to show how Power 8 wipes the floor with competitors' x86-based offerings and how the overall cost of ownership is considerably lower.  For service providers, this is fine: they are not particularly bothered about the underlying technology, provided that it does what is required at a suitably low cost.  With many platform-as-a-service providers moving towards a Linux focus, Power 8 systems make a great deal of sense.  Indeed, when mixed with the open standards OpenStack cloud platform, the offer becomes even more compelling.

Here lies another issue.  IBM now has its own network of datacentres around the world since the acquisition of SoftLayer, and is building more.  With SoftLayer, IBM has a high-value cloud IaaS platform that can support OpenStack as a PaaS offering, surpassing the capabilities of plain OpenStack, and can then top this with an increasing library of SaaS offerings.  This will undoubtedly put IBM into competition with some of the very service provider prospects it wants to sell Power 8 systems to - while pulling others (particularly in the old systems integrator camp) closer to it by enabling them to avoid the need to build their own platforms, providing them with a consistent and relatively simple stack instead.

However, from a software point of view, IBM has a cloud-first mentality: any new software coming from IBM must be capable of running on its own and partner cloud systems.  Combined with a concomitant mobile focus, IBM is making a play to provide systems that can be accessed by any device through its own and its partners' clouds.  This should increase the amount of enterprise software available to anyone choosing to work with IBM and its partners.

As well as its SoftLayer offering, IBM is also providing a cloud-based version of its Jeopardy-winning system, Watson.  Watson offers fast and effective probability-based outcomes to those dealing with mixed data, and is already showing great promise in healthcare.  Watson as a Service should accelerate such capabilities in the market.

As well as the Power 8 message, IBM is pushing the mainframe as a Linux engine.  Sales of the mainframe continue to be strong, and the majority of sales now include Linux capabilities.  Although the mainframe is not an engine for everyone, it shows no sign of fading away, and will remain a core part of IBM's future.

On the storage side, IBM has released an advanced connection technology it calls the Coherent Accelerator Processor Interface (CAPI).  Within the storage environment, CAPI can be used to make a flash-based storage array work as "slow memory", rather than "fast disk".  In the search for the fastest possible manner of dealing with data from persistent storage systems, companies such as Fusion-io (since acquired by SanDisk) and, indeed, IBM itself brought in PCIe-based server-side storage.  However, this then needs additional systems, such as those provided by PernixData, to ensure that such dedicated storage does not become a point of failure in the overall system.  By intelligently bypassing large parts of a storage array's existing controller, CAPI can make a SAN array blazingly fast - and CAPI in conjunction with Power 8 systems is looking like being a major differentiator in dealing with big data (particularly in speeding up Hadoop clusters) as well as in high-performance computing (HPC) systems.

Overall, then, a positive report on the "new", less-x86 IBM?  I suppose so - with one major caveat.  When the announcement of the divestment of x86 systems to Lenovo was made, I assumed that this was for the obvious reason that IBM could not manufacture the systems at the same low cost base that Lenovo could.  Indeed, in discussions with Adalio Sanchez, who will transition from General Manager of System x at IBM to head up Lenovo's revamped server organisation, he said he expects to be able to drive considerable cost out through Lenovo's different approach and greater economy of scale in commodity components.  I did expect, however, that there would remain a strong strategic relationship between the two companies.

Although IBM will be a reseller for Lenovo servers and provide ongoing support for them through its Global Business Services (GBS) arm, it will not have any say in the design of the systems.  Therefore, what was looking like a powerful possible capability of mixing Power 8 and x86 technologies through IBM's consolidated and converged PureFlex systems will not happen.  Sure - if you want x86, IBM can source it via Lenovo, but a fully integrated, engineered converged system will not be there. One rider to this is that the PureData and some PureApp configurations will still include x86 chips - still provided and integrated directly by IBM.  However, these are far more "black box" designs - the user will not really know what chips are in there, and IBM can tune them any way it sees fit, as long as the end result works.

In the commercial end-user company space, this presents a problem - the prospect may have a lot of Linux workloads, for which Power 8 may be appealing. However, it is also likely that there will be a large number of Windows-based workloads as well.  Power 8 cannot deal with these, and so an x86 platform will be required.  With Dell, HP and others having engineered converged systems that can run Linux and Windows workloads, where is it more likely that commercial customers will place their money?

IBM has proven itself to be a fighter and pulled itself back from the brink of disaster in the 1990s.  Its current offerings are strong and it is likely to continue to do well.  Its weak spot is in the x86 space - it would do well to sit down and talk further with Lenovo as to how it can still have a strategic x86 play.

It's all about the platform

Clive Longbottom

Content sync and share systems are available from many players - Dropbox, Box and Microsoft OneDrive are just a few of the options for those who want to be able to access their files from anywhere via the cloud.

However, the ubiquity of systems and the lack of adequate monetisation at the consumer level is making this a difficult market in which to make a profit.  Each of the vendors now has to make a better commercial play - and this may mean establishing product differentiators.

The first step has been to offer enterprise content sync and share (ECSS), where central administrators can control who has access while individuals can work cohesively as teams and groups.  Again, though, while the likes of Huddle and Accellion were early leaders (and still differentiate by offering on-premise and hybrid systems), Box, Dropbox and Microsoft are all busy moving into the same space, and all that is happening is that the baseline of functionality is getting higher - differentiation is still difficult.

The trick is in making any ECSS tool completely central to how people work - making it a platform rather than a tool.  At a basic level, this means making any file activity operate via the ECSS system, rather than through the access device's file system.  File open and save actions must go directly via the cloud - this basic approach makes the tool far more central to the user.

However, this too is likely to commoditise rapidly.  Vendors need to go further still - and this requires far more from the system.  For example, increasing intelligence around the content of files can enable greater functionality.  Indexing of documents allows full search - but this in itself is no better than many of the desktop search tools currently available.  Applying metadata to the files based on parsing the content starts to add real value - and makes each ECSS system different.
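
The sketch below shows the general idea of deriving metadata from content rather than relying on indexing alone; the patterns and tag names are invented and are not any particular vendor's scheme.

```python
# Illustrative content parsing: attach metadata tags based on what a stored
# document actually contains.

import re

RULES = [
    (re.compile(r"\binvoice\b", re.I), "finance/invoice"),
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "contains-date"),
    (re.compile(r"\bconfidential\b", re.I), "sensitivity/confidential"),
]

def extract_metadata(text: str) -> set:
    """Return metadata tags derived from the document's content."""
    return {tag for pattern, tag in RULES if pattern.search(text)}

if __name__ == "__main__":
    doc = "CONFIDENTIAL: Invoice 1041, payment due 30/11/2014."
    print(sorted(extract_metadata(doc)))
    # ['contains-date', 'finance/invoice', 'sensitivity/confidential']
```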

At the recent BoxWorks 2014 event in San Francisco, some pointers as to a possible direction were given.  The basic workflow engine built into Box is being improved.  Actions will now be taken based not only on workflow rules, but also on content. For example, data loss prevention (DLP) can be implemented via Box by checking the content of a document against rules and preventing it from being moved from one area to another, or from being shared with other users who do not meet certain criteria.
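
As a rough sketch of what such a content-aware rule looks like in principle (this is not Box's actual engine; the rule and the recipient check are hypothetical): the document is inspected before a share is allowed, and the share is blocked if the content looks sensitive and the recipient is external.

```python
# Toy DLP-style check applied inside a sharing workflow.

import re

CARD_NUMBER = re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")

def allow_share(content: str, recipient_domain: str, trusted_domains: set) -> bool:
    """Block the share if the content looks sensitive and the recipient is external."""
    looks_sensitive = bool(CARD_NUMBER.search(content)) or "confidential" in content.lower()
    if looks_sensitive and recipient_domain not in trusted_domains:
        return False
    return True

if __name__ == "__main__":
    doc = "Customer card 4111 1111 1111 1111 - do not distribute."
    print(allow_share(doc, "example.org", trusted_domains={"ourcompany.com"}))  # False
```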

Alerts can be set up - maybe a product invoice needs paying on a certain date, or a review needs to be carried out by a specific person by a certain date.  Based on the workflow engine, these events can be identified and processes triggered to enable further actions to be taken.

By controlling the metadata correctly, ECSS tools can start to move towards being enterprise document (or information) management systems, and even towards full intellectual property management systems.  Maintaining versioning, with unchangeable creation and change date fields, provides the capability to create full audit chains that are legally presentable should the need arise.  The metadata can be used to demonstrate compliance with the numerous information and information security laws and industry standards out there, such as HIPAA and ISO 27001 or 17799.  Through such means, the ECSS system becomes an enterprise investment, not just a file management tool.
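
One way to picture an audit chain of this kind is a tamper-evident version history: each version records a timestamp and the hash of the previous entry, so any later alteration is detectable. The sketch below is purely illustrative - field names are invented, and a real system would also need trusted time and access controls.

```python
# Toy tamper-evident version chain for a single document.

import hashlib, json, time

def add_version(chain, author, content):
    """Append a new version entry linked to the previous one by hash."""
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "version": len(chain) + 1,
        "author": author,
        "timestamp": time.time(),
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
        "previous_hash": previous_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return chain

def verify(chain):
    """Check every link in the chain still matches its recorded hashes."""
    for i, entry in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["previous_hash"] != expected_prev or recomputed != entry["hash"]:
            return False
    return True

if __name__ == "__main__":
    chain = add_version([], "alice", "Draft contract v1")
    add_version(chain, "bob", "Draft contract v2 - revised clause 4")
    print(verify(chain))  # True
```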

To make the most of this, though, requires an active and intelligent partner channel.  The channel can bring in the required domain expertise, whether this be horizontally across areas such as data protection, or vertically in areas such as pharmaceutical, finance or oil and gas.  Box has pulled in partners such as Accenture and Tech Mahindra to help in these areas.

Partners need to be able to access the platform to add their own capabilities, though.  This requires a highly functional application programming interface (API) to be in place.  This is an area where Box has put in a great deal of work, and the API is the central means for enabling existing systems to interface and integrate into the Box environment and vice versa.

Box has a strong roadmap for adding extra capabilities.  It needs to get this out into the user domain as soon as possible in order to show prospective users how it will be differentiating itself from its immediate competitors of Dropbox, Microsoft and Google, whose marketing dollars far outstrip Box's.

Box is in the middle of that dangerous place for companies - up until now, it has been small enough to be very responsive to its customers' needs.  In the future, it needs to have a more controllable code base with fewer changes - which could make it appear slower in adapting to the market's needs.  By building out its platform and creating an abstraction layer through the use of its own APIs, Box can create an environment where the basic engine can be stabilised and extra functionality layered on top without impacting the core.  Through this, Box is setting up a disciplined approach whereby its own engineering uses the platform for innovation, just as it wants its partners and customers to do. Provided it sticks to this path, it should be able to maintain a good level of responsiveness to market needs.

Box and its partners need to build more applications on top of the ECSS engine, creating a solid and active ecosystem of applications that are information-centric and have distinct enterprise value.  Box also needs to do a better job of showing how well it integrates with existing content management systems, such as Documentum and OpenText, to create even more value, democratising information management away from the few to the whole of an organisation and its consultants, contractors, suppliers and customers.

It is likely that over the next year, some of the existing file sync and share vendors will cease to exist as they fail to adapt to the movement of others in their markets.  Box has the capacity and capabilities to continue to be a major player in the market - it just needs to make sure that it focuses hard on the enterprise needs and plays the platform card well.

Time and place: related, or inextricably linked?

Clive Longbottom

High rainfall over the last week has led to flooding.  Last night, there was a large number of burglaries.  An escape of toxic gases this morning has led to emergency services requesting everyone to evacuate their premises.  There are billions of barrels of crude oil that can be recovered over the next decade.

Notice something missing from all of this?  They may all be factually correct; they all involve time - and yet they are all pretty useless to the reader because of one small thing that is missing: the "where?" aspect.

For example, I live in Reading in the UK - if the flooding is taking place in Australia, it may be sad, but I do not need to take any steps myself to avoid the floods.  If the burglaries are close to my house, I may want to review my security measures.  Likewise, if there is a cloud of toxic gas coming my way, I may want to head for the hills.  An oil and gas company is not going to spend billions of dollars digging holes in the hope of finding oil - it needs to have a good idea of where to drill in the first place.

And the examples go on - retail stores figuring out where and when to build their next outlet; utility companies making sure that they do not dig through another company's services; organisations with complex supply chains needing to ensure that the right goods get to the right place at the right time; public sector bodies needing to pull together trends in needs across broad swathes of citizens across different areas of the country.

The need for accurate and granular geo-specific data that can add distinct value to existing data sets has never been higher. As the internet of everything (IoE) becomes more of a reality, this will only become more of a pressing issue.  The next major battle ground will be around the capability to overlay geo-specific data from fixed and mobile monitors and devices onto other data services, creating the views required by different groups of people in the organisation in order to add extra value.

I was discussing all of this with one of the major players in the geographic information systems (GIS) market, Esri.  Esri has spent many years and a lot of money in building up its skills in understanding how geographic data and other data sets need to work contextually together.  Through using a concept of layers, specific data can be applied as required to other data, whether this be internal data from the organisation's own applications, data sets supplied by Esri and its partners, or external data sets from other sources.

The problem for vendors such as Esri, though, is the market's simplistic perception of location awareness.  Vendors such as Esri and MapInfo, along with content providers including the Ordnance Survey and Experian, are perceived purely as mapping players - maybe as a Google Maps on steroids.  This minimises the actual value that can be obtained from these vendors - and stops many organisations from digging deeper into what can be provided.

For example, the end result of a geolocation analysis may not be a visual graph at all.  Take the insurance industry.  You provide an insurer with your postcode; it pulls in a load of other data sets covering crime in your area, likelihood of flooding, the number of claims already made by neighbours and the possibility of fraud, and out pops a number - say, £100 for a low-risk insurance prospect, £5,000 for a high-risk one.  Neither the insurance agent nor the customer has seen any map, yet everything was dependent on a full understanding of the geographical "fix" of the data point, and each layer of data only had that point in common.  Sure, time would also need to be taken into account - this makes it two fixed points, which can be analysed to reach an informed and more accurate decision.
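
To make the point concrete, here is a minimal, hypothetical sketch of that insurance example - the postcodes, layers and weightings are all invented for illustration, and a real rating engine would be far richer:

    # A hypothetical risk-scoring sketch: several geo-referenced "layers" are
    # looked up by postcode and combined into a single premium, with no map
    # ever being drawn. All postcodes, data and weightings are invented.
    CRIME_RATE = {"RG1 1AA": 0.7, "RG6 6XX": 0.2}         # burglaries per 100 homes
    FLOOD_RISK = {"RG1 1AA": 0.4, "RG6 6XX": 0.05}        # chance of flooding per year
    NEIGHBOUR_CLAIMS = {"RG1 1AA": 12, "RG6 6XX": 2}      # claims in the last 3 years

    def quote(postcode: str) -> float:
        """Combine the layers that share this geographic 'fix' into a premium."""
        base = 100.0                                      # low-risk starting premium (GBP)
        risk = (CRIME_RATE.get(postcode, 0.3) * 2.0
                + FLOOD_RISK.get(postcode, 0.1) * 5.0
                + NEIGHBOUR_CLAIMS.get(postcode, 5) * 0.1)
        return round(base * (1.0 + risk), 2)

    print(quote("RG1 1AA"))   # higher-risk postcode -> higher premium
    print(quote("RG6 6XX"))   # lower-risk postcode -> lower premium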

The key for the GIS players now is to move far more towards being a big data play that prospects see as a core part of meeting their needs.  Esri seems to understand this - it has connectors and integrations into most other systems, so that other data sources can be easily used, but also so that other business intelligence front ends can be used if a customer so wishes.

So, what's the future for GIS?  In itself, probably just "more of the same".  As part of a big data/internet of everything approach, geographic data will be one of the major underpinnings for the majority of systems.  When combined with time, place helps to provide a fixed point of context around which variable data can be better understood.  It is likely that the GIS vendors will morph into being bundled with the big data analytic players - but as ones with requisite domain expertise in specific verticals.

As the saying goes, there is a time and place for everything: when it comes to a full big data analytics approach, for everything, there is a time and place. Or, as a colleague said, maybe the time for a better understanding of the importance of place is now.

It's in the net: The value of big data

Clive Longbottom | No Comments
| More

Talking with a journalist friend a few weeks back, we got on to the subject of how to place some actual hard pounds-and-pence value on data.  It got me thinking - and this is my take on it.

A reasonable example would be the UK Premiership football/soccer league - understood by a large enough number of people to make the analogies useful (hopefully).

Let's just start with a single data point.  Manchester United is a football team.  This is pretty incontrovertible - but it has little value in itself.  We can immediately add other data points to it, such as that it is a Premiership team, its home ground is Old Trafford, its strip is red and so on.

This starts to build more of a picture - but still has little value.

We can then add other data to start to create possible value.  Over the past 10 years, Manchester United has won the Premiership title 5 times.  It has won the FA Cup once, the League Cup 3 times and the FIFA Club World Cup once.  A pretty good track record, then.

After the retirement of its long-term manager, Sir Alex Ferguson, in 2013, David Moyes took over for one season - and United did not fare well, as players struggled to come to terms with a new regime.  Moyes was sacked, and former Ajax, Bayern Munich, Barcelona and Dutch national team coach Louis van Gaal took over.  Van Gaal is looking to make major changes to the team, both through transfers and in the way the players are managed and trained.  The results of these transfers will probably be known by the time you read this piece.  A firm hand on the tiller could start to steer United back to winning ways.

Forbes estimates that Manchester United's brand value is around $739m (having fallen from $837m due to the fall in playing fortunes in the 2013/14 season).  Forbes also estimates the "team" value (based on equity plus debt) at $2.8b.  This makes it the world's third most valuable soccer club, behind Real Madrid and Barcelona.  So - deep pockets, and a money-making machine.

The club claims to have 659 million fans around the globe, has nearly 3 million followers on Twitter and 54 million likes on Facebook. Wow - lots of eyeballs and merchandising opportunities.

In its first quarter 2014 financial results, it announced that merchandise and licensing revenues were up by 13.8%, sponsorship revenues were up by 62.6% and broadcasting revenues were up by 40.9%.  This all led to the quarter's revenues being up by 29.1% overall at £98.5m, with EBITDA up by 36.2%.  A bad season doesn't seem to have hit the bottom line overall.

The owners of the club, the American Glazer family of Malcolm and his six children, gained control of the club by borrowing money through payment-in-kind deals via an external company.  However, many of the loans are guaranteed against Manchester United assets.  In 2012, the Glazers sold 10% of the overall shares in the club, followed by a further 5% after Malcolm Glazer's death in May 2014.  Opportunities are therefore there to buy into the club through share ownership - and to build up a decent holding if wanted.  A leveraged buy-out that is now being sold back to the markets: not so much of a risk now.

Now we are getting somewhere.  We've brought together data from all sorts of different environments that starts to build up a more meaningful picture.

As supporters, we have some idea of the new direction: van Gaal has a good track record; he is strict and is likely to come down hard on players who felt that they could pay little attention to Moyes with an attitude of "Sir Alex didn't do it that way".  Van Gaal is unlikely to treat the 2014/15 season as merely a transitional year - he has to prove to all concerned that United is back on track.

For investors, the poor 2013/14 season did have an impact - brand value is down, and overall playing revenues will be hit as United will not be playing in Europe this season.  The supporters have proven to be loyal, and merchandise is still selling well.  However, new sponsors are on board with long-term deals, and the overall books are still looking strong.

Now - this has just been about Manchester United.  There are 19 other clubs in the Premiership, and the same analysis can be carried out against each one.  Further granularity can be added by analysing at the individual player level; at coaching team level; at commercial team level.  The findings can then be compared and contrasted to give indicators of how the clubs are likely to perform at a sports and a financial level.

This is how big data works - it brings together little bits of unconnected data and creates an overall story that has different values depending on how you look at it.
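
As a toy illustration of that point, the sketch below joins a handful of the facts quoted above - sporting, financial and social - on a single key; the figures are those in the text, but the structure and code are purely illustrative:

    # Individually trivial data points from different sources, joined on a
    # single key, build up a picture that can be read in different ways.
    # Figures are those quoted in the text; the structure is illustrative.
    sporting = {"Manchester United": {"league_titles_10yr": 5, "fa_cups_10yr": 1}}
    financial = {"Manchester United": {"brand_value_usd_m": 739, "team_value_usd_b": 2.8}}
    social = {"Manchester United": {"claimed_fans_m": 659, "facebook_likes_m": 54}}

    def build_picture(club, *sources):
        """Merge whatever each source knows about the club into one record."""
        picture = {"club": club}
        for source in sources:
            picture.update(source.get(club, {}))
        return picture

    print(build_picture("Manchester United", sporting, financial, social))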

Does it result in something where you can say "this is worth this much"?  No - but then again, very little in life does allow for such certainty.  As long as sufficient data is pulled together from sufficient sources and is then analysed in the right way, it should be enough to say "this finding will give me a strong chance of greater value".

The worst thing that you can say to a United supporter is that football is "only a game".  Bill Shankly, a former manager of United's arch-enemies, Liverpool FC, once said: "Some people believe football is a matter of life and death, I am very disappointed with that attitude. I can assure you it is much, much more important than that."

Football ceased to be just a game many years ago - it is now a major commercial business, where getting anything wrong can have major long-term impact on earning capabilities, and therefore club survival.  Shankly - speaking well before the use of big data analytics - may well have been right.

The security and visibility of critical national infrastructure: ViaSat's mega-SIEM

Bob Tarzey | No Comments
| More

There has been plenty of talk about the threat of cyber-attacks on critical national infrastructure (CNI). So what's the risk, what's involved in protecting CNI and why, to date, do attacks seem to have been limited?

CNI is the utility infrastructure that we all rely on day-to-day: national networks such as electricity grids, water supply systems and rail tracks. Others have an international aspect too; gas pipelines, for example, are often fed by cross-border suppliers. In the past such infrastructure has often been owned by governments, but much has now been privatised.

Some CNI has never been in government hands; mobile phone and broadband networks have largely emerged since the telco monopolies were scrapped in the 1980s. The supply chains of major supermarkets have always been a private matter, but they are very reliant on road networks, an area of CNI still largely in government hands.

The working fabric of CNIs is always a network of some sort - pipes, copper wires, supply chains, rails, roads - and keeping it all running requires network communications. Before the widespread use of the internet this was achieved through proprietary, dedicated and largely isolated networks. Many of these are still in place. However, the problem is that they have increasingly become linked to and/or enriched by internet communications. This makes CNIs part of the nebulous thing we call cyber-space, which is predicted to grow further and faster with the rise of the internet-of-things (IoT).

Who would want to attack CNI? Perhaps terrorists; however, some point out that it is not really their modus operandi, regional power cuts being less spectacular than flying planes into buildings. CNI could become a target in nation state conflicts, perhaps through a surreptitious attack where there is no kinetic engagement (a euphemism for direct military conflict). Some say this is already happening - for example, the Stuxnet malware that targeted Iranian nuclear facilities.

Then there is cybercrime. Poorly protected CNI devices may be used to gain entry to computer networks with more value to criminals. In some cases devices could be recruited into botnets; again, this is already thought to have happened with IoT devices. Others may be direct targets, for example tampering with electricity meters or stealing data from point-of-sale (PoS) devices that are the ultimate front end of many retail supply chains.

Who is ultimately responsible for CNI security? Should it be governments? After all, many of us own the homes we live in, but we expect government to run defence forces to protect our property from foreign invaders. Government also passes down security legislation, for example at airports, and other mandates are emerging with regard to CNI. However, at the end of the day it is in the interests of CNI providers to protect their own networks, for commercial reasons as well as in the interests of security. So, what can be done?

Securing CNI

One answer is, of course, CNI network isolation. However, this is simply not practical: laying private communications networks is expensive, and innovations like smart metering are only practical because existing communications technology standards and networks can be used. Of course, better security can be built into CNIs in the first place, but this will take time - many have essential components that were installed decades ago.

A starting point would be better visibility of the overall network in the first place, plus the ability to collect inputs from devices and record events occurring across CNI networks.  If this sounds like a kind of SIEM (security information and event management) system, along the lines of those provided for IT networks by LogRhythm, HP, McAfee, IBM and others, that is because it is: a mega-SIEM for the huge scale of CNI networks. This is the vision behind ViaSat's Critical Infrastructure Protection. ViaSat is now extending sales of the service from the USA to Europe.

The service involves installing monitors and sensors across CNI networks, setting baselines for known normal operations and looking for the absence of the usual and the presence of the unusual. ViaSat can manage the service for its customers out of its own security operations centre (SOC) or provide customers with their own management tools.  Sensors are interconnected across an encrypted IP fabric, which allows for secure transmission of results and commands to and from the SOC. Where possible the CNI's own fabric is used for communications, but if necessary this can be supplemented with internet communications; in other words, the internet can be recruited to help protect CNI as well as attack it.
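
As a rough illustration of the baselining idea - and emphatically not ViaSat's actual implementation - the sketch below learns a per-sensor baseline from known-normal readings and then flags both unusual readings and sensors that have gone quiet:

    # Learn a per-sensor baseline (mean and standard deviation) from
    # known-normal readings, then flag readings that drift too far from it
    # or sensors that stop reporting ("the absence of the usual").
    from statistics import mean, stdev

    def build_baseline(history):
        """history: {sensor_id: [normal readings]} -> {sensor_id: (mean, stdev)}"""
        return {sensor: (mean(values), stdev(values)) for sensor, values in history.items()}

    def check(baseline, latest, threshold=3.0):
        """latest: {sensor_id: reading}. Returns a list of alert strings."""
        alerts = []
        for sensor, (m, sd) in baseline.items():
            if sensor not in latest:
                alerts.append(f"{sensor}: no reading received - absence of the usual")
            elif sd > 0 and abs(latest[sensor] - m) > threshold * sd:
                alerts.append(f"{sensor}: reading {latest[sensor]} outside normal range")
        return alerts

    baseline = build_baseline({"substation-7/voltage": [229, 231, 230, 228, 232]})
    print(check(baseline, {}))                             # sensor has gone quiet
    print(check(baseline, {"substation-7/voltage": 190}))  # presence of the unusual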

Having better visibility of any network not only helps improve security, but enables other improvements to be made through better operational intelligence. ViaSat says it is already doing this for its customers. The story sounds similar to one told in a recent Quocirca research report, Masters of Machines, which was sponsored by Splunk. Splunk's background is SIEM and IT operational intelligence which, as the report shows, is increasingly being used to provide better commercial insight into IT-driven business processes.

As it happens, ViaSat already uses Splunk as a component of its SOC architecture. However, Splunk has ambitions in the CNI space too; some of its customers are already using its products to monitor and report on industrial systems. Some co-opetition will surely be a good thing as the owners of CNIs seek to run and secure them better for the benefit of their customers and in the interests of national security.

Do increasing worries about insider threats mean it is time to take another look at DRM?

Bob Tarzey | No Comments
| More

The encryption vendor SafeNet publishes a Breach Level Index, which records actual reported incidents of data loss. Whilst the number of losses attributed to malicious outsiders (58%) exceeds those attributed to malicious insiders (13%), SafeNet claims that insiders account for more than half of the actual information lost. This is because insiders are also responsible for the accidental losses that account for a further 26.5% of incidents, and the stats do not take into account the fact that many breaches caused by insiders go unreported. The insider threat is clearly something that organisations need to guard against to protect their secrets and regulated data.

Employees can be coached to avoid accidents, and technology can support this. Intentional theft is harder to prevent, whether it is for reasons of personal gain, industrial espionage or just out of spite. According to Verizon's Data Breach Investigations Report, 70% of the thefts of data by insiders are committed within thirty days of an employee resigning from their job, suggesting they plan to take data with them to their new employer. Malicious insiders will try to find a way around the barriers put in place to protect data; training may even serve to provide useful pointers about how to go about it.

Some existing security technologies have a role to play in protecting against the insider threat. Basic access controls built into data stores, linked to identity and access management (IAM) systems, are a good starting point; encryption of stored data strengthens this, helping to ensure that only those with the necessary rights can access data in the first place. In addition, there have been many implementations of data loss prevention (DLP) systems in recent years; these monitor the movement of data over networks, alert when content is going somewhere it shouldn't and, if necessary, block it.

However, if a user has the rights to access data, and indeed to create it in the first place, then these systems do not help, especially if the user is to be trusted to use that data on remote devices. To protect data at all times, controls must extend to wherever the data is. It is to this end that renewed interest is being taken in digital rights management (DRM). In the past, issues such as scalability and user acceptance have held many organisations back from implementing DRM. That is something DRM suppliers such as Fasoo and Verdasys have sought to address.

DRM, as with DLP, requires all documents to be classified from the moment of creation and monitored throughout their life cycle. With DRM, user actions are controlled through an online policy server, which is referred to each time a sensitive document is accessed. So, for example, a remote user can be prevented from taking actions on a given document such as copying or printing, and documents can only be shared with other authorised users. Most importantly, an audit trail of who has done what to a document, and when, is collected and managed at all stages.
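
The pattern is easy to sketch in outline - the snippet below is illustrative only, not Fasoo's or Verdasys's actual API: every requested action on a classified document is checked against a central policy and written to an audit trail, whether it is allowed or not:

    # Every requested action on a classified document is checked against a
    # central policy and written to an audit trail, whether allowed or not.
    # Policy, roles and documents are all invented for illustration.
    from datetime import datetime, timezone

    POLICY = {"confidential": {"analyst": {"view"}, "editor": {"view", "print"}}}
    AUDIT_LOG = []

    def request_action(user, role, doc_id, classification, action):
        allowed = action in POLICY.get(classification, {}).get(role, set())
        AUDIT_LOG.append({                    # record everything, allowed or not
            "when": datetime.now(timezone.utc).isoformat(),
            "user": user, "doc": doc_id, "action": action, "allowed": allowed,
        })
        return allowed

    # A remote analyst can view a document, but is blocked from printing it.
    print(request_action("jsmith", "analyst", "plan-2015.docx", "confidential", "view"))
    print(request_action("jsmith", "analyst", "plan-2015.docx", "confidential", "print"))
    print(AUDIT_LOG)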

Just trusting employees would be cheaper and easier than implementing more technology. However, it is clear that this is not a strategy businesses can move forward with. Even if they are prepared to take risks with their own intellectual property, regulators will not accept a casual approach when it comes to sensitive personal and financial data. If your organisation cannot be sure what users are doing with its sensitive data at all times, perhaps it is time to take a look at DRM.

Quocirca's report "What keeps your CEO up at night? The insider threat: solved with DRM", is freely available here.

Top 10 characteristics of high performing MPS providers

Louella Fernandes | No Comments
| More

Quocirca's research reveals that almost half of enterprises plan to expand their use of managed print services (MPS). MPS has emerged as a proven approach to reducing operational costs and improving the efficiency and reliability of a business's print infrastructure at a time when in-house resources are increasingly stretched. 

Typically, the main reasons organisations turn to MPS are cost reduction, predictability of expenses and service reliability. However, they may also benefit from the implementation of solutions such as document workflow, mobility and business process automation to boost collaboration and productivity among their workforce. MPS providers can also offer businesses added value through transformation initiatives that support revenue and profit growth. MPS providers include printer/copier manufacturers, systems integrators and managed IT service providers. As MPS evolves and companies increase their dependence on it, whatever a provider's background, it's important that they can demonstrate their credibility across a range of capabilities. The following are key criteria to consider when selecting an MPS provider:

  1. Strong focus on improving customer performance - In addition to helping customers improve the efficiency of their print infrastructure, leading MPS providers can help them drive transformation and increase employee productivity as well as supporting revenue growth.  An MPS provider should understand the customer's business and be able to advise them on solutions that can be implemented to improve business performance, extend capabilities and reach new markets.
  2. A broad portfolio of managed services - Many organisations may be using a variety of providers for their print and IT services. However, managing multiple service providers can be costly and complex. For maximum efficiency, look for a provider with a comprehensive suite of services covering office and production printing, IT services and business process automation.  As businesses look more to 'as-a-service' options for software implementation, consider MPS providers with strong expertise across both on-premise and cloud delivery models.
  3. Consistent global service delivery with local support - Global delivery capabilities offer many advantages, including rapid implementation in new locations and the ability to effectively manage engagements across multiple countries. However, it's also important that a provider has local resources with knowledge of the relevant regulatory and legal requirements. Check whether an MPS provider uses standard delivery processes across all locations and how multi-location teams are organised and collaborate.
  4. Proactive continuous improvement - An MPS provider must go beyond a break/fix model to offer proactive and pre-emptive support and maintenance. As well as simple device monitoring they should offer advanced analytics that can drive proactive support and provide visibility into areas for on-going improvement.
  5. Strong multivendor support - Most print infrastructures are heterogeneous environments comprising hardware and software from a variety of vendors, so MPS providers should have proven experience of working in multivendor environments. A true vendor-agnostic MPS provider should play the role of trusted technology advisor, helping an organisation select the technologies that best support their business needs. Independent MPS providers should also have partnerships with a range of leading vendors, giving them visibility of product roadmaps and emerging technologies.
  6. Flexibility - Businesses will always want to engage with MPS in a variety of different ways. Some may want to standardise on a single vendor's equipment and software, while others may prefer multivendor environments. Some may want a provider to take full control of their print infrastructure while others may only want to hand over certain elements. And some may want to mix new technology with existing systems so they can continue to leverage past investments. Leading MPS providers offer flexible services that are able to accommodate such specific requirements. Flexible procurement and financial options are also key, with pricing models designed to allow for changing needs.
  7. Accountability - Organisations are facing increased accountability demands from shareholders, regulators and other stakeholders. In turn, they are demanding greater accountability from their MPS providers. A key differentiator for leading MPS providers is ensuring strong governance of MPS contracts, and acting as a trusted, accountable advisor, making recommendations on the organisation's technology roadmap. MPS providers must be willing to meet performance guarantees through contractual SLAs, with financial penalties for underperformance. They should also understand the controls needed to meet increasingly complex regulatory requirements.
  8. Full service transparency - Consistent service delivery is built on consistent processes that employ a repeatable methodology. Look for access to secure, web-based service portals with dashboards that provide real-time service visibility and flexible reporting capabilities.
  9. Alignment with standards - An MPS provider should employ industry best practices, in particular aligning with the ITIL approach to IT service management. ITIL best practices encompass problem, incident, event, change, configuration, inventory, capacity and performance management as well as reporting.
  10. Innovation - Leading MPS providers demonstrate innovation. This may include implementing emerging technologies and new best practices and continually working to improve service delivery and reduce costs. Choose a partner with a proven track record of innovation. Do they have dedicated research centres or partnerships with leading technology players and research institutions? You should also consider how a prospective MPS provider can contribute to your own company's innovation and business transformation strategy. Bear in mind that innovation within any outsourcing contract may come at a premium - this is where gain-sharing models may be used.

Ultimately, businesses are looking for more than reliability and cost reduction from their MPS provider. Today they also want access to technologies that can increase productivity and collaboration and give them a competitive advantage as well as help with business transformation. By ensuring a provider demonstrates the key characteristics above before committing, organisations can make an informed choice and maximise the chances of a successful engagement.

Read Quocirca's Managed Print Services Landscape, 2014.







What is happening to the boring world of storage?

Clive Longbottom | No Comments
| More

Storage suddenly seems to have got interesting again.  As interest moves from increasing spin speeds and ever more intelligent means of getting a disk head over the right part of a disk as quickly as possible, to flash-based systems where completely different approaches can be taken, a feeding frenzy seems to be underway.  The big vendors are in high-acquisition mode, while the new kids on the block are mixing things up and keeping the incumbents on their toes.

After IBM's acquisition of Texas Memory Systems (TMS) and EMC's of XtremIO in 2012, followed by Cisco's of Whiptail in 2013, it may have looked like it was time for a period of calm reflection and the full integration of what had been acquired.  However, EMC acquired ScaleIO and then super-stealth server-side flash company DSSD to help it create a more nuanced storage portfolio capable of dealing with multiple different workloads on the same basic storage architecture.

Pure Storage suddenly popped up and signed a cross-licensing and patent agreement with IBM, acquiring over 100 storage and related patents from IBM and stating that this was a defensive move to protect itself from any patent trolling by other companies (or shell companies).  However, it is also likely that IBM will gain some technology benefits from the cross-licensing deal.  At the same time as the IBM deal, Pure also acquired other patents to bolster its position.

SanDisk acquired Fusion-io, another server-side flash pioneer.  More of a strange acquisition, this one - Fusion-io would have been more of a fit for a storage array vendor looking to extend its reach into converged fabric through PCIe storage cards.  SanDisk will now have to forge much stronger links with motherboard vendors - or start to manufacture its own motherboards - to make this acquisition work well.  However, Western Digital had also been picking up flash vendors, such as Virident (itself a PCIe flash vendor), sTec and VeloBit; Seagate acquired the SSD and PCIe parts of Avago - maybe SanDisk wanted to be seen to be doing something.

Then we have Nutanix: a company that started off marketing itself as a scale-out storage company but was actually far more of a converged infrastructure player.  It has just inked a global deal with Dell, under which Dell will license Nutanix's web-scale software to run on Dell's own converged architecture systems.  This deal gives a massive boost to Nutanix: it gains access to the louder voice and greater reach of Dell, while still maintaining its independence in the market.

Violin Memory has not been sitting still either.  A company that has always had excellent technology based on moving away from the concept of the physical disk drive, it uses a PCI-X in-line memory module approach (which it calls VIMMs) to provide all-flash based storage arrays.  However, it did suffer from being a company with great hardware but little in the way of intelligent software.

After its IPO, it found that it needed a mass change in management, and under a new board and senior management team, Quocirca is seeing some massive changes in its approach to its product portfolio.  Firstly, Violin brought the Windows Flash Array (WFA) to market - far more of an appliance than a storage array.  Now, it has launched its Concerto storage management software as part of its 7000 all-flash array.  Those who have already bought the 6000 array can choose to upgrade to a Concerto-managed system in situ.

Violin has, however, decided that PCIe storage is not for it - it has sold off that part of its business to SK Hynix.

The last few months have been hectic in the storage space.  For buyers, it is a dangerous time - it is all too easy to find yourself with high-cost systems that are superseded and unsupported all too quickly, or where the original vendor is acquired or goes bust, leaving you with a dead-end system.  There will also be continued evolution of systems to eke out those extra bits of performance, and a buyer now may not be able to deal with these changes by abstracting everything through a software-defined storage (SDS) layer.

However, flash storage is here to stay.  At the moment, it is tempting to choose flash systems for specific workloads where you know that you will be replacing the systems within a relatively short period of time anyway. This is likely to be mission-critical, latency-dependent workloads where the next round of investment in the next generation of low-latency, high-performance storage can be made within 12-18 months. Server-side storage systems using PCIe cards should be regarded as highly niche for the moment: it will be interesting to see what EMC does with DSSD and what Western Digital and SanDisk do with their acquisitions, but for now the lack of true abstraction of PCIe (apart from via software from the likes of PernixData) keeps such systems a niche choice.

For general storage, the main storage vendors will continue to move from all spinning to hybrid and then to all flash arrays over time - it is probably best to just follow the crowd here for the moment.

Cloud infrastructure services: find a niche or die?

Bob Tarzey | No Comments
| More

Back in May it was reported that Morgan Stanley had been appointed to explore options for the sale of hosted services provider Rackspace.  Business Week reported the story on May 16th with the headline "Who Might Buy Rackspace? It's a Big List".  24/7 Wall St reported analysis from Credit Suisse that narrowed this to three potential suitors: Dell, Cisco and HP.

To cut a long story short, Rackspace sees a tough future competing with the big three in the utility cloud market: Amazon, Google and Microsoft.  Rackspace could be attractive to Dell, Cisco, HP and other traditional IT infrastructure vendors that see their core business being eroded by the cloud and need to build out their own offerings (as does IBM, which has already made significant acquisitions).

Quocirca sees another question that needs addressing. If Rackspace, one of the most successful cloud service providers, sees the future as uncertain in the face of competition from the big three, then what of the myriad of smaller cloud infrastructure providers? For them the options are twofold.

Be acquired or go niche

First, achieve enough market penetration to become an attractive acquisition target for the larger established vendors that want to bolster their cloud portfolios.  As well as the IT infrastructure vendors, this includes communications providers and system integrators.

Many have already been acquisitive in the cloud market.  For example, the US number three carrier CenturyLink has bought Savvis, AppFog and Tier 3, while NTT's system integrator arm Dimension Data has added to its existing cloud services with OpSource and BlueFire.  Other cloud service providers have merged to beef up their presence, for example Claranet and Star.

The second option for smaller providers is to establish a niche where the big players will find it hard to compete.  There are a number of cloud providers that are already doing quite well at this, relying on a mix of geographic, application or industry specialisation.  Here are some examples:

Exponential-E - highly integrated network and cloud services

Exponential-E's background is as a UK-focussed virtual private network provider, using its own cross-London metro network and services from BT.  In 2010 the vendor moved beyond networking to provide infrastructure-as-a-service.  Its differentiator is to embed this into its own network services at network layer 2 (switching etc.) rather than at higher levels.  Its customers get the security and performance that would be expected from internal WAN-based deployments, which cannot be achieved for cloud services accessed over the public internet.

City Lifeline - in finance, latency matters

City Lifeline's data centre is shoe-horned into an old building near Moorgate in central London.  Its value proposition is low latency: its proximity to the big City institutions allows it to charge a premium over out-of-town premises.

Eduserv - governments like to know who they are dealing with

For reasons of compliance, ease of procurement and security of tenure, government departments in any country like to have some control over their suppliers, and this includes the procurement of cloud services.  Eduserv is a not-for-profit, long-term supplier of consultancy and managed services to the UK government and charity organisations.  In order to help its customers deliver better services, Eduserv has developed cloud infrastructure offerings out of its own data centre in the central southern UK town of Swindon.  As a UK G-Cloud partner it has achieved IL3 security accreditation, enabling it to host official government data.  Eduserv provides value-added services to help customers migrate to the cloud, including cloud adoption assessments, service designs and on-going support and management.

Firehost - performance and security for payment processing

Considerable rigour needs to go into building applications for processing highly secure data for sectors such as financial services and healthcare.  This rigour must also extend to the underlying platform.  Firehost has built an IaaS platform to target these markets.  In the UK its infrastructure is co-located with Equinix, ensuring access to multiple high-speed carrier connections.  Within such facilities, Firehost applies its own cage-level physical security.  Whilst infrastructure is shared, it maintains the feel of a private cloud, with enhanced security through protected VMs with a built-in web application firewall, DDoS protection, IP reputation filtering and two-factor authentication for admin access.

Even for these providers the big three do not disappear.  In some cases their niche capability may simply see them bolted on to bigger deployments - for example, a retailer off-loading its payment application to a more secure environment.  In other cases, existing providers are starting to offer enhanced services around the big three to extend in-house capability; for example, UK hosting provider Attenda now offers services around Amazon Web Services (AWS).

For many IT service providers the growing dominance of the big three cloud infrastructure providers, along with the strength of software-as-a-service providers such as salesforce.com, NetSuite and ServiceNow, will turn them into service brokers.  This is how Dell positioned itself at its analyst conference last week; of course, that may well change if it were to buy Rackspace.


Cloud orchestration - will a solution come from SCM?

Clive Longbottom | No Comments
| More

Serena Software is a software change and configuration management vendor, right?  It has recently released its Dimensions CM 14 product, with additional functionality driving Serena more into the DevOps space, as well as making it easier for distributed development groups to work collaboratively through synchronised libraries with peer review capabilities.

Various other improvements - such as change and branch visualisation, the use of health indicators to show how "clean" code is and where any change is in a development/operations process, and integrations into the likes of Git and Subversion - mean that Dimensions CM 14 should help many a development team as it moves from old-style separate development, test and operations systems to a more agile, process-driven, automated DevOps environment.

However, it seems to me that Serena is actually sitting on something far more important.  Cloud computing is an increasing component of many an organisation's IT platform, and there will be a move away from the monolithic application towards a more composite one.  By this, I mean that, depending on the business process in hand, an application will be built up on the fly from a set of functions to facilitate that process.  Through this means, an organisation can be far more flexible and can ensure that it adapts rapidly to changing market needs.

The concept of the composite application does bring in several issues, however.  Auditing what functions were used when is one of them.  Identifying the right functions to be used in the application is another.  Monitoring the health and performance of the overall process is another.

So, let's have a look at why Serena could be the one to offer this - a rough sketch of how the pieces might fit together follows the list below.

  • A composite application is made up from a set of discrete functions.  Each of these can be looked at as being an object requiring indexing and having a set of associated metadata.  Serena Dimensions CM is an object-oriented system that can build up metadata around objects in an intelligent manner.
  • Functions that are available to be used as part of a composite application need to be available from a library.  Dimensions is a library-based system.
  • Functions need to be pulled together in an intelligent manner, and instantiated as the composite application.  This is so close to a DevOps requirement that Dimensions should shine in its capabilities to carry out such a task.
  • Any composite application must be fully audited so that what was done at any one time can be demonstrated at a later date.  Dimensions has strong and complex versioning and audit capabilities, which would allow any previous state to be rebuilt and demonstrated as required.
  • Everything must be secure.  Dimensions has rigorous user credentials management - access to everything can be defined by user name, role or function.  Therefore, the way that a composite application operates can be defined by the credentials of the individual user.
  • The "glue" between functions across different clouds needs to be put in place.  Unless cloud standards improve drastically, getting different functions to work seamlessly together will remain difficult.  Some code will be required to ensure that Function A and Function B work well together to facilitate Process C.  Dimensions is capable of being the centre for this code to be developed and used - and also of acting as a library for the code to be stored and reused, ensuring that the minimum amount of time is lost in putting together a composite application as required.
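
As promised above, here is a rough sketch of how such library-driven composition might hang together - illustrative only, and not a description of Dimensions' actual capabilities: functions live in a versioned library with metadata, are selected according to a user's credentials, and every composition is audited so it can be reconstructed later:

    # Functions live in a versioned library with metadata; a composite
    # application is assembled from them on the fly, access is checked
    # against the user's role, and every composition is audited so it can
    # be reconstructed later. Everything here is invented for illustration.
    FUNCTION_LIBRARY = {
        ("price-lookup", "1.2"): {"roles": {"sales", "admin"},
                                  "run": lambda ctx: {**ctx, "price": 42.0}},
        ("credit-check", "2.0"): {"roles": {"admin"},
                                  "run": lambda ctx: {**ctx, "credit_ok": True}},
    }
    AUDIT = []

    def compose_and_run(process, steps, user, role, ctx):
        """Build a composite application from library functions and execute it."""
        used = []
        for name, version in steps:
            entry = FUNCTION_LIBRARY[(name, version)]
            if role not in entry["roles"]:
                raise PermissionError(f"{user} ({role}) may not use {name} {version}")
            ctx = entry["run"](ctx)
            used.append((name, version))
        AUDIT.append({"process": process, "user": user, "functions": used})
        return ctx

    result = compose_and_run("quote-to-order", [("price-lookup", "1.2")],
                             "asmith", "sales", {"sku": "X1"})
    print(result, AUDIT)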

Obviously, it would not be all plain sailing for Serena to enter such a market.  Its brand equity currently lies within the development market.  Serena would find itself in competition with the incumbent systems management vendors such as IBM and CA.  However, these vendors are still struggling to come to terms with what the composite application means to them - it could well be that Serena could layer Dimensions on top of existing systems to offer the missing functionality. 

Dimensions would need to be enhanced to provide functions such as the capability to discover and classify available functions across hybrid cloud environments.  A capacity to monitor and measure application performance would be a critical need - which could be created through partnerships with other vendors. 

Overall, Dimensions CM 14 is a good step forward in providing additional functionality to those in the DevOps space.  However, it has so much promise that I would like to see Serena take the plunge and see whether it can be moved through into a more business-focused capability.

It's all happening in the world of big data.

Clive Longbottom | 1 Comment
| More

For a relatively new market, there is a lot happening in the world of big data.  If we were to take a "Top 20" look at the technologies, it would probably read something along the lines of this week's biggest climber being Hadoop, the biggest faller being relational databases, and holding steady being the schema-less databases.

Why?  Well, Actian announced the availability of its SQL-in-Hadoop offering.  Not just a small subset of SQL, but a very complete implementation.  Therefore, your existing staff of SQL devotees and all the tools they use can now be put to work against data stored in HDFS, as well as against Oracle, Microsoft SQL Server, IBM DB2 et al.

Why is this important?  Well, Hadoop has been one of these fascinating tools that promises a lot - but only delivers on this promise if you have a bunch of talented technophiles who know what they are doing.  Unfortunately, these people tend to be as rare as hen's teeth - and are picked up and paid accordingly by vendors and large companies.  Now, a lot of the power of Hadoop can be put in the hands of the average (still nicely paid) database administrator (DBA).

The second major change that this could usher in is the use of Hadoop as a persistent store.  Sure, many have been doing this for some time, but at Quocirca, we have long advised that Hadoop only be used for its MapReduce capabilities, with the outputs being pushed towards a SQL or noSQL database depending on the format of the resulting data, and with business analytics being layered over the top of the SQL/noSQL pair.

With SQL being available directly into and out of Hadoop, new applications could use Hadoop directly, and mixed data types can be stored as SQL-style or as JSON-style constructs, with analytics being deployed against a single data store.
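
In practical terms, the attraction is that a DBA's everyday tooling carries straight over.  The sketch below uses a generic ODBC connection from Python; the DSN, table and columns are hypothetical, and the exact driver and connection details will vary by vendor - the point is simply that it is ordinary SQL, even though the data lives in HDFS:

    # Querying HDFS-resident data with everyday SQL tooling. The DSN, table
    # and columns are hypothetical; the driver and connection string depend
    # on the vendor - the point is that the SQL itself is unremarkable.
    import pyodbc  # assumes a vendor-supplied ODBC driver has been installed

    conn = pyodbc.connect("DSN=hadoop_cluster")  # hypothetical data source name
    cursor = conn.cursor()
    cursor.execute("""
        SELECT region, COUNT(*) AS events, AVG(response_ms) AS avg_response
        FROM weblogs                 -- hypothetical table over HDFS-resident data
        WHERE event_date >= '2014-01-01'
        GROUP BY region
        ORDER BY events DESC
    """)
    for region, events, avg_response in cursor.fetchall():
        print(region, events, avg_response)
    conn.close()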

Is this marking the end for relational databases?  Of course not.  It is highly unlikely that those using Oracle eBusiness Suite will jump ship and go over to a Hadoop-only back end, nor will the vast majority of those running mission critical applications that currently use relational systems.  However, new applications that require large datasets being run on a linearly scalable, cost-effective, data store could well find that Actian provides them with a back end that works for them.

Another vendor that made an announcement around big data a little while back was Syncsort, which made its Ironcluster ETL engine available in AWS essentially for free - or at worst at a price where you would hardly notice it, and only get charged for the workload being undertaken.

Extract, transform and load (ETL) activities have long been a major issue with data analytics, and solutions have grown up around the issue - but at a pretty high price.  In the majority of cases, ETL tools have also only been capable of dealing with relational data - making them pretty useless when it comes to true big data needs.

By making Ironcluster available in AWS, Syncsort is playing the elasticity card.  Those requiring an analysis of large volumes of data have a couple of choices - buy a few acres' worth of expensive in-house storage, or go to the cloud.  AWS EC2 (Elastic Compute Cloud) is a well-proven, easy-access and predictable-cost environment for running an analytics engine - provided that the right data can be made available rapidly.

Syncsort also makes Ironcluster available through AWS' Elastic MapReduce (EMR) platform, allowing data to be transformed and loaded directly onto a Hadoop platform.

With a visual front end and utilising an extensive library of data connectors from Syncsort's other products, Ironcluster offers users a rapid and relatively easy means of bringing together multiple different data sources across a variety of data types and creating a single data repository that can then be analysed.
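
The underlying ETL pattern is simple enough to sketch - the snippet below is a stripped-down illustration with invented file names and columns, not a representation of Ironcluster itself, which adds the connectors, scale and visual front end:

    # Extract from two differently shaped sources, transform them to a common
    # schema and load a single repository ready for analysis. File names and
    # columns are invented; a real ETL tool adds connectors, scale and a
    # visual front end.
    import json
    import pandas as pd

    orders = pd.read_csv("orders.csv")              # columns: order_id, cust, total
    with open("web_events.json") as f:
        events = pd.DataFrame(json.load(f))         # columns: customer, spend

    orders = orders.rename(columns={"cust": "customer", "total": "spend"})
    combined = pd.concat([orders[["customer", "spend"]],
                          events[["customer", "spend"]]], ignore_index=True)
    summary = combined.groupby("customer", as_index=False)["spend"].sum()

    summary.to_csv("customer_spend.csv", index=False)  # the unified repository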

Syncsort is aiming to be highly disruptive with this release - even at its most expensive, the costs are well below those for equivalent licence-and-maintenance ETL tools, and make other subscription-based services look rather expensive.

Big data is a market that is happening, but is still relatively immature in the tools that are available to deal with the data needs that underpin the analytics.  Actian and Syncsort are at the vanguard of providing new tools that should be on the shopping list of anyone serious about coming to terms with their big data needs.

The continuing evolution of EMC

Clive Longbottom | No Comments
| More

The recent EMC World event in Las Vegas was arguably less newsworthy for its product announcements than for the way that the underlying theme and message continue to move EMC away from the company that it was just a couple of years ago.

The new EMC II (as in "eye, eye", standing for "Information Infrastructure", although it might as well be Roman numerals to designate the changes on the EMC side of things) is part of what Joe Tucci, Chairman and CEO of the overall EMC Corporation, calls "The Federation" of EMC II, VMware and Pivotal.  The idea is that each company can still play to its strengths while symbiotically feeding off the others to provide more complete business systems as required.  More on this later.

At last year's event, Tucci started to make the point that the world was becoming more software oriented, and that he saw the end result of this being the "software defined data centre" (SDDC) based on the overlap between the three main software defined areas of storage, networks and compute.  The launch of ViPR as a pretty far-reaching software defined suite was used to show the direction that EMC was taking - although as was pointed out at the time, it was more vapour than ViPR. Being slow to the flash storage market, EMC showed off its acquisition of XtremIO - but didn't really seem to know what to do with it.

On to this year.  Although hardware was still being talked about, it is now apparent that the focus from EMC II is to create storage hardware that is pretty agnostic as to the workloads thrown at it, whether this be file, object or block. XtremIO has morphed from an idea of "we can throw some flash in the mix somewhere to show that we have flash" to being central to all areas.  The acquisition of super-stealth server-side flash outfit DSSD only shows that EMC II does not believe that it has all the answers yet - but is willing to invest in getting them and integrating them rapidly.

However, the software side of things is now the obvious focus for EMC Corp.  ViPR 2 was launched and now moves from being a good idea to a really valuable product that is increasingly showing its capabilities to operate not only with EMC equipment, but across a range of competitors' kit and software environments as well.  The focus is moving from the SDDC to the software defined enterprise (SDE), enabling EMC Corp to position itself across the hybrid world of mixed platforms and clouds.

ScaleIO, EMC II's software layer for creating scalable storage based on commodity hardware underpinnings, was also front and centre in many respects.  Although hardware is still a big area for EMC Corp, it is not seen as being the biggest part of the long-term future.

EMC Corp seems to be well aware of what it needs to do.  It knows that it cannot leap directly from its existing business of storage hardware with software on top to a completely next-generation model of software that is less hardware dependent, without stretching to breaking point its existing relationships with customers and the channel - as well as Wall Street.  Therefore, it is using an analogy of 2nd and 3rd platforms, along with the term "digital born", to identify where it needs to apply its focus.  The 2nd Platform is where most organisations are today: client/server and basic web-enabled applications.  The 3rd Platform is what companies are slowly moving towards - one where there is high mobility, a mix of different cloud and physical compute models and an end game of on-the-fly composite applications being built from functions available from a mix of private and public cloud systems. (For anyone interested, the 1st Platform was the mainframe.)

The "digital born" companies are those that have little to no legacy IT: they have been created during the emergence of cloud systems, and will already be using a mix of on-demand systems such as Microsoft Office 365, Amazon Web Services, Google and so on.

By identifying this basic mix of usage types, Tucci believes that not only EMC II, but the whole of The Federation will be able to better focus its efforts in maintaining current customers while bringing on board new ones.

I have to say that, on the whole, I agree.  EMC Corp is showing itself to be remarkably astute in its acquisitions, in how it is integrating these to create new offerings and in how it is changing from a "buy Symmetrix and we have you" company to a "what is the best system for your organisation?" one.

However, I believe that there are two major stumbling blocks.  The first is that perennial problem for vendors - the channel.  Using a pretty basic rule of thumb, I would guess that around 5% of EMC Corp's channel gets the new EMC and can extend it to push the new offerings through to the customer base.  A further 20% can be trained in a high-touch model to be capable enough to be valuable partners.  The next 40% will struggle - many will not be worth putting any high-touch effort into, as the returns will not be high enough, yet they constitute a large part of EMC Corp's volume into the market.  At the bottom, we have the 35% who are essentially box-shifters, and EMC Corp has to decide whether to put any effort into these.  To my mind, the best thing would be to work on ditching them: the capacity for such channel to spread confusion and problems in the market outweighs the margin on the revenues they are likely to bring in.

This gets me back to The Federation.  When Tucci talked about this last year, I struggled with the concept.  His thrust was that EMC Corp research had shown that any enterprise technical shopping list has no more than five vendors on it.  By using a Federation-style approach, he believed that any mix of the EMC, VMware and Pivotal companies could be seen as a single entity.  I didn't buy this then, and still don't.

However, Paul Maritz, CEO of Pivotal, put it across in a way that made more sense.  Individuals with the technical skills that EMC Corp requires could go to a large monolith such as IBM.  They would be compensated well; they would have a lot of resources at their disposal; they would be working in an innovative environment.  However, they would still be working for a "general purpose" IT vendor.  By going to one of the companies in EMC Corp's Federation, they are working for a specialist: in EMC II, a company that specialises in storage technologies; in VMware, a virtualisation specialist; in Pivotal, a big data specialist - and each has its own special culture.  For many individuals, this difference is a major one.

Sure, the devil remains in the detail, and EMC Corp is seeing a lot of new competition coming through into the market.  However, to my mind it is showing a good grasp of the problems it is facing and a flexibility and agility that belies the overall size and complexity of its corporate structure and mixed portfolio.

I await next year's event with strong interest.
