IBM - A new behemoth, or a wounded beast?

Clive Longbottom
IBM recently held its first major event for industry analysts since its announcement of the divestment of its x86 server product line to Lenovo.  At the event, Tom Rosamilia (SVP, IBM Systems and Technology Group) and Steve Mills (SVP, Software and Systems) and their teams provided an upbeat view of where IBM currently is and where it is going.

The discussion unsurprisingly centred around IBM's own Power 8 microprocessor technology, with forays into the mainframe and the need for cloud, analytics and mobile-first viewpoints.  Storage was another area of discussion - with IBM having acquired Texas Memory Systems (TMS) back in 2012, flash storage is pretty solidly on the roadmap.

Power 8 was presented as a major engine for Linux workloads - which it certainly is.  Speeds and feeds were bandied around to show how Power 8 wipes the floor with competitors' x86-based offerings and how the overall cost of ownership is considerably lower.  For service providers, this is fine: they are not particularly bothered about the underlying technology provided that it does what is required at a suitably low cost.  With many platform as a service providers moving towards a Linux focus, Power 8 systems make a great deal of sense.  Indeed, when mixed with the open-standards OpenStack cloud platform, the offering becomes even more compelling.

Herein lies another issue.  Since the acquisition of SoftLayer, IBM has its own network of datacentres around the world and is building more.  With SoftLayer, IBM has a high-value cloud IaaS platform that can support OpenStack as a PaaS offering, surpassing the capabilities of plain OpenStack, and can then top this with an increasing library of SaaS offerings to sit upon it.  This will undoubtedly put IBM into competition with some of the very service provider prospects it wants to sell Power 8 systems to - while pulling others (particularly in the old systems integrator camp) closer to it by enabling them to avoid the need to build their own platforms, providing them with a consistent and relatively simple stack instead.

However, from a software point of view, IBM has a cloud-first mentality: any new software coming from IBM must be capable of running on its own and partner cloud systems.  Combined with a concomitant mobile focus, IBM is making a play to provide systems that can be accessed by any device through its own and its partners' clouds.  This should increase the amount of enterprise software available to anyone choosing to work with IBM and its partners.

As well as its SoftLayer offering, IBM is also providing a cloud-based version of its Jeopardy-winning system, Watson.  Watson offers fast and effective probability-based outcomes to those dealing with mixed data, and is already showing great promise in healthcare.  Watson as a Service should accelerate such capabilities in the market.

As well as the Power 8 message, IBM is pushing the mainframe as a Linux engine.  Sales of the mainframe continue to be strong, and the majority of sales now include Linux capabilities.  Although the mainframe is not an engine for everyone, it shows no sign of fading away, and will remain a core part of IBM's future.

On the storage side, IBM has released an advanced connection technology it calls the Coherent Accelerator Processor Interface (CAPI).  Within the storage environment, CAPI can be used to make a flash-based storage array work as "slow memory", rather than "fast disk".  In the search for the fastest possible manner of dealing with data from persistent storage systems, companies such as Fusion-io (now acquired by SanDisk) and, indeed, IBM itself brought in PCIe-based server-side storage.  However, these cards then need additional systems, such as those provided by PernixData, to ensure that such dedicated storage does not become a point of failure in the overall system.  By intelligently bypassing large parts of a storage array's existing controller, CAPI can make a SAN array blazingly fast - and CAPI in conjunction with Power 8 systems looks like being a major differentiator in dealing with big data (particularly in speeding up Hadoop clusters) as well as in high performance computing (HPC) systems.

Overall, then, a positive report on the "new", less x86, IBM?  I suppose so - with one major caveat.  When the announcement of the divestment of x86 systems to Lenovo was made, I assumed that this was for the obvious reason that IBM could not manufacture the systems at the same low cost base that Lenovo could.  Indeed, in discussions with Adalio Sanchez, who will transition from General Manager of System x at IBM to head up Lenovo's revamped server organisation, he expects to be able to drive considerable costs out through Lenovo's different approach and greater economies of scale in commodity components.  I did expect, however, that there would remain a strong strategic relationship between the two companies.

Although IBM will be a reseller for and provide ongoing support for Lenovo servers through its Global Business Services (GBS) arm, it will not have any say in the design of the systems.  Therefore, what was looking like a powerful possible capability - mixing Power 8 and x86 technologies in IBM's consolidated and converged PureFlex systems - will not happen.  Sure, if you want x86, IBM can source it via Lenovo, but a fully integrated, engineered converged system will not be there.  One rider to this is that the PureData and some PureApp configurations will still include x86 chips - still provided and integrated directly by IBM.  However, these are far more "black box" designs - the user will not really know what chips are in there, and IBM can tune these any way it sees fit, as long as the end result works.

In the commercial end-user company space, this presents a problem: the prospect may have a lot of Linux workloads, for which Power 8 may be appealing.  However, it is also likely that there will be a large number of Windows-based workloads as well.  Power 8 cannot deal with these, and so an x86 platform will be required.  With Dell, HP and others having engineered converged systems that can run both Linux and Windows workloads, where is it more likely that commercial customers will place their money?

IBM has proven itself to be a fighter and pulled itself back from the brink of disaster in the 1990s.  Its current offerings are strong and it is likely to continue to do well.  Its weak spot is in the x86 space - it would do well to sit down and talk further with Lenovo as to how it can still have a strategic x86 play.

It's all about the platform

Clive Longbottom

Content sync and share systems are available from many players - Dropbox, Box and Microsoft OneDrive are just a few of the options for those who want to be able to access their files from anywhere via the cloud.

However, the ubiquity of systems and the lack of adequate monetisation at the consumer level is making this a difficult market in which to make a profit.  Each of the vendors now has to make a better commercial play - and this may mean establishing product differentiators.

The first step has been to offer enterprise content sync and share (ECSS), where central administrators can control who has access while individuals can work cohesively as teams and groups.  Again, though, while the likes of Huddle and Accellion were early leaders (and still differentiate by offering on-premise and hybrid systems), Box, Dropbox and Microsoft are all busy moving into the same space, and all that is happening is that the baseline of functionality is getting higher - differentiation is still difficult.

The trick is in making any ECSS tool completely central to how people work - making it a platform rather than a tool.  At a basic level, this means making any file activity operate via the ECSS system, rather than through the access device's file system.  File open and save actions must go directly via the cloud - this basic approach makes the tool far more central to the user.

However, this is also likely to commoditise rapidly.  Vendors still need to do more - and this then requires far more from the system.  For example, increasing intelligence around the content of files can enable greater functionality.  Indexing of documents allows full search - but this in itself is no better than many of the desktop search tools currently available.  Applying metadata to the files based on parsing the content starts to add real value - and makes each ECSS system different.

At the recent BoxWorks 2014 event in San Francisco, some pointers to a possible direction were given.  The basic workflow engine built into Box is being improved.  Actions will now be taken based not only on workflow rules, but also on content.  For example, data loss prevention (DLP) can be implemented via Box by checking the content of a document against rules and preventing it from being moved from one area to another or from being shared with other users who do not meet certain criteria.
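
To make the content-driven rule idea concrete, here is a minimal sketch of the kind of check involved - the rule names, patterns and domain whitelist are invented for illustration, and bear no relation to Box's actual implementation, which is configured in the platform rather than hand-coded:

```python
import re

# Hypothetical content rules - real DLP policies would be far richer.
RULES = [
    ("payment-card-number", re.compile(r"\b(?:\d[ -]?){13,16}\b")),
    ("confidential-marker", re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE)),
]
ALLOWED_DOMAINS = {"example.com"}  # placeholder for the organisation's own domains

def may_share(document_text, recipient_domains):
    """Block sharing if restricted content is found and any recipient
    sits outside the allowed domains; otherwise allow it."""
    matched = [name for name, pattern in RULES if pattern.search(document_text)]
    outside = [d for d in recipient_domains if d not in ALLOWED_DOMAINS]
    if matched and outside:
        return False, [f"{m} found; external recipients: {outside}" for m in matched]
    return True, []

# An invoice containing a card number being shared externally gets blocked.
print(may_share("CONFIDENTIAL - card 4111 1111 1111 1111", {"partner.net"}))
```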

Alerts can be set up - maybe a product invoice needs paying on a certain date, or a review needs to be carried out by a specific person by a certain date.  Based on the workflow engine, these events can be identified and processes triggered to enable further actions to be taken.

By controlling the metadata correctly, ECSS tools can start to move into being enterprise document (or information) management systems, and even towards full intellectual property management systems.  Maintaining versioning with unchangeable creation and change date fields provides the capability to create full audit chains that are legally presentable should the need arise.  The metadata can be used to demonstrate compliance with the numerous information and information security laws and industry standards out there, such as HIPAA and ISO 27001 or 17799.  Through such means, the ECSS system becomes an enterprise investment, not just a file management tool.

To make the most of this, though, requires an active and intelligent partner channel.  The channel can bring in the required domain expertise, whether this be horizontally across areas such as data protection, or vertically in areas such as pharmaceutical, finance or oil and gas.  Box has pulled in partners such as Accenture and Tech Mahindra to help in these areas.

Partners need to be able to access the platform to add their own capabilities, though.  This requires a highly functional application programming interface (API) to be in place.  This is an area where Box has put in a great deal of work, and the API is the central means for enabling existing systems to interface and integrate into the Box environment and vice versa.
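
As a rough sketch of what that integration point looks like from a partner's side - the access token and file ID below are placeholders, error handling is omitted, and the endpoint shape follows Box's publicly documented v2 content API - fetching a file's details can be as simple as:

```python
import requests

ACCESS_TOKEN = "REPLACE_ME"  # obtained via Box's OAuth 2.0 flow - placeholder
FILE_ID = "123456789"        # placeholder file identifier

# Pull the file's information record from the Box content API (v2).
resp = requests.get(
    f"https://api.box.com/2.0/files/{FILE_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
info = resp.json()
print(info["name"], info["size"], info["modified_at"])
```

Uploading, commenting, metadata and search follow the same REST pattern, which is what makes it practical for partners to layer their own capabilities on top.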

Box has a strong roadmap for adding extra capabilities.  It needs to get this out into the user domain as soon as possible in order to show prospective users how it will be differentiating itself from its immediate competitors of Dropbox, Microsoft and Google, whose marketing dollars far outstrip Box's.

Box is in the middle of that dangerous place for companies - up until now, it has been small enough to be very responsive to its customers' needs.  In the future, it needs to have a more controllable code base with fewer changes - which could make it appear to be slower in adapting to the market's needs.  By building out its platform and creating an abstracted layer in how its platform works through using its own APIs, Box can create an environment where the basic engine can be stabilised and extra functionality layered on top without impacting the core.  Through this, Box is setting up a disciplined approach for its own engineering teams to use its platform for innovation in the same way as it wants its partners and customers to do.  Provided it sticks to this path, it should be able to maintain a good level of responsiveness to market needs.

Box and its partners need to build more applications on top of the ECSS engine; to create a solid and active ecosystem of applications that are information-centric and have distinct enterprise value.  It also needs to make a better job of showing how well it integrates with existing content management systems, such as Documentum and OpenText to create even more value, democratising information management away from the few to the whole of an organisation and its consultants, contractors, suppliers and customers.

It is likely that over the next year, some of the existing file sync and share vendors will cease to exist as they fail to adapt to the movement of others in their markets.  Box has the capacity and capabilities to continue to be a major player in the market - it just needs to make sure that it focuses hard on the enterprise needs and plays the platform card well.

Time and place: related, or inextricably linked?

Clive Longbottom

High rainfall over the last week has led to flooding.  Last night, there was a large number of burglaries.  An escape of toxic gases this morning has led to emergency services requesting everyone to evacuate their premises.  There are billions of barrels of crude oil that can be recovered over the next decade.

Notice something missing from all of this?  They may all be factually correct; they all discuss time - and yet they are all pretty useless to the reader because of one small omission: the "where?" aspect.

For example, I live in Reading in the UK - if the flooding is taking place in Australia, it may be sad, but I do not need to take any steps myself to avoid the floods.  If the burglaries are close to my house, I may want to review my security measures.  Likewise, if there is a cloud of toxic gas coming my way, I may want to head for the hills.  An oil and gas company is not going to spend billions of dollars in digging holes in the hope of finding oil - they need to have a good idea of where to drill in the first place.

And the examples go on - retail stores figuring out where and when to build their next outlet; utility companies making sure that they do not dig through another company's services; organisations with complex supply chains needing to ensure that the right goods get to the right place at the right time; public sector bodies needing to pull together trends in needs across broad swathes of citizens across different areas of the country.

The need for accurate and granular geo-specific data that can add distinct value to existing data sets has never been higher.  As the internet of everything (IoE) becomes more of a reality, this will only become a more pressing issue.  The next major battleground will be around the capability to overlay geo-specific data from fixed and mobile monitors and devices onto other data services, creating the views required by different groups of people in the organisation in order to add extra value.

I was discussing all of this with one of the major players in the geographic information systems (GIS) market, Esri.  Esri has spent many years and a lot of money in building up its skills in understanding how geographic data and other data sets need to work contextually together.  Through using a concept of layers, specific data can be applied as required to other data, whether this be internal data from the organisation's own applications, data sets supplied by Esri and its partners, or external data sets from other sources.

The problem for vendors such as Esri, though, is the market's simplistic perception of location awareness.  Vendors such as Esri and MapInfo, along with content providers including the Ordnance Survey, Experian and others, are perceived as purely mapping players - maybe as a Google Maps on steroids.  This minimises the actual value that can be obtained from the vendors - and stops many organisations from digging deeper into what can be provided.

For example, the end result of a geolocation analysis may not be a visual map or graph at all.  Take the insurance industry.  You provide them with your postcode; they pull in a load of other data sets looking at crime in your area, the likelihood of flooding, the number of claims already made by neighbours and the possibility of fraud, and out pops a number - say, £100 for a low-risk insurance prospect, £5,000 for a high-risk one.  Neither the insurance agent nor the customer has seen any map, yet everything was dependent on a full understanding of the geographical "fix" of the data point, and each layer of data only had that point in common.  Sure, time would also have needed to be taken into account - this makes it two fixed points, which could be analysed to reach an informed and more accurate decision.
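
A crude sketch of that kind of layered, map-free calculation - the data sets, weights and scores below are entirely invented for illustration - might look like this:

```python
# Hypothetical per-postcode risk layers - in reality these would be
# commercial data sets keyed on a geographic fix, not dict literals.
LAYERS = {
    "crime":  {"RG1 1AA": 0.7, "RG10 9ZZ": 0.1},
    "flood":  {"RG1 1AA": 0.4, "RG10 9ZZ": 0.2},
    "claims": {"RG1 1AA": 0.5, "RG10 9ZZ": 0.1},
    "fraud":  {"RG1 1AA": 0.3, "RG10 9ZZ": 0.0},
}
WEIGHTS = {"crime": 0.3, "flood": 0.3, "claims": 0.25, "fraud": 0.15}
BASE_PREMIUM, MAX_LOADING = 100.0, 4900.0  # GBP - illustrative numbers only

def quote(postcode):
    """Combine every layer at a single geographic point into one premium."""
    risk = sum(weight * LAYERS[layer].get(postcode, 0.5)
               for layer, weight in WEIGHTS.items())
    return round(BASE_PREMIUM + risk * MAX_LOADING, 2)

print(quote("RG10 9ZZ"))  # lower-risk postcode, lower premium
print(quote("RG1 1AA"))   # higher-risk postcode, higher premium
```

No map is ever drawn; the postcode is simply the common key that lets otherwise unrelated layers be combined.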

The key for the GIS players now is to move far more towards being a big data play that prospects can see as a central part of meeting their needs.  Esri seems to understand this - it has connectors and integrations into most other systems, such that other data sources can be easily used, but also so that other business intelligence front ends can be used if a customer so wishes.

So, what's the future for GIS?  In itself, probably just "more of the same".  As part of a big data/internet of everything approach, geographic data will be one of the major underpinnings for the majority of systems.  When combined with time, place helps to provide a fixed point of context around which variable data can be better understood.  It is likely that the GIS vendors will morph into being bundled with the big data analytic players - but as ones with requisite domain expertise in specific verticals.

As the saying goes, there is a time and place for everything: when it comes to a full big data analytics approach, for everything, there is a time and place. Or, as a colleague said, maybe the time for a better understanding of the importance of place is now.

It's in the net: The value of big data

Clive Longbottom
Talking with a journalist friend of mine a few weeks back, we got onto the subject of how to place some actual hard pounds and pence value on data.  It got me thinking - and this is my take on it.

A reasonable example would be the UK Premiership football/soccer league - understood by a large enough number of people to make the analogies useful (hopefully).

Let's just start with a single data point.  Manchester United is a football team.  This is pretty incontrovertible - but it has little value in itself.  We can add immediate other data points to it, such as that it is a Premiership team, its home ground is Old Trafford, its strip is red etc.

This starts to build more of a picture - but still has little value.

We can then start to add other data to create possible value.  Over the past 10 years, Manchester United has won the Premiership title five times.  It has won the FA Cup once, the League Cup three times and the FIFA Club World Cup once.  A pretty good track record, then.

After the retirement of its long-term manager, Sir Alex Ferguson, in 2013, David Moyes took over for one season - and United did not fare well, as players struggled to come to terms with a new regime.  Moyes was sacked and former Ajax, Bayern Munich, Barcelona and Dutch national team coach Louis van Gaal took over.  Van Gaal is looking to make major changes to the team, both through transfers and in the way the players are managed and trained.  The results of these transfers will probably be known by the time you read this piece.  A firm hand on the tiller could start to steer United back to winning ways.

Forbes estimates that Manchester United's brand value is around $739m (having fallen from $837m, due to the fall in playing fortunes in the 2013/14 season).  Forbes also estimates the "team" value (based on equity plus debt values) at $2.8b.  This makes it the world's third most valuable soccer club, behind Real Madrid and Barcelona.  So - deep pockets, and a money making machine.

The club claims to have 659 million fans around the globe, has nearly 3 million followers on Twitter and 54 million likes on Facebook. Wow - lots of eyeballs and merchandising opportunities.

In its first quarter 2014 financial results, it announced that merchandise and licensing revenues were up by 13.8%, that sponsorship revenues were up by 62.6% and that broadcasting revenues were up by 40.9%.  This all led to the quarter's revenues being up by 29.1% overall at £98.5m, with EBITDA up by 36.2%.  A bad season doesn't seem to have hit the bottom line overall.

The owners of the club, the American Glazer family of Malcolm and his six children, gained control of the club by borrowing money through payment-in-kind deals via an external company.  However, many of the loans are guaranteed against Manchester United assets.  In 2012, the Glazers sold 10% of the overall shares in the club, followed by a further 5% after Malcolm Glazer's death in May 2014.  Opportunities are there to buy into the club through share ownership - and to build up a decent holding if wanted.  A leveraged buy-out that is now being sold back to the markets: not so much of a risk now.

Now we are getting somewhere.  We've brought together data from all sorts of different environments that starts to build up a more meaningful picture.

As a supporter, we have some idea of the new direction: van Gaal has a good track record; he is strict and is likely to come down hard on players who felt that they could pay little attention to Moyes with an attitude of "Sir Alex didn't do it that way".  Van Gaal cannot afford to treat 2014/15 as merely a transition year - he has to prove to all concerned that United is back on track.

For investors, the poor 2013/14 season did have an impact - brand value is down, and overall playing revenues will be hit as United will not be playing in Europe this season.  The supporters have proven to be loyal, and merchandise is still selling well.  However, new sponsors are on board with long-term deals, and the overall books are still looking strong.  

Now - this has just been about Manchester United.  There are 19 other clubs in the Premiership, and the same analysis can be carried out against each one.  Further granularity can be added by analysing at the individual player level, at coaching team level and at commercial team level.  The findings can then be compared and contrasted to give indicators of how the clubs are likely to perform at a sporting and a financial level.

This is how big data works - it brings together little bits of unconnected data and creates an overall story that has different values depending on how you look at it.
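
A trivial sketch of that aggregation step - the field names are invented, and the figures are simply the ones quoted above rather than a live data feed - might be no more than joining per-club records from different sources on a common key:

```python
# Illustrative fragments from three "sources", joined on the club name.
sporting  = {"Manchester United": {"league_titles_10yr": 5, "european_football_2014_15": False}}
financial = {"Manchester United": {"brand_value_usd_m": 739, "q1_revenue_gbp_m": 98.5}}
audience  = {"Manchester United": {"claimed_fans_m": 659, "facebook_likes_m": 54}}

def club_story(name):
    """Merge whatever each source knows about a club into one record."""
    story = {"club": name}
    for source in (sporting, financial, audience):
        story.update(source.get(name, {}))
    return story

print(club_story("Manchester United"))
```

The value is not in any single field but in the combined record, which reads differently to a supporter, an investor or a sponsor.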

Does it result in something where you can say "this is worth this much"?  No - but then again, very little in life does allow for such certainty.  As long as sufficient data is pulled together from sufficient sources and is then analysed in the right way, it should be enough to say "this finding will give me a strong chance of greater value".

The worst thing that you can say to a United supporter is that football is "only a game".  Bill Shankly, a former manager of United's arch enemies, Liverpool FC, once said "Some people believe football is a matter of life and death, I am very disappointed with that attitude. I can assure you it is much, much more important than that."  

Football ceased to be just a game many years ago - it is now a major commercial business, where getting anything wrong can have major long-term impact on earning capabilities, and therefore club survival.  Shankly - speaking well before the use of big data analytics - may well have been right.

The security and visibility of critical national infrastructure: ViaSat's mega-SIEM

Bob Tarzey

There has been plenty of talk about the threat of cyber-attacks on critical national infrastructure (CNI). So what's the risk, what's involved in protecting CNI and why, to date, do attacks seem to have been limited?

CNI is the utility infrastructure that we all rely on day-to-day: national networks such as electricity grids, water supply systems and rail tracks.  Some have an international aspect too; for example, gas pipelines are often fed by cross-border suppliers.  In the past such infrastructure has often been owned by governments, but much has now been privatised.

Some CNI has never been in government hands; mobile phone and broadband networks have largely emerged since the telco monopolies were scrapped in the 1980s.  The supply chains of major supermarkets have always been a private matter, but they are very reliant on road networks, an area of CNI still largely in government hands.

The working fabric of CNIs is always a network of some sort - pipes, copper wires, supply chains, rails, roads - and keeping it all running requires network communications.  Before the widespread use of the internet this was achieved through proprietary, dedicated and largely isolated networks.  Many of these are still in place.  However, the problem is that they have increasingly become linked to and/or enriched by internet communications.  This makes CNIs part of the nebulous thing we call cyber-space, which is predicted to grow further and faster with the rise of the internet-of-things (IoT).

Who would want to attack CNI?  Perhaps terrorists; however, some point out that it is not really their modus operandi, regional power cuts being less spectacular than flying planes into buildings.  CNI could also become a target in nation-state conflicts, perhaps through a surreptitious attack where there is no kinetic engagement (a euphemism for direct military conflict).  Some say this is already happening - for example, the Stuxnet malware that targeted Iranian nuclear facilities.

Then there is cybercrime.  Poorly protected CNI devices may be used to gain entry to computer networks with more value to criminals.  In some cases devices could be recruited to botnets; again, this is already thought to have happened with IoT devices.  Others may be direct targets, for example tampering with electricity meters or stealing data from point-of-sale (PoS) devices that are the ultimate front end of many retail supply chains.

Who is ultimately responsible for CNI security?  Should it be governments?  After all, many of us own the homes we live in, but we expect government to run defence forces to protect our property from foreign invaders.  Government also passes down security legislation, for example at airports, and other mandates are emerging with regard to CNI.  However, at the end of the day it is in the interests of CNI providers to protect their own networks, for commercial reasons as well as in the interests of security.  So, what can be done?

Securing CNI

One answer is, of course, CNI network isolation.  However, this is simply not practical: laying private communications networks is expensive, and innovations like smart metering are only practical because existing communications technology standards and networks can be used.  Of course, better security can be built into CNIs in the first place, but this will take time - many have essential components that were installed decades ago.

A starting point would be better visibility of the overall network in the first place, and the ability to collect inputs from devices and record events occurring across CNI networks.  If this sounds like a kind of SIEM (security information and event management) system, along the lines of those provided for IT networks by LogRhythm, HP, McAfee, IBM and others, then that is because it is: a mega-SIEM for the huge scale of CNI networks.  This is the vision behind ViaSat's Critical Infrastructure Protection service, which ViaSat is now extending from the USA to Europe.

The service involves installing monitors and sensors across CNI networks, setting baselines for known normal operations and looking for the absence of the usual and the presence of the unusual.  ViaSat can manage the service for its customers out of its own security operations centre (SOC) or provide customers with their own management tools.  Sensors are interconnected across an encrypted IP fabric, which allows for secure transmission of results and commands to and from the SOC.  Where possible the CNI's own fabric is used for communications, but if necessary this can be supplemented with internet communications; in other words, the internet can be recruited to help protect CNI as well as attack it.
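
To make the "absence of the usual, presence of the unusual" idea concrete, here is a minimal baseline check of the sort any SIEM-style system performs at scale - the sensor readings and threshold are invented, and ViaSat's actual analytics will be far more sophisticated:

```python
from statistics import mean, stdev

def baseline(readings):
    """Learn a simple per-sensor baseline from known-normal readings."""
    return mean(readings), stdev(readings)

def is_unusual(value, mu, sigma, tolerance=3.0):
    """Flag readings more than `tolerance` standard deviations from normal."""
    return abs(value - mu) > tolerance * sigma

# Invented example: flow readings from a pipeline sensor during normal operation.
normal = [101.2, 99.8, 100.5, 100.1, 99.6, 100.9, 100.3]
mu, sigma = baseline(normal)

for reading in (100.4, 112.7):  # the second reading should raise an alert
    if is_unusual(reading, mu, sigma):
        print(f"ALERT: reading {reading} outside baseline {mu:.1f} +/- {3 * sigma:.1f}")
```

The hard part is not the arithmetic but doing this across thousands of heterogeneous sensors, correlating the alerts and spotting when an expected signal simply stops arriving.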

Having better visibility of any network not only helps improve security, but enables other improvements to be made through better operational intelligence.  ViaSat says it is already doing this for its customers.  The story sounds similar to one told in a recent Quocirca research report, Masters of Machines, which was sponsored by Splunk.  Splunk's background is SIEM and IT operational intelligence, which, as the report shows, is increasingly being used to provide better commercial insight into IT-driven business processes.

As it happens, ViaSat already uses Splunk as a component of its SOC architecture.  However, Splunk has ambitions in the CNI space too; some of its customers are already using its products to monitor and report on industrial systems.  Some co-opetition will surely be a good thing as the owners of CNIs seek to run and secure them better for the benefit of their customers and in the interests of national security.

Do increasing worries about insider threats mean it is time to take another look at DRM?

Bob Tarzey

The encryption vendor SafeNet publishes a Breach Level Index, which records actual reported incidents of data loss.  Whilst the number of losses attributed to malicious outsiders (58%) exceeds the number attributed to malicious insiders (13%), SafeNet claims that insiders account for more than half of the actual information lost.  This is because insiders will also be responsible for all the accidental losses that account for a further 26.5% of incidents, and the stats do not take into account the fact that many breaches caused by insiders will go unreported.  The insider threat is clearly something that organisations need to guard against to protect their secrets and regulated data.

Employees can be coached to avoid accidents and technology can support this.  Intentional theft is harder to prevent, whether it is for reasons of personal gain, industrial espionage or just out of spite.  According to Verizon's Data Breach Investigations Report, 70% of thefts of data by insiders are committed within thirty days of an employee resigning from their job, suggesting they plan to take data with them to their new employer.  Malicious insiders will try to find a way around the barriers put in place to protect data; training may even serve to provide useful pointers about how to go about it.

Some existing security technologies have a role to play in protecting against the insider threat.  Basic access controls built into data stores, linked to identity and access management (IAM) systems, are a good starting point; encryption of stored data strengthens this, helping to ensure only those with the necessary rights can access data in the first place.  In addition, there have been many implementations of data loss prevention (DLP) systems in recent years; these monitor the movement of data over networks, alert when content is going somewhere it shouldn't and, if necessary, block it.

However, if a user has the rights to access data, and indeed to create it in the first place, then these systems do not help, especially if the user is to be trusted to use that data on remote devices.  To protect data at all times, controls must extend to wherever the data is.  It is to this end that renewed interest is being taken in digital rights management (DRM).  In the past, issues such as scalability and user acceptance have held many organisations back from implementing DRM.  That is something DRM suppliers such as Fasoo and Verdasys have sought to address.

DRM, as with DLP, requires all documents to be classified from the moment of creation and monitored throughout their life cycle.  With DRM, user actions are controlled through an online policy server, which is referred to each time a sensitive document is accessed.  So, for example, a remote user can be prevented from taking actions on a given document such as copying or printing, and documents can only be shared with other authorised users.  Most importantly, an audit trail of who has done what to a document, and when, is collected and managed at all stages.
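
In rough terms - the policy store, user names and actions below are hypothetical and not any particular vendor's API - the per-access check and its audit trail work something like this:

```python
import datetime

# Hypothetical policy store: permitted actions per (document, user) pair.
POLICY = {
    ("design-spec-v3.docx", "alice"): {"view", "edit", "share"},
    ("design-spec-v3.docx", "bob"):   {"view"},  # no copy, print or share
}
AUDIT_LOG = []

def request_action(document, user, action):
    """Consult the policy server before allowing an action, and record the
    decision so a full audit trail exists for every access attempt."""
    allowed = action in POLICY.get((document, user), set())
    AUDIT_LOG.append({
        "when": datetime.datetime.utcnow().isoformat(),
        "document": document, "user": user,
        "action": action, "allowed": allowed,
    })
    return allowed

print(request_action("design-spec-v3.docx", "bob", "print"))  # False - blocked
print(AUDIT_LOG[-1])                                          # the recorded decision
```

Because the check happens wherever the document is opened, the control travels with the data rather than stopping at the network boundary.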

Just trusting employees would be cheaper and easier than implementing more technology.  However, it is clear that this is not a strategy businesses can move forward with.  Even if they are prepared to take risks with their own intellectual property, regulators will not accept a casual approach when it comes to sensitive personal and financial data.  If your organisation cannot be sure what users are doing with its sensitive data at all times, perhaps it is time to take a look at DRM.

Quocirca's report "What keeps your CEO up at night? The insider threat: solved with DRM", is freely available here.

Top 10 characteristics of high performing MPS providers

Louella Fernandes

Quocirca's research reveals that almost half of enterprises plan to expand their use of managed print services (MPS). MPS has emerged as a proven approach to reducing operational costs and improving the efficiency and reliability of a business's print infrastructure at a time when in-house resources are increasingly stretched. 

Typically, the main reasons organisations turn to MPS are cost reduction, predictability of expenses and service reliability. However, they may also benefit from the implementation of solutions such as document workflow, mobility and business process automation, to boost collaboration and productivity among their workforce. MPS providers can also offer businesses added value through transformation initiatives that support revenue and profit growth. MPS providers include printer/copier manufacturers, systems integrators and managed IT service providers. As MPS evolves and companies increase their dependence on it, whatever a provider's background it's important that they can demonstrate their credibility across a range of capabilities. The following are key criteria to consider when selecting an MPS provider:

  1. Strong focus on improving customer performance - In addition to helping customers improve the efficiency of their print infrastructure, leading MPS providers can help them drive transformation and increase employee productivity as well as supporting revenue growth.  An MPS provider should understand the customer's business and be able to advise them on solutions that can be implemented to improve business performance, extend capabilities and reach new markets.
  2. A broad portfolio of managed services - Many organisations may be using a variety of providers for their print and IT services. However, managing multiple service providers can also be costly and complex. For maximum efficiency, look for a provider with a comprehensive suite of services which cover office and production printing, IT services and business process automation.  As businesses look more to 'as-a-service' options for software implementation, consider MPS providers with strong expertise across both on-premise and cloud delivery models.
  3. Consistent global service delivery with local support - Global delivery capabilities offer many advantages, including rapid implementation in new locations and the ability to effectively manage engagements across multiple countries. However it's also important that a provider has local resources with knowledge of the relevant regulatory and legal requirements. Check whether an MPS provider uses standard delivery processes across all locations and how multi-location teams are organised and collaborate.
  4. Proactive continuous improvement - An MPS provider must go beyond a break/fix model to offer proactive and pre-emptive support and maintenance. As well as simple device monitoring they should offer advanced analytics that can drive proactive support and provide visibility into areas for on-going improvement.
  5. Strong multivendor support - Most print infrastructures are heterogeneous environments comprising hardware and software from a variety of vendors, so MPS providers should have proven experience of working in multivendor environments. A true vendor-agnostic MPS provider should play the role of trusted technology advisor, helping an organisation select the technologies that best support their business needs. Independent MPS providers should also have partnerships with a range of leading vendors, giving them visibility of product roadmaps and emerging technologies.
  6. Flexibility - Businesses will always want to engage with MPS in a variety of different ways. Some may want to standardise on a single vendor's equipment and software, while others may prefer multivendor environments. Some may want a provider to take full control of their print infrastructure while others may only want to hand over certain elements. And some may want to mix new technology with existing systems so they can continue to leverage past investments. Leading MPS providers offer flexible services that are able to accommodate such specific requirements. Flexible procurement and financial options are also key, with pricing models designed to allow for changing needs.
  7. Accountability - Organisations are facing increased accountability demands from shareholders, regulators and other stakeholders. In turn, they are demanding greater accountability from their MPS providers. A key differentiator for leading MPS providers is ensuring strong governance of MPS contracts, and acting as a trusted, accountable advisor, making recommendations on the organisation's technology roadmap. MPS providers must be willing to meet performance guarantees through contractual SLAs, with financial penalties for underperformance. They should also understand the controls needed to meet increasingly complex regulatory requirements.
  8. Full service transparency - Consistent service delivery is built on consistent processes that employ a repeatable methodology. Look for access to secure, web-based service portals with dashboards that provide real-time service visibility and flexible reporting capabilities.
  9. Alignment with standards - An MPS provider should employ industry best practices, in particular aligning with the ITIL approach to IT service management. ITIL best practices encompass problem, incident, event, change, configuration, inventory, capacity and performance management as well as reporting.
  10. Innovation - Leading MPS providers demonstrate innovation. This may include implementing emerging technologies and new best practices and continually working to improve service delivery and reduce costs. Choose a partner with a proven track record of innovation. Do they have dedicated research centres or partnerships with leading technology players and research institutions? You should also consider how a prospective MPS provider can contribute to your own company's innovation and business transformation strategy. Bear in mind that innovation within any outsourcing contract may come at a premium - this is where gain-sharing models may be used.

Ultimately, businesses are looking for more than reliability and cost reduction from their MPS provider. Today they also want access to technologies that can increase productivity and collaboration and give them a competitive advantage as well as help with business transformation. By ensuring a provider demonstrates the key characteristics above before committing, organisations can make an informed choice and maximise the chances of a successful engagement. Read Quocirca's Managed Print Services Landscape, 2014

What is happening to the boring world of storage?

Clive Longbottom

Storage suddenly seems to have got interesting again.  As the interest moves from increasing spin speeds and applying ever more intelligent means to get a disk head over the right part of a disk in the fastest possible time to flash-based systems where completely different approaches can be taken, a feeding frenzy seems to be underway.  The big vendors are in high-acquisition mode, while the new kids on the block are mixing things up and keeping the incumbents on their toes.

After the acquisitions of Texas Memory Systems (TMS) by IBM, XtremIO by EMC and Whiptail by Cisco in 2012 and 2013, it may have looked like it was time for a period of calm reflection and the full integration of what they had acquired.  However, EMC acquired ScaleIO and then super-stealth server-side flash company DSSD to help it create a more nuanced storage portfolio capable of dealing with multiple different workloads on the same basic storage architecture.

Pure Storage suddenly popped up and signed a cross-licensing and patent agreement with IBM, acquiring over 100 storage and related patents from IBM and stating that this was a defensive move to protect itself from any patent trolling by other companies (or shell companies).  However, it is also likely that IBM will gain some technology benefits from the cross-licensing deal.  At the same time as the IBM deal, Pure also acquired other patents to bolster its position.

SanDisk acquired Fusion-io, another server-side flash pioneer.  More of a strange acquisition, this one - Fusion-io would have been more of a fit for a storage array vendor looking to extend its reach into converged fabric through PCIe storage cards.  SanDisk will now have to forge much stronger links with motherboard vendors - or start to manufacture its own motherboards - to make this acquisition work well.  However, Western Digital had also been picking up flash vendors, such as Virident (itself a PCIe flash vendor), sTec and VeloBit; Seagate acquired the SSD and PCIe parts of Avago - maybe SanDisk wanted to be seen to be doing something.

Then we have Nutanix: a company that started off marketing itself as a scale-out storage company but was actually far more of a converged infrastructure player.  It has just inked a global deal with Dell, under which Dell will license Nutanix's web-scale software to run on Dell's own converged architecture systems.  This deal gives a massive boost to Nutanix: it gains access to the louder voice and greater reach of Dell, while still maintaining its independence in the market.

Violin Memory has not been sitting still either.  A company that has always had excellent technology based on moving away from the concept of the physical disk drive, it uses a PCI-X in-line memory module approach (which it calls VIMMs) to provide all-flash based storage arrays.  However, it did suffer from being a company with great hardware but little in the way of intelligent software.

After its IPO, it found that it needed a mass change in management staff, and under a new board and other senior management, Quocirca is seeing some massive changes in its approach to its product portfolio.  Firstly, Violin brought the Windows Flash Array (WFA) to market - far more of an appliance than a storage array.  Now, it has launched its Concerto storage management software as part of its 7000 all flash array.  Those who have already bought the 6000 array can choose to upgrade to a Concerto-managed system in-situ.

Violin has, however, decided that PCIe storage is not for it - it has sold off that part of its business to SK Hynix.

The last few months have been hectic in the storage space.  For buyers, it is a dangerous time - it is all too easy to find yourself with high-cost systems that are either superseded and unsupported all too quickly, or whose original vendor is acquired or goes bust, leaving you with a dead-end system.  There will also be continued evolution of systems to eke out those extra bits of performance, and a buyer now may not be able to deal with these changes by abstracting everything through a software defined storage (SDS) layer.

However, flash storage is here to stay.  At the moment, it is tempting to choose flash systems for specific workloads where you know that you will be replacing the systems within a relatively short period of time anyway.  This is likely to be mission-critical, latency-dependent workloads where the next round of investment in the next generation of low-latency, high-performance storage can be made within 12-18 months.  Server-side storage systems using PCIe cards should be regarded as highly niche for the moment: it will be interesting to see what EMC does with DSSD and what Western Digital and SanDisk do with their acquisitions, but for now the lack of true abstraction of PCIe storage (apart from via software from the likes of PernixData) keeps it a specialist choice.

For general storage, the main storage vendors will continue to move from all spinning to hybrid and then to all flash arrays over time - it is probably best to just follow the crowd here for the moment.

Cloud infrastructure services: find a niche or die?

Bob Tarzey

Back in May it was reported that Morgan Stanley had been appointed to explore options for the sale of hosted services provider Rackspace.  Business Week reported the story on May 16th with the headline "Who Might Buy Rackspace? It's a Big List".  24/7 Wall St. reported analysis from Credit Suisse that narrowed this to three potential suitors: Dell, Cisco and HP.

To cut a long story short, Rackspace sees a tough future competing with the big three in the utility cloud market: Amazon, Google and Microsoft.  Rackspace could be attractive to Dell, Cisco, HP and other traditional IT infrastructure vendors that see their core business being eroded by the cloud and need to build out their own offerings (as does IBM, which has already made significant acquisitions).

Quocirca sees another question that needs addressing. If Rackspace, one of the most successful cloud service providers, sees the future as uncertain in the face of competition from the big three, then what of the myriad of smaller cloud infrastructure providers? For them the options are twofold.

Be acquired or go niche

The first is to achieve enough market penetration to become an attractive acquisition target for the larger established vendors that want to bolster their cloud portfolios.  As well as the IT infrastructure vendors, this includes communications providers and system integrators.

Many have already been acquisitive in the cloud market.  For example, the US number three carrier CenturyLink bought Savvis, AppFog and Tier-3, while NTT's system integrator arm Dimension Data added to its existing cloud services with OpSource and BlueFire.  Other cloud service providers have merged to beef up their presence, for example Claranet and Star.

The second option for smaller providers is to establish a niche where the big players will find it hard to compete.  There are a number of cloud providers that are already doing quite well at this; they rely on a mix of geographic, application or industry specialisation.  Here are some examples:

Exponential-E - highly integrated network and cloud services

Exponential-E's background is as a UK-focused virtual private network provider, using its own cross-London metro network and services from BT.  In 2010 the vendor moved beyond networking to provide infrastructure-as-a-service.  Its differentiator is to embed this into its own network services at network layer 2 (switching etc.) rather than at higher levels.  Its customers get the security and performance that would be expected from internal WAN-based deployments, which cannot be achieved for cloud services accessed over the public internet.

City Lifeline - in finance latency matters

City Lifeline's data centre is shoe-horned into an old building near Moorgate in central London.  Its value proposition is low latency: it charges a premium over out-of-town premises for its proximity to the big City institutions.

Eduserv - governments like to know who they are dealing with

For reasons of compliance, ease of procurement and security of tenure, government departments in any country like to have some control over their suppliers, and this includes the procurement of cloud services.  Eduserv is a not-for-profit, long-term supplier of consultancy and managed services to the UK government and charity organisations.  In order to help its customers deliver better services, Eduserv has developed cloud infrastructure offerings out of its own data centre in the central south UK town of Swindon.  As a UK G-Cloud partner it has achieved IL3 security accreditation, enabling it to host official government data.  Eduserv provides value-added services to help customers migrate to cloud, including cloud adoption assessments, service designs and on-going support and management.

Firehost - performance and security for payment processing

Considerable rigour needs to go into building applications for processing highly secure data for sectors such as financial services and healthcare.  This rigour must also extend to the underlying platform.  Firehost has built an IaaS platform to target these markets.  In the UK its infrastructure is co-located with Equinix, ensuring access to multiple high-speed carrier connections.  Within such facilities, Firehost applies its own cage-level physical security.  Whilst the infrastructure is shared, it maintains the feel of a private cloud, with enhanced security through protected VMs with a built-in web application firewall, DDoS protection, IP reputation filtering and two-factor authentication for admin access.

Even for these providers the big three do not disappear.  In some cases their niche capability may simply see them bolted on to bigger deployments, for example a retailer off-loading its payment application to a more secure environment.  In other cases, existing providers are starting to offer enhanced services around the big three to extend in-house capability; for example, UK hosting provider Attenda now offers services around Amazon Web Services (AWS).

For many IT service providers, the growing dominance of the big three cloud infrastructure providers, along with the strength of software-as-a-service providers such as salesforce.com, NetSuite and ServiceNow, will turn them into service brokers. This is how Dell positioned itself at its analyst conference last week; of course, that may well change if it were to buy Rackspace.

Cloud orchestration - will a solution come from SCM?

Clive Longbottom

Serena Software is a software change and configuration management vendor, right?  It has recently released its Dimensions CM 14 product, with additional functionality driving Serena further into the DevOps space, as well as making it easier for distributed development groups to work collaboratively through synchronised libraries with peer review capabilities.

Various other improvements, such as change and branch visualisation and the use of health indicators to show how "clean" code is and where any change sits in a development/operations process, as well as integrations into the likes of Git and Subversion, mean that Dimensions CM 14 should help many a development team as it moves from an old-style separate development, test and operations setup to a more agile, process-driven, automated DevOps environment.

However, it seems to me that Serena is actually sitting on something far more important.  Cloud computing is an increasing component of many an organisation's IT platform, and there will be a move away from the monolithic application towards a more composite one.  By this, I mean that, depending on the business's needs, an application will be built up on the fly from a set of functions to facilitate a given process.  Through this means, an organisation can be far more flexible and can ensure that it adapts rapidly to changing market needs.

The concept of the composite application does bring in several issues, however.  Auditing what functions were used when is one of them.  Identifying the right functions to be used in the application is another.  Monitoring the health and performance of the overall process is another.

So, let's have a look at why Serena could be the one to offer this.

·  A composite application is made up from a set of discrete functions.  Each of these can be looked at as an object requiring indexing and having a set of associated metadata.  Serena Dimensions CM is an object-oriented system that can build up metadata around objects in an intelligent manner.

·  Functions that are available to be used as part of a composite application need to be available from a library.  Dimensions is a library-based system.

·  Functions need to be pulled together in an intelligent manner and instantiated as the composite application.  This is so close to a DevOps requirement that Dimensions should shine in its capabilities to carry out such a task.

·  Any composite application must be fully audited so that what was done at any one time can be demonstrated at a later date.  Dimensions has strong and complex versioning and audit capabilities, which would allow any previous state to be rebuilt and demonstrated as required.

·  Everything must be secure.  Dimensions has rigorous user credentials management - access to everything can be defined by user name, role or function.  Therefore, the way that a composite application operates can be defined by the credentials of the individual user.

·  The "glue" between functions across different clouds needs to be put in place.  Unless cloud standards improve drastically, getting different functions to work seamlessly together will remain difficult.  Some code will be required to ensure that Function A and Function B work well together to facilitate Process C.  Dimensions is capable of being the centre for this code to be developed and used - and also as a library for the code to be stored and reused, ensuring that the minimum amount of time is lost in putting together a composite application as required.  A toy sketch of how such a composition might hang together follows this list.

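A toy sketch of the idea - the function library, metadata fields and audit record below are invented for illustration and are not Serena's product - might look like this:

```python
import datetime

# A hypothetical library of reusable functions, each carrying the kind of
# metadata an object-oriented repository could maintain about it.
LIBRARY = {
    "fetch_orders":  {"cloud": "private", "version": "1.4",
                      "func": lambda ctx: {**ctx, "orders": [101, 102]}},
    "score_credit":  {"cloud": "partner", "version": "2.0",
                      "func": lambda ctx: {**ctx, "credit_ok": True}},
    "raise_invoice": {"cloud": "public",  "version": "3.1",
                      "func": lambda ctx: {**ctx, "invoiced": True}},
}
AUDIT = []

def compose(step_names, context, user):
    """Instantiate a composite application from library functions, recording
    exactly which version of each function ran, for whom and when."""
    for name in step_names:
        entry = LIBRARY[name]
        context = entry["func"](context)
        AUDIT.append({"step": name, "version": entry["version"], "user": user,
                      "at": datetime.datetime.utcnow().isoformat()})
    return context

result = compose(["fetch_orders", "score_credit", "raise_invoice"], {}, user="clive")
print(result)
print(AUDIT)
```

The audit list is the point: every composition can be replayed and demonstrated later, which is exactly the versioning discipline a change and configuration management tool already enforces.
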
Obviously, it would not be all plain sailing for Serena to enter such a market.  Its brand equity currently lies within the development market.  Serena would find itself in competition with the incumbent systems management vendors such as IBM and CA.  However, these vendors are still struggling to come to terms with what the composite application means to them - it could well be that Serena could layer Dimensions on top of existing systems to offer the missing functionality. 

Dimensions would need to be enhanced to provide functions such as the capability to discover and classify available functions across hybrid cloud environments.  A capacity to monitor and measure application performance would be a critical need - which could be created through partnerships with other vendors. 

Overall, Dimensions CM 14 is a good step forward in providing additional functionality to those in the DevOps space.  However, it has so much promise that I would like to see Serena take the plunge and see if it can move the product through into a more business-focused capability.

It's all happening in the world of big data

Clive Longbottom

For a relatively new market, there is a lot happening in the world of big data.  If we were to take a "Top 20" look at the technologies, it would probably read something along the lines of this week's biggest climber being Hadoop, biggest loser being relational databases, and non-mover being the schema-less databases.

Why?  Well, Actian announced the availability of its SQL-in-Hadoop offering.  Not just a small subset of SQL, but a very complete implementation.  Therefore, your existing staff of SQL devotees and all the tools they use can now be used against data stored in HDFS, as well as against Oracle, Microsoft SQL Server, IBM DB2 et al.

Why is this important?  Well, Hadoop has been one of those fascinating tools that promises a lot - but only delivers on this promise if you have a bunch of talented technophiles who know what they are doing.  Unfortunately, these people tend to be as rare as hen's teeth - and are picked up and paid accordingly by vendors and large companies.  Now, a lot of the power of Hadoop can be put in the hands of the average (still nicely paid) database administrator (DBA).

The second major event that this could start to usher in is the use of Hadoop as a persistent store.  Sure, many have been doing this for some time, but at Quocirca, we have long advised that Hadoop only be used for its MapReduce capabilities with the outputs being pushed towards a SQL or noSQL database depending on the format of the resulting data, with business analytics being layered over the top of the SQL/noSQL pair.

With SQL being available directly into and out of Hadoop, new applications could use Hadoop directly, and mixed data types can be stored as SQL-style or as JSON-style constructs, with analytics being deployed against a single data store.
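As a purely illustrative sketch - the DSN, credentials and table are hypothetical, and this assumes nothing more specific than a SQL-on-Hadoop engine exposed through a standard ODBC driver - existing SQL skills and tooling carry straight over:

import pyodbc

# Connect through a standard ODBC DSN pointing at the SQL-on-Hadoop engine
conn = pyodbc.connect("DSN=hadoop_sql;UID=analyst;PWD=secret")
cursor = conn.cursor()

# The same SQL a DBA would run against Oracle or SQL Server,
# here executed against data held in HDFS
cursor.execute("""
    SELECT region, SUM(order_value)
    FROM web_orders
    WHERE order_date >= '2014-01-01'
    GROUP BY region
""")
for region, total in cursor.fetchall():
    print(region, total)
conn.close()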

Does this mark the end for relational databases?  Of course not.  It is highly unlikely that those using Oracle eBusiness Suite will jump ship and move to a Hadoop-only back end, nor will the vast majority of those running mission-critical applications on relational systems.  However, new applications that require large datasets on a linearly scalable, cost-effective data store could well find that Actian provides a back end that works for them.

Another vendor that made an announcement around big data a little while back was Syncsort, which made its Ironcluster ETL engine available on AWS essentially for free - or at worst at a price you would hardly notice, with charges based only on the workload being undertaken.

Extract, transform and load (ETL) activities have long been a major issue in data analytics, and solutions have grown up around the issue - but at a pretty high price.  In the majority of cases, ETL tools have also only been capable of dealing with relational data - making them pretty useless when it comes to true big data needs.
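As a purely illustrative sketch of what even a tiny ETL step involves - the file names and field mappings below are invented, and a tool such as Ironcluster does this at far greater scale with far richer connectors - the extract, transform and load stages might look like this:

import csv
import json

def extract():
    # Extract: pull rows from a relational-style CSV export and records from a JSON event log
    with open("crm_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            yield {"customer": row["customer_id"], "value": float(row["order_value"])}
    with open("web_events.json") as f:
        for line in f:
            event = json.loads(line)
            yield {"customer": event["custId"], "value": float(event.get("basket", 0))}

def transform(records):
    # Transform: normalise into a single schema and drop empty records
    for r in records:
        if r["value"] > 0:
            yield {"customer_id": r["customer"], "order_value": round(r["value"], 2)}

def load(records, target="unified_orders.csv"):
    # Load: write the unified data set ready for analysis
    with open(target, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["customer_id", "order_value"])
        writer.writeheader()
        writer.writerows(records)

load(transform(extract()))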

By making Ironcluster available on AWS, Syncsort is playing the elasticity card.  Those requiring an analysis of large volumes of data have a couple of choices - buy a few acres' worth of expensive in-house storage, or go to the cloud.  AWS EC2 (Elastic Compute Cloud) is a well-proven, easily accessed and predictably priced environment for running an analytics engine - provided that the right data can be made available rapidly.

Syncsort also makes Ironcluster available through AWS' Elastic MapReduce (EMR) platform, allowing data to be transformed and loaded directly onto a Hadoop platform.

With a visual front end and utilising an extensive library of data connectors from Syncsort's other products, Ironcluster offers users a rapid and relatively easy means of bringing together multiple different data sources across a variety of data types and creating a single data repository that can then be analysed.

Syncsort is aiming to be highly disruptive with this release - even at its most expensive, the costs are well below those of equivalent licence-and-maintenance ETL tools, and they make other subscription-based services look rather expensive.

Big data is a market that is happening, but is still relatively immature in the tools that are available to deal with the data needs that underpin the analytics.  Actian and Syncsort are at the vanguard of providing new tools that should be on the shopping list of anyone serious about coming to terms with their big data needs.

The continuing evolution of EMC

Clive Longbottom | No Comments
| More

The recent EMC World event in Las Vegas was arguably less newsworthy for its product announcements than for the way that the underlying theme and message continue to move EMC away from the company that it was just a couple of years ago.

The new EMC II (as in "eye, eye", standing for "Information Infrastructure", although it might as well be in Roman numerals to designate the changes on the EMC side of things) is part of what Joe Tucci, Chairman and CEO of the overall EMC Corporation, calls "The Federation" of EMC II, VMware and Pivotal.  The idea is that each company can still play to its strengths while symbiotically feeding off the others to provide more complete business systems as required.  More on this later.

At last year's event, Tucci started to make the point that the world was becoming more software oriented, and that he saw the end result of this being the "software defined data centre" (SDDC) based on the overlap between the three main software defined areas of storage, networks and compute.  The launch of ViPR as a pretty far-reaching software defined suite was used to show the direction that EMC was taking - although as was pointed out at the time, it was more vapour than ViPR. Being slow to the flash storage market, EMC showed off its acquisition of XtremIO - but didn't really seem to know what to do with it.

On to this year.  Although hardware was still being talked about, it is now apparent that the focus from EMC II is to create storage hardware that is pretty agnostic as to the workloads thrown at it, whether this be file, object or block. XtremIO has morphed from an idea of "we can throw some flash in the mix somewhere to show that we have flash" to being central to all areas.  The acquisition of super-stealth server-side flash outfit DSSD only shows that EMC II does not believe that it has all the answers yet - but is willing to invest in getting them and integrating them rapidly.

However, the software side of things is now the obvious focus for EMC Corp.  ViPR 2 was launched and now moves from being a good idea to a really valuable product that is increasingly showing its capabilities to operate not only with EMC equipment, but across a range of competitors' kit and software environments as well.  The focus is moving from the SDDC to the software defined enterprise (SDE), enabling EMC Corp to position itself across the hybrid world of mixed platforms and clouds.

ScaleIO, EMC II's software layer for creating scalable storage on commodity hardware underpinnings, was also front and centre in many aspects.  Although hardware is still a big area for EMC Corp, it is not seen as the biggest part of the long-term future.

EMC Corp seems to be well aware of what it needs to do.  It knows that it cannot leap directly from its existing business of storage hardware with software on top to a completely next-generation model of software that is less hardware dependent without stretching to breaking point its existing relationships with customers and the channel - as well as Wall Street.  Therefore, it is using an analogy of 2nd and 3rd platforms, along with the term "digital born", to identify where it needs to apply its focus.  The 2nd Platform is where most organisations are today: client/server and basic web-enabled applications.  The 3rd Platform is where companies are slowly heading - one with high mobility, a mix of different cloud and physical compute models and an end game of on-the-fly composite applications built from functions available from a mix of private and public cloud systems. (For anyone interested, the 1st Platform was the mainframe.)

The "digital born" companies are those that have little to no legacy IT: they have been created during the emergence of cloud systems, and will already be using a mix of on-demand systems such as Microsoft Office 365, Amazon Web Services, Google and so on.

By identifying this basic mix of usage types, Tucci believes that not only EMC II, but the whole of The Federation will be able to better focus its efforts in maintaining current customers while bringing on board new ones.

I have to say that, on the whole, I agree.  EMC Corp is showing itself to be remarkably astute in its acquisitions, in how it is integrating these to create new offerings and in how it is changing from a "buy Symmetrix and we have you" company to a "what is the best system for your organisation?" one.

However, I believe that there are two major stumbling blocks.  The first is that perennial problem for vendors - the channel.  Using a pretty basic rule of thumb, I would guess that around 5% of EMC Corp's channel gets the new EMC and can extend it to push the new offerings through to the customer base.  A further 20% can be trained in a high-touch model to be capable enough to be valuable partners.  The next 40% will struggle - many will not be worth putting any high-touch effort into, as the returns will not be high enough, yet they constitute a large part of EMC Corp's volume into the market.  At the bottom, we have the 35% who are essentially box-shifters, and EMC Corp has to decide whether to put any effort into these.  To my mind, the best thing would be to work on ditching them: the capacity for such a channel to spread confusion and problems in the market outweighs the margin on the revenues they are likely to bring in.

This gets me back to The Federation.  When Tucci talked about this last year, I struggled with the concept.  His thrust was that EMC Corp research had shown that any enterprise technical shopping list has no more than 5 vendors on it.  By using a Federation-style approach, he believed that any mix of the EMC, VMware and Pivotal companies could be seen as being one single entity.  I didn't, and still do not buy this.

However, Paul Maritz, CEO of Pivotal, put it across in a way that made more sense.  Individuals with the technical skills that EMC Corp requires could go to a large monolith such as IBM.  They would be well compensated, would have a lot of resources at their disposal and would be working in an innovative environment.  However, they would still be working for a "general purpose" IT vendor.  By going to one of the companies in EMC Corp's Federation, in EMC II they are working for a company that specialises in storage technologies; if they go to VMware, they are working for a virtualisation specialist; for Pivotal, a big data specialist - and each has its own special culture. For many individuals, this difference is a major one.

Sure, the devil remains in the detail, and EMC Corp is seeing a lot of new competition coming through into the market.  However, to my mind it is showing a good grasp of the problems it is facing and a flexibility and agility that belies the overall size and complexity of its corporate structure and mixed portfolio.

I await next year's event with strong interest.


Finding new containers for the BYOD genii

Rob Bamforth | No Comments
| More

Many headline IT trends are driven by organised marketing campaigns and backed by industry players with an agenda - standards initiatives, new consortia, developer ecosystems - and need a constant push, but others just seem to have a life of their own.


BYOD - bring your own device - is one such trend. There is no single group of vendors in partnership pushing the BYOD agenda; in fact most are desperately trying to hang onto its revolutionary coattails.  They do this in the face of IT departments around the world who are desperately trying to hang on to some control.


BYOD is all about 'power to the people' - power to make consumer-led personal choices - and this is very unsettling for IT departments that are tasked with keeping the organisation's resources safe, secure and productive.


No wonder that, according to Quocirca's recent research from 700 interviews across Europe, over 23% of organisations only allow BYOD in exceptional circumstances, and a further 20% do not like it but feel powerless to prevent it. Even among those organisations that embrace BYOD, most still limit it to senior management.


This is typical of anyone faced by massive change; shock, denial, anger and confusion all come first and must be dealt with before understanding, acceptance and exploitation take over.


IT managers and CIOs have plenty to be shocked and confused about. On the one hand, they need to empower the business and avoid looking obstructive, but on the other, there is a duty to protect the organisation's assets. Adding to the confusion, vendors from all product categories have been leaping on the popularity of the BYOD bandwagon and using it as a way to market their own products.


The real challenge is that many of the proposed 'solutions' perpetuate a myth about BYOD that is unfortunately inherent in its name, and which damages the approach taken to addressing the issues BYOD raises.


The reality is that this is not, and should not be, centred around the devices or who owns them, but on the enterprise use to which they are put.


The distinction is important for a number of reasons.


First, devices. There are a lot to choose from already today, with different operating systems and different form factors - tablets, smartphones and so on - and there is no reason to think this is going to get any simpler. If anything, with wearable technologies such as smart bands, watches and glasses already appearing, the diversity of devices is going to become an even bigger challenge.


Next, users. What might have started as an 'I want' (or even an "I demand") from a senior executive, soon becomes an 'I would like' from knowledge workers, who now appear to be the vanguard for BYOD requests. But this is only the start as the requirement moves right across the workforce. Different roles and job responsibilities will dictate that different BYOD management strategies will have to be put in place. Simply trying to manage devices (or control choices) will not be an option.


Those who appear to be embracing rather than trying to deny BYOD in their organisations understand this. They tend to recognise the need to treat both tablets and smartphones as part of the same BYOD strategy, and they are already braced for the changes that will inevitably come about from advances in technology.


Most crucially, however, they recognise the importance of data.


Information security is the aspect of BYOD most likely to keep IT managers awake at night - it is a greater concern than managing the devices themselves or indeed the applications they run.


The fear of the impact of a data security breach, however, seems to have created a 'deer in the headlights' reaction rather than galvanising IT into positive action. Hence the tendency to try to halt or deny BYOD in pretty much the same way that, in the past, many tried to stem the flow towards internet access, wireless networks and pretty much anything that opens up the 'big box' that has historically surrounded an organisation's digital assets.


Most organisations would do far better to realise that the big box approach is no longer valid, but that they can shrink the concept down to apply 'little boxes' or bubbles of control around their precious assets. This concept of containerisation or sandboxing is not new, but still has some way to go in terms of adoption and widespread understanding.


Creating a virtual separation between personal and work environments allows the individual employee to get the benefit of their own device preferences, and for the organisation to apply controls that are relevant and specific to the value and vulnerability of the data.


With the right policies in place, this can be adapted to best fit different device types and user profiles. Mobile enterprise management is still about managing little boxes - but virtual ones filled with data, not the shiny metal and plastic ones in the hands of users.
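As a rough sketch of what a data-centric, rather than device-centric, policy decision might look like - the classifications, roles and controls below are invented purely for illustration:

# Controls keyed on data sensitivity, not on who owns the handset
CONTROLS = {
    "public":       {"container": False, "encryption": False, "offline_copy": True},
    "internal":     {"container": True,  "encryption": True,  "offline_copy": True},
    "confidential": {"container": True,  "encryption": True,  "offline_copy": False},
}

def controls_for(data_classification, user_role):
    policy = dict(CONTROLS[data_classification])
    # Example of a role-based tweak: contractors never get offline copies
    if user_role == "contractor":
        policy["offline_copy"] = False
    return policy

print(controls_for("confidential", "knowledge_worker"))
# {'container': True, 'encryption': True, 'offline_copy': False}

The point is that the same decision is reached whether the handset is corporate-issued or personally owned; only the data and the user matter.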


For more detailed information about getting to grips with BYOD, download our free report here


Print security: The cost of complacency

Louella Fernandes | No Comments
| More

Quocirca research reveals that enterprises place a low priority on print security despite over 60% admitting that they have experienced a print-related data breach.

Any data breach can be damaging for a company, leaving it open to fines, damaging its reputation and undermining customer confidence. In the UK alone, the Ponemon Institute estimates that in 2013 the average organisational cost of a data breach was £2.04m, up from £1.75m in the previous year.

As the boundaries between personal and professional use of technology become increasingly blurred, the need for effective data security has never been greater. While many businesses look to safeguard their laptops, smartphones and tablets from external and internal threats, few pay the same strategic attention to protecting the print environment. Yet it remains a critical element of the IT infrastructure: over 75% of enterprises in a recent Quocirca study indicated that print is critical or very important to their business activities.

The print landscape has changed dramatically over the past decade. Local single function printers have given way to the new breed of networked multifunction peripherals (MFPs). With print, fax, copy and advanced scanning capabilities, these devices have evolved to become sophisticated document capture and processing hubs.

While they have undoubtedly brought convenience and enhanced user productivity to the workplace, they also pose security risks. With built-in network connectivity, along with hard disk and memory storage, MFPs are susceptible to many of the same security vulnerabilities as any other networked device.

Meanwhile, the move to a centralised MFP environment means more users are sharing devices.  Without controls, documents can be collected by unauthorised users - either accidentally or maliciously. Similarly, confidential or sensitive documents can be routed in seconds to unauthorised recipients through scan-to-email, scan-to-file and scan-to-cloud-storage functionality. Further controls are required as employees print more and more directly from mobile devices.

Yet many enterprises are not taking heed. Quocirca's study revealed that just 22% place a high priority on securing their print infrastructure. While the financial and professional services sectors consider print security a much higher priority, their counterparts in retail, manufacturing and the public sector lag way behind.

Such complacency is misplaced. Overall 63% admitted they have experienced a print-related data breach. An astounding 90% of public sector respondents admit to one or more paper-based data breaches.

So how can businesses minimise the risks? Fortunately, there are simple and effective approaches to protecting the print infrastructure. These methods not only enhance document security, but also promote sustainable printing practices - reducing paper wastage and costs.

1. Conduct a security assessment

For enterprises with a large and diverse printer fleet, it is advisable to use a third party provider to assess device, fleet and enterprise document security. This can evaluate all points of vulnerability across a heterogeneous fleet and provide a tailored security plan, for devices, user access and end of life/disposal. Managed print service (MPS) providers commonly offer this as part of their assessment services.

2. Protect the device

Many MFPs come as standard with hard drive encryption and data overwrite features. Most also offer lockable and removable hard drives. Data overwriting ensures that the hard drive is clear of readable data when the device is disposed of. It works by overwriting the actual data with random and numerical characters. Residual data can be completely erased when the encrypted device and the hard disk drive are removed from the MFP.
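The overwrite principle can be sketched in a few lines - real MFPs implement this in firmware, typically to recognised standards and often with multiple passes, so the following is purely illustrative:

import os

def overwrite_and_delete(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # overwrite the stored data with random bytes
            f.flush()
            os.fsync(f.fileno())        # force the overwrite onto the disk itself
    os.remove(path)                     # only then remove the file entry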

3. Secure the network

MFP devices can make use of several protocols and communication methods to improve security. The most common way of encrypting print jobs is SSL (secure sockets layer), which makes it safe for sensitive documents to be printed over a wired or wireless network. Xerox, for instance, has taken MFP security a step further by including McAfee Embedded Control technology, which uses application whitelisting to protect its devices from corrupt software and malware.

4. Control access

Implementing access controls through secure printing ensures only authorised users are able to access MFP device functionality. Also known as PIN or pull printing, print jobs are held electronically on the device, or on an external server, until the authorised user is ready to print them. The user provides a PIN code or uses an alternative authentication method such as a swipe card, proximity card or fingerprint. As well as printer vendor products, there is a range of third-party products, including Capella's MegaTrack, Jetmobile's SecureJet, Equitrac's Follow-You and Ringdale's FollowMe, all of which are compatible with most MFP devices.

5. Monitor and audit

Print environments are often a complex and diverse mix of products and technologies, further complicating the task of understanding what is being printed, scanned and copied where and by whom. Enterprises should use centralised print management tools to monitor and track all MFP related usage. This can either be handled in-house or through an MPS provider.

With MFPs increasingly becoming a component of document distribution, storage and management, organisations need to manage MFP security in the same way as the rest of the IT infrastructure. By applying the appropriate level of security for their business needs, an organisation can ensure that its most valuable asset - corporate data - is protected.

Read Quocirca's report A False Sense of Security


Internet of Things - Architectures of Jelly

Rob Bamforth | No Comments
| More

In today's world of acronyms and jargon, there are increasing references to the Internet of things (IoT), machine to machine (M2M) or a 'steel collar' workforce. It doesn't really matter what you call it, as long as you recognise it's going to be BIG. That is certainly the way the hype is looking - billions of connected devices all generating information - no wonder some call it 'big data', although really volume is only part of the equation.


Little wonder that everyone wants to be involved in this latest digital gold rush, but let's look a little closer at what 'big' really means.


Commercially it means low margins. The first wave of mobile connectivity - mobile email - delivered to a device like a BlackBerry, typically carried by a 'pink collar' executive (because they bought their stripy shirts in Thomas Pink's in London or New York) was high margin and simple. Mobilising white-collar knowledge workers with their Office tools was the next surge, followed by mobilising the mass processes and tasks that support blue-collar workers.


With each wave volumes rise, but so too do the challenges of scale - integration, security and reliability - whilst the technology commoditises and the margins fall. Steel collar will only push this concept further.


Ok, but the opportunity is BIG, so what is the problem?


The problem is right there in the word 'big'. IoT applications need to scale - sometimes preposterously - so much so that many of the application architectures that are currently in place or being developed are not adequately taking this into account.


Does this mean the current crop of IoT/M2M platforms are inadequate?


Not really, as the design fault is not there, but generally further up in the application architectures. IoT/M2M platforms are designed to support the management and deployment of huge numbers of devices, with cloud, billing and other services that support mass rollouts especially for service providers.


Reliably scaling the data capture and its usage is the real challenge, and if or when it goes wrong, "Garbage in, Garbage out" (GiGo) will be the least of all concerns.


Several 'V's are mentioned when referring to big data; volume of course is top of mind (some think that's why it's called 'big' data), generally followed by velocity for the real-timeliness and trends, then variety for the different forms or media that will be mashed together. Sneaking along in last-but-one place is the one often forgotten, but without which the final 'V' - value - is lost: veracity. The data has to be accurate, correct and complete.


When scaling to massive numbers of chattering devices, poor architectural design will mean that messages are lost, packets are dropped and the resulting data may not be quite right.


Ok, so my fitness band lost a few bytes of data, big deal, even if a day is lost, right? Or my car tracking system skipped a few miles of road - what's the problem?


It really depends on the application, how it was architected and how it deals with exceptions and loss. This is not even a new problem in the world of connected things: supervisory control and data acquisition (SCADA) systems have been dealing with it since well before the internet and its things.


The recent example of problem data from mis-aligned electro-mechanical electricity meters in the UK shows just how easily this can happen, and how quickly the numbers can get out of hand. Tens of thousands of precision instruments had inaccurate clocks, but consumers and suppliers alike thought they were fine, until a retired engineer discovered a fault in his own home - which led to the discovery that thousands of people had been overcharged for their electricity.


And here is the problem: it is digital now and therefore perceived to be better. Companies think the data is OK, so they extrapolate from it and base decisions on it - and in the massively connected world of IoT, so perhaps does everyone else. The perception of reality overpowers the actual reality.


How long ago did your data become unreliable? Do you know? Did you check? Who else has made decisions based on it? The challenge of car manufacturers recalling vehicles will seem tiny compared to the need for terabyte recalls.


Most are rightly concerned about the vulnerability of data on the internet of people and how that will become an even bigger problem with the internet of things. That aside, however, there is a pressing need to get application developers thinking about resilient, scalable and error-correcting architectures, otherwise the IoT revolution could have collars of lead, not steel, and its big data could turn out to be really big GiGo.
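As a minimal sketch of the sort of defensive design meant here - the message format is invented - sequence numbers spot lost readings, acknowledgements let devices retry, and idempotent handling means retries do not double-count:

last_seen = {}      # device_id -> last sequence number processed
readings = {}       # (device_id, seq) -> value, so duplicates are harmless

def ingest(message):
    device, seq, value = message["device"], message["seq"], message["value"]

    expected = last_seen.get(device, 0) + 1
    if seq > expected:
        # A gap: flag it for back-fill rather than silently extrapolating over it
        print("WARNING: device %s missing readings %d-%d" % (device, expected, seq - 1))

    # Idempotent write: a retried message simply overwrites itself
    readings[(device, seq)] = value
    last_seen[device] = max(last_seen.get(device, 0), seq)

    # Acknowledge only after the data is safely recorded,
    # so the device knows whether it must retry
    return {"device": device, "ack": seq}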



Managing a PC estate

Clive Longbottom | No Comments
| More

Although there is much talk of a move towards virtual desktops, served as images from a centralised point, for many organisations the idea does not appeal.  Whatever the reason (and there may be many, as a previous blog here points out), staying with PCs leaves the IT department with a headache - not least an estate of decentralised PCs that need managing.

Such technical management tends to be the focus for IT; however, for the business, there are a number of other issues that also need to be considered.  Each PC has its own set of applications.  The majority of these should have been purchased and installed through the business, but many may have been installed directly by the users themselves - something you may want to avoid, but which is nowadays an expectation of many IT users.

This can lead to problems: some applications may not be licensed properly (for example, a student licence not permitted for use in a commercial environment); they may contain embedded malware (a recent survey has shown that much pirated software contains harmful payloads, including keyloggers); and it definitely opens up an organisation to considerable fines should unlicensed software be present when a software audit is carried out by an external body.

Locking down desktops is increasingly difficult. Employees are getting very used to self-service through their use of their own devices, and expect this within a corporate environment.  Centralised control of desktops is still required - even if virtual desktops are not going to be the solution of choice.

The first action your organisation should take is a full audit.  You need to understand fully how many PCs there are out there, what software is installed and whether that software is being used or not.  You need to know how many software licences you have in place and how those can be utilised - for example, are they concurrent licences (a fixed number of people can use them at the same time) or named-seat licences (only people with specific identities can use them)?

This will help to identify software that your organisation was not aware of, and can also help in identifying unused software sitting idle on PCs.
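As an indication of the kind of data an audit agent gathers, the sketch below lists installed software on a single Windows PC from the registry; a real audit tool would also cover the 32-bit and per-user hives, and actual usage data:

import winreg

UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

def installed_software():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as root:
        count = winreg.QueryInfoKey(root)[0]
        for i in range(count):
            with winreg.OpenKey(root, winreg.EnumKey(root, i)) as item:
                try:
                    name = winreg.QueryValueEx(item, "DisplayName")[0]
                    version = winreg.QueryValueEx(item, "DisplayVersion")[0]
                    yield name, version
                except OSError:
                    pass    # entries without a display name are not user-facing software

for name, version in sorted(installed_software()):
    print(name, version)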

You can then look at creating an image that contains a copy of all the software that is being used by people to run the business.  Obviously, you do not want every user within your organisation to have access to every application, so something is needed to ensure that each person can be tied in by role or name to a list of software to which they should have access.

Through the installation of an agent on each PC, it should then be possible to apply centralised control over what is happening.  That single golden image containing all allowable applications can then be called upon by that agent as required.  The user gets to see all the applications that they are allowed to access (by role and/or individual policy), and a virtual registry can be created for their desktop.  Should anything happen to that desktop (machine failure, disk corruption, whatever), a new environment can be rapidly built against a new machine.

If needed, virtualisation can be used to hive off a portion of the machine for such a corporate desktop - the user can then install any applications that they want to within the rest of the device.  Rules can be applied to prevent data crossing the divide between the two areas, keeping a split between the consumer and corporate aspects of the device - a great way of enabling laptop-based bring your own device (BYOD).

As with most IT, the "death" of any technology will be widely reported and overdone: VDI does not replace desktop computing for many.  However, centralised control should still be considered - it can make management of an IT estate - and the information across that estate - a lot easier.

This blog first appeared on FSlogix' site at http://blog.fslogix.com/managing-a-pc-estate 


Web security 3.0 - is your business ready?

Bob Tarzey | No Comments
| More

As the web has evolved, so have the security products and services that control our use of it. In the early days of the "static web", it was enough to tell us which URLs to avoid because the content was undesirable (porn etc.). As the web became a means of distributing malware and perpetrating fraud, there was a need to identify bad URLs that appeared overnight, or good URLs that had gone bad as existing sites were compromised. Early innovators in this area included Websense (now a sizable broad-based security vendor) and two British companies: SurfControl (which ended up as part of Websense) and ScanSafe (which was acquired by Cisco).

 

Web 2.0

These URL filtering products are still widely used to control user behaviour (for example, you can only use Facebook at lunch time) as well as to block dangerous and unsavoury sites. They rely on up-to-date intelligence about all the URLs out there and their status. Most of the big security vendors have capability in this area now. However, as the web became more interactive (for a while we all called this Web 2.0), there was a growing need to be able to monitor the sort of applications being accessed via the network ports typically used for web access: port 80 (for HTTP) and port 443 (for HTTPS). Again, this was about controlling user behaviour and blocking malicious code and activity.
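A toy version of such a policy check is sketched below - real products rely on huge, constantly updated URL intelligence databases, and the categories, sites and lunch-hour rule here are invented to mirror the Facebook example:

from datetime import datetime
from urllib.parse import urlparse

CATEGORIES = {
    "facebook.com": "social",
    "twitter.com": "social",
    "badsite.example": "malware",
}

def allow(url, now=None):
    now = now or datetime.now()
    host = urlparse(url).hostname or ""
    category = next((cat for site, cat in CATEGORIES.items()
                     if host == site or host.endswith("." + site)), "uncategorised")
    if category == "malware":
        return False                    # always block known-bad sites
    if category == "social":
        return 12 <= now.hour < 14      # social sites only over lunch
    return True

print(allow("https://www.facebook.com/home"))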

 

To achieve this, firewalls had to change: enter the next-generation firewall. The early leader in this space was Palo Alto Networks. The main difference with its firewall was that it was application aware, with a granularity that could work within a specific website (for example, applications running on Facebook). Just as with the URL filtering vendors, next-generation firewalls rely on application intelligence - the ability to recognise a given application by its network activity and allow or block it according to user type, policy etc. Palo Alto Networks built up its own application intelligence, but there were other databases, such as that from FaceTime (a vendor that found itself in a naming dispute with Apple), which was acquired by Check Point as it upgraded its firewalls. Other vendors, including Cisco's Sourcefire, Fortinet and Dell's SonicWALL, have followed suit.

 

The rise of shadow IT

So, with URLs and web applications under control, is the web a safer place? Well, yes, but the job is never done. A whole new problem has emerged in recent years with the increasing ability for users to upload content to the web. The problem has become acute as users increasingly provision cloud services over the web for themselves (so-called shadow IT). How do you know which services are OK to use? How do you even know which ones are in use? Again, this is down to intelligence gathering, a task embarked on by Skyhigh Networks in 2012.

 

Skyhigh defines a cloud service as anything that has the potential to "exfiltrate data"; so this would include Dropbox and Facebook, but not the websites of organisations such as CNN and the BBC. Skyhigh provides protection for businesses, blocking their users from accessing certain cloud services based on its own classification (good, medium, bad), providing a "Cloud Trust" mark (similar to what Symantec's Verisign does for websites in general). As with URL filtering and next-generation firewalls, this is just information; rules about usage still need to be applied. Indeed, Skyhigh can provide scripts to be applied to firewalls to enforce rules around the use of cloud services.

 

However, Skyhigh cites other interesting use cases. Many cloud services are of increasing importance to businesses; LinkedIn is used to manage sales contacts, and Dropbox, Box and many other sites are used to keep backups of documents created by users on the move. Skyhigh gives businesses insight into their use, enables them to impose standards and, where subscriptions are involved, allows usage to be aggregated into single discounted contracts rather than being paid for via expenses (which is often a cost-control problem with shadow IT). It also provides enterprise risk scores for a given business based on its overall use of cloud services.

 

Beyond this, Skyhigh can apply controls over those users working beyond the corporate firewall, often on their own devices. For certain cloud services to which access is provided by the business (think salesforce.com, ServiceNow, SuccessFactors etc.), and without the need for an agent, usage is routed back via Skyhigh's reverse proxy so that it can be monitored and controls enforced. Skyhigh can also recognise anomalous behaviour with regard to cloud services and thus provide an additional layer of security against malware and malicious activity.

 

Skyhigh is the first to point out that it is not an alternative to web filtering and next-generation firewalls, but complementary to them. Skyhigh, which mostly provides its service on demand, is already starting to co-operate with existing vendors to enhance their own products and services through partnerships. So your organisation may be able to benefit from its capabilities via an incremental upgrade from an existing supplier rather than a whole new engagement. So, that is web security 3.0; the trick is to work out what's next - roll on Web 4.0!

 


Two areas where businesses can learn from IT

Bob Tarzey | No Comments
| More

Many IT industry commentators (not least Quocirca) constantly hassle IT managers to align their activities more closely with those of the businesses they serve; to make sure actual requirements are being met. However, that does not mean that lines of business can stand aloof from IT and learn nothing from the way their IT departments manage their own increasingly complex activities. Two recent examples Quocirca has come across demonstrate this.

 

Everyone needs version control

First, take the tricky problem of software code version control. Many outside of IT will be familiar with the problem, at least at a high level, through the writing and review of documents. For many, this is a manual process carried out at the document-name level: V1, V1.1, V1.1A, V2.01x etc. Content management (CM) systems, such as EMC's Documentum and Microsoft's SharePoint, can improve things a lot, automating versioning, providing check-in and checkout etc. (but they can be expensive to implement across the business).

 

With software development the problem is a whole lot worse: the granularity of control needs to be down to individual lines of code, and there are multiple types of entity involved - the code itself, usually multiple files linked together by build scripts (another document); the binary files that are actually deployed in test and then live environments; documentation (user guides etc.); third-party/open source code that is included; and so on. As a result, the version control systems from vendors such as Serena and IBM Rational, and a number of open source systems that have been developed over the years to support software development, are very sophisticated.

 

In fairly technical companies, where software development is a core activity, the capability of these systems is so useful that it has spread well beyond the software developers themselves. Perforce Software, another well-known name in software version control, estimates that 68% of its customers are storing non-software assets in its version control system. Its customers include some impressive names with lots of users, for example salesforce.com, NYSE, Netflix and Samsung.

 

To capitalise on this increasing tendency of its customers to store non-IT assets, Perforce has re-badged its system as Perforce Commons and made it available as an online service as well as for on-premise deployment. All the functionality developed can be used for the management of a whole range of other business assets. With the latest release, this now includes merging Microsoft PowerPoint and Word documents and checking for differences between various versions of the same document. Commons also keeps a full audit trail of document changes, which is important for compliance in many document-based workflows.
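The version-comparison idea translates readily to ordinary text. The sketch below uses Python's standard difflib purely to illustrate the kind of difference report a system such as Commons produces for documents; the file names are invented:

import difflib

with open("proposal_v1.txt") as f1, open("proposal_v2.txt") as f2:
    old, new = f1.readlines(), f2.readlines()

# A unified diff shows exactly which lines changed between the two versions
for line in difflib.unified_diff(old, new,
                                 fromfile="proposal_v1.txt",
                                 tofile="proposal_v2.txt"):
    print(line, end="")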

 

Turning up the Heat in ITSM

The second area where Quocirca has seen IT management tools being used beyond IT is in IT service management (ITSM). FrontRange's Heat tool is traditionally used for handling support incidents related to IT assets raised by users (PCs, smartphones, software tools etc.). However, its use is increasingly being extended beyond IT to other departments, for example to manage incidents relating to customer service calls, human resources (HR) issues, facilities management (FM) and finance department requests. Heat is also available as an on-demand service as well as an on-premise tool, and in many cases deployments are a hybrid of the two.

 

Of course, there are specialist tools for CM, HR, FM and so on, specially designed for the job with loads of functionality. However, with budgets and resources stretched, IT departments that already use tools such as Perforce version management and Heat ITSM can quickly add value to whole new areas of the business at little extra cost. Others that are not already customers may be able to kill several birds with one stone as they seek to show the business that IT can deliver value beyond its own interests with little incremental cost.

 


4.7 Million NTP Servers Ready To Boost DRDoS Attack Volumes

Bernt Ostergaard | No Comments
| More

The US Internet security organisation CERT has published a warning of increasing DRDoS (distributed reflection and amplification DDoS) attacks using Internet service providers' NTP (Network Time Protocol) servers (http://www.kb.cert.org/vuls/id/348126). According to its analysis, NTP is the second most widely used vehicle for DDoS attacks (after DNS). In plain language, that means that if I want to take a victim website down, I can send a spoofed message to a vulnerable ISP NTP server and get it to send a response that is several thousand times larger to my intended victim. That is amplification in action.

A request could look like this:

ntpq -c rv [ip]

The payload is 12 bytes, which is the smallest payload that will elicit a mode 6 response. The response from a delinquent ISP could be this:

associd=0 status=06f4 leap_none, sync_ntp, 15 events, freq_mode, version="ntpd 4.2.2p1@1.1570-o Tue Dec  3 11:32:13 UTC 2013 (1)", processor="x86_64", system="Linux/2.6.18-371.6.1.el5", leap=00,stratum=2, precision=-20, rootdelay=0.211, rootdispersion=133.057, peer=39747, refid=xxx.xxx.xxx.xxx, reftime=d6e093df.b073026f  Fri, Mar 28 2014 19:35:43.689, poll=10, clock=d6e09aff.0bc37dfd  Fri, Mar 28 2014 20:06:07.045, state=4, offset=-17.031, frequency=-0.571, jitter=5.223, noise=19.409, stability=0.013, tai=0

 

That is only a 34-times amplification. Crafting the request differently could boost attack volumes by up to 5,500 times, according to CERT. The response shows that this ISP last updated its NTP software in December 2013 to version 4.2.2p1. The CERT recommendation is that NTP servers should be running at least version 4.2.7p26.
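A quick way to check servers you are responsible for is to point the same query at them and see whether they answer at all. The sketch below simply wraps the ntpq command shown above - the server names are placeholders, and it should only ever be run against your own servers:

import subprocess

def answers_mode6(server):
    # If ntpq gets a mode 6 response back, the server can be abused as a reflector
    result = subprocess.run(["ntpq", "-c", "rv", server],
                            capture_output=True, text=True, timeout=30)
    return "version=" in result.stdout

for server in ["ntp1.example.com", "ntp2.example.com"]:
    if answers_mode6(server):
        print(server, "responds to mode 6 queries - restrict or upgrade it")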

So, how widespread is the problem? The Shadowserver Foundation performs ongoing monitoring of the issue - and at present it has discovered 4.7 million vulnerable NTP servers across the globe (https://ntpscan.shadowserver.org). So this ISP is part of a very large delinquent group. The Shadowserver monitoring activity also clearly shows that the problem is most severe in the US, followed by Europe, Japan, South Korea, Russia and China, in that order.


The global map from Shadowserver.org shows the distribution of vulnerable NTP servers - yellow shows the highest density.

Internet safety is clearly a shared responsibility involving all user groups, which means that we as users need to keep our service providers on their toes - and Shadowserver enshrines this principle. It is a volunteer group of professional Internet security workers that gathers, tracks and reports on malware, botnet activity and electronic fraud. It aims to improve the security of the Internet by raising awareness of the presence of compromised servers, malicious attackers and the spread of malware. In this respect, I would like to 'amplify' their message.


The Top 3 Barriers to VDI

Clive Longbottom | No Comments
| More

The use of server-based desktops, often referred to as a virtual desktop infrastructure (VDI), makes increasing sense for many organisations.  Enabling greater control over how a desktop system is put together; centralising management and control of the desktops as well as the data created by the systems; helping organisations to embrace bring your own device (BYOD) and enhancing security are just some of the reasons why more organisations are moving toward the adoption of VDI.

However, in Quocirca's view, there remain some major issues in VDI adoption.  Our "Top 3" are detailed here:

  • Management.  Imagine that you are tasked with managing 1,000 desktops.  Your OS vendor pushes out a new security patch.  You have to patch 1,000 desktops.  With VDI, at least you do not have to physically visit 1,000 desks, right?  Maybe so - but it is still an issue.  With application updates coming thick and fast, the possibility that one single patch could cause problems with some proportion of the VDI estate puts many IT departments off such updates, leading to sub-optimised desktops and possible security issues.
  • Licensing.  The promise of better control when the desktops are all in one place in the datacentre can soon prove illusory.  Unless solid controls and capable management tools are in place, the number of orphan (unused but live) images can rapidly get out of control.  Desktops belonging to people who have left the company do not get deleted; test images get spun up and forgotten about; copies of images get made and forgotten about.  Each of these images - as well as using up valuable resource - needs to be licensed.  Each requires an operating system licence along with all the application licences that are live within that hot image, even though it is not being used. Many organisations go for a costly site licence to avoid this issue rather than attempting to deal with it.
  • Storage costs.  The move of OS and application storage from the local desktop PC to the datacentre can be expensive.  Old-style enterprise storage, such as a SAN or dedicated high-performance storage arrays, has high capital and maintenance costs. A more optimised approach - for example, newer virtualised direct-attached storage, virtual storage area networks or software-defined storage (SDS) built on cheaper underlying storage and compute arrays from vendors such as Coraid or Nutanix - can provide the desired performance while keeping costs under control.

So, does this mean that VDI is more trouble than it is worth?  Not if it is approached in the right way.  The use of "golden images", where as few images as possible are held, may hold the key.

Many VDI vendors that push this approach will start off with maybe four or five main golden images - one for task workers, one for knowledge workers, one for special cases and one for executives, say - but will then still face the problem of having these spin up and stay live on a per-user basis.  Managing the images still requires either patching all live images, or patching the golden images and forcing everyone to refresh their desktops by powering them down and back up again - not much easier than with physical desktops.  Dealing with leavers still needs physical processes to be in place, otherwise licensing again becomes an issue.

A better approach is to use a single golden image that can be used to provision desktops automatically on a per-user basis and that automatically manages how software is made available and managed on an ongoing basis.  This requires an all-embracing golden image: it needs a copy of every application that will be used within the organisation present in the image - and it needs a special means of dealing with how these applications are provisioned, or not as the case may be, to manage the licensing of each desktop. By virtualising the desktop registry and linking this through to role and individual policies in Active Directory, this can be done: as each user utilises their own desktop, it can be managed dynamically through a knowledge of what the virtual registry holds and what rights they have as a user.
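A much-simplified model of that provisioning logic is sketched below; the roles, applications and registry structure are invented, and a real implementation would hang off Active Directory groups and a properly virtualised registry rather than Python dictionaries:

GOLDEN_IMAGE = {"Office", "ERP client", "CAD suite", "BI dashboard"}   # everything installed once

ROLE_ENTITLEMENTS = {
    "task_worker":      {"Office"},
    "knowledge_worker": {"Office", "BI dashboard"},
    "engineer":         {"Office", "CAD suite"},
}

def build_virtual_registry(user, role, extras=frozenset()):
    # Per-user view: only entitled applications become visible and count as licensed
    visible = (ROLE_ENTITLEMENTS.get(role, set()) | set(extras)) & GOLDEN_IMAGE
    return {"user": user,
            "visible_apps": sorted(visible),
            "licences_consumed": len(visible)}

print(build_virtual_registry("jsmith", "knowledge_worker", extras={"CAD suite"}))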

The data held in the virtual registry also enables closer monitoring and auditing of usage: by looking at usage statistics, orphan images can be rapidly identified and closed down as required.  Unused licences can be harvested and put back into the pool for others to use - or can be dropped completely to lower licensing costs with the vendor.

VDI is not the answer for everyone, but it is an answer that meets many organisations' needs in a less structured world.  If you have been put off VDI in the past for any of the reasons discussed above, then maybe it is time to reconsider.  Today's offerings in the VDI market are considerably different to what was there only a couple of years back.

This item was first posted on the FSLogix blog at http://blog.fslogix.com/ 

