Do increasing worries about insider threats mean it is time to take another look at DRM?

Bob Tarzey

The encryption vendor SafeNet publishes a Breach Level Index, which records actual reported incidents of data loss. Whilst the number of losses attributed to malicious outsiders (58%) exceeds those attributed to malicious insiders (13%), SafeNet claims that insiders account for more than half of the information actually lost. This is because insiders are also responsible for the accidental losses that make up a further 26.5% of incidents, and because many breaches caused by insiders go unreported and so never appear in the statistics. The insider threat is clearly something that organisations need to guard against to protect their secrets and regulated data.

Employees can be coached to avoid accidents and technology can support this. Intentional theft is harder to prevent, whether it is for personal gain, industrial espionage or just out of spite. According to Verizon's Data Breach Investigations Report, 70% of data thefts by insiders are committed within thirty days of an employee resigning from their job, suggesting they plan to take data with them to their new employer. Malicious insiders will try to find a way around the barriers put in place to protect data; training may even provide useful pointers about how to go about it.

Some existing security technologies have a role to play in protecting against the insider threat. Basic access controls built into data stores, linked to identity and access management (IAM) systems, are a good starting point, and encryption of stored data strengthens this, helping to ensure only those with the necessary rights can access data in the first place. In addition, there have been many implementations of data loss prevention (DLP) systems in recent years; these monitor the movement of data over networks, alert when content is going somewhere it shouldn't and, if necessary, block it.
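
As a rough illustration of the kind of check a DLP system applies to outbound content, here is a minimal sketch in Python. The single regex rule for payment card numbers and the destination list are invented for illustration; real DLP products rely on document fingerprinting, dictionaries and far richer classification.

```python
import re

# Toy DLP-style rule: flag outbound content that appears to contain a payment card number.
# Real DLP products use document fingerprinting, dictionaries and richer classification;
# this single regex is purely illustrative.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def inspect_outbound(content: str, destination: str, allowed_destinations: set) -> str:
    """Return 'allow', 'alert' or 'block' for an outbound transfer."""
    if not CARD_PATTERN.search(content):
        return "allow"
    if destination in allowed_destinations:
        return "alert"   # sensitive content to a sanctioned destination: log it
    return "block"       # sensitive content going somewhere it shouldn't: stop it

print(inspect_outbound("invoice card 4111 1111 1111 1111", "dropbox.com", {"erp.internal"}))
# -> block
```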

However, if a user has the rights to access data, and indeed to create it in the first place, then these systems do not help, especially if the user is trusted to use that data on remote devices. To protect data at all times, controls must extend to wherever the data is. It is to this end that renewed interest is being taken in digital rights management (DRM). In the past, issues such as scalability and user acceptance have held many organisations back from implementing DRM. That is something DRM suppliers such as Fasoo and Verdasys have sought to address.

DRM, as with DLP, requires all documents to be classified from the moment of creation and monitored throughout their life cycle. With DRM, user actions are controlled through an online policy server, which is referred to each time a sensitive document is accessed. So, for example, a remote user can be prevented from taking actions on a given document, such as copying or printing, and documents can only be shared with other authorised users. Most importantly, an audit trail of who has done what to a document, and when, is collected and managed at all stages.
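
A minimal sketch of that policy-check-plus-audit pattern is shown below; the roles, permissions and document names are invented for illustration, and a real DRM product would enforce the decision cryptographically in a client agent rather than in plain Python.

```python
import datetime

# Hypothetical policy table: which actions each role may perform on protected documents.
# A real DRM product holds this on an online policy server and enforces it in a client agent.
POLICY = {
    "finance":    {"view", "print"},
    "contractor": {"view"},
}

AUDIT_LOG = []  # in practice a tamper-evident, centrally managed trail

def request_action(user: str, role: str, document_id: str, action: str) -> bool:
    """Check the policy for a requested action and record the outcome in the audit trail."""
    allowed = action in POLICY.get(role, set())
    AUDIT_LOG.append({
        "when": datetime.datetime.utcnow().isoformat(),
        "user": user,
        "document": document_id,
        "action": action,
        "allowed": allowed,
    })
    return allowed

print(request_action("jsmith", "contractor", "Q3-forecast.docx", "copy"))  # False - blocked and audited
print(request_action("alee", "finance", "Q3-forecast.docx", "print"))      # True - permitted and audited
```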

Just trusting employees would be cheaper and easier than implementing more technology. However, it is clear that this is not a strategy businesses can move forward with. Even if they are prepared to take risks with their own intellectual property, regulators will not accept a casual approach when it comes to sensitive personal and financial data. If your organisation cannot be sure what users are doing with its sensitive data at all times, perhaps it is time to take a look at DRM.

Quocirca's report "What keeps your CEO up at night? The insider threat: solved with DRM", is freely available here.

Top 10 characteristics of high performing MPS providers

Louella Fernandes

Quocirca's research reveals that almost half of enterprises plan to expand their use of managed print services (MPS). MPS has emerged as a proven approach to reducing operational costs and improving the efficiency and reliability of a business's print infrastructure at a time when in-house resources are increasingly stretched. 

Typically, the main reasons organisations turn to MPS are cost reduction, predictability of expenses and service reliability. However, they may also benefit from the implementation of solutions such as document workflow, mobility and business process automation to boost collaboration and productivity among their workforce. MPS providers can also offer businesses added value through transformation initiatives that support revenue and profit growth. MPS providers include printer/copier manufacturers, systems integrators and managed IT service providers. As MPS evolves and companies increase their dependence on it, whatever a provider's background, it's important that they can demonstrate their credibility across a range of capabilities. The following are key criteria to consider when selecting an MPS provider:

  1. Strong focus on improving customer performance - In addition to helping customers improve the efficiency of their print infrastructure, leading MPS providers can help them drive transformation and increase employee productivity as well as supporting revenue growth.  An MPS provider should understand the customer's business and be able to advise them on solutions that can be implemented to improve business performance, extend capabilities and reach new markets.
  2. A broad portfolio of managed services - Many organisations may be using a variety of providers for their print and IT services. However, managing multiple service providers can be costly and complex. For maximum efficiency, look for a provider with a comprehensive suite of services covering office and production printing, IT services and business process automation. As businesses look more to 'as-a-service' options for software implementation, consider MPS providers with strong expertise across both on-premise and cloud delivery models.
  3. Consistent global service delivery with local support - Global delivery capabilities offer many advantages, including rapid implementation in new locations and the ability to effectively manage engagements across multiple countries. However it's also important that a provider has local resources with knowledge of the relevant regulatory and legal requirements. Check whether an MPS provider uses standard delivery processes across all locations and how multi-location teams are organised and collaborate.
  4. Proactive continuous improvement - An MPS provider must go beyond a break/fix model to offer proactive and pre-emptive support and maintenance. As well as simple device monitoring they should offer advanced analytics that can drive proactive support and provide visibility into areas for on-going improvement.
  5. Strong multivendor support - Most print infrastructures are heterogeneous environments comprising hardware and software from a variety of vendors, so MPS providers should have proven experience of working in multivendor environments. A true vendor-agnostic MPS provider should play the role of trusted technology advisor, helping an organisation select the technologies that best support their business needs. Independent MPS providers should also have partnerships with a range of leading vendors, giving them visibility of product roadmaps and emerging technologies.
  6. Flexibility - Businesses will always want to engage with MPS in a variety of different ways. Some may want to standardise on a single vendor's equipment and software, while others may prefer multivendor environments. Some may want a provider to take full control of their print infrastructure while others may only want to hand over certain elements. And some may want to mix new technology with existing systems so they can continue to leverage past investments. Leading MPS providers offer flexible services that are able to accommodate such specific requirements. Flexible procurement and financial options are also key, with pricing models designed to allow for changing needs.
  7. Accountability - Organisations are facing increased accountability demands from shareholders, regulators and other stakeholders. In turn, they are demanding greater accountability from their MPS providers. A key differentiator for leading MPS providers is ensuring strong governance of MPS contracts, and acting as a trusted, accountable advisor, making recommendations on the organisation's technology roadmap. MPS providers must be willing to meet performance guarantees through contractual SLAs, with financial penalties for underperformance. They should also understand the controls needed to meet increasingly complex regulatory requirements.
  8. Full service transparency - Consistent service delivery is built on consistent processes that employ a repeatable methodology. Look for access to secure, web-based service portals with dashboards that provide real-time service visibility and flexible reporting capabilities.
  9. Alignment with standards - An MPS provider should employ industry best practices, in particular aligning with the ITIL approach to IT service management. ITIL best practices encompass problem, incident, event, change, configuration, inventory, capacity and performance management as well as reporting.
  10. Innovation - Leading MPS providers demonstrate innovation. This may include implementing emerging technologies and new best practices and continually working to improve service delivery and reduce costs. Choose a partner with a proven track record of innovation. Do they have dedicated research centres or partnerships with leading technology players and research institutions? You should also consider how a prospective MPS provider can contribute to your own company's innovation and business transformation strategy. Bear in mind that innovation within any outsourcing contract may come at a premium - this is where gain-sharing models may be used.

Ultimately, businesses are looking for more than reliability and cost reduction from their MPS provider. Today they also want access to technologies that can increase productivity and collaboration and give them a competitive advantage, as well as help with business transformation. By ensuring a provider demonstrates the key characteristics above before committing, organisations can make an informed choice and maximise the chances of a successful engagement. Read Quocirca's report, Managed Print Services Landscape, 2014.

What is happening to the boring world of storage?

Clive Longbottom

Storage suddenly seems to have got interesting again.  As interest moves from increasing spin speeds and ever more intelligent ways of getting a disk head over the right part of a disk as fast as possible, to flash-based systems where completely different approaches can be taken, a feeding frenzy seems to be underway.  The big vendors are in high-acquisition mode, while the new kids on the block are mixing things up and keeping the incumbents on their toes.

After the acquisitions of Texas Memory Systems (TMS) by IBM, Whiptail by Cisco and XtremIO by EMC in 2012, it may have looked like it was time for a period of calm reflection and the full integration of what they had acquired.  However, EMC acquired ScaleIO and then super-stealth server-side flash company DSSD to help it create a more nuanced storage portfolio capable of dealing with multiple different workloads on the same basic storage architecture.

Pure Storage suddenly popped up and signed a cross-licensing and patent agreement with IBM, acquiring over 100 storage and related patents from IBM and stating that this was a defensive move to protect itself from patent trolling by other companies (or shell companies).  However, it is also likely that IBM will gain some technology benefits from the cross-licensing deal.  At the same time as the IBM deal, Pure also acquired other patents to bolster its position.

SanDisk acquired Fusion-io, another server-side flash pioneer.  More of a strange acquisition, this one - Fusion-io would have been more of a fit for a storage array vendor looking to extend its reach into converged fabric through PCIe storage cards.  SanDisk will now have to forge much stronger links with motherboard vendors - or start to manufacture its own motherboards - to make this acquisition work well.  However, Western Digital had also been picking up flash vendors, such as Virident (itself a PCIe flash vendor), sTec and VeloBit; Seagate acquired the SSD and PCIe parts of Avago - maybe SanDisk wanted to be seen to be doing something.

Then we have Nutanix: a company that started off marketing itself as a scale-out storage company but was actually far more of a converged infrastructure player.  It has just inked a global deal with Dell, under which Dell will license Nutanix' web-scale software to run on Dell's own converged architecture systems.  This deal gives a massive boost to Nutanix: it gains access to the louder voice and greater reach of Dell, while still maintaining its independence in the market.

Violin Memory has not been sitting still either.  A company that has always had excellent technology based on moving away from the concept of the physical disk drive, it uses a PCI-X in-line memory module approach (which it calls VIMMs) to provide all-flash based storage arrays.  However, it did suffer from being a company with great hardware but little in the way of intelligent software.

After its IPO, it found that it needed a mass change in management staff, and under a new board and other senior management, Quocirca is seeing some massive changes in its approach to its product portfolio.  Firstly, Violin brought the Windows Flash Array (WFA) to market - far more of an appliance than a storage array.  Now, it has launched its Concerto storage management software as part of its 7000 all flash array.  Those who have already bought the 6000 array can choose to upgrade to a Concerto-managed system in-situ.

Violin has, however, decided that PCIe storage is not for it - it has sold off that part of its business to SK Hynix.

The last few months have been hectic in the storage space.  For buyers, it is a dangerous time - it is all too easy to find yourself with high-cost systems that are either superseded and unsupported all too quickly or where the original vendor is acquired or goes bust leaving you with a dead-end system.  There will also be continued evolution of systems to eke out those extra bits of performance, and a buyer now may not be able to deal with these changes through abstracting everything through a software defined storage (SDS) layer.

However, flash storage is here to stay.  At the moment, it is tempting to choose flash systems for specific workloads where you know that you will be replacing the systems within a relatively short period of time anyway. This is likely to be mission-critical, latency-dependent workloads where the next round of investment in the next generation of low-latency, high-performance storage can be made within 12-18 months. Server-side storage systems using PCIe cards should be regarded as highly niche for the moment: it will be interesting to see what EMC does with DSSD and what Western Digital and SanDisk do with their acquisitions, but for now the lack of true abstraction of PCIe (apart from via software from the likes of PernixData) limits its wider appeal.

For general storage, the main storage vendors will continue to move from all spinning to hybrid and then to all flash arrays over time - it is probably best to just follow the crowd here for the moment.

Cloud infrastructure services, find a niche or die?

Bob Tarzey

Back in May it was reported that Morgan Stanley had been appointed to explore options for the sale of hosted services provider Rackspace. Business Week, on May 16th, reported the story with the headline "Who Might Buy Rackspace? It's a Big List". 24/7 Wall St. reported analysis from Credit Suisse that narrowed this to three potential suitors: Dell, Cisco and HP.

To cut a long story short, Rackspace sees a tough future competing with the big three in the utility cloud market: Amazon, Google and Microsoft. Rackspace could be attractive to Dell, Cisco, HP and other traditional IT infrastructure vendors that see their core business being eroded by the cloud and need to build out their own offerings (as does IBM, which has already made significant acquisitions).

Quocirca sees another question that needs addressing. If Rackspace, one of the most successful cloud service providers, sees the future as uncertain in the face of competition from the big three, then what of the myriad of smaller cloud infrastructure providers? For them the options are twofold.

Be acquired or go niche

First, achieve enough market penetration to become an attractive acquisition target for the larger established vendors that want to bolster their cloud portfolios. As well as the IT infrastructure vendors, this includes communications providers and systems integrators.

Many have already been acquisitive in the cloud market. For example, the US number three carrier CenturyLink bought Savvis, AppFog and Tier-3, while NTT's systems integrator arm Dimension Data added to its existing cloud services with OpSource and BlueFire. Other cloud service providers have merged to beef up their presence, for example Claranet and Star.

The second option for smaller providers is to establish a niche where the big players will find it hard to compete. A number of cloud providers are already doing quite well at this, relying on a mix of geographic, application or industry specialisation. Here are some examples:

Exponential-E - highly integrated network and cloud services

Exponential-E's background is as a UK-focussed virtual private network provider, using its own cross-London metro network and services from BT. In 2010 the vendor moved beyond networking to provide infrastructure-as-a-service. Its differentiator is to embed this into its own network services at Layer 2 (switching, etc.) rather than at higher levels. Its customers get the security and performance that would be expected from internal WAN-based deployments, which cannot be achieved for cloud services accessed over the public internet.

City Lifeline - in finance latency matters

City Lifeline's data centre is shoe-horned into an old building near Moorgate in central London. Its value proposition is low latency: it charges a premium over out-of-town premises for its proximity to the big City institutions.

Eduserv - governments like to know who they are dealing with

For reasons of compliance, ease of procurement and security of tenure, government departments in any country like to have some control over their suppliers, and this includes the procurement of cloud services. Eduserv is a not-for-profit, long-term supplier of consultancy and managed services to the UK government and charity organisations. In order to help its customers deliver better services, Eduserv has developed cloud infrastructure offerings out of its own data centre in the central south UK town of Swindon. As a UK G-Cloud partner it has achieved IL3 security accreditation, enabling it to host official government data. Eduserv provides value-added services to help customers migrate to the cloud, including cloud adoption assessments, service designs and ongoing support and management.

Firehost - performance and security for payment processing

Considerable rigour needs to go into building applications for processing highly secure data in sectors such as financial services and healthcare. This rigour must also extend to the underlying platform. Firehost has built an IaaS platform to target these markets. In the UK its infrastructure is co-located with Equinix, ensuring access to multiple high-speed carrier connections. Within such facilities, Firehost applies its own cage-level physical security. Whilst the infrastructure is shared, it maintains the feel of a private cloud, with enhanced security through protected VMs with a built-in web application firewall, DDoS protection, IP reputation filtering and two-factor authentication for admin access.

Even for these providers the big three do not disappear. In some cases their niche capability may simply see them bolted on to bigger deployments, for example a retailer off-loading its payment application to a more secure environment. In other cases, existing providers are starting to offer enhanced services around the big three to extend in-house capability; for example, UK hosting provider Attenda now offers services around Amazon Web Services (AWS).

For many IT service providers, the growing dominance of the big three cloud infrastructure providers, along with the strength of software-as-a-service providers such as salesforce.com, NetSuite and ServiceNow, will turn them into service brokers. This is how Dell positioned itself at its analyst conference last week; of course, that may well change were it to buy Rackspace.


Cloud orchestration - will a solution come from SCM?

Clive Longbottom

Serena Software is a software change and configuration management vendor, right?  It has recently released its Dimensions CM 14 product, with additional functionality driving Serena more into the DevOps space, as well as making life easier for distributed development groups to be able to work collaboratively through synchronised libraries with peer review capabilities.

Various other improvements - such as change and branch visualisation, the use of health indicators to show how "clean" code is and where any change sits in a development/operations process, and integrations with the likes of Git and Subversion - mean that Dimensions CM 14 should help many a development team as it moves from an old-style separate development, test and operations model to a more agile, process-driven, automated DevOps environment.

However, it seems to me that Serena is actually sitting on something far more important.  Cloud computing is an increasing component of many an organisation's IT platform, and there will be a move away from the monolithic application towards a more composite one. By this I mean that, depending on the business's needs, an application will be built up on the fly from a set of functions to facilitate a given business process.  Through this means, an organisation can be far more flexible and can ensure that it adapts rapidly to changing market needs.

The concept of the composite application does bring in several issues, however.  Auditing what functions were used when is one of them.  Identifying the right functions to be used in the application is another.  Monitoring the health and performance of the overall process is another.

So, let's have a look at why Serena could be the one to offer this.

- A composite application is made up from a set of discrete functions. Each of these can be looked at as an object requiring indexing and a set of associated metadata. Serena Dimensions CM is an object-oriented system that can build up metadata around objects in an intelligent manner.

- Functions that are available to be used as part of a composite application need to be available from a library. Dimensions is a library-based system.

- Functions need to be pulled together in an intelligent manner and instantiated as the composite application. This is so close to a DevOps requirement that Dimensions should shine in its capability to carry out such a task.

- Any composite application must be fully audited so that what was done at any one time can be demonstrated at a later date. Dimensions has strong and complex versioning and audit capabilities, which would allow any previous state to be rebuilt and demonstrated as required.

- Everything must be secure. Dimensions has rigorous user credentials management - access to everything can be defined by user name, role or function. Therefore, the way that a composite application operates can be defined by the credentials of the individual user.

- The "glue" between functions across different clouds needs to be put in place. Unless cloud standards improve drastically, getting different functions to work seamlessly together will remain difficult; some code will be required to ensure that Function A and Function B work well together to facilitate Process C. Dimensions is capable of being the centre for this code to be developed and used - and also a library where the code can be stored and reused, ensuring that the minimum amount of time is lost in putting together a composite application as required. (A short sketch after this list illustrates the pattern.)
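
To make the idea more concrete, here is a minimal sketch of the composite-application pattern described above: a library of functions with metadata, assembled by policy into a process, with every step audited. The function names, metadata fields and cloud labels are invented for illustration; this is not how Dimensions itself works, only the shape of the problem it could address.

```python
# Hypothetical function library: each entry carries the kind of metadata a
# Dimensions-style repository might hold (version, the clouds it can run in).
LIBRARY = {
    "fetch_orders": {"version": "2.1", "clouds": {"private"}},
    "score_credit": {"version": "1.4", "clouds": {"private", "aws"}},
    "send_invoice": {"version": "3.0", "clouds": {"aws"}},
}

AUDIT = []  # which function, at which version, was used in which process

def compose(process_name: str, steps: list, available_clouds: set) -> list:
    """Assemble a composite application from library functions the available clouds can host."""
    plan = []
    for step in steps:
        meta = LIBRARY[step]
        if not meta["clouds"] & available_clouds:
            raise ValueError(f"{step} cannot run in the clouds available to this user")
        plan.append(f"{step}@{meta['version']}")
        AUDIT.append((process_name, step, meta["version"]))  # who/when would also be recorded
    return plan

print(compose("invoice-run", ["fetch_orders", "score_credit", "send_invoice"], {"private", "aws"}))
# -> ['fetch_orders@2.1', 'score_credit@1.4', 'send_invoice@3.0']
```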

Obviously, it would not be all plain sailing for Serena to enter such a market.  Its brand equity currently lies within the development market.  Serena would find itself in competition with the incumbent systems management vendors such as IBM and CA.  However, these vendors are still struggling to come to terms with what the composite application means to them - it could well be that Serena is able to layer Dimensions on top of existing systems to offer the missing functionality.

Dimensions would need to be enhanced to provide functions such as the capability to discover and classify available functions across hybrid cloud environments.  A capacity to monitor and measure application performance would be a critical need - which could be created through partnerships with other vendors. 

Overall, Dimensions CM 14 is a good step forward in providing additional functionality to those in the DevOps space.  However, it has so much promise that I would like to see Serena take the plunge and try to move it through into a more business-focused capability.







It's all happening in the world of big data.

Clive Longbottom

For a relatively new market, there is a lot happening in the world of big data.  If we were to take a "Top 20" look at the technologies, it would probably read something along the lines of this week's biggest climber being Hadoop, the biggest loser being relational databases and those holding their place being the schema-less databases.

Why?  Well, Actian announced the availability of its SQL-in-Hadoop offering.  Not just a small subset of SQL, but a very complete implementation.  Therefore, your existing staff of SQL devotees and all the tools they use can now be used against data stored in HDFS, as well as against Oracle, Microsoft SQL Server, IBM DB2 et al.

Why is this important?  Well, Hadoop has been one of those fascinating tools that promises a lot - but only delivers on this promise if you have a bunch of talented technophiles who know what they are doing.  Unfortunately, these people tend to be as rare as hen's teeth - and are snapped up and paid accordingly by vendors and large companies.  Now, a lot of the power of Hadoop can be put in the hands of the average (still nicely paid) database administrator (DBA).

The second major event that this could start to usher in is the use of Hadoop as a persistent store.  Sure, many have been doing this for some time, but at Quocirca, we have long advised that Hadoop only be used for its MapReduce capabilities with the outputs being pushed towards a SQL or noSQL database depending on the format of the resulting data, with business analytics being layered over the top of the SQL/noSQL pair.
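
To make the pattern concrete, here is a toy sketch of the approach described above: the reduced output of a Hadoop job is pushed into a SQL store so that ordinary SQL tooling can be layered over it for analytics. The word counts are invented and sqlite3 simply stands in for whichever SQL database an organisation actually uses.

```python
import sqlite3

# Stand-in for the reduced output of a Hadoop job: (key, count) pairs.
mapreduce_output = [("error", 1042), ("warning", 5310), ("info", 98211)]

# Push the results into a SQL store (sqlite3 purely for illustration) so that
# ordinary SQL tooling can be layered over the top for analytics.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log_counts (level TEXT PRIMARY KEY, total INTEGER)")
conn.executemany("INSERT INTO log_counts VALUES (?, ?)", mapreduce_output)

for level, total in conn.execute("SELECT level, total FROM log_counts ORDER BY total DESC"):
    print(level, total)
```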

With SQL being available directly into and out of Hadoop, new applications could use Hadoop directly, and mixed data types can be stored as SQL-style or as JSON-style constructs, with analytics being deployed against a single data store.

Is this marking the end for relational databases?  Of course not.  It is highly unlikely that those using Oracle eBusiness Suite will jump ship and go over to a Hadoop-only back end, nor will the vast majority of those running mission critical applications that currently use relational systems.  However, new applications that require large datasets being run on a linearly scalable, cost-effective, data store could well find that Actian provides them with a back end that works for them.

Another vendor that made an announcement around big data a little while back was Syncsort, which made its Ironcluster ETL engine available in AWS essentially for free - or at worst at a price where you would hardly notice it, and only get charged for the workload being undertaken.

Extract, transform and load (ETL) activities have long been a major issue with data analytics, and solutions have grown up around the issue - but at a pretty high price.  In the majority of cases, ETL tools have also only been capable of dealing with relational data - making them pretty useless when it comes to true big data needs.

By making Ironcluster available in AWS, Syncsort is playing the elasticity card.  Those requiring an analysis of large volumes of data have a couple of choices - buy a few acres-worth of expensive in-house storage, or go to the cloud.  AWS EC2 (Elastic Compute Cloud) is a well-proven, easy access and predictable cost environment for running an analytics engine - provided that the right data can be made available rapidly.

Syncsort also makes Ironcluster available through AWS' Elastic MapReduce (EMR) platform, allowing data to be transformed and loaded directly onto a Hadoop platform.

With a visual front end and utilising an extensive library of data connectors from Syncsort's other products, Ironcluster offers users a rapid and relatively easy means of bringing together multiple different data sources across a variety of data types and creating a single data repository that can then be analysed.
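
For illustration, a toy extract-transform-load pass over two differently shaped sources is sketched below. The sources, field names and sqlite3 target are all invented; Ironcluster's own connectors and visual designer do this at a very different scale, but the shape of the work is the same.

```python
import csv, io, json, sqlite3

# Two differently shaped sources, as an ETL tool must routinely reconcile.
csv_source = "customer,spend\nacme,1200\nglobex,870\n"
json_source = '[{"cust": "initech", "total_spend": 430}]'

def extract_transform():
    """Extract from CSV and JSON sources and transform both into one common shape."""
    for row in csv.DictReader(io.StringIO(csv_source)):
        yield row["customer"], int(row["spend"])
    for rec in json.loads(json_source):
        yield rec["cust"], int(rec["total_spend"])

# Load into a single repository ready for analysis (sqlite3 as a stand-in target).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spend (customer TEXT, amount INTEGER)")
conn.executemany("INSERT INTO spend VALUES (?, ?)", extract_transform())
print(conn.execute("SELECT SUM(amount) FROM spend").fetchone()[0])  # 2500
```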

Syncsort is aiming to be highly disruptive with this release - even at its most expensive, the costs are well below those of equivalent licence-and-maintenance ETL tools, and they make other subscription-based services look rather expensive.

Big data is a market that is happening, but is still relatively immature in the tools that are available to deal with the data needs that underpin the analytics.  Actian and Syncsort are at the vanguard of providing new tools that should be on the shopping list of anyone serious about coming to terms with their big data needs.

The continuing evolution of EMC

Clive Longbottom

The recent EMCWorld event in Las Vegas was arguably less newsworthy for its product announcements than for the way that the underlying theme and message continues to move EMC away from the company that it was just a couple of years ago.

The new EMC II (as in "eye, eye", standing for "Information Infrastructure", although it might as well be Roman numerals to designate the changes on the EMC side of things) is part of what Joe Tucci, chairman and CEO of the overall EMC Corporation, calls "The Federation" of EMC II, VMware and Pivotal.  The idea is that each company can still play to its strengths while symbiotically feeding off the others to provide more complete business systems as required.  More on this later.

At last year's event, Tucci started to make the point that the world was becoming more software oriented, and that he saw the end result of this being the "software defined data centre" (SDDC) based on the overlap between the three main software defined areas of storage, networks and compute.  The launch of ViPR as a pretty far-reaching software defined suite was used to show the direction that EMC was taking - although as was pointed out at the time, it was more vapour than ViPR. Being slow to the flash storage market, EMC showed off its acquisition of XtremIO - but didn't really seem to know what to do with it.

On to this year.  Although hardware was still being talked about, it is now apparent that the focus from EMC II is to create storage hardware that is pretty agnostic as to the workloads thrown at it, whether this be file, object or block. XtremIO has morphed from an idea of "we can throw some flash in the mix somewhere to show that we have flash" to being central to all areas.  The acquisition of super-stealth server-side flash outfit DSSD only shows that EMC II does not believe that it has all the answers yet - but is willing to invest in getting them and integrating them rapidly.

However, the software side of things is now the obvious focus for EMC Corp.  ViPR 2 was launched and now moves from being a good idea to a really valuable product that is increasingly showing its capabilities to operate not only with EMC equipment, but across a range of competitors' kit and software environments as well.  The focus is moving from the SDDC to the software defined enterprise (SDE), enabling EMC Corp to position itself across the hybrid world of mixed platforms and clouds.

ScaleIO, EMC II's software layer for creating scalable storage on commodity hardware underpinnings, was also front and centre in many respects.  Although hardware is still a big area for EMC Corp, it is not seen as the biggest part of the long-term future.

EMC Corp seems to be well aware of what it needs to do.  It knows that it cannot leap directly from its existing business of storage hardware with software on top to a completely next generation model of software which is less hardware dependent without stretching to breaking point its existing relationships with customers and the channel - as well as Wall Street.  Therefore, it is using an analogy of 2nd and 3rd platforms, along with a term of "digital born" to identify where it needs to apply its focus.  The 2nd Platform is where most organisations are today: client/server and basic web-enabled applications.  The 3rd Platform is where companies are slowly moving towards - one where there is high mobility, a mix of different cloud and physical compute models and an end game of on-the-fly composite applications being built from functions available from a mix of private and public cloud systems. (For anyone interested, the 1st Platform was the mainframe).

The "digital born" companies are those that have little to no legacy IT: they have been created during the emergence of cloud systems, and will already be using a mix of on-demand systems such as Microsoft Office 365, Amazon Web Services, Google and so on.

By identifying this basic mix of usage types, Tucci believes that not only EMC II, but the whole of The Federation will be able to better focus its efforts in maintaining current customers while bringing on board new ones.

I have to say that, on the whole, I agree.  EMC Corp is showing itself to be remarkably astute in its acquisitions, in how it is integrating these to create new offerings and in how it is changing from a "buy Symmetrix and we have you" company to a "what is the best system for your organisation?" one.

However, I believe that there are two major stumbling blocks.  The first is that perennial problem for vendors - the channel.  Using a pretty basic rule of thumb, I would guess that around 5% of EMC Corp's channel gets the new EMC and can extend it to push the new offerings through to the customer base.  A further 20% can be trained in a high-touch model to be capable enough to be valuable partners.  The next 40% will struggle - many will not be worth putting any high-touch effort into, as the returns will not be high enough, yet they constitute a large part of EMC Corp's volume into the market.  At the bottom, we have the 35% who are essentially box-shifters, and EMC Corp has to decide whether to put any effort into these.  To my mind, the best thing would be to work on ditching them: the capacity for such channel to spread confusion and problems in the market outweighs the margin on the revenues they are likely to bring in.

This gets me back to The Federation.  When Tucci talked about this last year, I struggled with the concept.  His thrust was that EMC Corp research had shown that any enterprise technical shopping list has no more than 5 vendors on it.  By using a Federation-style approach, he believed that any mix of the EMC, VMware and Pivotal companies could be seen as being one single entity.  I didn't, and still do not buy this.

However, Paul Maritz, CEO of Pivotal, put it across in a way that made more sense.  Individuals with the technical skills that EMC Corp requires could go to a large monolith such as IBM.  They would be compensated well, would have a lot of resources at their disposal and would be working in an innovative environment.  However, they would still be working for a "general purpose" IT vendor.  By going to one of the companies in EMC Corp's Federation, in EMC II they are working for a company that specialises in storage technologies; if they go to VMware, they are working for a virtualisation specialist; at Pivotal, for a big data specialist - and each has its own special culture. For many individuals, this difference is a major one.

Sure, the devil remains in the detail, and EMC Corp is seeing a lot of new competition coming through into the market.  However, to my mind it is showing a good grasp of the problems it is facing and a flexibility and agility that belies the overall size and complexity of its corporate structure and mixed portfolio.

I await next year's event with strong interest.


Finding new containers for the BYOD genii

Rob Bamforth

Many headline IT trends are driven by organised marketing campaigns and backed by industry players with an agenda - standards initiatives, new consortia, developer ecosystems - and need a constant push, but others just seem to have a life of their own.


BYOD - bring your own device - is one such trend. There is no single group of vendors in partnership pushing the BYOD agenda; in fact most are desperately trying to hang onto its revolutionary coattails.  They do this in the face of IT departments around the world who are desperately trying to hang on to some control.


BYOD is all about 'power to the people' - power to make consumer-led personal choices - and this is very unsettling for IT departments that are tasked with keeping the organisation's resources safe, secure and productive.


No wonder that, according to Quocirca's recent research based on 700 interviews across Europe, over 23% of organisations only allow BYOD in exceptional circumstances, and a further 20% do not like it but feel powerless to prevent it. Even among those organisations that embrace BYOD, most still limit it to senior management.


This is typical of anyone faced by massive change; shock, denial, anger and confusion all come first and must be dealt with before understanding, acceptance and exploitation take over.


IT managers and CIOs have plenty to be shocked and confused about. On the one hand, they need to empower the business and avoid looking obstructive, but on the other, there is a duty to protect the organisation's assets. Adding to the confusion, vendors from all product categories have been leaping on the popularity of the BYOD bandwagon and using it as a way to market their own products.


The real challenge is that many of the proposed 'solutions' perpetuate a myth about BYOD that is unfortunately inherent in its name and that damages the approach taken to addressing the issues BYOD raises.


The reality is that this is not and should not be centred around the devices or who owns them, but on the enterprise use to which they are put.


The distinction is important for a number of reasons.


First, devices. There are a lot to choose from already today,  with different operating systems, in different form factors - tablets, smartphones etc. - and there is no reason to think this is going to get any simpler. If anything, with wearable technologies such as smart bands, watches and glasses already appearing, the diversity of devices is going to become an even bigger challenge.


Next, users. What might have started as an 'I want' (or even an "I demand") from a senior executive, soon becomes an 'I would like' from knowledge workers, who now appear to be the vanguard for BYOD requests. But this is only the start as the requirement moves right across the workforce. Different roles and job responsibilities will dictate that different BYOD management strategies will have to be put in place. Simply trying to manage devices (or control choices) will not be an option.


Those who appear to be embracing rather than trying to deny BYOD in their organisations understand this. Their traits are that they tend to recognise the need to treat both tablets and smartphones as part of the same BYOD strategy and they are already braced for the changes that will inevitably come about from advances in technology.


Most crucially, however, they recognise the importance of data.


Information security is the aspect of BYOD most likely to keep IT managers awake at night - it is a greater concern than managing the devices themselves or indeed the applications they run.


The fear of the impact of a data security breach, however, seems to have created a 'deer in the headlights' reaction rather than galvanising IT into positive action. Hence the tendency to try to halt or deny BYOD in pretty much the same way that, in the past, many tried to stem the flow towards internet access, wireless networks and pretty much anything that opens up the 'big box' that has historically surrounded an organisation's digital assets.


Most organisations would do far better to realise that the big box approach is no longer valid, but that they can shrink the concept down to apply 'little boxes' or bubbles of control around their precious assets. This concept of containerisation or sandboxing is not new, but still has some way to go in terms of adoption and widespread understanding.


Creating a virtual separation between personal and work environments allows the individual employee to get the benefit of their own device preferences, and for the organisation to apply controls that are relevant and specific to the value and vulnerability of the data.


With the right policies in place this can be adapted to best-fit different device types and user profiles. Mobile enterprise management is still about managing little boxes, but virtual ones filled with data, not the shiny metal and plastic ones in the hands of users.
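
As a rough sketch of that idea, the fragment below resolves the controls a work container should enforce from the sensitivity of the data and the user's profile rather than from who owns the handset. The classifications, profiles and control names are all invented for illustration; real mobile enterprise management suites express the same logic through their own policy engines.

```python
# Hypothetical control sets: what the work container enforces for each data sensitivity.
CONTROLS = {
    "public":       set(),
    "internal":     {"passcode"},
    "confidential": {"passcode", "encrypt_at_rest", "block_copy_paste", "remote_wipe"},
}

def container_policy(data_sensitivity: str, user_profile: str) -> set:
    """Controls follow the data and the role, not the ownership of the device."""
    controls = set(CONTROLS[data_sensitivity])
    if user_profile == "senior_management":
        controls.add("remote_wipe")  # higher-value targets get stricter defaults
    return controls

print(container_policy("confidential", "knowledge_worker"))
print(container_policy("internal", "senior_management"))
```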


For more detailed information about getting to grips with BYOD, download our free report here


Print security: The cost of complacency

Louella Fernandes

Quocirca research reveals that enterprises place a low priority on print security despite over 60% admitting that they have experienced a print-related data breach.

Any data breach can be damaging, leaving a company open to fines, harming its reputation and undermining customer confidence. In the UK alone, the Ponemon Institute estimates that in 2013 the average organisational cost of a data breach rose to £2.04m, up from £1.75m the previous year.

As the boundaries between personal and professional use of technology become increasingly blurred, the need for effective data security has never been greater. While many businesses look to safeguard their laptops, smartphones and tablets from external and internal threats, few pay the same strategic attention to protecting the print environment. Yet it remains a critical element of the IT infrastructure: over 75% of enterprises in a recent Quocirca study indicated that print is critical or very important to their business activities.

The print landscape has changed dramatically over the past decade. Local single function printers have given way to the new breed of networked multifunction peripherals (MFPs). With print, fax, copy and advanced scanning capabilities, these devices have evolved to become sophisticated document capture and processing hubs.

While they have undoubtedly brought convenience and enhanced user productivity to the workplace, they also pose security risks. With built-in network connectivity, along with hard disk and memory storage, MFPs are susceptible to many of the same security vulnerabilities as any other networked device.

Meanwhile, the move to a centralised MFP environment means more users are sharing devices.  Without controls, documents can be collected by unauthorised users - either accidentally or maliciously. Similarly, confidential or sensitive documents can be routed in seconds to unauthorised recipients through scan-to-email, scan-to-file and scan-to-cloud-storage functionality. Further controls are required as employees print more and more directly from mobile devices.

Yet many enterprises are not taking heed. Quocirca's study revealed that just 22% place a high priority on securing their print infrastructure. While the financial and professional services sectors consider print security a much higher priority, counterparts in retail, manufacturing and the public sector lag way behind.

Such complacency is misplaced. Overall 63% admitted they have experienced a print-related data breach. An astounding 90% of public sector respondents admit to one or more paper-based data breaches.

So how can businesses minimise the risks? Fortunately, there are simple and effective approaches to protecting the print infrastructure. These methods not only enhance document security but also promote sustainable printing practices, reducing paper wastage and costs.

1. Conduct a security assessment

For enterprises with a large and diverse printer fleet, it is advisable to use a third party provider to assess device, fleet and enterprise document security. This can evaluate all points of vulnerability across a heterogeneous fleet and provide a tailored security plan, for devices, user access and end of life/disposal. Managed print service (MPS) providers commonly offer this as part of their assessment services.

2. Protect the device

Many MFPs come as standard with hard drive encryption and data overwrite features. Most also offer lockable and removable hard drives. Data overwriting ensures that the hard drive is clear of readable data when the device is disposed of. It works by overwriting the actual data with random and numerical characters. Residual data can be completely erased when the encrypted device and the hard disk drive are removed from the MFP.
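
The idea can be illustrated with a small sketch that overwrites a file with random bytes before deleting it. MFP vendors implement this in device firmware against the whole drive, usually with certified overwrite patterns; the single-file, three-pass version below is purely illustrative.

```python
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before removing it (illustrative only)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace the real data with random characters
            f.flush()
            os.fsync(f.fileno())       # push each pass through to the disk
    os.remove(path)
```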

3. Secure the network

MFP devices can make use of several protocols and communication methods to improve security. The most common way of encrypting print jobs is SSL (secure sockets layer), which makes it safe for sensitive documents to be printed over a wired or wireless network. Xerox, for instance, has taken MFP security a step further by including McAfee Embedded Control technology, which uses application whitelisting to protect its devices from corrupt software and malware.

4. Control access

Implementing access controls through secure printing ensures only authorised users are able to access MFP device functionality. With PIN or pull printing, print jobs are saved electronically on the device, or on an external server, until the authorised user is ready to print them. The user provides a PIN code or uses an alternative authentication method such as a swipe card, proximity card or fingerprint. As well as printer vendor products, there are a range of third-party products, including Capella's MegaTrack, Jetmobile's SecureJet, Equitrac's Follow-You and Ringdale's FollowMe, all of which are compatible with most MFP devices.
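
A toy model of pull printing is sketched below: jobs are queued against a user and released only once the user authenticates at the device. The PIN store and job format are invented, and real implementations authenticate against a directory, swipe card or fingerprint reader rather than a hard-coded dictionary.

```python
# Toy pull-printing queue: jobs are held until the owner authenticates at the device.
PENDING_JOBS = {}             # user -> list of held documents
PIN_STORE = {"alee": "4821"}  # in reality: directory lookup, swipe card or fingerprint

def submit(user: str, document: str) -> None:
    """Queue a print job against its owner instead of printing it immediately."""
    PENDING_JOBS.setdefault(user, []).append(document)

def release_at_device(user: str, pin: str) -> list:
    """Release held jobs only when the user authenticates at the MFP."""
    if PIN_STORE.get(user) != pin:
        return []  # wrong credentials: nothing prints, jobs stay queued
    return PENDING_JOBS.pop(user, [])

submit("alee", "payroll_2014.pdf")
print(release_at_device("alee", "0000"))  # [] - not authenticated
print(release_at_device("alee", "4821"))  # ['payroll_2014.pdf'] - released and cleared
```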

5. Monitor and audit

Print environments are often a complex and diverse mix of products and technologies, further complicating the task of understanding what is being printed, scanned and copied where and by whom. Enterprises should use centralised print management tools to monitor and track all MFP related usage. This can either be handled in-house or through an MPS provider.

With MFPs increasingly becoming a component of document distribution, storage and management, organisations need to manage MFP security in the same way as the rest of the IT infrastructure. By applying the appropriate level of security for their business needs, an organisation can ensure that its most valuable asset - corporate data - is protected.

Read Quocirca's report A False Sense of Security


Internet of Things - Architectures of Jelly

Rob Bamforth

In today's world of acronyms and jargon, there are increasing references to the Internet of things (IoT), machine to machine (M2M) or a 'steel collar' workforce. It doesn't really matter what you call it, as long as you recognise it's going to be BIG. That is certainly the way the hype is looking - billions of connected devices all generating information - no wonder some call it 'big data', although really volume is only part of the equation.


Little wonder that everyone wants to be involved in this latest digital gold rush, but let's look a little closer at what 'big' really means.


Commercially it means low margins. The first wave of mobile connectivity - mobile email - delivered to a device like a BlackBerry, typically carried by a 'pink collar' executive (because they bought their stripy shirts in Thomas Pink's in London or New York) was high margin and simple. Mobilising white-collar knowledge workers with their Office tools was the next surge, followed by mobilising the mass processes and tasks that support blue-collar workers.


With each wave volumes rise, but so too do the challenges of scale - integration, security and reliability - whilst the technology commoditises and the margins fall. Steel collar will only push this concept further.


Ok, but the opportunity is BIG, so what is the problem?


The problem is right there in the word 'big'. IoT applications need to scale - sometimes preposterously - so much so that many of the application architectures that are currently in place or being developed are not adequately taking this into account.


Does this mean the current crop of IoT/M2M platforms are inadequate?


Not really, as the design fault is not there, but generally further up in the application architectures. IoT/M2M platforms are designed to support the management and deployment of huge numbers of devices, with cloud, billing and other services that support mass rollouts especially for service providers.


Reliably scaling the data capture and its usage is the real challenge, and if or when it goes wrong, "Garbage in, Garbage out" (GiGo) will be the least of all concerns.


Several 'V's are mentioned when referring to big data; volume of course is top of mind (some think that's why it's called 'big' data), generally followed by velocity for the real-timeliness and trends, then variety for the different forms or media that will be mashed together. Sneaking along in last but one place is the one often forgotten, but without which the whole of the final 'V' - value - is lost - veracity. It has to be accurate, correct and complete.


When scaling to massive numbers of chattering devices, poor architectural design will mean that messages are lost, packets dropped and the resulting data may be not quite right.


Ok, so my fitness band lost a few bytes of data, big deal, even if a day is lost, right? Or my car tracking system skipped a few miles of road - what's the problem?


It really depends on the application, how it was architected and how it deals with exceptions and loss. This is not even a new problem in the world of connected things: supervisory control and data acquisition (SCADA) has been around since well before the internet and its things.


The recent example of problem data from mis-aligned electro-mechanical electricity meters in the UK shows just how easily this can happen, and how quickly the numbers can get out of hand. Tens of thousands of precision instruments had inaccurate clocks, but consumers and suppliers alike thought they were fine, until a retired engineer found a fault in his own home that led to the discovery that thousands of people had been overcharged for their electricity.


And here is the problem: it's digital now and therefore perceived to be better. Companies think the data is OK, so they extrapolate from it and base decisions on it, and in the massively connected world of IoT, so perhaps does everyone else. The perception of reality overpowers the actual reality.


How long ago did your data become unreliable; do you know, did you check, who else has made decisions based on it? The challenge of car manufacturers recalling vehicles will seem tiny compared to the need for terabyte recalls.


Most are rightly concerned about the vulnerability of data on the internet of people and how that will become an even bigger problem with the internet of things. However, that aside, there is a pressing need to get application developers thinking about resilient, scalable and error-correcting architectures, otherwise the IoT revolution could have collars of lead, not steel and its big data could turn out to be really big GiGo.
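
One small example of the defensive design this argues for is sketched below: each device stamps its readings with a sequence number so the collector can detect, rather than silently absorb, lost messages. The message format and device names are invented; this is a sketch of the principle, not of any particular IoT platform.

```python
# Each device stamps its readings with a sequence number, so the collector can tell
# the difference between "no data" and "data that was never received".
last_seen = {}           # device_id -> last sequence number processed
suspect_devices = set()  # devices whose recent data should not yet be trusted

def ingest(device_id: str, seq: int, value: float) -> None:
    """Store a reading, flagging any gap in the sequence rather than ignoring it."""
    expected = last_seen.get(device_id, seq - 1) + 1
    if seq > expected:
        suspect_devices.add(device_id)
        print(f"{device_id}: missing readings {expected}..{seq - 1}")
    last_seen[device_id] = seq
    # ...store (device_id, seq, value) for analysis...

for seq, value in [(1, 20.1), (2, 20.3), (5, 19.8)]:
    ingest("meter-0042", seq, value)
# -> meter-0042: missing readings 3..4
```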








Managing a PC estate

Clive Longbottom

Although there is much talk of a move towards virtual desktops, served as images from a centralised point, for many organisations, the idea does not appeal.  Whatever the reason (and there may be many as a previous blog here points out), staying with PCs leaves the IT department with a headache - not least an estate of decentralised PCs that need managing.

Such technical management tends to be the focus for IT; however, for the business, there are a number of other issues that also need to be considered.  Each PC has its own set of applications.  The majority of these should have been purchased and installed through the business, but many may have been installed directly by the users themselves - something you may want to avoid, but which is nowadays an expectation of many IT users.

This can lead to problems: some applications may not be licensed properly (for example, a student licence not permitted for use in a commercial environment); they may contain embedded malware (a recent survey has shown that much pirated software contains harmful payloads, including keyloggers); and unlicensed software definitely opens an organisation up to considerable fines should a software audit be carried out by an external body.

Locking down desktops is increasingly difficult. Employees are getting very used to self-service through their use of their own devices, and expect this within a corporate environment.  Centralised control of desktops is still required - even if virtual desktops are not going to be the solution of choice.

The first action your organisation should take is a full audit.  You need to understand fully how many PCs there are out there, what software is installed and whether that software is being used or not.  You need to know how many software licences you have in place and how those can be utilised - for example, are they concurrent licences (a fixed number of people can use them at the same time) or named-seat licences (only people with specific identities can use them)?

This will help to identify software that your organisation was not aware of, and can also help in identifying unused software sitting idle on PCs.
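
By way of illustration only, the fragment below reconciles an invented software inventory against invented licence terms, distinguishing concurrent from named-seat licences. A real audit would draw this data from a discovery agent and a licence management database rather than hard-coded dictionaries.

```python
# Invented inventory: which PCs have which packages installed, and what is licensed.
installed = {
    "pc-001": {"OfficeSuite", "CADPro"},
    "pc-002": {"OfficeSuite"},
    "pc-003": {"OfficeSuite", "CADPro"},
}
licences = {
    "OfficeSuite": {"type": "concurrent", "count": 2},
    "CADPro":      {"type": "named", "users": {"pc-001"}},
}

def audit():
    """Reconcile installs against licence terms and report anything that needs attention."""
    for product, terms in licences.items():
        holders = [pc for pc, apps in installed.items() if product in apps]
        if terms["type"] == "concurrent" and len(holders) > terms["count"]:
            print(f"{product}: {len(holders)} installs but {terms['count']} concurrent seats - check simultaneous usage")
        elif terms["type"] == "named":
            for pc in holders:
                if pc not in terms["users"]:
                    print(f"{product}: install on {pc} has no named licence")

audit()
```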

You can then look at creating an image that contains a copy of all the software that is being used by people to run the business.  Obviously, you do not want every user within your organisation to have access to every application, so something is needed to ensure that each person can be tied in by role or name to a list of software to which they should have access.

Through the installation of an agent on each PC, it should then be possible to apply centralised control over what is happening.  That single golden image containing all allowable applications can then be called upon by that agent as required.  The user gets to see all the applications that they are allowed to access (by role and/or individual policy), and a virtual registry can be created for their desktop.  Should anything happen to that desktop (machine failure, disk corruption, whatever), a new environment can be rapidly built against a new machine.

If needed, virtualisation can be used to hive off a portion of the machine for such a corporate desktop - the user can then install any applications that they want to within the rest of the device.  Rules can be applied to prevent data crossing the divide between the two areas, keeping a split between the consumer and corporate aspects of the device - a great way of enabling laptop-based bring your own device (BYOD).

As with most IT, the "death" of any technology will be widely reported and overdone: VDI does not replace desktop computing for many.  However, centralised control should still be considered - it can make management of an IT estate - and the information across that estate - a lot easier.

This blog first appeared on FSlogix' site at http://blog.fslogix.com/managing-a-pc-estate 


Web security 3.0 - is your business ready?

Bob Tarzey | No Comments
| More

As the web has evolved so have the security products and services that control our use of it. In the early days of the "static web" it was enough to tell us which URLs to avoid because the content was undesirable (porn etc.). As the web became a means of distributing malware and perpetrating fraud, there was a need to identify bad URLs that appeared overnight, or good URLs that had gone bad as existing sites were compromised. Early innovators in this area included Websense (now a sizeable broad-based security vendor) and two British companies, SurfControl (which ended up as part of Websense) and ScanSafe (which was acquired by Cisco).

 

Web 2.0

These URL filtering products are still widely used to control user behaviour (for example, you can only use Facebook at lunchtime) as well as to block dangerous and unsavoury sites. They rely on up-to-date intelligence about all the URLs out there and their status. Most of the big security vendors have capability in this area now. However, as the web became more interactive (for a while we all called this Web 2.0) there was a growing need to be able to monitor the sort of applications being accessed via the network ports typically used for web access: port 80 (for HTTP) and port 443 (for HTTPS). Again this was about controlling user behaviour and blocking malicious code and activity.
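To make the policy side concrete, a decision such as "Facebook only at lunchtime" reduces to a category lookup plus a time check. The Python sketch below is a minimal illustration with a tiny, invented category list; real products depend on large, continuously updated URL intelligence feeds.

# Minimal sketch of a time-aware URL filtering decision. The category database and policy
# below are invented for illustration only.
from datetime import time
from urllib.parse import urlparse

URL_CATEGORIES = {                 # assumption: tiny stand-in for a vendor's URL database
    "facebook.com": "social",
    "bbc.co.uk": "news",
    "badsite.example": "malware",
}

POLICY = {
    "malware": lambda now: False,                              # always block
    "social": lambda now: time(12, 0) <= now <= time(14, 0),   # lunchtime only
    "news": lambda now: True,                                  # always allow
}

def allow(url, now):
    host = urlparse(url).hostname or ""
    category = next((cat for domain, cat in URL_CATEGORIES.items()
                     if host == domain or host.endswith("." + domain)), None)
    if category is None:
        return True  # uncategorised: a policy choice; here we default to allow
    return POLICY[category](now)

print(allow("https://www.facebook.com/feed", time(12, 30)))  # True at lunchtime
print(allow("https://www.facebook.com/feed", time(9, 0)))    # False mid-morning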

 

To achieve this firewalls had to change; enter the next generation firewall. The early leader in this space was Palo Alto Networks. The main difference with its firewall was that it was application aware, with a granularity that could work within a specific web site (for example, distinguishing between applications running on Facebook). Just as with the URL filtering vendors, next generation firewalls rely on application intelligence - the ability to recognise a given application by its network activity and allow or block it according to user type, policy etc. Palo Alto Networks built up its own application intelligence, but there were other sources, such as that of FaceTime (a vendor that found itself in a name dispute with Apple), which was acquired by Check Point as it upgraded its firewalls. Other vendors, including Cisco's Sourcefire, Fortinet and Dell's SonicWALL, have followed suit.

 

The rise of shadow IT

So, with URLs and web applications under control, is the web a safer place? Well yes, but the job is never done. A whole new problem has emerged in recent years with the increasing ability for users to upload content to the web. The problem has become acute as users increasingly provision cloud services over the web for themselves (so-called shadow IT). How do you know which services are OK to use? How do you even know which ones are in use? Again this comes down to intelligence gathering, a task embarked on by Skyhigh Networks in 2012.

 

Skyhigh defines a cloud service as anything that has the potential to "exfiltrate data"; so this would include Dropbox and Facebook, but not the web sites of organisations such as CNN and the BBC. Skyhigh protects businesses by blocking their users from accessing certain cloud services based on its own classification (good, medium, bad), providing a "Cloud Trust" mark (similar to what Symantec's Verisign does for websites in general). As with URL filtering and next generation firewalls, this is just information; rules about usage still need to be applied. Indeed, Skyhigh can provide scripts to be applied to firewalls to enforce rules around the use of cloud services.
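As a rough illustration of what such scripts amount to, the sketch below (Python) turns an invented cloud service classification into generic deny rules. The service names, risk ratings and rule syntax are assumptions for illustration, not Skyhigh's actual output.

# Minimal sketch: turn a (made-up) cloud service classification into firewall deny rules.
# The service list, ratings and rule syntax are illustrative assumptions.

CLOUD_SERVICES = [
    {"name": "Dropbox", "domain": "dropbox.com", "risk": "medium"},
    {"name": "RandomShare", "domain": "randomshare.example", "risk": "bad"},
    {"name": "Box", "domain": "box.com", "risk": "good"},
]

def deny_rules(services, block_levels=("bad",)):
    """Emit one deny rule per blocked domain in a generic 'deny host <domain>' form."""
    return [f"deny host {s['domain']}  # {s['name']} rated {s['risk']}"
            for s in services if s["risk"] in block_levels]

for rule in deny_rules(CLOUD_SERVICES, block_levels=("bad", "medium")):
    print(rule)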

 

However, Skyhigh cites other interesting use cases. Many cloud services are of increasing importance to businesses; LinkedIn is used to manage sales contacts, while Dropbox, Box and many other sites are used to keep backups of documents created by users on the move. Skyhigh gives businesses insight into their use, enables them to impose standards and, where subscriptions are involved, allows usage to be aggregated into single discounted contracts rather than being paid for via expenses (which is often a cost control problem with shadow IT). It also provides an enterprise risk score for a given business based on its overall use of cloud services.

 

Beyond this, Skyhigh can apply controls to users working beyond the corporate firewall, often on their own devices. For certain cloud services to which access is provided by the business (think salesforce.com, ServiceNow, SuccessFactors etc.), usage is forced back via Skyhigh's reverse proxy, without the need for an agent, so that it can be monitored and controls enforced. Skyhigh can also recognise anomalous behaviour with regard to cloud services and thus provide an additional layer of security against malware and malicious activity.
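To illustrate the principle (and only the principle), the toy sketch below is a logging reverse proxy built on Python's standard library: it records who asked for what before forwarding the request upstream. The upstream address and port are assumptions, and none of the TLS, authentication or policy enforcement a real cloud-access proxy would provide is shown.

# Toy sketch of a logging reverse proxy: requests are recorded, then forwarded upstream.
# UPSTREAM and the port are assumptions; a real cloud-access proxy also handles TLS,
# authentication and policy enforcement, none of which is shown here.
import http.server
import urllib.request

UPSTREAM = "http://example.com"   # assumption: the cloud service being fronted
LISTEN_PORT = 8080

class LoggingProxy(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        print(f"AUDIT: {self.client_address[0]} requested {self.path}")  # the monitoring point
        with urllib.request.urlopen(UPSTREAM + self.path) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Type", upstream.headers.get("Content-Type", "text/html"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    http.server.HTTPServer(("", LISTEN_PORT), LoggingProxy).serve_forever()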

 

Skyhigh is the first to point out that it is not an alternative to web filtering and next generation firewalls but complementary to them. Skyhigh, which mostly provides its service on-demand, is already starting to co-operate with existing vendors to enhance their products and services through partnerships. So your organisation may be able to benefit from its capabilities via an incremental upgrade from an existing supplier rather than a whole new engagement. So, that is web security 3.0; the trick is to work out what's next - roll on Web 4.0!

 


Two areas where businesses can learn from IT

Bob Tarzey | No Comments
| More

Many IT industry commentators (not least Quocirca) constantly hassle IT managers to align their activities more closely with those of the businesses they serve; to make sure actual requirements are being met. However, that does not mean that lines of business can stand aloof from IT and learn nothing from the way their IT departments manage their own increasingly complex activities. Two recent examples Quocirca has come across demonstrate this.

 

Everyone needs version control

First, take the tricky problem of software code version control. Many outside of IT will be familiar with the problem, at least at a high level, through the writing and review of documents. For many this is a manual process carried out at the document name level: V1, V1.1, V1.1A, V2.01x etc. Content management (CM) systems, such as EMC's Documentum and Microsoft's SharePoint, can improve things a lot, automating versioning, providing check-in and checkout etc. (but they can be expensive to implement across the business).

 

With software development the problem is a whole lot worse: the granularity of control needs to be down to individual lines of code and there are multiple types of entities involved - the code itself, usually multiple files linked together by build scripts (another document); the binary files that are actually deployed in test and then live environments; documentation (user guides etc.); third party/open source code that is included; and so on. As a result, the version control systems from vendors such as Serena and IBM Rational, and the number of open source systems developed over the years to support software development, are very sophisticated.

 

In fairly technical companies, where software development is a core activity, the capability of these systems is so useful that it has spread well beyond the software developers themselves. Perforce Software, another well-known name in software version control, estimates that 68% of its customers are storing non-software assets in its version control system. Its customers include some impressive names with lots of users, for example salesforce.com, NYSE, Netflix and Samsung.

 

To capitalise on this increasing tendency of its customers to store non-IT assets, Perforce has re-badged its system as Perforce Commons and made it available as an online service as well as for on-premise deployment. All the functionality developed can be used for the management of a whole range of other business assets. With the latest release this now includes merging Microsoft PowerPoint and Word documents and checking for differences between various versions of the same document. Commons also keeps a full audit trail of document changes, which is important for compliance in many document-based workflows.
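At its core, what version control adds for business documents is automatic version numbering plus an audit trail of who changed what, when and why. The Python sketch below is a minimal, product-agnostic illustration of that idea - it is not how Perforce Commons works internally.

# Minimal, product-agnostic sketch of document check-in with automatic versioning and an
# audit trail. Names and content are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DocumentHistory:
    name: str
    versions: list = field(default_factory=list)  # (version, author, timestamp, comment, content)

    def check_in(self, author, content, comment=""):
        version = len(self.versions) + 1           # automatic numbering, no V1.1A guesswork
        self.versions.append((version, author, datetime.now(timezone.utc), comment, content))
        return version

    def audit_trail(self):
        """Who changed the document, when and why - the compliance view."""
        return [(v, author, ts.isoformat(), comment) for v, author, ts, comment, _ in self.versions]

doc = DocumentHistory("quarterly-report.docx")
doc.check_in("alice", b"draft one", "initial draft")
doc.check_in("bob", b"draft two", "added finance figures")
for entry in doc.audit_trail():
    print(entry)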

 

Turning up the Heat in ITSM

The second area where Quocirca has seen IT management tools being used beyond the IT department is IT service management (ITSM). FrontRange's Heat tool is traditionally used for handling support incidents raised by users relating to IT assets (PCs, smartphones, software tools etc.). However, increasingly its use is being extended beyond IT to other departments, for example to manage incidents relating to customer service calls, human resources (HR) issues, facilities management (FM) and finance department requests. Heat is available as an on-demand service as well as an on-premise tool; in many cases deployments are a hybrid of the two.

 

Of course, there are specialist tools for CM, HR, FM and so on; specially designed for the job with loads of functionality. However, with budgets and resources stretched, IT departments that already use tools such as Perforce version management and Heat ITSM can quickly add value to whole new areas of the business with little extra cost. Others, that are not already customers, may be able to kill several birds with one stone as they seek to show the business that IT can deliver extra beyond its own interests with little incremental cost.

 


4.7 Million NTP Servers Ready To Boost DRDoS Attack Volumes

Bernt Ostergaard | No Comments
| More

The US Internet security organisation CERT has published a warning of increasing DRDoS (Distributed Reflection and amplification DDoS) attacks using Internet Service Providers' NTP (Network Time Protocol) servers (http://www.kb.cert.org/vuls/id/348126). According to its analysis, NTP is the second most widely used vehicle for DDoS attacks (after DNS). In plain language that means that if I want to take a victim web site down, I can send a spoofed message to a vulnerable ISP NTP server and get it to send a response that is several thousand times longer to my intended victim! That is amplification in action.

A request could look like this:

ntpq -c rv [ip]

The payload is 12 bytes, which is the smallest payload that will elicit a mode 6 response. The response from a delinquent ISP could be this:

associd=0 status=06f4 leap_none, sync_ntp, 15 events, freq_mode, version="ntpd 4.2.2p1@1.1570-o Tue Dec  3 11:32:13 UTC 2013 (1)", processor="x86_64", system="Linux/2.6.18-371.6.1.el5", leap=00,stratum=2, precision=-20, rootdelay=0.211, rootdispersion=133.057, peer=39747, refid=xxx.xxx.xxx.xxx, reftime=d6e093df.b073026f  Fri, Mar 28 2014 19:35:43.689, poll=10, clock=d6e09aff.0bc37dfd  Fri, Mar 28 2014 20:06:07.045, state=4, offset=-17.031, frequency=-0.571, jitter=5.223, noise=19.409, stability=0.013, tai=0

 

That is only a 34 times amplification. Crafting the request differently could boost attack volumes by up to 5,500 times, according to CERT. The response shows that this ISP last updated its NTP software in December 2013 to version 4.2.2p1. The CERT recommendation is that NTP servers should be running at least version 4.2.7p26.
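For the curious, the arithmetic behind these amplification factors is simple: divide the size of the reflected response by the size of the spoofed request. The sketch below (Python) uses an assumed response payload of around 408 bytes, chosen to match the roughly 34-times figure above; it is illustrative arithmetic, not a measurement tool.

# Minimal sketch: the bandwidth amplification factor of a reflection attack is just
# response bytes divided by request bytes. RESPONSE_BYTES is an assumption chosen to
# match the roughly 34x figure quoted above for the ntpq mode 6 reply.
REQUEST_BYTES = 12     # smallest payload that elicits a mode 6 response
RESPONSE_BYTES = 408   # approximate payload of the readvar reply shown above

amplification = RESPONSE_BYTES / REQUEST_BYTES
print(f"Amplification factor: ~{amplification:.0f}x")

# At the 5,500x upper bound cited by CERT, a single 12-byte spoofed query would land
# on the order of 64 KB of traffic on the victim.
print(f"Traffic at victim for a 12-byte query at 5500x: ~{12 * 5500 / 1024:.0f} KB")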

So, how widespread is the problem? The Shadowserver Foundation performs on-going monitoring of the problem - and at present, it has discovered 4.7 million vulnerable NTP servers across the globe (https://ntpscan.shadowserver.org). So this ISP is part of a very large delinquent group of ISPs. The Shadowserver monitoring activity also clearly shows that the problem is most severe in the US followed by Europe, Japan, South Korea, Russia and China in that order.


The global map from Shadowserver.org shows the distribution of vulnerable NTP servers - yellow shows the highest density.

Responsibility for Internet safety is clearly shared among all user groups, which means that we as users need to keep our service providers on their toes, and Shadowserver enshrines this principle. It is a volunteer group of professional Internet security workers that gathers, tracks and reports on malware, botnet activity and electronic fraud. It aims to improve the security of the Internet by raising awareness of the presence of compromised servers, malicious attackers and the spread of malware. In this respect, I would like to 'amplify' their message.


The Top 3 Barriers to VDI

Clive Longbottom | No Comments
| More

The use of server-based desktops, often referred to as a virtual desktop infrastructure (VDI), makes increasing sense for many organisations.  Enabling greater control over how a desktop system is put together; centralising management and control of the desktops as well as the data created by the systems; helping organisations to embrace bring your own device (BYOD) and enhancing security are just some of the reasons why more organisations are moving toward the adoption of VDI.

However, in Quocirca's view, there remain some major issues in VDI adoption.  Our "Top 3" are detailed here:

  • Management.  Imagine that you are tasked with managing 1,000 desktops.  Your OS vendor pushes out a new security patch.  You have to patch 1,000 desktops.  With VDI, at least you do not have to physically visit 1,000 desks, right?  Maybe so - but it is still an issue. With application updates coming thick and fast, the possibility that a single patch could cause problems with some proportion of the VDI estate puts many IT departments off applying updates, leading to sub-optimised desktops and possible security issues.
  • Licensing.  The promise of better control when the desktops are all in one place in the datacentre can soon prove illusory.  Unless solid controls and capable management tools are in place, the number of orphan (unused but live) images can rapidly get out of control.  Desktops belonging to people who have left the company do not get deleted; test images get spun up and forgotten about; copies of images get made and forgotten about.  Each of these images - as well as using up valuable resource - needs to be licensed.  Each requires an operating system licence along with all the application licences that are live within that hot image, even though it is not being used. Many organisations go for a costly site licence to avoid this issue rather than attempting to deal with it.
  • Storage costs.  The move from local OS and application storage on the desktop PC to the data centre can be expensive.  Old-style enterprise storage, such as a SAN or dedicated high-performance storage arrays, has high capital and maintenance costs. More optimised use of, for example, newer virtualised direct attached storage, virtual storage area networks or software-defined storage (SDS) approaches, using cheaper underlying storage and compute arrays from vendors such as Coraid or Nutanix, can provide the desired performance while keeping costs under control.

So, does this mean that VDI is more trouble than it is worth?  Not if it is approached in the right way.  The use of "golden images", where as few images as possible are held, may hold the key.

Many VDI vendors that push this approach will start off with maybe four or five main golden images - one for task workers, one for knowledge workers, one for special cases and one for executives, say - but will then still face the problem of these being spun up and staying live on a per-user basis.  Managing the images still requires either patching all live images, or patching the golden images and forcing everyone to refresh their desktops by powering them down and back up again - not much easier than with physical desktops.  Dealing with leavers still needs physical processes to be in place, otherwise licensing still becomes an issue.

A better approach is to use a single golden image that can be used to automatically provision desktops on a per-user basis and to automatically manage how software is made available on an ongoing basis.  This requires an all-embracing golden image: it needs a copy of every application that will be used within the organisation present in the image - and it needs a special means of dealing with whether or not each application is provisioned, to manage the licensing of each desktop. By virtualising the desktop registry and linking this through to role and individual policies in Active Directory, this can be done: as each user utilises their own desktop, it can be dynamically managed through a knowledge of what the virtual registry holds and what rights they have as a user.

The data held in the virtual registry also enables closer monitoring and auditing of usage: by looking at usage statistics, orphan images can be rapidly identified and closed down as required.  Unused licences can be harvested and put back into the pool for others to use - or can be cancelled completely to lower licensing costs with the vendor.
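Spotting orphan candidates from those statistics is straightforward once the data is collected. The Python sketch below assumes a hypothetical record per image of its owner, whether that owner is still active and the last login date; the records and thresholds are invented for illustration.

# Minimal sketch: flag likely orphan VDI images from usage statistics.
# The image records and thresholds are illustrative assumptions.
from datetime import date, timedelta

IMAGES = [
    {"image": "vdi-0141", "owner": "jsmith",  "owner_active": True,  "last_login": date(2014, 3, 28)},
    {"image": "vdi-0202", "owner": "aleaver", "owner_active": False, "last_login": date(2014, 1, 10)},
    {"image": "vdi-test7", "owner": "ops",    "owner_active": True,  "last_login": None},
]

def orphan_candidates(images, today=date(2014, 4, 2), idle_after=timedelta(days=30)):
    """An image is a candidate if its owner has left, or it has not been logged into recently."""
    for img in images:
        if not img["owner_active"]:
            yield img["image"], "owner has left the company"
        elif img["last_login"] is None or today - img["last_login"] > idle_after:
            yield img["image"], "no recent logins"

for image, reason in orphan_candidates(IMAGES):
    print(f"{image}: candidate for shutdown and licence harvesting ({reason})")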

VDI is not the answer for everyone, but it is an answer that meets many organisations' needs in a less structured world.  If you have been put off VDI in the past for any of the reasons discussed above, then maybe it is time to reconsider.  Today's offerings in the VDI market are considerably different to what was there only a couple of years back.

This item was first posted on the FSLogix blog at http://blog.fslogix.com/ 







6 Tips for Managed Print Services Success

Louella Fernandes | No Comments
| More

Many enterprises are turning to a managed print service (MPS) to minimise the headache of managing an often complex and costly print environment. Through device consolidation, optimisation of hardware and software and continuous monitoring, MPS is helping to reduce costs - both financial and environmental - lower the IT print management burden and improve user productivity.

MPS is extending beyond the enterprise print environment to address the requirements of mobile and remote workers, as well as encompassing IT infrastructure and business process automation needs. Whilst some enterprises may be at the early stages of their MPS journey, many are now entering their second or third generation MPS contracts.

Although cost control remains a top priority, enterprises are also looking to drive wider productivity and business process improvements.  Consequently enterprises are looking for next generation MPS providers to become true innovation partners with industry-specific business insight and services that will deliver new cost savings.

Below are some recommendations on how to maximise the benefit from MPS, and ensure it can help drive greater business value and sustained long term performance.

1. Think big, start small

MPS engagements vary widely in scope depending on business needs. New opportunities exist not only to extend the scope of MPS engagements to encompass all aspects of enterprise printing (office, mobile, production and commercial), but also to improve performance by outsourcing higher-value services such as IT operations and business processes. Consider how well the scope of services matches your business needs. Can you start with a limited engagement and add services as business requirements evolve and/or your relationship with your MPS provider beds in?

2. Conduct a full evaluation of the print infrastructure

A detailed assessment is the foundation of an effective MPS engagement and should take a holistic view of all print-related processes. Things to check include: is an established methodology used? What scalability is offered in terms of the depth and cost of assessments? At a minimum, this should include a full document analysis that examines print usage across the enterprise. Additional assessment services to consider include environmental and document security assessments. Some vendors also offer document workflow assessment services, which identify potential for business process improvements. A comprehensive assessment will ensure the greatest opportunities for cost savings and productivity improvements over the long term of a contract.

 

3. Evaluate the flexibility to add new services

As a business continually adapts to the marketplace, MPS agreements should be adaptable as well - in terms of the commercial offering, contract arrangements, staffing, delivery location and so on. When agreeing on the service offering, negotiate for the flexibility to incorporate new capabilities. For example, next generation MPS may look to take advantage of evolving technologies such as cloud, mobility, business intelligence and ITIL-based process methodologies to ensure that business objectives continue to be met throughout the duration of the contract.

 

4. Leverage MFP sophistication

Multifunction peripherals (MFPs) are often underutilised in the office environment, yet have powerful document workflow capabilities that can be integrated with key business processes such as HR, legal, and finance and accounting. Leading MPS providers allow seamless integration of MFPs, either via the cloud or on-premise, with vertical applications, optimising paper workflow and improving productivity.

 

5. Ensure mature service level quality

SLAs are critical to the success of any MPS engagement. SLAs have to be flexible, and the MPS provider must use analytics to be able to advise on past performance and future requirements - and to offer a range of different approaches based on the customer's own risk profile, balancing risk, cost and perceptions of added business value. Are service levels matched to your business needs (hours of service, problem resolution times, end-user productivity)? How does the provider handle service events in a multivendor environment? Is a pre-emptive service used to reduce response times and solve device problems? Is onsite or off-site support available?

6. Continuous improvement

Monitoring and ongoing management are critical to ensure that the MPS adapts to changing business needs. This requires governance throughout the contract, which should place a high emphasis on service analytics, reporting and communication. A governance programme allows the parties to evaluate, address and resolve service issues as and when they arise.


Read Quocirca's report on The Next Frontier for MPS


TV, phone, tablet, watch - are we smart yet?

Rob Bamforth | No Comments
| More

At one time things were pretty straightforward; entertainment centred on a TV, computing around a PC, communication around a phone. Now, convergence has blurred so many lines; it should be no surprise that everything looks like a cloud.


The biggest changes have been in IT. Not long ago, the desktop computer was at the centre of IT. Everything else was a peripheral; not only the obvious things like printers and scanners, but even mobile devices such as handheld PDAs (PalmPilots, for example), which were described as 'companions'.


As the number of computing devices and their form factors has multiplied, every so often it is suggested there might be a single unifying device that does everything. This has a lot of appeal, just like owning a Swiss Army knife, but equally, how many Swiss Army knife owners carry them about their person every day?


The all-embracing single device that does everything never quite seems to arrive, as it is too much of a compromise. So most people end up carrying a subset of a collection of devices or personal device ecosystem - laptop, camera, phone, tablet etc. - the mix of which is determined by needs of the day and the constraints or capabilities of wardrobe and associated 'baggage'. This is important as although many people like to think they are now very technology aware, most are not only fashion or fad conscious, but also hate lugging around too much stuff.


Paring down to the essentials indicates what is most important - often a single device is the one that no one can do without and the others are regarded as peripheral.

 

So where now is the centre of techno-attention?  It is rarely the desktop, and for many it is the mobile phone, combined with a drift away from the previous favourite desktop replacement on the move, the laptop, towards the tablet.


This is not without its challenges; for many the typewriter keyboard is a hard habit to kick, but smaller and lighter means more portable, and touch screens have created for many a more natural way of interacting. The keyboard, though, is still apparently vital for 'real content creation', yet it is generally large and clunky - and how many will admit to learning how to use one properly through completing courses such as touch typing, rather than relying on the two-fingered jab?


Once the need for a keyboard has been ignored, screen size becomes the next physical factor to align around. But what is the right size for a screen? Apple laboured long and hard and Steve Jobs had been adamant that there was a 'Goldilocks' screen size (just right). Despite this thought, the industry seems to disagree and screen sizes for all sorts of devices have expanded into a huge multiplicity of options - most of them too big to hold next to an ear.


So, is the tablet then the new centre of attention?


Possibly for some, and given how casually they are used in formal as well as informal and relaxed circumstances, the tablet does now seem to have a very high degree of importance - but the centre? No. The majority of tablets still have Wi-Fi-only connectivity, as they are often used flexibly but within a 'place', and are then paired with another device for connectivity on the move - a mobile phone - making that still the prime device.


Two other areas have started to become much smarter and their effect on the centre of gravity of attention is intriguing, but not decisive. The more established is smart TVs, which are starting to deliver on some of the promises of WebTV and the convergence of PCs and TVs much trialled and hyped in the early days of the internet in the 1990s.


However, these devices have fundamental flaws. Sometimes it is poor execution of software by companies who are (let's face it) unused to the rapid rate of revision of software. Somehow, upgrading a TV just to get the latest version of some player app, which isn't supported on the current box, isn't going to be a priority for most people.


The other flaw is in the usage model, where it is no longer clear that the TV is the centre of digital attention even when people are sat around possibly watching it. The behaviours known as 'meshing' and 'stacking' describe how viewers behave. Some will 'mesh', in that the other devices in their hands - mobile phone, tablet etc. - work in conjunction with the broadcast content, e.g. voting on reality TV programmes or tweeting along to political knockabouts. However, according to Ofcom, more will 'stack' their digital activities, in that they are doing other things with those devices that are unrelated to the broadcast TV content. The TV looks destined to be a peripheral, smart or not.


The next topic of growing interest is wearable technology and a particular area of wearable real estate, the wrist. Several companies are throwing a lot of effort into this space for essentially a companion device to a mobile phone, but is this misplaced?


It depends on the function of the wrist device, which seems to fall into two camps; one is the smart watch - a remote control or ancillary screen for the mobile phone; the other is more of a data capture device often for health and fitness. These devices tend to be wristbands rather than watches, although some have rudimentary displays. Their purpose is to gather data and feed it, typically via a mobile phone or when docked via a fixed device, to the cloud. This might mean they are not 'peripherals', but neither do they offer full functional mobile communications for the wearer.


In the other case, the smart watch offers a lot of the functionality previously delivered by the Bluetooth headset, while shedding the now less desirable appearance. To this it adds the 'Dick Tracy' geek appeal of a digital watch, but it is essentially still a subservient peripheral to the prime communications device, the mobile phone.


These devices beg the question 'why would I use one if it is easy to get out the phone that I'm already carrying?' Here might lie the answer for certain device ecosystems where the phones are becoming so large they are usurping tablets - the 'phablet' fanciers - and so remain stowed away with urgent functionality accessed from the wrist. This is the model that is being tried by several of the Android device makers such as Samsung, but not yet by Apple.


Perhaps a better approach might be to shrink the cellular 'phone' to a wristband and carry a small or large display that can connect to it depending on circumstances, or simply wear the display as a head up image on glasses? Google's first iteration of Glass might be geeky and pose some sartorial challenges today, but once competition and the fashion industry takes over, who knows?


Planning your next data centre

Clive Longbottom | No Comments
| More

Virtualisation, increased equipment densities, cloud computing, all are conspiring to make life harder for any IT manager looking at where to go when it comes to the next data centre facility.  After a prolonged period of economic doldrums, many data centres are beginning to show their age.  Cooling systems are struggling; uninterruptable power supplies (UPSs) can only support a proportion of the IT equipment should the power fail; disaster recovery plans are no longer fit for purpose.  For many organisations it is time for an urgent review - something has to be done; but what?

Will it be possible to just move things around a bit in an existing facility?  Doubtful - power distribution and cooling systems will need to be completely changed to meet the requirements of today's highly dense equipment.  How about a new build?  OK - but should this be planned for ongoing expansion, or should it be for further contraction as some functionality moves from existing infrastructure out to the public cloud?

A direction that more companies are taking is a move to a co-location facility.  These facilities are built and managed by an external company, but the IT equipment and how it runs remains your responsibility.  Space can generally be rented flexibly: it can grow or shrink as needed.  The facility owner has the responsibility for keeping all the peripheral systems up to date: power distribution; uninterruptable power supplies and auxiliary generating; cooling and connectivity provision to and from the data centre.

However, like most things out there, choosing a co-location provider is not just a case of going to a web listing and choosing the first name that sticks out.  There are a lot of cowboy operators out there along with ones who are offering good deals based on best efforts results.  Can your business afford to depend on such cowboys or promises?

Quocirca has seen some organisations go for co-location as a purely cost-saving exercise.  Like pretty much any activity where the aim is just cost-saving, it can end up costing an organisation heavily when things do not turn out as hoped.  However, Quocirca does find that when a choice is made for the right reasons - for example, that the chosen direction is something that the organisation could not do directly itself, or that the chosen supplier has expertise that would be difficult or even impossible for the organisation to source and maintain itself - then the end result is generally very cost effective.

The use of co-location should be a strategically-planned activity.  Due diligence is a necessity - yet for those where it is the first time of looking at such an approach, it is difficult to know what questions need to be asked and what responses should be required.

To help those who are interested in looking at co-location, Quocirca has developed a set of questions and reasons why these are important in conjunction with Datum Datacentres Ltd.  The paper is downloadable free of charge here: http://www.datum.co.uk/data-centre-co-location-wp/


Masters of Machines: turning machine data into operational intelligence

Bob Tarzey | No Comments
| More

700 million - that's a sizeable number; 2 billion is bigger still. The first is an estimate of the number of items of machine data generated by the commercial transactions undertaken in the average European enterprise during a year. The second figure is the equivalent for a telco. IT-generated machine data is all the background information produced by the systems that drive such transactions: database logs, network flow data, web click-stream data and so on.

Such data, enriched with more data from other sources, is a potential gold mine. How to use all this data effectively, turning it into operational intelligence, is the subject of a new Quocirca research report, Masters of Machines. The report shows the extent to which European organisations are turning machine data into operational intelligence.

The numbers involved are big, so processing machine data involves volume. In fact it fits the "5 Vs" definition of big data well. V for volume, as described above; another V is for variety - the range of sources, with their wide variety of formats. If machine data can be used in near real time, that gives V for velocity, and it can add lots of V for value to operational decision making. All of which gets an organisation closer to the truth about what is happening behind the scenes on its IT systems - V for veracity; machine data is what it is, and you cannot hide from the facts that mining it can expose.

Typically, operational intelligence has been used by IT departments to search and investigate what is going on across their IT systems; over 80% already use it in this way. More advanced organisations use the data for proactive monitoring of their IT systems, some achieving levels of operational visibility that were not possible before. The most advanced are providing real-time business insights derived from machine data.

To provide commercial insight, the most advanced users of operational intelligence are making it available beyond IT management. 85% of businesses provide a view to IT managers, whereas only 62% currently get a view through to board level execs. In both cases, many recognise a need to improve the view provided. 91% of the most advanced users of operational intelligence are providing a board level view compared to just 3% of the least advanced.

Although there is broad agreement around the value of operational intelligence and the need to open it up to a wide range of management, most are relying on tools that are not designed for the job. These include traditional business intelligence tools and spreadsheets; the latter were certainly not designed to process billions of items of data from a multitude of sources. 27% say they are using purpose built tools for processing machine data to provide operational intelligence. The organisations using such tools gather more data in the first place and will find it easier to share it in a meaningful manner across their organisation.

Quocirca's research was commissioned by Splunk, which provides a software platform for real-time operational intelligence. Quocirca and Splunk will be discussing the report and its findings at a webinar on April 3rd 2014. Find out more and register HERE.

 

The mobile printing challenge

Louella Fernandes | No Comments
| More

Mobile devices are transforming business productivity. For many, the workplace is no longer defined by the traditional office; employees are now accessing corporate applications, data and services from multiple devices and locations every day. With a highly mobile workforce, organisations need to ensure employees have the same access to corporate applications as they would from the desktop, while protecting sensitive data. One area in need of better control, which has yet to catch up with the desktop experience, is printing.

Most businesses are reliant on printing to a certain extent, and although print volumes are flat to declining, there is still a need to provide easy access to printing for mobile workers. This could be simply being able to send a print job wirelessly to an office printer from a smartphone, sending a print job in advance to an office printer while on the road or allowing guest visitors to print securely from their mobile devices.

Whilst the explosion in the variety of smartphones and tablets used in the workplace is boosting productivity, enabling mobile printing across multiple platforms and printers can prove a real IT headache. Mobility has shifted control of IT from the IT department to the users. In the past the IT department (IT) would usually have complete control of the print infrastructure, managing the installation and deployment of printer drivers. Now, users may be installing their own printer apps, without IT's knowledge and often expecting support for mobile printing from previously unsupported devices. Consequently IT is grasping for any available options that ensure mobile printing is controlled, reliable and secure.

Essentially, there are several ways to print directly from a smartphone or tablet device:

  • Integrated Mobile OS support. This native printing capability most closely matches the "File > Print" Windows desktop user experience. Apple's AirPrint, for instance, is built into the OS, making it easy to print to a supported printer or MFP.  Although AirPrint is a good tool for local network printing, Bonjour, Apple's automatic printer discovery technology, is normally confined to a single subnet, so does not discover printers across broader networks. Various products, including Breezy, PrinterOn and EFI PrintMe Mobile, offer automatic printer discovery for AirPrint as well as for the Android platform, via an app.
  • Email Attachment. This is a basic approach of sending a document attachment, for instance a PDF, JPG, TIFF, BMP or Microsoft Office file, to an email address associated with a specific printer/MFP or print queue (see the sketch after this list). While this works for any mobile OS, this approach lacks controls for printing options such as number of pages, duplex, colour and multiple copies. Unless it is integrated with a print management application, there is also no way of tracking print usage.
  • Mobile Print Apps. Many print vendors have their own printer apps which allow direct printing to compatible printers or MFPs on a wireless local-area network.  Mobile print apps can also take full advantage of printer options, so offer more control than printing via an email attachment.
  • Google CloudPrint. This enables printing over the web via Gmail and Google Docs to supported Google Cloud Print Ready printers.  In addition, EFI PrintMe Mobile now offers a Chrome extension that allows direct Wi-Fi printing from Google Docs to any MFP. As above, in order to track and secure printing via a mobile app, integration with a print management tool is necessary.
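Of the approaches above, the email attachment route is the simplest to illustrate: the device simply mails the document to the address associated with a printer or print queue. The Python sketch below shows the idea using the standard library; the printer address and SMTP host are assumptions, and, as noted, nothing here controls finishing options or tracks usage.

# Minimal sketch of the email-attachment approach: send a PDF to the address associated
# with a printer or print queue. The addresses and SMTP host are illustrative assumptions.
import smtplib
from email.message import EmailMessage
from pathlib import Path

PRINTER_ADDRESS = "print-queue@example.com"   # assumption: address mapped to an office MFP
SMTP_HOST = "smtp.example.com"                # assumption: corporate mail relay

def email_to_printer(pdf_path, sender):
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = PRINTER_ADDRESS
    msg["Subject"] = f"Print job: {Path(pdf_path).name}"
    msg.add_attachment(Path(pdf_path).read_bytes(),
                       maintype="application", subtype="pdf",
                       filename=Path(pdf_path).name)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

# Note the limitation flagged above: nothing here controls duplex, colour or copies,
# and nothing is tracked unless a print management system sits behind the queue address.
email_to_printer("boarding-pass.pdf", "traveller@example.com")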

In response to the lack of standards around mobile printing, the Mopria alliance was established in late 2013 by Canon, HP, Samsung and Xerox. In February 2014, other vendors including Adobe, Brother, Epson and Konica Minolta also joined the alliance. Mopria aims to align standards that make printing compatible from any mobile device to any printer. Initially support is focused on sending print jobs over Wi-Fi connections or "tap-to-print" through near-field communications (NFC). Conspicuous by its absence currently is Apple, which has bypassed NFC in its new iPhones in favour of iBeacon technology, which is based on Bluetooth Low Energy (BLE) and has a much longer range than NFC (tens of metres versus a tenth of a metre).

While most printer manufacturers offer a range of solutions based on the above approaches, third party solutions have emerged that offer a one-size-fits-all approach across mobile platforms and printer brands. These include EFI PrintMe Mobile, EveryonePrint and PrinterOn. Given these diverse choices, businesses need to carefully evaluate the available options and determine which features and benefits are important. For instance, while smaller businesses with a standardised printer fleet may find mobile print apps sufficient for their needs, larger businesses with a mixed fleet (of both mobile OSes and MFPs) should consider integration with brand-agnostic secure printing solutions.

Secure printing, through third party products such as Nuance Equitrac or Ringdale FollowMe, is an effective approach for larger mixed fleet environments.  When a user prints, the job is held in a server queue until it is released at the printer or MFP following user authentication (ID badge, or username and password).  This offers a range of benefits, including an audit trail of what is being printed, and it eliminates paper waste, as documents are not left uncollected in output trays, which in turn reduces the chance of sensitive documents being picked up by the wrong recipient.
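Under the covers, this pull-printing model is essentially a held queue keyed by user, with jobs released only when that user authenticates at a device. The Python sketch below illustrates the data structure; the badge IDs, user names and jobs are invented, and real products such as Equitrac or FollowMe do far more than this.

# Minimal sketch of a pull-printing ("follow-me") queue: jobs are held per user and only
# released once that user authenticates at a device. Users, badges and jobs are invented.
from collections import defaultdict

BADGE_TO_USER = {"04A1B2": "lfernandes"}      # assumption: badge ID -> user mapping

class PullPrintQueue:
    def __init__(self):
        self._held = defaultdict(list)        # user -> list of held job names
        self.audit_log = []                   # who released what, and where

    def submit(self, user, job_name):
        self._held[user].append(job_name)     # nothing is printed yet

    def release(self, badge_id, device):
        user = BADGE_TO_USER.get(badge_id)
        if user is None:
            return []                         # unknown badge: nothing leaves the queue
        jobs, self._held[user] = self._held[user], []
        self.audit_log.extend((user, job, device) for job in jobs)
        return jobs                           # these would now be sent to the device

queue = PullPrintQueue()
queue.submit("lfernandes", "contract-draft.pdf")
print(queue.release("04A1B2", "MFP-3rd-floor"))   # ['contract-draft.pdf']
print(queue.audit_log)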

With many IT departments already stretched, they may struggle to keep up with the demand to support printing across the new types of mobile devices being introduced, not to mention the new wave of connected smart MFPs. Many businesses are turning to managed print service (MPS) providers to handle the management of their print infrastructure. More MPS contracts now encompass mobile print, and handing this over to an expert third party can minimise the drain on IT resources that mobile print could incur.

The IT department cannot afford a half-hearted mobile print strategy. With the right approach, mobile productivity can be boosted while security risks are managed.  With BYOD showing no signs of abating, businesses need to act fast and get smart about managing and securing mobile print.

