Stepping up to the cloud

Rob Bamforth

The concept of services provided from 'out there' over the network is often portrayed as what cloud is all about. Indeed, the commonly accepted definition of cloud is that it offers infrastructure, platform or software as a service. It might be a public cloud provider offering services to anyone, an internal service from the organisation's own data centre, or a hybrid of the two. There might often be reasons to start out exclusively with either public or private cloud, but the pragmatic approach will generally be balanced between the two, with an accompanying agility to allow for the movement of workloads.

A key reason behind cloud is flexibility. With a virtualised core, any operating platform or system can be provided on demand with a pay-as-you-use public commercial offering or as an adaptable base for a private cloud model.

It is this lowering of the total cost of ownership that is often an important driver for those considering adopting cloud. However, cost saving alone is rarely the whole picture, as organisations are also looking to drive efficiency and new working practices, and here cloud at the core is complementary to the adoption of mobile at the edge.

There are also drivers offering opportunities to move into new services, such as extending access to core infrastructure to external users - again, often for business efficiency - or gaining access to new market sectors, as well as adding functionality or extending applications.

The cloud model also permits experimentation: not only technical proofs of concept, but also business trials to, for example, extend into new territories or markets, or to try out alternative business models. This is a prime case for public cloud, to minimise the impact on internal resources.

A public cloud model has an impressively low upfront commitment or investment, and that leads to several strong use cases in addition to the ability to trial or experiment with new ideas: for example, being able to deal with planned and unplanned peaks or dips in capacity, as well as offering lower cost redundancy and software testing. Taking advantage of this will, however, require investment in existing internal systems to architect them to take full advantage of the external resources on offer.

Many organisations will have known peak times at year or quarter end, or at other particular stages. Owning the extra capacity required to handle these peaks will rarely be cost effective, so the ability to call upon public cloud resources is very valuable. Unplanned changes in demand could also occur on the back of an unexpected success that requires a rapid ramp-up of capability, perhaps only for a short duration.
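This bursting pattern boils down to a simple capacity calculation: serve what you can from owned infrastructure and rent public cloud capacity only for the overflow. The Python sketch below illustrates the idea; the capacity figures and instance size are hypothetical assumptions, not any real provider's numbers.

```python
# Illustrative sketch of a cloud-bursting capacity decision.
# All figures (owned capacity, instance size) are hypothetical.

OWNED_CAPACITY = 10_000      # requests/min the private data centre can serve
INSTANCE_CAPACITY = 500      # requests/min one hypothetical public cloud instance serves

def burst_instances(expected_demand: int) -> int:
    """Return how many public cloud instances to rent for a demand peak.

    Owned capacity is used first; only the overflow is pushed to
    pay-as-you-use public cloud, and nothing is rented in quiet periods.
    """
    overflow = expected_demand - OWNED_CAPACITY
    if overflow <= 0:
        return 0             # private capacity is enough - no rental cost
    # Round up: a partial instance still has to be paid for in full.
    return -(-overflow // INSTANCE_CAPACITY)

# Quarter-end peak: demand far exceeds owned capacity.
print(burst_instances(14_200))   # overflow of 4,200 -> 9 instances
print(burst_instances(8_000))    # within owned capacity -> 0
```

The point of the sketch is the asymmetry: in quiet periods the public cloud cost is zero, while a peak is paid for only as long as it lasts.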

Rare use of additional capability is also the reason why on-demand public cloud services are ideal as a failover platform. Owning and maintaining an entire duplicate system for business continuity reasons is an unnecessary expense if a public cloud provider can be used instead, scaled up only when actually required in the event of a failure of primary systems.

Private cloud, on the other hand, is sometimes viewed more sceptically from a cost-benefit perspective, but it is here that investment in architecting a cloud-like core and transforming the data centre can also pay dividends over time. Not only is it a prerequisite foundation for the fully flexible hybrid cloud model, where workloads can be moved from private to public data centre capacity as demand arises, but it also encourages a more effective internal IT infrastructure, capitalising on server virtualisation and therefore more efficient pooling of resources.

Putting aside sufficient resources and finding the right skills is difficult with any technology investment, and cloud is no different. It is these resourcing issues that often hold back cloud projects, rather than security per se.

Even those with an enthusiastic attitude to cloud adoption recognise this; despite believing they have many of the skills required, they understand there is a need for significant investment in resources to make safe and secure use of the potential that cloud brings.

The cloud model of service delivery to an increasingly diverse and often mobile edge offers huge flexibility and agility advantages in both public and private capabilities to support workloads. As with mobile, this is a difficult task to perform incrementally since it requires investment and a re-architecting at the heart of IT, but with the right strategy it ultimately reduces both costs and risks while delivering greater capability for IT to provide increased value to the business.

There are often incremental improvements as technology advances - the oft-quoted marketing mantra of smaller, faster, more. However, every so often there are bigger changes that involve a different structure or way of thinking about what has been built before. There may be a need to rip up old systems and throw some things out - although with planned migrations this can be minimised - and there will certainly be a need to invest in something new. 

Rather than always trying to take this route in a gradual fashion, some innovations like cloud imply a step change in thinking in order to take full advantage of the opportunities offered. The diversity of access to IT in an increasingly small, smart and mobile fashion, coupled with a cloud-based core using flexible service provision, is one such instance, where the change is significant and needs serious attention and investment to maximise its value.

For some thoughts on how to finance the changes required to address an enterprise cloud strategy as a whole, download this free report.


Avoiding a piecemeal mobile strategy

Rob Bamforth

It is easy to see that the world is 'going mobile', from smartphones and tablets to radical innovations such as wearable technologies and the highly connected internet of things. The impact on consumers is wide-ranging and fast-changing, but despite this, some businesses seem to think that this is a phenomenon they can take their time to evaluate, to see how it 'pans out'. Or their IT departments think it will be OK to slowly edge towards decisions, perhaps by dealing with those shouting loudest (often senior executives) first.

This would be a mistake.

While it is true to say that early mobile adoption was often the domain of the 'pink collar' executives (so termed because they might buy their shirts at Thomas Pink in London or New York) with devices such as the business-like BlackBerry, mobile usage, acceptance and even eagerness has since spread to all job roles through a multitude of desirable smartphones and tablets from Apple, Samsung and others.

This led to the trend of bring your own device (BYOD) among many employees, but rather than reducing company expenditure by eliminating the need for the organisation to purchase hardware, it transfers investment demands into other areas, particularly IT management and security.

In previous deployments of mobile technology, select individuals could be given, for example, a corporate laptop, and costs could be foreseen and planned through incremental increases in the number of employees to whom laptops were deployed.

In the modern mobile world the challenge rapidly expands to include all employees and all devices. Some organisations are trying to contain the growth of mobile access, but employees are aware of technology options through their consumer use, and often find the technology they have at home better than that in the workplace. Clearly this is neither efficient nor effective, so organisations need to embrace the mass adoption of mobile and apply the right resources to make it safe, secure and sustainable.

Acceptance of user choice, either by permitting a subset of BYO devices or by the organisation buying devices more in line with employee preferences, essentially means that almost any type of device will need to be supported and managed. This means investment in a mobile device management (MDM) system, ideally closely related to desktop management tools, but with further capabilities to deal with mobile-specific issues such as cellular and public Wi-Fi network access and airtime contracts.

The risk of loss or theft of mobile devices is not insignificant, although it is somewhat reduced when personal choices are exercised or the device belongs to the individual. Insurance can cover hardware costs, but the loss of working time and data is more significant. A suitably sophisticated MDM will make it simple not only to apply uniform protection and configuration controls to the complete fleet of devices, but also to quickly re-instate a new device with the setup of one that has been lost or stolen.

However just protecting the hardware is no longer sufficient, especially as employees expect to be able to use mobile devices for personal as well as business purposes. Organisations must also take a strong interest in mobile applications and apply suitable levels of security to both what apps are on the devices and what corporate applications and data can be created, used or accessed on the move.

It might be that the best way to serve a large fleet of mobile users in mixed roles is to put in place a corporate app store, with some way for employees to self-provision access to central services and applications, and with appropriate controls to ensure that employees can only obtain the applications they are permitted.
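Such self-provisioning controls amount to an entitlement check between an employee's role and the app catalogue. The following Python sketch is a deliberately minimal, hypothetical model of that check; real MDM and corporate app store products implement it with directory groups and policy engines rather than in-memory dictionaries.

```python
# Minimal sketch of role-based self-provisioning for a corporate app store.
# The roles, apps and catalogue below are hypothetical examples.

CATALOGUE = {
    "email":        {"sales", "engineering", "finance"},  # roles entitled to each app
    "crm":          {"sales"},
    "expense-tool": {"sales", "engineering", "finance"},
    "ledger":       {"finance"},
}

def can_provision(role: str, app: str) -> bool:
    """True if an employee in `role` may self-provision `app`."""
    return role in CATALOGUE.get(app, set())

def visible_apps(role: str) -> list:
    """Apps the store should even show to this employee."""
    return sorted(app for app, roles in CATALOGUE.items() if role in roles)

print(visible_apps("sales"))                   # ['crm', 'email', 'expense-tool']
print(can_provision("engineering", "ledger"))  # False - not entitled
```

The design point is that entitlement lives in one central catalogue, so adding a role or app changes policy in a single place rather than on every device.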

To get enough future flexibility to bring the best out of mobile working, this type of strategy and support model needs to be put in place today, rather than enduring the chipping away of IT support resources by the ad hoc creep of a diversity of mobile devices, users and applications.

Getting it right might require more investment upfront, but it will save time and money over time and ensure that employees are as productive as possible from the outset.

In addition to controlling who has access to what apps and on what devices, the vulnerability and integrity of data accessed and used on the move needs to be assessed and securely managed.

This is the issue most likely to be keeping IT managers awake at night, and no wonder. There have been plenty of high profile losses of data, and mobile makes the task of data security much harder. There are software tools that will protect against data loss and leakage, as well as applying digital rights management. In addition to the technology, this also requires one vital ingredient that is frequently overlooked - training. All too often only simple 'how to use' training is put in place, but mobile technologies encourage such a dynamic shift in work practices that employees would benefit greatly from coaching in how to safely and securely get the best out of the tools at their disposal.

Effective mobility benefits from a wide strategy that encompasses productivity and training. It is thus not a piecemeal approach, but all embracing. It doesn't necessarily mean providing everyone with the latest gadget from California, but it does mean having a way to cope with managing a portfolio of technologies, dealing with complexity, and requiring a step change from thinking small (focusing on the devices) to thinking big (how it changes the business).

Overall, mobile brings huge benefits but significant changes to organisations, and the old model of small proofs of concept and slow rollout is no longer valid. Employees are well aware of what technology is available and want to participate in selecting what works best for them as individuals. However, the collective needs of the organisation mean that controls need to be put in place, and IT departments need a strategy for the safe management of devices, apps and, most critically, data used by employees on the move.

For some thoughts on how to finance the changes required to address an enterprise mobile strategy as a whole, download this free report.


Samsung's bold ambitions to transform office print

Louella Fernandes

At its recent press event in Monaco, Samsung outlined its latest plans to expand its foothold in the enterprise printing market. As a company, Samsung is already in the midst of looking for new revenue streams for growth, as focus shifts from its consumer business and it looks to better serve larger businesses and deepen its relationship with the enterprise customer. 

In the printer market, Samsung is banking on its new "Office Transformer" A3 multi-function printer MX7 as key to establishing a stronger presence in the enterprise, offering a credible alternative to established competitors such as Canon, Xerox and Ricoh.

Samsung's target is to be a tier 1 manufacturer of A3 MFPs by 2017. This is a bold ambition given the competitive space - however, its OEM agreements with vendors such as Xerox mean that as a manufacturer it already has a strong presence in the market - its aim is to now further develop its presence in the enterprise market under the Samsung brand.

The MX7 is designed for heavy-duty work environments and is capable of printing up to 300,000 pages a month, and works with large toner cartridges that give users up to 30,000 pages of colour or 45,000 pages in mono. Other add-ons include an automated document stapler capable of stapling 60 sheets of paper or an 80-page booklet.

In particular, Samsung is targeting high-print volume markets such as professional service organisations, financial institutions and government organisations with its latest MX7 series products. Certainly, the Samsung MX7 series has strong credentials to help propel Samsung into the enterprise space.

 Highlights include:

  • Fast processing power. Today, the Samsung MX7 is the only A3 MFP on the market powered by a quad-core CPU. The 1.5 GHz quad-core CPU enables faster processing speeds than a dual-core CPU.  This is combined with 1,200 dpi high resolution quality output. Along with a dual-scan document feeder which gives up to 120 single-sided images per minute, and up to 240 double-sided images, these products are an ideal choice for high speed document capture.
  • Smart User Interface. Samsung X7600 MFPs also boast the industry's first Android-based printer user interface. The Samsung SMART UX Center 2.0 functions just like a tablet with a touch-to-print display screen that can be pivoted to get true document views.
  • Downloadable Apps. Samsung's new Printing App Center enables users to set up printers by downloading essential apps from the app centre's web portals. This includes the Workbook Composer which gives users the ability to crop desired content and scan and save without the need for a PC.
  • Secure mobile printing. Samsung Cloud Print uses Samsung's private cloud which can be enhanced with its mobile device management (MDM) solution for full integration with enterprise mobility. Also offered is a wireless option with active near field communication (NFC) which enables printing, scanning or faxing of documents from any NFC-supported mobile device.
  • Customised solutions. Samsung's eXtensible Open Architecture (XOA) provides customised enterprise solutions integrated with its MFPs, such as output management, document security and document management solutions.
  • Samsung Smart Service. A key differentiator is Samsung's Smart Printer Diagnostic System (SPDS) which aims to reduce time spent on maintenance. SPDS is a mobile application which provides a technical service system for service engineers. It can also guide others with little technical knowledge and experience to fix a printer issue without the need to call the technical support engineer and incur cost.

Quocirca believes that Samsung is well positioned to grow its presence in the enterprise space with its latest models. Its sweet spot is likely to be the entry level space rather than competing head to head with its more entrenched competitors.  Samsung is wisely focusing on expanding its solutions and services capabilities to gain further traction in the enterprise market.

 Although this is not the first time Samsung has talked of a move to grow its enterprise presence, it now seems more energised with some real focus and clear strategies to achieve its goals. Perhaps the biggest challenge is to be seen as a credible player in the managed print services (MPS) space where it is late to the market. Here, Samsung needs to move quickly to establish a presence, and will need to leverage partnerships rather than building an infrastructure from the ground up. Certainly MPS could be its strongest weapon to grab more share in the coveted enterprise market.

Getting value from video conferencing

Rob Bamforth

Despite advances in technology often bringing business costs down, IT investment always requires justification. With communications in particular, the challenge is tougher as there are knock-on costs, such as further investment being required in infrastructure to support the changes, or significant impact on user behaviour that requires training and perhaps updated HR policies.

Video conferencing is a case in point. Businesses may well believe that the value not only from reducing travel or benefitting the environment, but also from improved productivity and responsiveness to customers, is worth it. But they will still need to be sure that they are taking the right investment decisions, especially when they start out on a new installation.

Quocirca's 2014 worldwide research project, surveying over 800 current business video conferencing users, makes it clear that while most companies believe they have been getting good value from their investment in video, it still has to be regularly justified. In an age where many believe consumer technology is 'good enough', making this justification at the start of the project is even harder. I asked Roger Farnsworth, a senior director of services at Polycom, the sponsor of the research, what he hears on a daily basis about the value of video conferencing from talking to those who are starting out on the journey.


Rob: Video collaboration solutions are expensive compared to some well-known free tools - where does the extra value come from?

Roger: Generally it boils down to three things - quality, security and choice. Most organisations wouldn't consider free security tools or phone systems, and it's for the same reason that they should invest in a video conferencing system. The quality of free, web-based software is often inferior to the full HD video you get from a specialist like Polycom.  Investment in a specialised, more comprehensive solution delivers better audio and video quality that enhances the host company's brand.

Compliance is also a major consideration for enterprises; many are legally obliged to conform to data protection and privacy regulations. Paid-for systems aid them in this.

A dedicated video collaboration solution also allows for better integration into your specific workflows.  This is partly because of its integration with standard enterprise tools such as Microsoft Lync and also because it can be customised to suit your specific needs.


According to the research, the quality of the overall experience is an important factor for boosting adoption of video, and thus gaining greater overall benefits. Some of this was expected to come from having a more reliable system and improving infrastructure such as network availability, but higher definition video was also seen as important.  Video experiences do not all have to be high end immersive telepresence, but decent quality does play a significant part in making employees more comfortable with using video.

Many employees will have experienced some challenges using early video systems or will have heard stories about problems in the past from colleagues. In an organisation that is either installing video for the first time, or extending existing systems to be used more widely, this 'video folklore' or perception of problems will not help adoption.

When Quocirca dug deeper into the research and talked directly to installers of video systems, it became clear that many are not doing enough after making the purchase decision to get the best out of their installation. This is not helpful and can result in reinforcing negative perceptions about using video in the workplace, or denting the confidence of employees so that they only use video conferencing if there is someone on hand to provide assistance or set up the communications for them.


Rob: What can be done to ensure new video collaboration customers get off to an effective start?

Roger: There are several simple steps that an organisation can follow to ensure the smoothest possible roll out of video collaboration. The most important is thinking how video is actually going to address the business challenges and needs and then anticipating how it will fit in to the end users' daily routine. Video that is integrated into workflows will be much more rapidly adopted than a system that doesn't seem contextually relevant.

The second step is to prepare end users for what's coming to make sure they are comfortable with the process and ready to engage.  Think about the user profile and pick the methods best suited to them. For example, your digital natives and millennials will be happy to watch YouTube videos and tweet their questions to your support desk, but baby boomers might prefer a more personal and formal approach such as webinars, online tutorials and physical workshops.  It's key that the users know what to expect and do not become concerned or nervous about this being a tool for them to use in the future.

Lastly, remember you only get one chance to make a first impression. Users should find collaboration tools easy to use wherever they are working. People who have an experience that is simple, with clear menu options and error codes, quick and reliable connections, and who get a satisfactory audio and video experience the first time they try are much more likely to become return users. Ensure that your users have a positive and quality experience first time and every time.


It is quite easy to look at consumer usage of video conferencing and think it will translate directly into straightforward use in the workplace, but this is rarely the case. While regular consumer usage builds awareness and familiarity, it is not sufficient for the rigorous challenges of the workplace. Things do not only need to be easy to use, they have to be reliable and build confidence that they will portray a professional image.

Partly this is down to the conferencing and collaboration tools and how well the infrastructure supports them, as well as how conducive the overall workplace is for video use. Some of these factors are environmental and need to be put in place to provide the right settings, easy mechanisms for establishing calls and so on. However, some factors are personal. Pro-active training and facilitation from the outset will help establish confidence, and this can be further developed with increasing awareness of the value and management commitment to video usage - fostering a positive culture of video adoption.

It is a significant investment, so it would seem foolish to do anything other than take it seriously and ensure that everybody in the organisation gets the best out of it. To read more about video adoption, download this free report.


Should everybody be on video?

Rob Bamforth

Video conferencing has many times been presented as THE solution for many business communications challenges, and yet beyond walnut veneer boardrooms and despite the wider usage of video in personal communications, business adoption often seems tantalisingly muted.

Having embarked on a worldwide research project surveying over 800 current business video conferencing users, Quocirca found several usage patterns emerged, but some burning questions remained, which I put to Roger Farnsworth, a senior director of services at Polycom, the sponsor of the research.

The first question concerns the matter of how many working hours are taken up with what seem to be endless pointless meetings - a point which the vendor agrees on, suggesting how video could change this working pattern.


Rob: Many employees feel that they have too many meetings already - isn't video collaboration just a way to hold meetings remotely?

Roger: "It's not that employees have too many meetings; that's a function of business culture. Organisations still have to be smart about time management; however, video collaboration can make necessary meetings more productive. In the UK alone, time wasted being unproductive in meetings is estimated to cost the economy £26 billion every year. That's because, of the 4 hours a week the average worker spends in meetings, 2 hours 39 minutes is wasted. This is down to travel time, waiting for rooms when the previous meeting runs over, waiting for latecomers etc.  Workers can be more productive when they don't have to physically go to a meeting room and wait for a meeting to start. When dialling into a meeting room from your desk you can continue to work right up until the moment the meeting starts.

Video as a medium also speeds up the meeting process. Essentially, meetings are a way to reach consensus on issues and make decisions. In our recent research, more than 80 percent of those using video collaboration said they experience faster decision making. The ability to launch a group video collaboration anytime, anywhere means no more long, convoluted email trails as a preamble to a lengthy meeting. And of course both remote and external participants can join easily, so that the group can be effective and efficient. Video collaboration promotes smarter and faster decision-making."


It is clear that for video to effectively change the way people work, share information and make decisions, more people - in fact, pretty much all employees - would need to be using it. The reality is that in many organisations, video conferencing usage exists only in pockets: either the walnut veneered boardroom, certain team meeting rooms or on privileged desktops. This seems oddly restrictive when so many have become so accustomed to advanced communications, including video, as consumers.

However, the research also indicated that some organisations had a much more progressive attitude than others. In these, video conferencing usage had become accepted, normalised like using the phone and very widely adopted. So what makes them different?


Rob: What do you think are the characteristics of an adoptive video culture?

Roger: "Organisations with a high percentage of digital natives and millennials will see a video culture develop rapidly. This is because these workers are more used to using video in their personal lives, with consumer solutions such as Skype and FaceTime.

However, there are other key factors. Organisations that are constantly revisiting process and policy in pursuit of improvement adapt and evolve more quickly. Those organisations where IT is a more active participant at C-level will see video collaboration integrated into business processes and therefore adopted quickly too. Having IT advise lines of business leaders on integration of video collaboration drives adoption from the top down.

In order to foster a bottom-up movement in terms of video adoption it's important to develop a more democratic work environment where employees feel empowered to run with the tools provided. This means making video for all, not just managers. Dissolving hierarchical access limitations is absolutely essential."


The research backs up these comments. The video conferencing industry has progressed through several stages of evolution, with the perceptions from some earlier hang-ups lingering a little longer than necessary. The technology needed to mature to become easier to use and more reliable, and networks needed to grow in capacity to support higher definition video. This has largely happened, with perhaps the odd rough edge in usability still needing some polish.

The next steps involve non-technical challenges such as social, psychological and political (the internal politics of management) acceptance. These influence the culture in the workplace and attitudes to how people communicate. Getting them right will bring greater adoption and should lead to the intended goal - more effective communication. To read more about video adoption, download this free report.



DevOps and the IT platform

Clive Longbottom

Historically, the development team has had a bit of a two-edged sword when it comes to their development environment.  It has tended to be separate from the production environment, so they can do whatever they want without any risk to operational systems.  The network tends to have been pretty self-enclosed as well, so they get super-fast speeds while they are working.  However, those positives are also negatives, as they then find that what had worked so blazingly fast in the development environment fails in the user experience stakes in the production environment due to slower servers, storage and networks.

On top of that, the development infrastructure has its own issues.  Provisioning development environments can take a long time, even where golden images are being used for the base versions.  Tearing down these environments after a development cycle is not as easy as it could be.  Declarative systems, such as the open source Puppet, allow for scripts to be written that can set up environments in a more automated manner, but this still leaves a lot to be desired.

Physically configuring hardware and software environments, even with the help of automation like Puppet, still leaves the problem of getting hold of the right data.  Making physical copies of a database at a single point in time or taking subsets as a form of pseudodata does not address the central issue.  In neither case is the data a true reflection of the current real world production data - and the results from the development and test environments cannot, therefore, be guaranteed to be the same when it is pushed into production. 

Trying to continue with inconsistent data between development, test and production environments will be slow and costly.  The common workaround - taking full database production copies and regularly refreshing them - is a lengthy process that will also affect the performance of the operational network itself. Organisations are particularly struggling with continuous integration, where an application requires data from multiple production databases (e.g. Oracle, Sybase and SQL Server).  As developers move towards looking at big data for their organisation, the problem gets worse: multiple different databases and data types (for example, Hadoop and NoSQL sources alongside existing SQL sources) may need to be used at the same time, and bringing these together as distinct copies across three different environments is just not viable.

Continuous development, integration and delivery require systems that are adaptable and are fast to set up and tear down. Existing approaches make agile development difficult, requiring cascade processes that take too much time and involve too many iterations to fulfil the business' needs for continuous delivery.

What is needed is an infrastructure that bridges that gap between the different environments.  Server and storage virtualisation does this at the hardware level, and virtual machines and container mechanisms such as Docker allow for fast stand up of development and test environments within the greater IT platform.  However, there still remains the issue of the data.

Creating an effective data environment requires the capability to use fresh data - without an adverse impact on overall storage needs or on the time required to set up and tear down environments. The common approach of database snapshots, clones or subset copies involves too many compromises and costs: a new approach, abstracting the database into a virtual environment, is needed.
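The copy-on-write idea underpinning such database abstraction can be illustrated with a toy sketch (a minimal Python model for illustration only, not any vendor's implementation): a virtual copy shares the unchanged data of the production source and stores only its own deltas, so each copy costs almost no extra storage and can be reset instantly.

```python
class VirtualCopy:
    """Toy copy-on-write view of a dataset: shared base, private deltas."""

    def __init__(self, base):
        self._base = base    # shared, read-only production data
        self._delta = {}     # only changed keys consume extra space

    def read(self, key):
        # Reads fall through to the base unless locally overwritten
        return self._delta.get(key, self._base.get(key))

    def write(self, key, value):
        # Writes never touch the base - each tester's copy is isolated
        self._delta[key] = value

    def reset(self):
        # Tearing down or refreshing the environment is instant and cheap
        self._delta.clear()


production = {"cust_1": "Alice", "cust_2": "Bob"}
dev = VirtualCopy(production)
dev.write("cust_1", "TEST-USER")
print(dev.read("cust_1"))      # the dev copy sees its own change
print(production["cust_1"])    # production data is untouched
```

Real products operate at the storage-block level rather than on keys, but the economics are the same: many full-size "copies" for the storage price of one, plus the changes.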

The data virtualisation technology I've been looking at from start-up Delphix does just this. It can create a byte-for-byte, full-size virtual "copy" of a database in minutes, using near-live data and requiring barely any extra storage. The data created for development and testing can be refreshed or reset at any point and then deleted once the stage is over. Suddenly, each developer or tester can have their own environment without any impact on the infrastructure.

By embracing DevOps and data virtualisation, everyone wins. Developers and testers get to spin up environments that exactly represent the real world; DBAs can spend more time on complex tasks that add distinct business value rather than creating routine copies; sysadmins no longer have to struggle to support the infrastructure for multiple IT dev/test environments; network admins can sleep easy knowing that huge copies of data are no longer being shuttled back and forth; and storage admins don't have their capacity drained by pointless copies of the same data.

More to the point, the business gets what it wants - fast, continuous delivery of new functionality, enabling it to compete far more strongly in its market. All without having to invest in large amounts of extra hardware and time, and without the end result being compromised.


Disclosure: Delphix is a Quocirca client

How managed print services accelerate business process digitisation

Louella Fernandes | No Comments
| More

Despite the ongoing transition from paper to digital workflows, many organisations are struggling to integrate paper and digital information. A recent Quocirca study of 210 organisations revealed that 40% plan to increase their spending on workflow automation, but that there is also much progress to be made - and it is primarily those that use a managed print service (MPS) that are most confident in their digitisation initiatives.

Organisations remain reliant on printing. Quocirca's research reveals that overall, 30% of organisations view printing as critical to their business processes - this rises to 73% for financial services, followed by 41% for the public sector.  Given the financial and environmental implications of a continued dependence on printing - not to mention the inherent inefficiencies with paper-based processes - 72% of organisations indicated that they are planning to increase their digitisation efforts, and many are using their MPS providers to support this transition.

Today, MPS has moved beyond the realms of hardware consolidation to encompass a much broader strategy to drive business efficiency around paper-dependent processes. With cost remaining a top driver for most MPS engagements, many MPS customers are seeing significant cost reductions not only through a rationalised printer fleet, but also through the implementation of solutions that reduce or eliminate wasteful printing and better integrate document workflows.

Certainly, MPS is proving an effective approach to the digitisation of business processes.  At a foundation level, this could be through harnessing the sophistication of today's advanced multifunction peripherals (MFPs) which enable documents to be scanned and routed directly to applications (think expense reporting or HR applications), minimising the need for multiple hard copies.

Beyond this, many leading MPS providers offer a range of business process services (BPS) as an extension of their MPS offerings. These services are increasingly sophisticated, analysing existing business process workflows for optimisation opportunities to reduce the paper burden - for instance in areas such as mortgage or loan origination, or accounts payable and receivable applications.
Despite the clear need to better integrate paper and digital workflows, Quocirca's study revealed that overall, only 29% of organisations believe they are effective or very effective at integrating paper and digital workflows. However, there is a stark difference between organisations using and not using MPS. While only 9% of organisations not using MPS rated their ability to integrate paper and digital workflows as effective or very effective, this rose to 51% for those using MPS.  Quocirca expects this figure to climb over the next year as more organisations move further along their MPS journey and begin the implementation of document workflow tools and business process optimisation.


So MPS is certainly making an impact on digitisation efforts, and confidence is most prevalent in the financial services and professional services sectors. Due to legal and security needs, these organisations have made the most headway in eliminating or minimising their paper dependencies and tend to be the most mature in their adoption of MPS. At the other end of the scale is the public sector - despite a huge dependence on paper, it is still behind the curve on digitisation efforts, reflecting the disparate nature of public sector organisations and a lower use of centralised MPS.


Quocirca recommends that businesses looking to better integrate their paper and digital business processes should look closely at the broader services that many leading MPS providers are now offering. Vendors such as HP, Lexmark, Ricoh and Xerox are all developing a range of solutions - some focused on enterprise content management (ECM), others on business process automation. While some businesses may consider business process optimisation as something to be implemented later in the MPS journey, it is increasingly paper dependent processes that are being analysed at the outset as part of the initial assessment service. This is because automating such processes can have a real impact on improving productivity, efficiency and cost reduction.

Businesses looking to start or extend their MPS journey should look for an MPS provider that can have a truly transformative business impact. MPS is no longer just about devices; it has the potential to help organisations focus on their core business and innovate, rather than be hindered by slow, manual, paper-based processes. This demands a new kind of MPS provider - one that can not only tame the complexity of the print infrastructure, but also has the expertise, resources and tools to accelerate paper-to-digital initiatives.

The path to continuous delivery

Clive Longbottom | No Comments
| More

What is it that a company wants from its IT capability?  High availability?  Fast performance?  The latest technology?

Hardly.  Although these may be artefacts of the technical platform that is implemented, what the company actually wants is a platform that adequately supports its business objectives.  The purpose of the business is to be successful - this means that its processes need to be effective and efficient.  Technology is merely what makes this possible.

The problem has been that historically the process has been 'owned' by the application.  The business had to fit the process to the application: flexibility was not easy.  Under the good times (remember those?), poorly performing processes could be hidden - profit was still being made; however, more could have been made with optimised processes.  As the bad times hit and customer expectations changed to reflect what they saw on their consumer technology platforms, poorly running processes became more visible, and the business people started to realise that things had to change.

In came the agile business - change had to be embraced and flexibility became king.  Such agility was fed through into IT via Agile project management - applications became aggregations of chunks of function which could be developed and deployed in weeks rather than months. However, something was still not quite right.

Continuous delivery, from the business angle, needs small incremental changes to be delivered on a regular basis. Agile IT aims to do the same, but there are often problems at the development-to-operational stage. Sure, everything has been tested in the testing environment; sure, operations understand how to implement those changes in the run-time environment. Yet according to a survey by VersionOne, 85% of respondents stated that they had encountered failures in Agile projects - and a main reason given was that the company culture was at odds with an Agile approach. No matter how agile the project methodology itself became, the impact of the wrong culture was far-reaching: without changes in other areas of the business and in the tooling used, the Agile process hit too many obstacles and the whole system would short-circuit.

DevOps has been touted as the best way to remove these problems - yet in many organisations where Quocirca has seen DevOps being embraced, the problems remain, or have simply changed into different, but equally difficult, ones.

The problems lie in many areas. Some vendors have been redefining DevOps to fit their existing portfolios into the market; some users have been looking to DevOps as a silver bullet to solve all their time-to-capability problems without having to change thought processes at a technical or business level. Nevertheless, DevOps is becoming a core part of an organisation's IT: research by CA identified that 70% of organisations see a strong need for such an approach, with business needs around the customer/end-user experience and dealing with mobility seen as major requirements.

Even at the basic level of DevOps creating a slicker process of getting new code into the production environment, there is a need to review existing processes and put in place the right checks and balances so that downstream negative impacts are minimised.  At the higher end of DevOps, where leading-edge companies are seeing it as a means of accelerating innovation and transformation, a whole new mind-set that crosses over the chasm between the business and IT is required - the very chasm that was stated as the main reason for failure of Agile projects by VersionOne.

Traditional server virtualisation offers some help here, speeding up things like image deployment. However, it only solves one part of the issue - if development is still air-locked in its own environment, then the new image still requires testing in the production environment before being made live. This not only takes up time; the image will still behave differently because it runs against different data. Proving the system in production is not the same as testing it in development: problems will still occur, and iterations will slow down any chance of continuous delivery.

The issue is in the provisioning of data to the test systems. Short sprint cycles, fast provisioning and tear-down of environments, and a successful Agile culture all require on-demand, near-live data for images to run against. This is the major bottleneck to successful Agile and DevOps activities. Only through the use of full, live data sets can real feedback be gained and the loop between development and operations be fully closed. DevOps then becomes a core part of continuous delivery: IT becomes a major business enablement function.

Today's solution - taking snapshots of production databases - is problematic: each copy takes up a large amount of space, which can mean subsets end up being used in the hope that the data chosen is representative of the larger whole. Provisioning takes a lot of time through an overly manual process, and typically the end result is that dev/test data is days, weeks or even months old.

DevOps requires a new type of virtualisation, going beyond the server and physical storage down into the data. Delphix, a US start-up, has created an interesting new technology that I believe could finally unlock the real potential of Agile and DevOps: data virtualisation. More on this in a later post - but Delphix is worth looking into further as a company.

Disclosure: Delphix is a Quocirca client.

Video conferencing - why use it?

Rob Bamforth | No Comments
| More

What is it with video conferencing?

The technology has been around for decades; it's been seen as an inherent part of sci-fi on film and TV over a similar period; networks from fibre to 3G have been touted as being great for it; and yet it still doesn't appear to have made the transition from unusual to everyday.

Some of the fault lies with technology. Video conferencing was once cumbersome and difficult to use, which has engendered a persistent perception of users needing handholding. Differences between vendors and systems have led to stubborn interoperability issues, which even standards have struggled to completely eradicate. Plus, there are lingering inconsistencies between any single vendor's own systems as they make rapid product improvements in what is still a relatively dynamic sector.

There have been many technical advances in business video systems, but according to a recent worldwide survey, commissioned by Polycom, of over 800 existing business video conferencing users, over a quarter find video conferencing to be too complicated, and making it easier to use is the number one thing most believe would increase usage.

The consumer experience of video conferencing has evolved significantly too. While the marketing of video calling over 3G phones turned out to be a complete flop, and even mighty Apple has not been able to switch everyone to mobile video calls with FaceTime, there is no doubt that video usage has become more popular elsewhere. The usage might not be regular 'calling' or 'conferencing', but through a combination of easy (and free) tools like Skype, cheap video cameras and YouTube uploads, more people have become acclimatised to the use of video.

The quality of the experience might often be poorer than that of business video conferencing, but the user is comfortable with it, and this is critical to generating more widespread use of video for business. User comfort, or the lack of it, is a major reason that holds back the adoption of video conferencing. It has not yet become as natural a thing to do as making a phone call in the workplace.

Does there need to be more widespread business use of video? Yes, but the reasons are more complex than those portrayed by early video conferencing marketing messages. Saving money by reducing the amount of business travel is certainly a prime driver for increasing the use of video. These are tangible savings which, although rarely actually measured by most organisations, are at least directly attributable.

While they are positive, travel savings alone are generally insufficient to stimulate investment in video, and it is here that the less tangible, but potentially far more valuable, benefits become more important. Part of the benefit of travel reduction is in reality saving time: travel time, of course, but also setup time, 'waiting for someone to respond' time and time spent afterwards trying to sort out what it was all about.

This can be far more critical than simply saving a business traveller from a tedious journey.

The NHS in Lancashire and Cumbria has implemented tele-health services using high-definition video to normalise behaviour, meaning patients feel comfortable and are able to connect easily at the touch of a button. This approach has worked particularly well for renal patients, reducing the need for hospital visits and allowing a large network of doctors to collaborate without scheduling or travel restrictions.

Removing wasted time not only makes individuals more efficient, it also speeds up the overall decision-making process and therefore customer responsiveness. These benefits are all harder to define and measure for a straightforward ROI calculation, but most people have known they are there since the first time they picked up a phone to avoid spending time making a journey.

The thing about phone calls is that they can only be of real value if the caller knows they can call someone wherever they need or want to, and knows the recipient will have a means of answering - in other words, ubiquitous communications. Adding video to reintroduce the non-verbal aspects of remote communication seems a natural progression, but only if it touches everybody, equally.

Many organisations already have some video conferencing systems, but with different levels of adoption. In some there are pockets of frequent or proficient users: it might be the main board, a team of engineers or a distributed marketing group. In others there are handfuls of systems that sit idle: meeting rooms used for other purposes, executive desktops that no one else is allowed to touch, or systems no one quite remembers how to work.

To encourage individuals to feel more comfortable with video in a business setting requires a shift in the attitude and culture of the organisation. Video needs to become a normal, everyday activity, used by everyone, wherever they are (any room, any device) whenever it is required. It needs to be instilled in an organisation from top to bottom and in an individual's working practices from day one.



It might feel like a bigger leap, but just like many other forms of communication - public speaking, using the phone, writing letters, document sharing - not only do the right sort of facilities need to be in place, but people need to feel comfortable to use them and to get the most out of them.

It takes practice, but with regular use anyone can become an effective communicator in any medium, including video - and better communication builds better collaboration and, ultimately, a more efficient and effective business. For a more detailed look at cultures of video adoption, click here for a free report based on the worldwide survey of over 800 video conferencing users.

Many attacks may still be random, security should not be

Bob Tarzey | No Comments
| More

With all the talk of targeted attacks, it is easy to lose sight of the fact that for the majority of us, especially in our lives as consumers, random malware is still the greatest danger. Random malware is distributed en masse, by whatever means, in the hope that it will find its way onto the most vulnerable of devices. A targeted attack, on the other hand, means it is you and/or your organisation that an attacker specifically wants to penetrate, however that might be achieved.

The best protection against random attacks is still regular patching and host-based anti-malware packages. That was the message from Kaspersky Labs at a recent press round table. Of course, as a vendor of such products, Kaspersky was keen to remind all present that it was not time to ditch more traditional security capabilities just because you have now invested in state-of-the-art protection against targeted attacks. Quocirca agrees, having issued similar advice in a free 2013 research report, 'The trouble heading for your business'.

If anything, the issue of random attacks is set to get worse. More devices, with more diverse systems software, often attached to public network access points, increase the attack surface, especially as mobile devices are used more and more for online banking and payments. This will mean random attacks are not quite as random as before: malware variants will be needed for different operating systems, browsers and apps (whereas in the old days it was Windows, Windows, Windows).

However, it should still be worth the cyber-criminals' effort as at present many mobile devices do not have anti-malware installed. Kaspersky says the focus has been on Android, but iOS users are becoming more and more of a target. Overall Kaspersky saw 7,000 new mobile malware samples in the first half of 2014.

There is also the potential for collateral damage. Although a mobile device user's personal, banking and/or payment card details may be the primary target, where data protection controls are not in place, business data may make its way onto personal devices too. This too may be compromised, with the potential to land data controllers in regulatory deep water if PII (personally identifiable information) is involved.

Security distributor Wickhill was also at the round table and pointed out that one of the problems resellers find is that too many organisations are still rolling out applications without giving up-front consideration to appropriate security. This is especially true of SMBs, which see security as a cost, not a benefit. Wickhill also finds that security is being overlooked in mobile deployments.

There was general agreement that security needs to focus on the data itself rather than the rapidly dissolving network edge. This requires a holistic approach to security that applies to data wherever it is being transmitted or stored. Measures are needed to control what access internal and external users have to data and what they can do with it - the subject of two free 2014 Quocirca reports, 'What keeps your CEO up at night?' and 'Neither here nor there?'.

Technology helps drive all this, but as Wickhill pointed out, education is also needed, both of users and the IT teams which deploy and manage the devices and applications they use. For the more lackadaisical SMBs, help is at hand. Many resellers, that are already trusted advisors to their customers, are adding managed security services to their portfolio.

Quocirca expects this will increase the uptake of cloud services amongst SMBs. This is now seen as the best way for many to acquire both infrastructure and security, as another free October 2014 Quocirca research report, 'Online domain maturity', shows. Kaspersky found that many early adopters of cloud services found security lacking; however, the Quocirca report shows that more recent adopters now see security as one of the main benefits of online services.

Random attacks may still be a problem to worry about, but there is no excuse for random security. The products and services are out there to make organisations, if not 100% safe, at least safer than many others. If you are targeted, you will have a better chance of withstanding the onslaught, and random attacks should pass you by to trouble a weaker organisation.

Securing virtual infrastructure

Bob Tarzey | No Comments
| More

When considering the security of virtual environments, it helps to be clear about which part of the virtual stack is under discussion. There are two basic levels: the virtual platform itself, and the virtual machines (VMs) and associated applications deployed on such platforms. This is the first of two Quocirca blog posts aiming to provide some high-level clarity regarding security in a virtual world, starting with the platform itself.


Virtual platforms can be privately owned or procured from cloud service providers. Those organisations that rely 100% on the use of public platforms, or that outsource 100% of the management of their virtual and/or private cloud infrastructure, need read little further through this first post. They have outsourced the responsibility for platform security to their provider and should refer to their service level agreement (SLA).


As Amazon Web Services (AWS) puts it: "AWS takes responsibility for securing its facilities, server infrastructure, network infrastructure and virtualisation infrastructure, whilst customers choose their operating environment, how it should be configured and set up its own security groups and access control lists".


The AWS statement points out the areas those deploying their own virtual platforms and private clouds need to address, to ensure base security. The risk is in three areas:

  • Security of the virtualisation infrastructure (the hypervisor)
  • Security of the resources that the hypervisor allocates to VMs
  • Security of the virtualisation management tools and the access rights they provide to the virtual infrastructure


The third point includes the use of cloud orchestration tools such as OpenStack and VMware's vCloud Director, which can be used for managing private clouds or moving VMs between compatible private and public clouds (hybrid cloud).


Hypervisor security

All hypervisors can, and do, contain errors in their software that lead to vulnerabilities that can be exploited by hackers. So, as with any software, there needs to be a rigorous patching regime for a given organisation's chosen hypervisor and the management tools that support it. That said, hypervisor vulnerabilities are of little use unless they open access either to the hypervisor's management environment or to the resources it has access to. Most press reports reflect this; for example, picking on the most widely used hypervisor, VMware's ESX:


ThreatPost, Dec 2013: "VMware has patched a vulnerability in its ESX and ESXi hypervisors that could allow unauthorised local access to files". The article goes on to report that the vulnerability has the effect of extending privilege - something hackers are always seeking.


Network World, Oct 2013: report on an ESX vulnerability "To exploit the vulnerability an attacker would have to intercept and modify management traffic. If successful, the hacker would compromise the hosted-VMDBs, which would lead to a denial of service for parts of the program".


In both cases, VMware went on to issue a patch, ensuring that fast-acting customers were protected before hackers had much time to act.


Security of resources allocated by hypervisors

Both of the above examples underline the need to address the basic security of underlying resources: networking, storage, access controls and so on. For those that do everything in-house, that includes physical access to the data centre. The considerations are pretty much the same as for non-virtual deployments, with one big caveat: in the virtual world, many of these resources are themselves software files that are easy to create, change and move, so the compromise of a file server may provide access to more than just confidential data - it may allow the virtual environment itself to be manipulated.


Securing use of virtual management tools

As with all IT management there are two dangers here; the outsider finding their way in with privilege or the privileged insider who behaves carelessly or maliciously. A virtual administrator, however their privileges are obtained, can change the virtual environment as they see fit without needing physical access. That may include changing the configuration and/or security settings of virtual components and/or deploying unauthorised VMs for nefarious use.


When it comes to access control, the management of privilege - who has it, when they have it and auditing what they do with it - is similar to that for physical environments. However, there are other considerations that apply in a virtual world over and above those in a physical one. Principally, this is about being able to monitor hypervisor-level events: controlling and auditing access to key files and the copying and movement of VMs, capturing hypervisor event streams and feeding all this to security information and event management (SIEM) tools. There is also the need to define hypervisor-level security policy and take action when it is breached, for example by closing VMs or blocking traffic to and from them.
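The kind of hypervisor-level event filtering described above can be sketched very simply (the event types, field names and rule set here are hypothetical, purely to illustrate the shape of the processing before events are forwarded to a SIEM tool):

```python
# Event types a hypothetical SIEM policy cares about at the hypervisor level
WATCHED = {"vm.created", "vm.config.changed", "vm.file.copied"}

def filter_events(events):
    """Return only the watched events that were not explicitly authorised.

    Each event is a dict; 'authorised' would be set by a change-control
    system in a real deployment - here it is just an illustrative flag.
    """
    alerts = []
    for event in events:
        if event["type"] in WATCHED and not event.get("authorised", False):
            alerts.append(event)
    return alerts

stream = [
    {"type": "vm.heartbeat", "vm": "web01"},
    {"type": "vm.config.changed", "vm": "db01", "authorised": False},
    {"type": "vm.created", "vm": "tmp99", "authorised": True},
]
print(filter_events(stream))   # only the unauthorised config change remains
```

A production pipeline would of course read a real hypervisor event stream and push the surviving events to the SIEM rather than printing them, but the principle is the same: reduce the raw stream to the policy-relevant changes.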


Specialist vendors

Certain specialist vendors focus purely on the security of the virtual infrastructure layer. For example, Catbird specialises in reporting on and controlling the security of VMware-related deployments, while GroundWork focuses on monitoring data flows in open source-based virtual environments. The suppliers of virtual platforms and tools provide support too, not least access to urgent patching advice.


When many mainstream IT security vendors talk about virtual security they refer to the security of deploying VMs and associated applications. Security at this level is of course important to address and has its own special considerations which will be covered in the second blog post. For those that have outsourced the virtual platform and/or the management of it, and are confident in their supplier, the focus will already be at this higher level.

Think-again Tuesday?

Bob Tarzey | No Comments
| More

How did your web site stand up on Black-Friday and Cyber-Monday (Nov 28th and Dec 1st 2014)? These were expected to be the most frenetic online shopping days of the year. Whether you are an online retailer or processing the payments generated, if you were able to maintain a good customer experience and complete transactions on these busiest of days, hopefully the rest of the year was a cakewalk!


Meeting the challenge requires a mature approach to managing your online presence as recent Quocirca research shows. The new report (see link at the end of this post) shows consumer-facing organisations to be more advanced in this regard than organisations that deal only with other businesses. They have to be; on average, consumer-facing organisations deal with three times as many registered users online as their non-consumer-facing counterparts. They also know that consumers are more impatient and capricious.


The report identifies seven things that consumer-facing organisations are more likely to be doing to rise to the online maturity challenge. Any organisation that underperformed on Black-Friday, Cyber-Monday or at any other time should follow their lead.


1: Monitor performance

Most organisations have some sort of capability to monitor the performance of their web sites and online applications. However, consumer-facing organisations are much more likely to be focussed on metrics to do with the user experience, whilst their non-consumer-facing counterparts fret about bandwidth and system information. Consumer-facing organisations are able to do this because the platform basics are often outsourced.
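One widely used user-experience metric of the kind described is the Apdex score, which grades response times against a target threshold T: responses within T count as satisfied, those within 4T as tolerating, and the rest as frustrated. A minimal sketch (the sample times and 0.5s target are illustrative only):

```python
def apdex(response_times, target=0.5):
    """Apdex score = (satisfied + tolerating / 2) / total samples.

    satisfied:  response time <= target
    tolerating: target < response time <= 4 * target
    frustrated: everything slower (counts as zero)
    """
    satisfied = sum(1 for t in response_times if t <= target)
    tolerating = sum(1 for t in response_times if target < t <= 4 * target)
    return (satisfied + tolerating / 2) / len(response_times)

# Page response times in seconds, measured against a 0.5s target
samples = [0.2, 0.4, 0.6, 1.8, 3.0]
print(round(apdex(samples), 2))   # 2 satisfied, 2 tolerating, 1 frustrated -> 0.6
```

Unlike raw bandwidth figures, a score like this summarises what the user actually felt, which is why it suits the consumer-facing monitoring style the survey describes.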


2: Outsource infrastructure

Consumer-facing organisations free themselves to focus on delivering the applications and websites that are core to their business and avoid getting bogged down with infrastructure issues that are not. This includes the infrastructure on which their online resources are deployed as well as supporting services such as DNS management, content distribution and security. Indeed, a key finding of the new survey is that better security is now seen as one of the top benefits of cloud-based services.


3: Outsource security

Nearly all aspects of security were more likely to be outsourced by consumer-facing organisations. This includes emergency DDoS protection, malware detection and blocking, advanced threat detection, security information and event management (SIEM) and fraud detection. The motivators are that applications and users are in the cloud, so the security needs to be too; and, as with the base infrastructure, leaving security to experts further frees staff to focus directly on the user experience.


4: Deploy advanced security

It is not just that consumer-facing organisations are using cloud-based security; the protection they have in place is also more advanced. Non-consumer-facing organisations are more likely to rely on older technologies such as host-based malware protection and intrusion detection systems (IDS). Consumer-facing organisations have these capabilities too, but are much more likely to supplement them with state-of-the-art advanced security systems, be they outsourced or deployed in-house.


5: Take a granular approach

No two consumers are exactly the same; they will be using different devices, different browsers and have varying access speeds based on their network connection and geographic location. Consumer-facing organisations are more likely to monitor such things and adjust the way they respond to individual users accordingly.
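Adjusting the response per user can be as simple as branching on what is observed about the visitor. A toy sketch (the device tokens, bandwidth threshold and variant names are entirely hypothetical, chosen just to show the idea):

```python
def choose_variant(user_agent, bandwidth_kbps):
    """Pick a page variant from the device type and measured connection speed.

    Only mobile visitors are downgraded; desktop visitors always get the
    full site in this simplified model.
    """
    mobile = any(token in user_agent for token in ("Mobile", "Android", "iPhone"))
    if mobile and bandwidth_kbps < 500:
        return "lightweight"   # small images, minimal scripts
    if mobile:
        return "responsive"    # full content in a mobile layout
    return "desktop"

# A slow mobile connection gets the cut-down variant
print(choose_variant("Mozilla/5.0 (iPhone) Mobile Safari", 300))
```

Real implementations use far richer signals (geolocation, browser capability databases, real-user monitoring), but the pattern of observe-then-tailor is the same.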


6: Link the user experience metrics with business success

Having all sorts of capabilities to monitor the user experience is all well and good, but it is even more useful if it can be shown how variable delivery affects the business. Consumer-facing organisations are more likely to have a strong capability to do this, linking metrics to revenue and customer loyalty.
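At its simplest, linking a delivery metric to a business metric is a correlation. The figures below are invented purely to illustrate the calculation, using only the standard library.

```python
# Illustrative sketch: relate a delivery metric (page load time) to a
# business metric (conversion rate). The data points are invented.
from statistics import mean

load_time_s    = [1.0, 1.5, 2.0, 3.0, 4.5, 6.0]  # average page load time
conversion_pct = [4.2, 4.0, 3.6, 2.9, 2.1, 1.4]  # % of visits that buy

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    mx, my = mean(xs), mean(ys)
    cov  = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

r = pearson(load_time_s, conversion_pct)
print(f"correlation: {r:.2f}")  # strongly negative: slower pages, fewer sales
```

A strongly negative coefficient of this kind is exactly the evidence needed to express a delivery problem in terms of lost revenue rather than milliseconds.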


7: Find the budget to do all this

Of course, putting all these capabilities in place has a cost. However, that is no barrier for the most forward-thinking consumer-facing organisations; they are almost twice as likely to be increasing the budget for supporting online resources as their non-consumer-facing counterparts. Just throwing money at a problem is never an answer in its own right, but if the spending is well-focussed it can make a real difference, as those that coped best over the last few days will surely know.


Organisations that only deal with other businesses may ask: 'What has all this got to do with us?' Well, as more and more digital natives enter the workplace, they will bring their consumer expectations and habits with them. All businesses need a razor-sharp focus on the online experience. For those that fail to achieve it, it will not just be on Black Friday and Cyber Monday that they lose business; it will be every day of the year.


*The report was sponsored by Neustar (a supplier of online security and monitoring services) and is free to download at this link:

Car ownership - a dying thing?

Clive Longbottom

At a recent BMC event, CEO and Chairman Bob Beauchamp stood on stage and gave a view on how the rise of the autonomous car could result in major changes in many different areas.

The argument went something along these lines: as individuals start to use autonomous cars, they see less value in the vehicle itself. The "driving experience" disappears, and the vehicle is seen far more as a tool than a desirable object. By using autonomous vehicles, congestion can be avoided: the vehicles adapt to driving conditions, accidents are avoided, areas where non-autonomous vehicles are causing problems are by-passed, and so on. The experience becomes an analogue to SDN: the car's function is the data plane (it gets from point A to point B), directed by the control plane (deciding what should happen), acting on commands issued by the management plane (which works out the best way to get from point A to point B).
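The three-plane analogy can be made concrete with a toy sketch. The road network, travel times and the Dijkstra-style planner below are illustrative assumptions, not anything from the talk.

```python
# Sketch of the SDN analogy: the management plane plans the journey,
# the control plane issues driving commands, and the data plane (the
# car itself) executes them. The road graph is invented.
from heapq import heappush, heappop

roads = {  # travel time in minutes between junctions
    "A": {"B": 10, "C": 4},
    "B": {"D": 2},
    "C": {"B": 3, "D": 12},
    "D": {},
}

def management_plane(start, goal):
    """Decide the best way from start to goal (shortest travel time)."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in roads[node].items():
            heappush(queue, (cost + w, nxt, path + [nxt]))

def control_plane(path):
    """Translate the plan into step-by-step driving commands."""
    return [f"drive {a}->{b}" for a, b in zip(path, path[1:])]

plan = management_plane("A", "D")   # the cheaper route via C and B
print(control_plane(plan))          # the data plane (the car) executes these
```

The separation is the point: the car need not know why a route was chosen, just as an SDN switch need not know why a flow rule was installed.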

It is then seen that the tool is not being used that much - for long periods of time, it is in the garage, drive or roadway doing nothing.  It needs to be insured; needs to be maintained - it becomes an issue, rather than a "must have".

Far better to just rent a vehicle as and when you need it - a "car as a service" approach means that you don't need to maintain the vehicle.  Insurance is a moot point - you aren't driving the vehicle anyway; it is the multiple computer "brains" that are doing so, working a full 360 degrees at computer speed, never getting tired; never failing to notice and extrapolate events going on around them.  Insurance is cheaper and only has to cover damage caused by e.g. vandalism and fire: theft is out, as the vehicle is autonomous anyway and can be tied in to a central controller.

Insurance companies struggle; car manufacturers have to move away from marketing based on seeing fast cars driving on deserted roads to selling to large centralised fleet managers who are only interested in overall lifetime cost of ownership.  Houses can change - no need for a garage or a drive and cities can change with less need of parking spaces.  More living space can be put in the same area - or more properties on the same plot of land. Autonomous driving means less time spent commuting; less frustration; less fuel being used up in stop-start traffic.

When Bob first said this, my immediate response was "it will never happen".  I like my car; I like the sense of personal ownership and the driving experience that I get - on an open road.

However, I then took more of an outside view of it.  Already, I have friends in large cities such as London who do not own a car.  They use public transport for a lot of their day-to-day needs, and where they need a vehicle, they hire one for a short period of time.  Whereas this may have been on a daily basis via Hertz or Avis in the past, newer companies such as City Car Club allow you to rent a vehicle by the hour, picking it up from a designated parking bay close to you and dropping it off in the same way wherever you want.  The rise of Uber as an app-hailed taxi service also shows how many people want the ease of using a car without owning the vehicle themselves.  These friends have no requirement for a flashy car badge or for the capability to get in "their" car and drive it at any time - in fact, the majority do not like driving at all, and would jump at the chance of using an autonomous vehicle, removing this last issue for them.

As tech companies like Google rapidly improve their autonomous vehicles, manufacturers such as Mercedes-Benz, Ford and GM are having to respond.  Already, over fifty 500-tonne Caterpillar and Komatsu trucks are being used in Australia to move mining material, running truly autonomously in convoys across private roads in the outback, allowing 24x7 operations with fewer safety issues.

Just as the car manufacturers are coming out of a very bad period, they now stand a chance of being hit by new players in the market.  Elon Musk, of Tesla electric car fame, is a strong proponent of autonomous vehicles.  Amazon would like to take on Google, and it is likely that other high-tech companies will look to the Far East for help in building simple vehicles that can be used in urban situations via a central subscription model.

Sure, such a move to a predominantly autonomous vehicle model will take some time.  There will be dinosaurs such as myself who will fight to maintain ownership of a car that has to be manually driven.  There will be the need to show that the vehicle is truly autonomous; that it does not require continuous connectivity to a network to maintain a safe environment.  More companies such as City Car Club will need to emerge, and suitable long-term business and technology models put in place to manage large car fleets and get them to customers rapidly and effectively without a need for massive acreage of space to store cars not being used.  Superfast recharging systems need to become more commonplace; these vehicles need to be able to recharge in minutes rather than hours, or to use replaceable battery packs.

Certainly, moving to the use of autonomous electric vehicles where overall utilisation rates can be pushed above 60% would result in far less congestion in city centres, and so less pollution, less impact on citizens' health and less time wasted in the morning and evening rush hours. Indeed, Helsinki has set itself the aim of making private car ownership unnecessary by 2025.
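A back-of-envelope calculation shows why that utilisation figure matters. The 5% private-car utilisation rate and the city fleet size below are assumptions for illustration only.

```python
# Back-of-envelope sketch: how much smaller could a shared autonomous
# fleet be? Assumed figures: a privately owned car is in use roughly 5%
# of the time; the shared-fleet target from the text is 60% utilisation.

private_utilisation = 0.05
shared_utilisation = 0.60
cars_today = 1_000_000            # hypothetical city's private fleet

car_hours_needed = cars_today * private_utilisation
shared_fleet = car_hours_needed / shared_utilisation
print(f"{shared_fleet:,.0f} shared cars could replace {cars_today:,} private ones")
```

Under these assumptions the fleet shrinks by roughly a factor of twelve, which is where the reduced congestion and reclaimed parking space would come from.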

At the current rate of innovation and improvement in autonomous vehicles, it is becoming more of a "when" than an "if" as to when we will see a major change in car ownership.  The impact on existing companies involved in the car industry should not be underestimated.  The need for improved technology and for technology vendors to work together to ensure that an autonomous future can and will happen is showing signs of being met.

The problem of buggy software components

Bob Tarzey

What do Heartbleed, Shellshock and Poodle all have in common? Well apart from being software vulnerabilities discovered in 2014, they were all found in pre-built software components, used by developers to speed-up the development of their own bespoke programs. Heartbleed was in OpenSSL (an open source toolkit for implementing secure access to web sites), Shellshock was in the UNIX Bash shell (which enables the running of UNIX operating system commands from programs), whilst Poodle was another SSL vulnerability.


Also common to all three is that they were given fancy names and well publicised. This is not a bad thing; it gives the press something to hang its hat on and gets the message out to software developers that a bug needs fixing. The time lag between zero day, when a vulnerability is first identified, and the bug being patched is the window of opportunity for hackers to exploit it. With Heartbleed in particular, there was also advice for the general public, to change their passwords for certain web sites that used the vulnerable version of OpenSSL.


However, these widely publicised bugs are just the tip of the iceberg, as data from HP's Security Research (HPSR) team reveals. HPSR uncovers software security flaws on behalf of its customers and the broader community. Unlike the discoverers of Heartbleed, Shellshock and Poodle, HPSR does not seek publicity for all the flaws it hunts down via its Zero Day Initiative (ZDI) programme; not least because there are so many of them.


HPSR has a number of ways of seeking vulnerabilities out. Some it simply buys from white hat hackers (those who look for ways to hack software code, but not to exploit the flaws they find). It also sponsors an annual competition to find flaws called Pwn2Own; the 2014 event uncovered 33 in software from Adobe, Apple, Google, Microsoft and Mozilla. On top of this, HPSR does its own research. In total in 2014, ZDI uncovered over 500 bugs, two thirds of which have been patched; HPSR estimates that 50-75% of these were in software components. HPSR claims ZDI is the number one finder of bugs in deployed versions of Microsoft software.


As an HPSR rep points out 'these days most software is composed not written', meaning that software is largely built from pre-constructed components. In fact, not using components would be highly inefficient, as it would mean constantly re-inventing the wheel, especially when many components are cheap or free via open source. However, the number of bugs in software components means that users need more effective ways to monitor their use and fix problems that arise. This is especially true of open source components, as anyone can contribute to them. HPSR contends that commercial software vendors could strengthen the open source movement by investing more resources to ensure open source components are well-tested and secure.


Of course, the broader HP has an interest in all this for two reasons. First, as a builder and supplier of software, HP is a big user of components. Second, it also helps its customers build and deploy safer software through its Fortify product range. In February 2014 HP announced its Fortify Open Review Project to identify and report on security vulnerabilities in widely used open-source software components. HP also announced improved component checking support for its on-demand scanning service by partnering with Sonatype to use its Component Lifecycle Management analysis technology.


HP is not alone in recognising the need for safer component use. Veracode, another software security vendor, estimates that components constitute up to 90% of the code in some in-house developed applications. In September 2014 Veracode added a 'software composition analysis' into its static software scanning service to protect customers more rapidly from zero day vulnerabilities discovered in components.

With the introduction of software composition analysis Veracode can now create an inventory of all the components used by a given customer, detailing the programs in which each is embedded. When a new vulnerability is identified in a component, Veracode can take rapid and pervasive action; either applying fixes immediately or isolating already deployed applications until patches are available.
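The inventory idea at the heart of software composition analysis can be sketched in a few lines. The application and component names below are invented for illustration and do not reflect Veracode's actual implementation.

```python
# Minimal sketch of software composition analysis: keep an inventory of
# which components each application embeds, then flag every application
# affected when a vulnerability in a component is announced.
# All application and component names here are invented.

inventory = {
    "payments-api":    ["openssl-1.0.1f", "libxml2-2.9.1"],
    "intranet-portal": ["bash-4.2", "openssl-1.0.1h"],
    "reporting-batch": ["libxml2-2.9.1"],
}

def affected_apps(vulnerable_component: str) -> list:
    """Return every application embedding the named component."""
    return sorted(app for app, parts in inventory.items()
                  if vulnerable_component in parts)

# A new advisory lands for a specific component build; the inventory
# immediately tells us which deployed applications need attention.
print(affected_apps("openssl-1.0.1f"))
print(affected_apps("libxml2-2.9.1"))
```

The value is the turnaround time: on zero day, the question "which of our applications are exposed?" becomes a lookup rather than an investigation.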


This further enhances its ability to protect customers from newly discovered vulnerabilities. Its dynamic scanning service, which tests deployed executables, would pick many of these up too. However, dynamic scanning focusses on common paths through applications and may miss obscure parts that are rarely or never used - and a hacker may focus on exactly these areas once a vulnerability becomes public knowledge.


As Veracode points out, most IT departments are managing software code that was largely not built in-house. The only control security teams have over such software is to maintain effective scanning capabilities with an awareness of components, to help understand inherited risk. Software components are not going to disappear; their value to business is too great, so security teams need to learn how to live with them.

Google Glass - seeing is believing

Bob Tarzey

I must admit to being sceptical about the whole 'wearables' thing. However, I was intrigued at a recent Google event to be given an opportunity to try out a pair of Google Glass glasses. Glasses have been part of my life for as long as I can remember, and herein lay a problem. Google Glass assumes reasonable distance vision, so if you already wear glasses to correct for this, the only way to try out Google's device proved to be to wear it on top of your normal specs. Still, it was only a demo, so style could be set aside!

The Google Glass equivalent of a screen is a translucent rectangle hanging in the upper right of your vision (think of walking down a street and reading a hanging pub sign). You might not want to read a book or watch a movie using such a display, but it was obvious it would be great for following directions or displaying information about museum exhibits or landscapes.

Apparently you can control the Google Glass menu by jolting your head; however, I did not master this. It conjures a future of people walking along the street making involuntary head movements (I suppose we have got used to the idea that people who are seemingly talking to themselves are no longer all mad, but usually using a Bluetooth mobile phone mic). You can also control Google Glass by swiping the arm of the glasses with your finger or by talking to it with certain prefaced commands.

So, if you have perfect 20/20 vision and are prepared to enter the bespectacled world to take advantage of Google Glass, what style choice do you have? You can choose from five different frames from the designer Net-a-Porter, which is not quite the range you might have in the local opticians, but it's a start. And, if you need your long term vision correcting, you can have prescription lenses fitted (the lenses are nothing to do with the device; indeed, you can wear them lens-free with just the frame).

In fact as the Google rep demoing the device pointed out, Google Glass is little more than a face mounted smartphone. So, when it comes to IT security the considerations are pretty much the same as for any personal device. Data can be stored and the internet accessed on Google Glass and therefore, in certain circumstances, their use may need to be controlled. You could argue that taking pictures or making videos would be more surreptitious with Google Glass than a standard smartphone, however, stylish as Google has tried to make its specs, it would still be pretty obvious you were wearing them, unless efforts have been made to conceal them with a hat or veil.

Privacy objections seem more likely. Google Glass and similar devices, that will surely follow if the form-factor takes off, may revolutionise certain job roles. Employees working in warehouses, hospitals or inspecting infrastructure in the field may really benefit from being able to see and record their activity whilst having both hands free. However, an employer with constant insight into what an employee is doing and seeing may be too much for some regulators. Time will tell.

Cloud & mobile security - take aim, save the data

Rob Bamforth

In all the hubbub around mobile users increasingly making their own choices of operating systems and hardware, something has been lost sight of: it doesn't really matter whether you bring your own device (BYOD); a more pressing matter for businesses should be 'where is our data accessed?' (WODA).

This issue extends beyond the choice of the mobile endpoint as increasingly 'mobile' doesn't simply mean a single mobile touchscreen tablet alternative to a fixed desktop PC, but multiple points or modes of access with users flitting between them to use whichever is most appropriate (or to hand) at any moment in time. What has become mobile is the point of access to the business process, not just the hardware.

This multiplicity of points of mobile access - some corporate owned, some not - means that when IT services are required on the move they are often best delivered 'as a service' from the network, so it is no wonder that the growth in acceptance of cloud seems to have symbiotically mirrored the growth of mobile.

Both pose a similar challenge to the embattled IT manager. A significant element of control has been taken away - essentially the steady operating platform 'rug' has been pulled from under their feet.

So how do they retain some balance and control?

The first thing is to accept that things have changed. BYOD is more than a short-lived fad; most people have embraced their inner nerd and now have an opinion about what technology they like to use, and what they don't like. They buy it and use it as a fundamental part of their personal life from making social connections to paying utility bills. Most people are more productive if comfortable with familiar technology, so why force them to use something else?

However, enterprise data needs to be under enterprise control. Concerns about data are generally much higher than those surrounding applications and the devices themselves. This is a sensible, if accidental, prioritisation of how to deal with BYOD - first focus on corporate data. Unfortunately, few organisations have either a full document classification system or an approach to store mobile data in encrypted containers separated from the rest of the data and apps that will reside on BYO devices.

These are both worthy, if rarely reached at present, goals, but at least the first steps have been taken in recognising the problem. Organisations now need to understand their data a little better, and apply measured control of valuable data in the BYOD world - which doesn't look like diminishing any time soon.

In the core infrastructure, things have changed significantly too. Service provision has evolved from the convergence (or one could say, collision) of the IT industry with telecoms to deliver services on demand. IT might have been fragile on interoperability and resilience standards, but some of the positive side of telecoms has spilled over. And telecoms companies are finally starting to understand the power of supporting a portfolio of applications, and that there is more to communications than voice. Cloud, or the delivery of elements of IT as a service, is the active offspring of the coupling of IT and telecoms.

For businesses, struggling to do more IT with smaller budgets and fewer resources, the incremental outsourcing of some IT demands into the cloud makes sense.

However, cloud is still exhibiting some traits of the rebellious teenager. While there are some regions in Europe that appear more resistant to cloud (notably, Italy, Spain and to a lesser extent France), overall acceptance is positive, although this is across a mix of hybrid, private and public cloud approaches. There are also significant concerns about the location of data centres and the location of registration or ownership of cloud storage companies.

These are understandable in the light of recent revelations, but to enforce heavy security on all data 'just in case' would be excessive and counterproductive. Thankfully, most companies seem to realise this, and there is a pragmatic mix of opinions as to how to best store and secure data held in the cloud.

This needs to be an informed decision, however, and just as with mobile, all organisations need to be taking a more forensic approach to their digital assets. IT needs to work hand in hand with the business to identify those assets and data that are most precious, assess the vulnerability and apply appropriate controls, differentiated from other things that are neither valuable nor private as far as the organisation is concerned. The days of blanket approaches to data security are over.

For more information and recent research into cloud and mobile security, download this free Quocirca report, "Neither here nor there".

BMC - turnaround or more of the same?

Clive Longbottom

A little over a year ago, BMC was not looking good.  It had a portfolio of good but tired technology and was failing to move with the times.  Internal problems at various levels in the company were leading to high levels of employee churn.

Led by CEO Bob Beauchamp, BMC was taken off the stock market and into private ownership. Investors were chosen for their long-term vision: what Beauchamp did not want was an approach of driving revenues and then cashing in rapidly.

This has freed up BMC to take a new marketing approach.  New hires have been brought in.  The portfolio is being rationalised.  The focus is now on the user experience, with an understanding that mobility, hybrid private/public cloud systems and the business user are all important links in the new sales process. Substantially more money has been freed up to be invested in sales & marketing and research & development than was the case in its last year as a public company.

BMC's first new offering aimed at showing an understanding of these issues was MyIT - an end-user self-service system that provides consumer-style front-end systems with enterprise-grade back-end capabilities.  MyIT has proved popular - and has galvanised BMC to take a similar approach across the rest of its product portfolio.

Help desk (or service desk, as BMC prefers to call it) has been a mainstay of BMC over the years.  Its enterprise Remedy offering is the tool of choice in the Global 2000, but it was looking increasingly old-style in its over-dependence on screens of text; it was far too process-bound; and help desk agents and end users alike were beginning to question its overall efficacy in the light of new SaaS-based competition such as ServiceNow.  At its recent BMC Engage event in Orlando, BMC launched Remedy with Smart IT, a far more modern approach to service desk operation. Enabling better reach at the front end through mobile devices and better integration at the back end through to hybrid cloud services, Remedy with Smart IT offers a far more intuitive and usable experience than was previously available from BMC, and is available as both an on-premise and a cloud-based offering.

BMC believes that it already has a strong counter-offer to ServiceNow in the mid-maturity market with its Remedyforce product (a service desk offering that runs on Salesforce's Salesforce1 cloud platform). The cloud-based version of Remedy with Smart IT, combined with MyIT will provide a much more complete offering with a better experience for users, service desk staff and IT alike across the total service desk market.

Workload automation is another major area for BMC.  Its Control-M suite of products has enabled automation of batch and other workloads from the mainframe through to distributed systems.  However, this has been a set of highly technical products requiring IT staff with technical and scripting skills.  Now, the aim is to enable greater usage by end users themselves, enabling business value to be more easily created.

All this is a journey for BMC - identifying and dealing with the needs of end users, and how automation can help, is something that changes with the underlying platform.  For example, a hybrid platform requires more intelligence to identify where a workload should reside at any time (for example, on private or public cloud), and the promise of cloud in breaking down monolithic applications - creating composite applications built dynamically from required functions - needs contextual knowledge of how the various functions can work together.
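The placement decision such automation must make can be sketched simply. The rules and thresholds below are invented for illustration, not BMC's actual logic.

```python
# Hypothetical sketch of the core decision a hybrid-cloud automation tool
# makes for each workload: sensitive data stays on private infrastructure,
# while burst demand beyond internal capacity goes to public cloud.
# The rules and thresholds are invented.

def place_workload(data_sensitivity: str, demand_factor: float,
                   private_capacity_free: float) -> str:
    """Return 'private' or 'public' for one workload."""
    if data_sensitivity == "high":
        return "private"              # regulated data never leaves
    if demand_factor > private_capacity_free:
        return "public"               # burst beyond internal capacity
    return "private"                  # default to owned infrastructure

print(place_workload("high", 2.0, 0.5))   # sensitive: stays private
print(place_workload("low", 2.0, 0.5))    # burst load: goes public
```

A real engine would weigh cost, latency, licensing and compliance constraints as well, and re-evaluate placements continuously rather than once.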

This needs deep integration with BMC's products in its performance and availability group.  Being able to identify where problems are and dig down rapidly to root cause and remediate issues requires systems that can work with the service desk systems and with workload automation to ensure that business continuity is well managed.  Here BMC's TrueSight Operations Management provides probable cause analysis based on advanced pattern matching and analytics, enabling far more proactive approaches to be taken to running an IT environment.

TrueSight also offers further value in that it is moving from being an IT tool to a business one.  By tying the analytics capabilities of TrueSight into business processes and issues, dashboards can be created that show the direct business impact in cash terms of any existing or future problems, enabling the business to prioritise which issues should be focused on.

BMC has to work to deal with managing IT platforms both vertically at the stack level and horizontally at the hybrid cloud level.  It has taken a little time for BMC to move effectively from being a physical IT management systems vendor to a hybrid physical/virtual one; now, via its Cloud and Data Centre Automation team, BMC is positioning itself to provide systems to both end-user and service provider organisations that are independent of any tie-in to hardware vendors, differentiating itself from the likes of IBM, HP and Dell (Dell is a long-term BMC partner anyway, although its acquisition of Quest and other management vendors has provided Dell with enough capability to go its own way should it so choose). At the same time, BMC still works closely with its data centre automation customers; it has recently published what it calls the Automation Passport, a best-practices methodology for using automation to transform the business value of IT.

BMC still has a strong mainframe capability, which differentiates it from many of the new SaaS-based players.  Sure, not all organisations have a mainframe, but the capability to manage the mainframe as a peer system within the overall IT platform means that those with one have only BMC, CA and IBM to look to for such an embracing management system.  IBM's strength is in its high-touch capability of putting together a system once it is on the customer's site.  BMC and CA have both been moving towards simpler messaging and portfolios, along with providing on-premise and cloud-based systems to give customers greater flexibility in how they deal with their IT platforms.

Overall, BMC seems to be turning itself around.  The lack of financially-driven quarterly targets has freed up Beauchamp and his team to take a far more strategic view of where the company needs to go.  Product sales volumes are up, and customer satisfaction is solid. However, BMC has to continue with a suitable speed along this new journey - and also has to ensure that it gets its message out there far more forcibly than it is doing at the moment.

Quocirca - security vendor to watch - Pwnie Express

Bob Tarzey

Branches are where the rubber still hits the road for many organisations; where retailers still do most of their selling, where much banking is still carried out and where health care is often dispensed. However, for IT managers, branches are outliers, where rogue activity is hard to curb; this means branches can become security and compliance black spots.


Branch employees may see fit to make their lives easier by informally adding to the local IT infrastructure, for example installing wireless access points purchased from the computer store next door. Whilst such activity could also happen at HQ, controls are likely to be more rigorous. What is needed is an ability to extend such controls to branches, monitoring network activity, scanning for security issues and detecting non-compliant activity before it has an impact.


A proposition from Boston, USA-based vendor Pwnie Express should improve branch network and security visibility. Founded in 2010, Pwnie Express has so far received $5.1 million in Series-A venture capital financing from Fairhaven Capital and the Vermont Seed Capital Fund. The name is a play on both the Pony Express, the 19th-century US mail system, and the Pwnie Awards, a competition run each year at the Black Hat conference to recognise the best discoverers of exploitable software bugs.


Pwnie Express's core offering is to monitor IT activity in branches through the installation of plug-and-play in-branch network sensor hardware. These enable branch-level vulnerability management, asset discovery and penetration testing. As such the sensors can also scan for wireless access points, which may have been installed by branch employees for convenience or even by a malicious outsider, and monitor the use of employee/visitor-owned personal devices.
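One check such a sensor might run can be sketched as follows. This is an illustration of the idea only, not Pwnie Express's implementation, and the BSSIDs and SSIDs are made up.

```python
# Illustrative sketch of a branch-sensor check: compare the wireless
# access points visible at the branch against an authorised list and
# flag the rest as potentially rogue. All identifiers are invented.

AUTHORISED = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}

def rogue_access_points(observed: dict) -> dict:
    """observed maps BSSID -> SSID; return entries not on the authorised list."""
    return {bssid: ssid for bssid, ssid in observed.items()
            if bssid not in AUTHORISED}

scan = {
    "aa:bb:cc:00:00:01": "branch-corp",   # the legitimate office AP
    "de:ad:be:ef:00:09": "FreeWifi",      # bought from the shop next door?
}
print(rogue_access_points(scan))
```

A real sensor would add signal strength, channel and first-seen timestamps, and feed the results back to a central console such as Pwn Pulse for triage.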


To date, Pwnie monitoring has been on a one-to-one basis and so hard to scale. That has changed with the release of a new software-as-a-service (SaaS) based management platform called Pwn Pulse. This increases the number of locations that can be covered from a single console, allowing HQ-based IT management teams to extend full security testing to branches. Pwn Pulse also improves back-end integration with other security management tools and security information and event management (SIEM) systems, improving an organisation's overall understanding of its IT security and compliance issues.


Currently 25 percent of Pwnie Express's sales are via an expanding European reseller network, mainly in the UK. With data protection laws only likely to tighten in Europe in the coming years, Pwnie Express should provide visibility into the remote locations other security tools simply cannot reach.

Do I Concur with the SAP deal?

Clive Longbottom

SAP's recent $8.3b deal to acquire on-line travel and expense management vendor Concur can be read a few ways.  The first, and most positive one, is that it shows that SAP is continuing to try and broaden its appeal, diversifying from being "the ERP company".

Another view is that SAP has had a few bites at the cloud cherry and mostly failed.  Concur brings a massive cloud infrastructure with it, and SAP can make use of this in other ways.

A third, less charitable view is just that SAP has a large amount of money that it needs to be seen to do something with - and Concur was around at the right time and place.

Which one is most likely?  I would plump for diversification, with a bit of cloud thrown in.  SAP acquired SaaS-based human capital management vendor, SuccessFactors, in 2012.  It can be argued that Concur fits quite nicely into this vein - both are SaaS; both deal with managing employees.  This takes SAP from being the ERP solution for a few to a provider of functions for everyone; becoming a far stickier and embedded supplier that is even harder for an organisation to extricate itself from.

However, such a simplistic view hides many problems that could now face SAP as it integrates Concur.  Travel and expense management is a complex area that only a few software vendors have managed to deal with.  It is not a simple replacement for employees using Excel spreadsheets to log their expenses - it requires deep domain expertise in areas such as multi-national tax laws, per diem rules, how travel management companies (TMCs) operate, how to interact with financial institutions on a broad scale to manage company and personal credit cards in a secure and effective manner, and so on. Concur understands this in spades - but what impact will SAP have on it?

Sure, SAP understands the first part of this: ERP has had to deal with multi-national currencies and tax laws for some time.  The rest, though, is new territory for SAP.

Not only are the basics of expense management a difficult area, but Concur has been pushing the boundaries of what it does.  In the US, for example, it has deals for integrated taxi cab expense management, where the employee uses their mobile phone to identify and hail a nearby cab electronically, then pays the driver via the phone, with the expense fed directly into the expense claim.  Other ongoing work has looked at automating a traveller's whole trip from booking through travel and stay, with capabilities such as near field communication (NFC) for checking into hotels without visiting the check-in desk, and mobile phones acting as electronic keys to unlock the hotel room door.  Such work requires a certain mindset and understanding of the travel and entertainment expense world - and the investment of large amounts of money.

Also, with Concur's 2011 acquisition of travel details management vendor TripIt, SAP finds itself with a more consumer-oriented product: taking it well out of its comfort zone.

It leaves SAP with a couple of choices.  The first is to leave Concur pretty much as a separate entity, keeping its existing staff and domain expertise to continue focusing on what Concur has been calling "the perfect trip" experience; the second is to absorb it into the broader SAP machine.  SAP can provide Concur with the deeper pockets to continue working towards the perfect trip - but is SAP up to understanding this and achieving any payback on such investment?

Customers now find themselves with the unfortunate impact of moving from dealing with a small but fleet-of-foot and interesting supplier to a rather staid and enterprise-focused behemoth.  I believe this will raise flags for many of them: those who have been dealing with Concur in the past (travel and expense management professionals) are unlikely to be the ones in a company who have been dealing with SAP, and many companies will have ruled out SAP for functions such as ERP and CRM, going instead for others such as Oracle or Microsoft. Dealing with SAP may then be seen as the thin end of the wedge, with rapacious SAP salespeople trying to usurp the incumbent ERP and CRM vendors.

As with most acquisitions, the SAP/Concur deal will raise worries in many existing customers' minds, and will open up opportunities for Concur's competitors.  As stated earlier, the market is not exactly flush with companies that understand travel and expense management well and have software that addresses all requirements.  For companies such as KDS and Infor, the SAP/Concur deal must be seen as an opportunity.

For Concur's existing customers, I would advise caution.  The two companies' views of the world are not the same - watch how SAP manages the acquisition; watch how many staff move on from Concur to join its competitors.  If it becomes apparent that SAP is trying to force Concur to fit into the SAP mould, maybe it will be time to look elsewhere.

Ricoh's plans for transformation

Louella Fernandes | No Comments
| More

Ricoh recently held its first industry analyst summit in Tokyo. The event centred on Ricoh's services-led business transformation through its 18th Mid-Term Plan.

Ricoh is in the midst of transformation, actively streamlining its company structure to accelerate growth across a number of markets. Like many traditional print hardware companies, it is shifting its focus to services, with a primary emphasis on what it calls "workstyle innovation". Over the past few years, Ricoh has repositioned itself as a services-led organisation, greatly enhancing its marketing communications and web presence to shift perception towards that of a company which can support a business's transformation in today's evolving, mobile workplace. Ricoh's target is to grow services revenue globally by 30% in three years. It plans to achieve this by enhancing its core business as well as expanding its presence in new markets.

Core business enhancement

Ricoh's core business revolves around office printing, where it has carved out a strong strategy around managed document services (MDS). This established approach has enabled enterprises to tackle the escalating costs associated with an unmanaged print infrastructure. Ricoh has extended this model to encompass all document-centric processes and is effectively increasing its presence in the market on a global basis. In Quocirca's recent review of the managed print services (MPS) landscape, Ricoh is positioned as a global market leader - testament to its global scale, unified service and delivery infrastructure, and effective approach to business process automation.

Service expansion

Ricoh's 18th Mid-Term Plan relates to five key business areas. Its primary business, the office market, encompasses both hardware technology and services such as MDS, business process services (BPS), IT services and visual communication.  Ricoh also operates in the consumer market (as seen in its new THETA 360 camera, a range of projectors and an electronic whiteboard product); the industrial market (optic devices, thermal media and inkjet heads); commercial printing (production printers); and new business, which includes additive manufacturing.  Ricoh plans a full-scale entry into commercial printing and intends to grow its industrial business by 50% over the next three years.

Ricoh announced eight new service lines:

  • Managed Document Services - leveraging Ricoh's five-step adaptive model to help organisations optimise document-centric processes.
  • Production Printing Services - a portfolio of integrated services to complement Ricoh's hardware and solution portfolio for in-house corporate printing, graphic arts and commercial printing.
  • Business Process Services - streamlining business processes such as human resources, finance and accounting, and front-office outsourcing services such as contact centre services.
  • Application Services - integration of applications, such as insurance claims processing services.
  • Sustainability Management Services - services to reduce environmental impact, such as electricity and paper use, for Ricoh and non-Ricoh devices.
  • Communication Services - development, deployment and integration of unified communication solutions, including communication/collaboration products such as video conferencing, interactive whiteboards, digital signage and virtual help desks.
  • Workplace Services - services to maximise the efficiency of the workplace and the effectiveness of the workforce, including optimised use of space, smart use of technology and automation of certain office functions.
  • IT Infrastructure Services - consulting, design, supply and implementation of IT infrastructure, as well as support and management of the full IT infrastructure through remote and on-site support.

Perhaps the greatest focus was given to Ricoh's IT services portfolio, which varies by region. Ricoh has made a number of IT services acquisitions across several regions and is seeing strong success in Asia Pacific, Europe and the US. In the US, the acquisition of MindSHIFT is enabling Ricoh to target small and medium-sized businesses. If Ricoh can articulate a strong proposition around IT services, this could be a key differentiator against its traditional competitors over the coming year. However, Ricoh is now operating in a wider IT services market, and its penetration may be limited to existing customers looking to extend their MDS engagements to the IT infrastructure.


Ricoh is working on a range of technologies around what it calls the infinite network (TIN), where all people and things will be connected all the time. This is Ricoh's view of the internet of things (IoT), and embraces its vision of the need to connect to a rapidly increasing set of sensors in the environment.

Ricoh R&D discussed a range of differentiated technology platforms which aim to address multiple markets, enabling the business units and operating companies to go to market with highly differentiated solutions for the office and for specific large verticals. This includes communication and collaboration, visual search and recognition, digital signage and hetero-integration photonics (optics and image processing).

Perhaps the most relevant to the print industry is its mobile visual search technology, which adds an interactive dimension to the printed page. A simple snap of an image can provide access to digital content such as text, video, purchase options and social networks.  Ricoh has commercialised this through its Clickable Paper product. Based on digital layers, it enables a consumer to hover a mobile phone over a magazine advert, for example, and launch a video or a link to a website. Ricoh demonstrated an example from Mazda, which is using the technology in its brochures.

This technology promises to breathe new life into print by connecting it to the digital world.  The market is rapidly evolving, and Ricoh is competing with a range of interactive print/augmented reality vendors in this space. The only other printer vendor to offer something similar is HP, with its Aurasma technology, which has been available for a number of years.

Quocirca opinion

Ricoh, like its traditional print competitors, needs to drive a dramatic shift to a services business model - its long-term relevance depends on this. Alongside its cohesive set of new service offerings, it already has a relatively mature set of business process services across areas such as e-invoicing, healthcare and loan applications.  Quocirca believes these should be a priority for Ricoh as it takes its services strategy forward.

Indeed, Ricoh has already made strong inroads with its MDS strategy. To drive deeper engagements with larger enterprises, it needs to further articulate a strong vision around business process automation - a space in which it faces strong competition from Lexmark and Xerox.

Ricoh illustrated that it is innovating across a number of markets, showing commitment to expanding its presence beyond its core. Overall, Ricoh is taking the right direction in changing perceptions of its brand and developing broader services capabilities.  It certainly has a broad array of services, but it is now competing in many new markets and should focus on building its credibility in a few core areas while partnering with best-of-breed providers in others.

Some of the less conventional products, such as Clickable Paper, need to be positioned carefully. Ricoh will need to ensure that it moves with improvements in the technology and with the increasing use of wearables - and to recognise when such ephemeral approaches have run their course, so that it can pull out of providing offerings in the space.
