Managing a PC estate


Although there is much talk of a move towards virtual desktops, served as images from a centralised point, for many organisations the idea does not appeal.  Whatever the reason (and there may be many, as a previous blog here points out), staying with PCs leaves the IT department with a headache - not least an estate of decentralised PCs that needs managing.

Such technical management tends to be the focus for IT; however, for the business there are a number of other issues that also need to be considered.  Each PC has its own set of applications.  The majority of these should have been purchased and installed through the business, but many may have been installed directly by the users themselves - something you may want to avoid, but which is nowadays an expectation of many IT users.

This can lead to problems: some applications may not be licensed properly (for example, a student licence not permitted for use in a commercial environment); they may contain embedded malware (a recent survey has shown that much pirated software carries harmful payloads, including keyloggers); and the presence of unlicensed software opens an organisation up to considerable fines should a software audit be carried out by an external body.

Locking down desktops is increasingly difficult. Employees have become used to self-service through using their own devices, and expect the same within a corporate environment.  Centralised control of desktops is still required - even if virtual desktops are not going to be the solution of choice.

The first action your organisation should take is a full audit.  You need to understand how many PCs are out there, what software is installed and whether that software is being used or not.  You also need to know how many software licences you have in place and how those can be utilised - for example, are they concurrent licences (a fixed number of people can use them at the same time) or named seat licences (only people with specific identities can use them)?
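
To make that distinction concrete, here is a minimal Python sketch of the two licence models; the class names and fields are invented for illustration rather than taken from any particular asset-management tool.

class ConcurrentLicence:
    """A fixed number of people may use the software at the same time."""
    def __init__(self, product, seats):
        self.product = product
        self.seats = seats
        self.in_use = set()

    def check_out(self, user):
        if len(self.in_use) >= self.seats:
            raise RuntimeError("no free %s seats right now" % self.product)
        self.in_use.add(user)

    def check_in(self, user):
        self.in_use.discard(user)


class NamedSeatLicence:
    """Only people with specific, named identities may use the software."""
    def __init__(self, product, named_users):
        self.product = product
        self.named_users = set(named_users)

    def check_out(self, user):
        if user not in self.named_users:
            raise RuntimeError("%s is not a named user of %s" % (user, self.product))


office = ConcurrentLicence("OfficeSuite", seats=100)     # 100 people at once, whoever they are
cad = NamedSeatLicence("CADTool", ["jsmith", "apatel"])  # only these two identities
office.check_out("jsmith")
cad.check_out("jsmith")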

This will help to identify software that your organisation was not aware of, and can also help in identifying unused software sitting idle on PCs.

You can then look at creating an image that contains a copy of all the software that is being used by people to run the business.  Obviously, you do not want every user within your organisation to have access to every application, so something is needed to ensure that each person can be tied in by role or name to a list of software to which they should have access.

Through the installation of an agent on each PC, it should then be possible to apply centralised control over what is happening.  That single golden image containing all allowable applications can then be called upon by that agent as required.  The user gets to see all the applications that they are allowed to access (by role and/or individual policy), and a virtual registry can be created for their desktop.  Should anything happen to that desktop (machine failure, disk corruption, whatever), a new environment can be rapidly built against a new machine.
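
As a rough illustration of how such an agent might resolve what a user is allowed to see from a single golden image, consider the Python sketch below; the role names, catalogue and policy structures are hypothetical and not those of any specific product.

# Hypothetical sketch: work out which applications in the golden image a user
# should see, based on role policy plus any individual entitlements.

GOLDEN_IMAGE_CATALOGUE = {"office", "erp_client", "cad", "accounts"}

ROLE_POLICY = {
    "knowledge_worker": {"office"},
    "engineer": {"office", "cad"},
    "finance": {"office", "erp_client", "accounts"},
}

INDIVIDUAL_POLICY = {
    "jsmith": {"cad"},   # an extra entitlement granted to one named user
}

def visible_applications(user, roles):
    allowed = set()
    for role in roles:
        allowed |= ROLE_POLICY.get(role, set())
    allowed |= INDIVIDUAL_POLICY.get(user, set())
    # only applications actually present in the golden image can be offered
    return allowed & GOLDEN_IMAGE_CATALOGUE

print(visible_applications("jsmith", ["knowledge_worker"]))
# -> {'office', 'cad'}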

If needed, virtualisation can be used to hive off a portion of the machine for such a corporate desktop - the user can then install any applications that they want to within the rest of the device.  Rules can be applied to prevent data crossing the divide between the two areas, keeping a split between the consumer and corporate aspects of the device - a great way of enabling laptop-based bring your own device (BYOD).

As with most IT, the "death" of any technology will be widely reported and overdone: VDI does not replace desktop computing for many.  However, centralised control should still be considered - it can make management of an IT estate - and the information across that estate - a lot easier.

This blog first appeared on the FSLogix site at http://blog.fslogix.com/managing-a-pc-estate


Web security 3.0 - is your business ready?

Bob Tarzey

As the web has evolved, so have the security products and services that control our use of it. In the early days of the "static web" it was enough to tell us which URLs to avoid because the content was undesirable (porn etc.). As the web became a means of distributing malware and perpetrating fraud, there was a need to identify bad URLs that appeared overnight, or good URLs that had gone bad as existing sites were compromised. Early innovators in this area included Websense (now a sizeable broad-based security vendor) and two British companies: SurfControl (which ended up as part of Websense) and ScanSafe (which was acquired by Cisco).

Web 2.0

These URL filtering products are still widely used to control user behaviour (for example, you can only use Facebook at lunch time) as well as to block dangerous and unsavoury sites. They rely on up-to-date intelligence about all the URLs out there and their status. Most of the big security vendors have capability in this area now. However, as the web became more interactive (for a while we all called this Web 2.0), there was a growing need to monitor the sort of applications being accessed via the network ports typically used for web access: port 80 (for HTTP) and port 443 (for HTTPS). Again, this was about controlling user behaviour and blocking malicious code and activity.

To achieve this, firewalls had to change; enter the next generation firewall. The early leader in this space was Palo Alto Networks. The main difference with its firewall was that it was application aware, with a granularity that could work within a specific web site (for example, distinguishing the applications running on Facebook). Just as with the URL filtering vendors, next generation firewalls rely on application intelligence - the ability to recognise a given application by its network activity and allow or block it according to user type, policy etc. Palo Alto Networks built up its own application intelligence, but there were other sources, such as that of FaceTime (a vendor that found itself in a name dispute with Apple), which was acquired by Check Point as it upgraded its firewalls. Other vendors, including Cisco's Sourcefire, Fortinet and Dell's SonicWALL, have followed suit.
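
To make the idea of application-aware policy concrete, here is a simplified Python sketch of how such rules might be evaluated; it illustrates the principle only and is not how Palo Alto Networks or any other vendor actually implements it.

# Illustrative only: evaluate application-aware rules of the form
# (application, user group) -> action. Real next generation firewalls
# identify the application from traffic signatures, not from a label.

RULES = [
    {"app": "facebook-chat", "group": "staff",     "action": "block"},
    {"app": "facebook",      "group": "marketing", "action": "allow"},
    {"app": "facebook",      "group": "staff",     "action": "block"},
    {"app": "*",             "group": "*",         "action": "allow"},
]

def decide(app, group):
    for rule in RULES:
        if rule["app"] in (app, "*") and rule["group"] in (group, "*"):
            return rule["action"]
    return "block"  # default deny if no rule matches

print(decide("facebook", "marketing"))   # allow
print(decide("facebook-chat", "staff"))  # block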

 

The rise of shadow IT

So, with URLs and web applications under control, is the web a safer place? Well yes, but the job is never done. A whole new problem has emerged in recent years with the increasing ability for users to upload content to the web. The problem has become acute as users increasingly provision cloud services over the web for themselves (so-called shadow IT). How do you know which services are OK to use? How do you even know which ones are in use? Again, this is down to intelligence gathering, a task embarked on by Skyhigh Networks in 2012.

Skyhigh defines a cloud service as anything that has the potential to "exfiltrate data"; this would include Dropbox and Facebook, but not the web sites of organisations such as CNN and the BBC. Skyhigh provides protection for businesses, blocking their users from accessing certain cloud services based on its own classification (good, medium, bad) and providing a "Cloud Trust" mark (similar to what Symantec's VeriSign does for websites in general). As with URL filtering and next generation firewalls, this is just information; rules about usage still need to be applied. Indeed, Skyhigh can provide scripts to be applied to firewalls to enforce rules around the use of cloud services.
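
As a very rough sketch of the principle, a classification of cloud services could be turned into blocking rules along the lines below; the hosts, rule format and code are entirely hypothetical and are not Skyhigh's actual script output.

# Hypothetical: turn a good/medium/bad cloud service classification into a
# simple deny list that could be pushed to a web proxy or firewall.

CLASSIFICATION = {
    "trusted-storage.example.com": "good",
    "file-share.example.net": "medium",
    "dodgy-uploads.example.org": "bad",
}

def deny_list(classification, block_levels=("bad",)):
    return sorted(host for host, level in classification.items()
                  if level in block_levels)

for host in deny_list(CLASSIFICATION, block_levels=("bad", "medium")):
    print("deny https://%s/*" % host)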

 

However, Skyhigh cites other interesting use cases. Many cloud services are of increasing importance to businesses; LinkedIn is used to manage sales contacts, while Dropbox, Box and many other sites are used to keep backups of documents created by users on the move. Skyhigh gives businesses insight into their use, enables them to impose standards and, where subscriptions are involved, allows usage to be aggregated into single discounted contracts rather than being paid for via expenses (often a cost control problem with shadow IT). It also provides an enterprise risk score for a given business based on its overall use of cloud services.

Beyond this, Skyhigh can assert controls over those users working beyond the corporate firewall, often on their own devices. For certain cloud services for which access is provided by the business (think salesforce.com, ServiceNow, SuccessFactors etc.), and without the need for an agent, usage is forced back via Skyhigh's reverse proxy so that it can be monitored and controls enforced. Skyhigh can also recognise anomalous behaviour with regard to cloud services and thus provide an additional layer of security against malware and malicious activity.

Skyhigh is the first to point out that it is not an alternative to web filtering and next generation firewalls but complementary to them. Skyhigh, which mostly provides its service on-demand, is already starting to co-operate with existing vendors to enhance their own products and services through partnerships. So your organisation may be able to benefit from its capabilities via an incremental upgrade from an existing supplier rather than a whole new engagement. So, that is web security 3.0; the trick is to work out what's next - roll on Web 4.0!


Two areas where businesses can learn from IT

Bob Tarzey

Many IT industry commentators (not least Quocirca) constantly hassle IT managers to align their activities more closely with those of the businesses they serve; to make sure actual requirements are being met. However, that does not mean that lines of business can stand aloof from IT and learn nothing from the way their IT departments manage their own increasingly complex activities. Two recent examples Quocirca has come across demonstrate this.

 

Everyone needs version control

First, take the tricky problem of software code version control. Many outside of IT will be familiar with the problem, at least at a high level, through the writing and review of documents. For many this is a manual process carried out at the document name level: V1, V1.1, V1.1A, V2.01x etc. Content management (CM) systems, such as EMC's Documentum and Microsoft's SharePoint, can improve things a lot, automating versioning, providing check-in and check-out etc. (but they can be expensive to implement across the business).

With software development the problem is a whole lot worse: the granularity of control needs to be down to individual lines of code, and there are multiple types of entity involved - the code itself, usually multiple files linked together by build scripts (another document); the binary files that are actually deployed in test and then live environments; documentation (user guides etc.); third party/open source code that is included; and so on. As a result, the version control systems that have been developed over the years to support software development - from vendors such as Serena and IBM Rational, as well as a number of open source systems - are very sophisticated.

In fairly technical companies, where software development is a core activity, the capability of these systems is so useful that it has spread well beyond the software developers themselves. Perforce Software, another well-known name in software version control, estimates that 68% of its customers are storing non-software assets in its version control system. Its customers include some impressive names with lots of users, for example salesforce.com, NYSE, Netflix and Samsung.

To capitalise on this increasing tendency of its customers to store non-IT assets, Perforce has re-badged its system as Perforce Commons and made it available as an online service as well as for on-premise deployment. All the functionality developed can be used for the management of a whole range of other business assets. With the latest release this now includes merging Microsoft PowerPoint and Word documents and checking for differences between various versions of the same document. Commons also keeps a full audit trail of document changes, which is important for compliance in many document-based workflows.

Turning up the Heat in ITSM

The second area where Quocirca has seen IT management tools being used beyond IT itself is IT service management (ITSM). FrontRange's Heat tool is traditionally used for handling support incidents raised by users against IT assets (PCs, smartphones, software tools etc.). However, its use is increasingly being extended beyond IT to other departments, for example to manage incidents relating to customer service calls, human resources (HR) issues, facilities management (FM) and finance department requests. Heat is available as an on-demand service as well as an on-premise tool; in many cases deployments are a hybrid of the two.

Of course, there are specialist tools for CM, HR, FM and so on, specially designed for the job with loads of functionality. However, with budgets and resources stretched, IT departments that already use tools such as Perforce version management and Heat ITSM can quickly add value to whole new areas of the business at little extra cost. Others that are not yet customers may be able to kill several birds with one stone as they seek to show the business that IT can deliver value beyond its own interests with little incremental cost.


4.7 Million NTP Servers Ready To Boost DRDoS Attack Volumes

Bernt Ostergaard

The US Internet security organisation CERT has published a warning about increasing DRDoS (Distributed Reflection and amplification DDoS) attacks using Internet Service Providers' NTP (Network Time Protocol) servers (http://www.kb.cert.org/vuls/id/348126). According to its analysis, NTP is the second most widely used vehicle for DDoS attacks (after DNS). In plain language that means that, if I want to take a victim web site down, I can send a spoofed message to a vulnerable ISP NTP server and get it to send a response that is several thousand times longer to my intended victim. That is amplification in action.

A request could look like this:

ntpq -c rv [ip]

The payload is 12 bytes, which is the smallest payload that will elicit a mode 6 response. The response from a delinquent ISP could be this:

associd=0 status=06f4 leap_none, sync_ntp, 15 events, freq_mode, version="ntpd 4.2.2p1@1.1570-o Tue Dec  3 11:32:13 UTC 2013 (1)", processor="x86_64", system="Linux/2.6.18-371.6.1.el5", leap=00,stratum=2, precision=-20, rootdelay=0.211, rootdispersion=133.057, peer=39747, refid=xxx.xxx.xxx.xxx, reftime=d6e093df.b073026f  Fri, Mar 28 2014 19:35:43.689, poll=10, clock=d6e09aff.0bc37dfd  Fri, Mar 28 2014 20:06:07.045, state=4, offset=-17.031, frequency=-0.571, jitter=5.223, noise=19.409, stability=0.013, tai=0

 

That is only a 34 times amplification. Crafting the request differently could boost attack volumes by up to 5,500 times, according to CERT. The response shows that this ISP last updated its NTP software in December 2013 to version 4.2.2p1. The CERT recommendation is that NTP servers should be running at least version 4.2.7p26.
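
The amplification factor is simply the ratio of the response size to the request size; the short Python sketch below reproduces the arithmetic using the figures quoted above (the response byte count is approximate).

# Amplification factor = bytes sent to the victim / bytes sent by the attacker.
# Figures are taken from the example above and the CERT advisory.

request_bytes = 12     # smallest payload that elicits a mode 6 response
response_bytes = 408   # approximate size of the readvar response shown above

print(round(response_bytes / request_bytes))  # ~34

# CERT quotes up to 5,500 times amplification for other query types:
print(request_bytes * 5500)                   # 66000 bytes back for one 12-byte spoofed request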

So, how widespread is the problem? The Shadowserver Foundation performs ongoing monitoring of the problem - and at present it has discovered 4.7 million vulnerable NTP servers across the globe (https://ntpscan.shadowserver.org). So this ISP is part of a very large delinquent group. Shadowserver's monitoring also clearly shows that the problem is most severe in the US, followed by Europe, Japan, South Korea, Russia and China, in that order.


The global map from Shadowserver.org shows the distribution of vulnerable NTP servers - yellow shows the highest density.

Internet safety is clearly a shared responsibility involving all user groups, which means that we as users need to keep our service providers on their toes - and Shadowserver enshrines this principle. It is a volunteer group of professional Internet security workers that gathers, tracks and reports on malware, botnet activity and electronic fraud. It aims to improve the security of the Internet by raising awareness of the presence of compromised servers, malicious attackers and the spread of malware. In this respect, I would like to 'amplify' their message.


The Top 3 Barriers to VDI


The use of server-based desktops, often referred to as a virtual desktop infrastructure (VDI), makes increasing sense for many organisations.  Enabling greater control over how a desktop system is put together; centralising management and control of the desktops as well as the data created by the systems; helping organisations to embrace bring your own device (BYOD) and enhancing security are just some of the reasons why more organisations are moving toward the adoption of VDI.

However, in Quocirca's view, there remain some major issues in VDI adoption.  Our "Top 3" are detailed here:

  • Management.  Imagine that you are tasked with managing 1,000 desktops.  Your OS vendor pushes out a new security patch.  You have to patch 1,000 desktops.  With VDI, at least you do not have to physically visit 1,000 desks, right?  Maybe so - but it is still an issue: with application updates coming thick and fast, the possibility that a single patch could cause problems with some proportion of the VDI estate puts many IT departments off such updates for fear of breaking things, leading to sub-optimised desktops and possible security issues.
  • Licensing.  The promise of better control when the desktops are all in one place in the datacentre can soon become less believable.  Unless solid controls and capable management tools are in place, the number of orphan (unused but live) images can rapidly get out of control.  Desktops belonging to people who have left the company do not get deleted; test images get spun up and forgotten about; copies of images get made and forgotten about.  Each of these images - as well as using up valuable resource - needs to be licensed.  Each requires an operating system licence along with all the application licences that are live within that image, even though it is not being used. Many organisations go for a costly site licence to avoid this issue rather than attempting to deal with it.
  • Storage costs.  The move from local OS and application storage on the desktop PC to the data centre can be expensive.  Old-style enterprise storage, such as a SAN or dedicated high-performance storage arrays, has high capital and maintenance costs. A more optimised use of, for example, newer virtualised direct attached storage, virtual storage area networks or software-defined storage (SDS) approaches, using underlying cheaper storage and compute arrays from vendors such as Coraid or Nutanix, can provide the desired performance while keeping costs under control.

So, does this mean that VDI is more trouble than it is worth?  Not if it is approached in the right way.  The use of "golden images", where as few images as possible are held, may hold the key.

Many VDI vendors that push this approach will start off with maybe four or five main golden images - one for task workers, one for knowledge workers, one for special cases and one for executives, say - but will then still face the problem of having these spin up and stay live on a per-user basis.  Managing the images still requires either patching all live images, or patching the golden images and forcing everyone to refresh their desktops by powering them down and back up again - not much easier than with physical desktops.  Dealing with leavers still needs physical processes to be in place, otherwise licensing again becomes an issue.

A better approach is to use a single golden image that can be used to automatically provision desktops on a per-user basis and to manage how software is made available on an ongoing basis.  This requires an all-embracing golden image: it needs a copy of every application that will be used within the organisation present in the image - and it needs a special means of dealing with how these applications are provisioned, or not as the case may be, to manage the licensing of each desktop. By virtualising the desktop registry and linking this through to role and individual policies in Active Directory, this can be done: as each user utilises their own desktop, it can be dynamically managed through a knowledge of what the virtual registry holds and what rights they have as a user.

The data held in the virtual registry also enables closer monitoring and auditing of usage: by looking at usage statistics, orphan images can be rapidly identified and closed down as required.  Unused licences can be harvested and put back into the pool for others to use - or can be dropped completely and used to lower licensing costs with the vendor.
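
As a crude illustration of how such usage data might be mined, consider the Python sketch below; the record layout and idle threshold are invented for the example rather than taken from any particular VDI product.

from datetime import date, timedelta

# Illustrative only: flag desktop images that look orphaned, based on last-use data.
DESKTOPS = [
    {"image": "vdi-0142",  "owner": "jsmith",  "last_used": date(2014, 3, 30)},
    {"image": "vdi-0077",  "owner": "leaver1", "last_used": date(2013, 11, 2)},
    {"image": "vdi-test9", "owner": "it-test", "last_used": date(2013, 12, 15)},
]

def orphan_candidates(desktops, today, idle_days=90):
    """Return desktops not used for more than idle_days - candidates for reclaim."""
    cutoff = today - timedelta(days=idle_days)
    return [d for d in desktops if d["last_used"] < cutoff]

for d in orphan_candidates(DESKTOPS, today=date(2014, 4, 7)):
    print("%s (%s) looks idle - candidate for shutdown and licence harvesting"
          % (d["image"], d["owner"]))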

VDI is not the answer for everyone, but it is an answer that meets many organisations' needs in a less structured world.  If you have been put off VDI in the past for any of the reasons discussed above, then maybe it is time to reconsider.  Today's offerings in the VDI market are considerably different to what was there only a couple of years back.

This item was first posted on the FSLogix blog at http://blog.fslogix.com/ 

6 Tips for Managed Print Services Success

Louella Fernandes

Many enterprises are turning to a managed print service (MPS) to minimise the headache of managing an often complex and costly print environment. Through device consolidation, optimisation of hardware and software and continuous monitoring, MPS is helping to reduce costs - both financial and environmental - lower the IT print management burden and improve user productivity.

MPS is extending beyond the enterprise print environment to address requirements of mobile and remote workers, as well as encompass IT infrastructure and business process automation needs. Whilst some enterprises may be at the early stages of their MPS journey, many are now entering their second or third generation MPS contracts.

Although cost control remains a top priority, enterprises are also looking to drive wider productivity and business process improvements.  Consequently enterprises are looking for next generation MPS providers to become true innovation partners with industry-specific business insight and services that will deliver new cost savings.

Below are some recommendations on how to maximise the benefit from MPS, and ensure it can help drive greater business value and sustained long term performance.

1. Think big, start small

MPS engagements vary widely in scope depending on business needs. New opportunities exist not only to extend the scope of MPS engagements to encompass all aspects of enterprise printing (office, mobile, production and commercial), but also to improve performance by outsourcing higher-value services such as IT operations and business processes. Consider how well the scope of services matches your business needs. Can you start with a limited engagement and add services as business requirements evolve and/or your relationship with your MPS provider beds in?

2. Conduct a full evaluation of the print infrastructure

A detailed assessment is the foundation of an effective MPS engagement and should take a holistic view of all print-related processes. Things to check include: is an established methodology used? What scalability is offered in terms of the depth and cost of assessments? At a minimum, this should include a full document assessment that analyses print usage across the enterprise. Additional assessment services to consider include environmental and document security. Some vendors also offer document workflow assessment services, which identify potential for business process improvements. A comprehensive assessment will ensure the greatest opportunities for cost savings and productivity improvements over the long term of a contract.

 

3. Evaluate the flexibility to add new services

As a business continually adapts to the marketplace, MPS agreements should be adaptable as well - in terms of the commercial offering, contract arrangements, staffing, delivery location and so on. When agreeing on the service offering, negotiate for the flexibility to incorporate new capabilities. For example, next generation MPS may look to take advantage of evolving technologies such as cloud, mobility, business intelligence and ITIL-based process methodologies to ensure that business objectives continue to be met throughout the duration of the contract.

 

4. Leverage MFP sophistication

Multifunction peripherals (MFPs) are often underutilised in the office environment, yet have powerful document workflow capabilities that can be integrated with key business processes such as HR, legal, and finance and accounting. Leading MPS providers allow seamless integration of MFPs, either via the cloud or on-premise, with vertical applications, optimising paper workflow and improving productivity.

 

5. Ensure mature service level quality

SLAs are critical to the success of any MPS engagement. SLAs have to be flexible, and the MPS provider must use analytics to be able to advise on past performance and future requirements - and to offer a range of different approaches based on the customer's own risk profile, balancing risk, cost and perceptions of added business value. Are service levels matched to your business needs (hours of service, problem resolution times, end-user productivity)? How does the provider handle service events in a multivendor environment? Is a pre-emptive service used to reduce response times and solve device problems? Is onsite or off-site support available?

6. Continuous improvement

Monitoring and ongoing management are critical to ensure that the MPS adapts to changing business needs. This requires governance throughout the contract, which should place high emphasis on service analytics, reporting and communication. A governance programme allows the parties to evaluate, address and resolve service issues as and when they arise.


Read Quocirca's report on The Next Frontier for MPS


TV, phone, tablet, watch - are we smart yet?

Rob Bamforth

At one time things were pretty straightforward; entertainment centred on a TV, computing around a PC, communication around a phone. Now, convergence has blurred so many lines; it should be no surprise that everything looks like a cloud.


The biggest changes have been in IT. Not long ago, the desktop computer was at the centre of IT. Everything else was a peripheral - not only the obvious things like printers and scanners, but even mobile devices: handheld PDAs such as the PalmPilot were described as 'companions'.


As the number of computing devices and their form factors has multiplied, every so often it is suggested there might be a single unifying device that does everything. This has a lot of appeal, just like owning a Swiss Army knife, but equally, how many Swiss Army knife owners carry them about their person every day?


The all-embracing single device that does everything never quite seems to arrive, as it is too much of a compromise. So most people end up carrying a subset of a collection of devices or personal device ecosystem - laptop, camera, phone, tablet etc. - the mix of which is determined by needs of the day and the constraints or capabilities of wardrobe and associated 'baggage'. This is important as although many people like to think they are now very technology aware, most are not only fashion or fad conscious, but also hate lugging around too much stuff.


Paring down to the essentials indicates what is most important - often a single device is the one that no one can do without and the others are regarded as peripheral.

 

So where now is the centre of techno-attention?  It is rarely the desktop, and for many it is the mobile phone, combined with a drift away from the previous favourite desktop replacement on the move, the laptop, towards the tablet.


This is not without its challenges: for many the typewriter keyboard is a hard habit to kick, but smaller and lighter means more portable, and touch screens have created for many a more natural way of interacting. The keyboard, though, is still apparently vital for 'real content creation', despite being generally large and clunky - yet how many will admit to having learned to use one properly by completing a touch typing course, rather than relying on the two-fingered jab?


Once the need for a keyboard has been ignored, screen size becomes the next physical factor to align around. But what is the right size for a screen? Apple laboured long and hard and Steve Jobs had been adamant that there was a 'Goldilocks' screen size (just right). Despite this thought, the industry seems to disagree and screen sizes for all sorts of devices have expanded into a huge multiplicity of options - most of them too big to hold next to an ear.


So, is the tablet then the new centre of attention?


Possibly for some, and given how casually they are used in formal as well as informal and relaxed circumstances, the tablet does now seem to have a very high degree of importance - but the centre? No. The majority of tablets still have Wi-Fi-only connectivity, as they are often used flexibly but within a 'place', and are then paired with another device for connectivity on the move - a mobile phone - making that still the prime device.


Two other areas have started to become much smarter, and their effect on the centre of gravity of attention is intriguing, but not decisive. The more established is smart TVs, which are starting to deliver on some of the promises of WebTV and the convergence of PCs and TVs much trialled and hyped in the early days of the internet in the 1990s.


However, these devices have fundamental flaws. Sometimes it is poor execution of software by companies who are (let's face it) unused to the rapid rate of revision of software. Somehow, upgrading a TV just to get the latest version of some player app, which isn't supported on the current box, isn't going to be a priority for most people.


The other flaw is in the usage model, where it is no longer clear that the TV is the centre of digital attention even when people are sat around possibly watching it. The behaviours known as 'meshing' and 'stacking' describe what happens. Some will 'mesh', in that the other devices in their hands - mobile phone, tablet etc. - work in conjunction with the broadcast content, e.g. voting on reality TV programmes or tweeting along to political knockabouts. However, according to Ofcom, more will 'stack' their digital activities, in that they are doing other things with those devices that are unrelated to the broadcast TV content. The TV looks destined to be a peripheral, smart or not.


The next topic of growing interest is wearable technology and a particular area of wearable real estate, the wrist. Several companies are throwing a lot of effort into this space for essentially a companion device to a mobile phone, but is this misplaced?


It depends on the function of the wrist device, which seems to fall into two camps: one is the smart watch - a remote control or ancillary screen for the mobile phone; the other is more of a data capture device, often for health and fitness. These devices tend to be wristbands rather than watches, although some have rudimentary displays. Their purpose is to gather data and feed it, typically via a mobile phone or when docked via a fixed device, to the cloud. This might mean they are not 'peripherals', but neither do they offer fully functional mobile communications for the wearer.


In the other case, the smart watch offers a lot of the functionality previously delivered by the Bluetooth headset, while shedding that now less desirable appearance. To this it adds the 'Dick Tracy' geek appeal of a digital watch, but it is essentially still a subservient peripheral to the prime communications device, the mobile phone.


These devices beg the question 'why would I use one if it is easy to get out the phone that I'm already carrying?' Here might lie the answer for certain device ecosystems where the phones are becoming so large they are usurping tablets - the 'phablet' fanciers - and so remain stowed away with urgent functionality accessed from the wrist. This is the model that is being tried by several of the Android device makers such as Samsung, but not yet by Apple.


Perhaps a better approach might be to shrink the cellular 'phone' to a wristband and carry a small or large display that can connect to it depending on circumstances, or simply wear the display as a head-up image on glasses? Google's first iteration of Glass might be geeky and pose some sartorial challenges today, but once competition and the fashion industry take over, who knows?


Planning your next data centre


Virtualisation, increased equipment densities, cloud computing, all are conspiring to make life harder for any IT manager looking at where to go when it comes to the next data centre facility.  After a prolonged period of economic doldrums, many data centres are beginning to show their age.  Cooling systems are struggling; uninterruptable power supplies (UPSs) can only support a proportion of the IT equipment should the power fail; disaster recovery plans are no longer fit for purpose.  For many organisations it is time for an urgent review - something has to be done; but what?

Will it be possible to just move things around a bit in an existing facility?  Doubtful - power distribution and cooling systems will need to be completely changed to meet the requirements of today's highly dense equipment.  How about a new build?  OK - but should this be planned for ongoing expansion, or should it be for further contraction as some functionality moves from existing infrastructure out to the public cloud?

A direction that more companies are taking is a move to a co-location facility.  These facilities are built and managed by an external company, but the IT equipment and how it runs remains your responsibility.  Space can generally be rented flexibly: it can grow or shrink as needed.  The facility owner has the responsibility for keeping all the peripheral systems up to date: power distribution; uninterruptable power supplies and auxiliary generation; cooling; and connectivity provision to and from the data centre.

However, like most things out there, choosing a co-location provider is not just a case of going to a web listing and choosing the first name that sticks out.  There are a lot of cowboy operators out there along with ones who are offering good deals based on best efforts results.  Can your business afford to depend on such cowboys or promises?

Quocirca has seen some organisations go for co-location as a purely cost-saving exercise.  Like pretty much any activity where the aim is just cost-saving, it can end up costing an organisation heavily when things do not turn out as hoped.  However, Quocirca does find that when a choice is made for the right reasons - for example, that the chosen direction is something that the organisation could not do directly itself, or that the chosen supplier has expertise that would be difficult or even impossible for the organisation to source and maintain itself - then the end result is generally very cost effective.

The use of co-location should be a strategically-planned activity.  Due diligence is a necessity - yet for those where it is the first time of looking at such an approach, it is difficult to know what questions need to be asked and what responses should be required.

To help those who are interested in looking at co-location, Quocirca, in conjunction with Datum Datacentres Ltd, has developed a set of questions, along with the reasons why they are important.  The paper is downloadable free of charge here: http://www.datum.co.uk/data-centre-co-location-wp/


Masters of Machines: turning machine data into operational intelligence

Bob Tarzey

700 million, that's a sizeable number; 2 billion is bigger still. The first is an estimate of the number of items of machine data generated by the commercial transactions undertaken in the average European enterprise during a year. The second figure is the equivalent for a telco. IT-generated machine data is all the background information generated by the systems that drive such transactions: database logs, network flow data, web click-stream data and so on.

Such data, enriched with more data from other sources, is a potential gold mine. How to use all this data effectively, turning it into operational intelligence, is the subject of a new Quocirca research report, Masters of Machines. The report shows the extent to which European organisations are turning machine data into operational intelligence.

The numbers involved are big, so processing machine data involves volume. In fact, it fits the 5 Vs definition of big data well: v for volume, as described above; v for variety, given the range of sources with their wide variety of formats; v for velocity, if machine data can be used in near real time; v for value, which it can then add to operational decision making; and v for veracity - machine data is what it is, and you cannot hide from the facts that mining it can expose. All of which gets an organisation closer to the truth about what is happening behind the scenes on its IT systems.

Typically, operational intelligence has been used by IT departments to search and investigate what is going on in their IT systems; over 80% already use it in this way. More advanced organisations use the data for proactive monitoring of their IT systems, some providing levels of operational visibility that were not possible before. The most advanced are providing real-time business insights derived from machine data.

To provide commercial insight, the most advanced users of operational intelligence are making it available beyond IT management. 85% of businesses provide a view to IT managers, whereas only 62% currently get a view through to board level execs. In both cases, many recognise a need to improve the view provided. 91% of the most advanced users of operational intelligence are providing a board level view compared to just 3% of the least advanced.

Although there is broad agreement around the value of operational intelligence and the need to open it up to a wide range of management, most are relying on tools that are not designed for the job. These include traditional business intelligence tools and spreadsheets; the latter were certainly not designed to process billions of items of data from a multitude of sources. 27% say they are using purpose built tools for processing machine data to provide operational intelligence. The organisations using such tools gather more data in the first place and will find it easier to share it in a meaningful manner across their organisation.

Quocirca's research was commissioned by Splunk, which provides a software platform for real-time operational intelligence. Quocirca and Splunk will be discussing the report and its findings at a webinar on April 3rd 2014. Find out more and register HERE.

 

The mobile printing challenge

Louella Fernandes

Mobile devices are transforming business productivity. For many, the workplace is no longer defined by the traditional office; employees are now accessing corporate applications, data and services from multiple devices and locations every day. With a highly mobile workforce, organisations need to ensure employees have the same access to corporate applications as they would from the desktop, while protecting sensitive data. One area in need of better control, which has yet to catch up with the desktop experience, is printing.

Most businesses are reliant on printing to a certain extent, and although print volumes are flat to declining, there is still a need to provide easy access to printing for mobile workers. This could be simply being able to send a print job wirelessly to an office printer from a smartphone, sending a print job in advance to an office printer while on the road or allowing guest visitors to print securely from their mobile devices.

Whilst the explosion in the variety of smartphones and tablets used in the workplace is boosting productivity, enabling mobile printing across multiple platforms and printers can prove a real IT headache. Mobility has shifted control of IT from the IT department to the users. In the past the IT department (IT) would usually have complete control of the print infrastructure, managing the installation and deployment of printer drivers. Now, users may be installing their own printer apps, without IT's knowledge and often expecting support for mobile printing from previously unsupported devices. Consequently IT is grasping for any available options that ensure mobile printing is controlled, reliable and secure.

Essentially, there are several ways to print directly from a smartphone or tablet device:

  • Integrated mobile OS support. This native printing capability most closely matches the "File > Print" Windows desktop user experience. Apple's AirPrint, for instance, is built into the OS, making it easy to print to a supported printer or MFP.  Although AirPrint is a good tool for local network printing, Bonjour, Apple's automatic printer discovery technology, is normally confined to a single subnet, so does not discover printers across broader networks. Various products, including Breezy, PrinterOn and EFI PrintMe Mobile, offer automatic printer discovery for AirPrint as well as for the Android platform, via an app.
  • Email attachment. This is a basic approach of sending a document attachment - for instance a PDF, JPG, TIFF, BMP or Microsoft Office file - to an email address associated with a specific printer/MFP or print queue. While this works for any mobile OS, most such solutions lack controls for printing options such as number of pages, duplex, colour and multiple copies. Unless integrated with a print management application, there is no way of tracking print usage via this approach.
  • Mobile print apps. Many print vendors have their own printer apps which allow direct printing to compatible printers or MFPs on a wireless local-area network.  Mobile print apps can also take full advantage of printer options, so offer more control than printing via an email attachment.
  • Google Cloud Print. This enables printing over the web via Gmail and Google Docs to supported Google Cloud Print Ready printers.  In addition, EFI PrintMe Mobile now offers a Chrome extension that allows direct Wi-Fi printing from Google Docs to any MFP. As above, in order to track and secure printing via a mobile app, integration with a print management tool is necessary.

In response to the lack of standards around mobile printing, the Mopria alliance was established in late 2013 by Canon, HP, Samsung and Xerox. In February 2014, other vendors including Adobe, Brother, Epson and Konica Minolta also joined the alliance. Mopria aims to align standards that make printing compatible from any mobile device to any printer. Initially, support is focused on sending print jobs over Wi-Fi connections or "tap-to-print" through near-field communications (NFC). Conspicuous by its absence currently is Apple, which has bypassed NFC in its new iPhones in favour of iBeacon technology, which is based on Bluetooth Low Energy (BLE) and has a much longer range than NFC (tens of metres versus a tenth of a metre).

While most printer manufacturers offer a range of solutions based on the above approaches, third party solutions have emerged that offer a one-size-fits-all approach across mobile platforms and printer brands. These include EFI PrintMe Mobile, EveryonePrint and PrinterOn. Given these diverse choices, businesses need to carefully evaluate the available options and determine which features and benefits are important. For instance, while smaller businesses with a standardised printer fleet may find mobile print apps sufficient for their needs, larger businesses with a mixed fleet (of both mobile OSes and MFPs) should consider integration with brand-agnostic secure printing solutions.

Secure printing, through third party products such as Nuance Equitrac or Ringdale FollowMe, is an effective approach for larger mixed fleet environments.  When a user prints, the job is held in a server queue until it is released at the printer or MFP following user authentication (ID badge, or username and password).  This offers a range of benefits, including an audit trail of what is being printed, and it eliminates the paper waste of documents left uncollected in output trays, which in turn reduces the chance of sensitive documents being picked up by the wrong recipient.
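
In outline, a pull-printing (secure release) flow works roughly as sketched below in Python; this is a simplified illustration of the general pattern, not the design of Equitrac, FollowMe or any other product.

held_jobs = {}  # user -> list of print jobs waiting for release

def submit(user, document):
    """The job goes to a central queue, not straight to a device."""
    held_jobs.setdefault(user, []).append(document)

def release(user, badge_id, device, authenticate):
    """Release a user's held jobs at a device once authentication succeeds."""
    if not authenticate(user, badge_id):
        return []                       # nothing printed, nothing left in a tray
    jobs, held_jobs[user] = held_jobs.get(user, []), []
    for doc in jobs:
        print("printing %s for %s on %s" % (doc, user, device))  # audit trail point
    return jobs

submit("jsmith", "payroll-march.pdf")
release("jsmith", badge_id="0042", device="mfp-2nd-floor",
        authenticate=lambda u, b: b == "0042")  # stand-in for a real directory lookup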

With many IT departments already stretched, they may struggle to keep up with the demand to support printing across the new types of mobile device being introduced, not to mention the new wave of connected smart MFPs. Many businesses are turning to managed print service (MPS) providers to handle the management of their print infrastructure. More MPS contracts now encompass mobile print, and handing this over to an expert third party can minimise the drain on IT resources that mobile print could incur.

The IT department cannot afford a half-hearted mobile print strategy. With the right approach, mobile productivity can be boosted while security risks are managed.  With BYOD showing no signs of abating, businesses need to act fast and get smart about managing and securing mobile print.

Integrating test data into application life-cycle management

Bob Tarzey

It will not be news to anyone that has written code that the time to kill software bugs is early in the application development life-cycle, before they can have a major impact. This is important for three reasons: first, better code is likely to be released in the first place; second, it will be less vulnerable to attack and therefore more secure; and third, the overall cost of software development will be reduced. All well and good, but how do you get closer to achieving the ultimate goal of bug-free code that delivers to requirements?

Unsurprisingly, thorough testing of software is a key part of achieving this.  A major challenge for many is how to go about tests without compromising the often sensitive data that must be used to make them realistic. If an application processes healthcare records or credit card data, how can it be tested against meaningful data without compromising the privacy of the data subjects? Ensuring such data is available has been the long-term business of vendors such as IBM, Informatica and UK-based specialist Grid Tools.

The testing stage includes checking functionality, the impact of code changes and checking for errors that may lead to security vulnerabilities. The latter was the subject of a 2012 Quocirca research report, Outsourcing the problem of software security, sponsored by Veracode (a provider of on-demand software security testing services). One finding of the report was that the average company spent several hours per week patching software; reducing this would save money and reduce risk.

There are three approaches to safely providing the test data that effective tests rely on:

  • Data masking - where sensitive fields are replaced with dummy data so they can no longer be linked back to actual individuals or accounts (a minimal sketch of this approach follows the list below)
  • Data subsets - where the test data is processed in such a way that only the data relevant to testing is included and the sensitive data is removed; for example, key fields may be all that are needed, not the actual customer data
  • Data simulation - where a whole data set is created from scratch to mimic the real thing. This sounds like the safest approach, but it may miss the common human errors that occur in real data sets and that may be important for some testing
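
A minimal data-masking sketch in Python is shown below; the field names and masking rule are invented for illustration, and real tools offer far richer, format-preserving transformations.

import hashlib

def mask_record(record, sensitive_fields=("name", "nhs_number", "card_number")):
    """Replace sensitive fields with deterministic dummy values."""
    masked = dict(record)
    for field in sensitive_fields:
        if field in masked:
            # deterministic dummy values, so joins between test tables still line up
            digest = hashlib.sha256(str(masked[field]).encode()).hexdigest()[:8]
            masked[field] = "MASKED-" + digest
    return masked

print(mask_record({"name": "A Patient", "nhs_number": "9434765919", "diagnosis": "asthma"}))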

The provision of test data is all well and good, but how does it fit with the broader application development life-cycle? This includes everything from requirements gathering and specification design, through development, version control, testing, deployment and update. This continuous cycle is one all commercial software should be going through from inception until end of life. Application life-cycle management (ALM) tools support all or some of these phases, helping to improve software quality. Vendors include Serena Software, IBM Rational, Perforce, CA, Borland and a range of open source options.

To extend its reach to other areas of the application life-cycle, Grid Tools has recently announced a new tool called Agile Designer, which will provide feedback to the design phase. This is the artier part of the process, often carried out using flow charts created with tools such as Microsoft Visio or PowerPoint. Agile Designer tests the flows and highlights ambiguities, thus introducing more rigour into the process. A key output is an analysis of the minimum number of test cases needed to fully test all the possible paths through an application design. This helps with the creation of better test data and eliminates unnecessary use of sensitive data.

Software testing is a potential source of data leaks that is rarely talked about compared to the high-profile coverage often given to leaks associated with production software; this is not an excuse to ignore the problem. Grid Tools has a long pedigree in producing test data, which is endorsed by its high-profile global partners CA and HP, both of which resell its products. The capability it is now providing through Agile Designer, to better integrate test schedules into the overall software development life-cycle, will further reduce the risk of exposing sensitive data and has the potential to make the whole development process more efficient.


Sharing NHS data: big data needs big data sets

Bob Tarzey

The protection of personal data has been back in the news in the UK over the last month due to the government bungling plans to make anonymised NHS patient data available for research. The scheme gives NHS patients the option to opt out of sharing their data: why? NHS care in the UK is mostly provided free at the point of delivery funded by general taxation (and/or government borrowing), so why should we not all give something back for the greater good, if the government can provide the necessary reassurances?

Anyway, who would be interested in our health records, other than those researching better healthcare? Providers of healthcare insurance and life assurance maybe; but we have to disclose even quite mild problems to them to make sure policies are valid, and imagine the damage to the reputation of an insurance provider exposed as having misused healthcare records - is it worth the risk? Celebrities and politicians may have a case; in some instances their health history may make interesting headlines. Perhaps they should consider paying the private sector to deal with embarrassing issues?

It is worth asking why any given data set is of interest to people who would put it to unauthorised and nefarious use. Payment card details hacked by cyber-thieves are pretty obvious: they can be readily monetised. Identity data and account access credentials are worth having; in some cases they can be used to gain direct access to our financial assets, or be used to dupe us or others into giving enough extra information to gain that access. When it comes to personal data (not to be muddled with intellectual property), unless a cyber-criminal can see a way to monetise it, then it is of little interest, so ultimately the main target will be payment information.

Hacktivists may see opportunities for bribery in health records, but this is a tricky and highly illegal business for the potential perpetrator, and most of us would be of little interest to them anyway. Journalists may seek out headlines, but again this does not apply to most of us. The phone hacking scandal that brought down the News of the World and is currently making its way through the UK courts is a case in point. The targets were nearly all celebrities who had failed to take the simple step of password protecting their voicemail. That is not to condone anything illegal, but just to point out how easy it would have been to prevent (for example, by automating the setting up of voicemail passkeys during initial device set up). One feels most sympathy for the victims of crime whose phones were hacked, having become of interest to the press more or less overnight.

In the IT industry there is much talk about big data and all the benefits it can provide. Big data processing needs access to big data sets, and for pharmaceutical and healthcare research that means patient data. The NHS has one of the largest such data sets in the world, and it has tremendous potential value if handled in the right way. The government needs to do a better job of getting its case across, but its motives are good. Those protesting about the use of anonymised NHS data need to better explain why this valuable resource should go to waste when it could be used for the benefit of all.


Rackspace in Crawley - datacentre services trump pharma

Bob Tarzey

A visit to the UK base of Rackspace's 'Fanatical Support' group in West London is a truly cosmopolitan experience. Above every Racker's (as it calls its staff members) desk is a national flag representing their country of origin - think UN General Assembly meets the IT Crowd. Some of these Rackers are indeed supporting a growing number of customers based in continental Europe and further afield, especially as the take up of Rackspace's self-service products increases. However, their mixed heritage says as much about London as it does about Rackspace itself; despite its size and growth, the cloud services company's comfort zone is still mainly in the English-speaking world.

This is one of the reasons for Rackspace's decision to focus a huge new round of infrastructure investment on the West Sussex town of Crawley, vastly increasing its UK footprint, rather than expanding into other European markets. The scale of the investment will be welcomed by many in Crawley, which lies just south of Gatwick Airport. The project will bring new jobs to the town, making productive new use of a 15-acre brownfield site that has lain derelict since being abandoned by a pharmaceuticals manufacturer in 2011 (with around 500 job losses).

Rackspace did consider locations to the chillier north, perhaps even in the Nordic region. However, the cost of dedicated fibre connections outweighed the potential power savings from spending less on cooling; and anyway, Rackspace says it likes to be where its customers are, especially with the growing volumes of data being handled. Having decided on South East England, Crawley had the benefit of good transport and data connections and an ample power supply, the town not (yet) being a noted data centre hub.

The plan is to develop three data centres on the site, each with a capacity of 10 megawatts (for comparison, Rackspace's Slough data centre, which opened in 2010, is 6 megawatts; the two sites will be connected by a dedicated metro fibre link).  It is not just the scale of the investment that is of interest, but also the revolutionary design of the data centres. Each will be based 100% on specifications and designs from the Open Compute Project. Rackspace says that this underpins its philosophy that anything that can be open should be open; the company has been a major contributor to the project.

Open Compute includes everything from data centre design and cooling to servers and operating systems, such as the Rackspace-backed OpenStack that Quocirca covered in another recent blog post. Whilst any organisation can source stuff from, and contribute to, Open Compute, none to date has committed its whole ongoing data centre strategy to it in the way Rackspace is planning in Crawley. Open Compute's other major backers include Facebook, Intel, Goldman Sachs and Arista Networks. Microsoft is joining and has donated some power management tools to the project (but not its operating system!)

For a company such as Rackspace, whose core value proposition is selling top quality cloud-based services out of state-of-the-art data centres, it may seem strange to give away so much of its intellectual property. Rackspace says, no problem, we differentiate on top of the stack. Part of that is the 'Fanatical Support' that increasingly includes architecting and coding for its customers. It also includes the flexibility it provides to mix and match private and public cloud resources within and beyond its data centres.

As well as the use of Open Compute there are other innovations aimed at flexibility and reduced energy consumption. The floors are solid concrete rather than raised; all wiring, including power supply, will run in structured overhead space. This allows easy changes to the power mix across racks as more and more of them are given over to handling the growing demand for storage, compared with the smaller footprint required by increasingly efficient servers. The data centres will operate at the relatively warm temperature of 29.5 degrees centigrade using non-mechanical "indirect outside air" cooling. This is an adiabatic system (i.e. with no overall gain or loss of heat), whereby the inside air is sealed from the outside and evaporation is used to cool the outside air, which then cools the inside via heat exchangers. It is equivalent to a standard air conditioning unit, but with no compressor, relying instead on low-speed, low-power fans. Rackspace and its data centre builder partner Digital Realty Trust (DRT) say this will reduce overall energy consumption by 80%.
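
To see roughly how a saving of that order could arise, here is a back-of-the-envelope sketch. The cooling overheads below are illustrative assumptions made purely for the arithmetic, not figures from Rackspace or DRT:

```python
# Illustrative arithmetic only - assumed cooling overheads, not Rackspace/DRT figures.
IT_LOAD_MW = 10.0            # one Crawley data hall at full capacity

conventional_cooling = 0.50  # assumed compressor-based cooling draw per watt of IT load
adiabatic_cooling = 0.10     # assumed draw of low-speed fans plus heat exchangers

saving = 1 - (adiabatic_cooling / conventional_cooling)
print(f"Conventional cooling draw: {IT_LOAD_MW * conventional_cooling:.1f} MW")
print(f"Adiabatic cooling draw:    {IT_LOAD_MW * adiabatic_cooling:.1f} MW")
print(f"Reduction in cooling energy: {saving:.0%}")   # -> 80%
```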

This major investment will likely see the UK grow as a percentage of Rackspace's overall business, which is still dominated by the USA, where it has data centres in Virginia, Texas and Illinois. Beyond this are facilities in Hong Kong (an English-friendly portal to greater China) and Sydney, Australia. To ensure it can remain competitive with the likes of Google and Amazon, Rackspace knows it needs to up the ante when it comes to global connectivity. To this end it is looking to partner with its Open Compute friends at Facebook and piggyback on the social network's intercontinental fibre links.

Rackspace may be focussed primarily on the English-speaking world, but its infrastructure and connectivity give it the global reach to serve certain multinationals as well as those in its core markets. The physical location of its data centres may preclude it from providing local services in other markets, but as a thought leader and contributor to data centre design its influence will be truly global.



Real-time IT security analytics: the convergence of SIEM and forensics

Bob Tarzey | No Comments
| More

Sometimes technology areas that once seemed distinct converge. Indeed, there was a time when the term convergence was used, without qualification, to refer to the coming together of IT and traditional telephone networks, something that for many is now just an accepted reality. Two recent discussions Quocirca has had bring into focus a convergence that is going on in the IT security space, driven by the growing volumes of security data and the ability to make use of it in real time. This is the convergence of IT forensics and security information and event management (SIEM) onto the area of real-time security analytics.


First, IT forensics: historically this has been about working out exactly what has happened after a security incident of some sort; preparing reports for regulators or perhaps crime investigators. Specialists in this space include Guidance Software, Stroz Friedberg, Dell Forensics and Access Data, the last of which Quocirca has just spoken with.


Access Data is a mature vendor, having been around since 1987; its Cyber Intelligence and Response Technology (CIRT) provides host and network forensics as well as the trickier-to-address area of volatile memory, processing data collected from all these areas to provide comprehensive insight into incidents. With some new capabilities, Access Data is re-packaging this as a platform it calls Insight to provide continuous automated incident resolution (CAIR). The new capabilities include improved malware analysis (what might this software have done already, and what could it do in the future?), more automated responses (freeing up staff to focus on exceptions) and real-time alerts. This is all well beyond historical forensics, moving Access Data from what has happened to what is happening.


None of this makes Access Data a SIEM vendor per se; its focus is still on analysis and response rather than data collection. Indeed, it is partnering with one of the major SIEM vendors, HP ArcSight, for a joint go-to-market. Access Data also says it works closely with Splunk, another vendor that makes its living from gathering data from IT systems, including those focussed on security, to provide operational intelligence.


Second, SIEM: most vendors in this space come from a log management background. However, over the years, their capabilities have expanded to include data analysis, increasingly in real time. This is an area Quocirca covered in its 2012 report Advanced cyber-security intelligence, sponsored by LogRhythm, one of the leading independent SIEM vendors. Many of the other SIEM vendors have been acquired in recent years, including ArcSight by HP, Q1 Labs by IBM and NitroSecurity by McAfee.


This week Quocirca spoke with a lesser-known vendor called Hexis, which was created when its larger parent, KEYW, acquired yet another SIEM vendor, Sensage, in 2013. Sensage already had a capability to respond to events as well as gathering information on them. Hexis now seems to be positioning itself out of SIEM completely with the release of a new platform, Hawkeye G, which will enable malware analysis, real-time response and so on. A layer above SIEM, as its spokesman put it (but with plenty of overlap). Indeed, it says SIEM vendors are key sources of information, citing Q1 Labs, Red Lambda, ArcSight and Splunk.


So, the good news is that if your organisation, as many seem to be, is concerned about resolving the problems caused by IT security breaches, or indeed putting in place another layer of defence to help prevent them, there is plenty of choice. The bad news is that, as vendors that once seemed to be doing distinct and different things start to sound the same, it is harder to know which end of the spectrum to start from.


Can the office printer stay relevant in the mobile workplace?

Louella Fernandes | No Comments
| More

While mobility and the cloud are transforming today's workplace, many businesses still operate a hybrid mix of paper and digital workflows. The transition to Bring Your Own Device (BYOD) increases the need for an accessible, reliable and secure print infrastructure. This is vital in boosting mobile productivity and collaboration whilst ensuring document security and compliance.

The key component of a secure and reliable print environment is the smart multifunction printer (MFP) which has evolved from a peripheral print and copy device to a sophisticated document hub that can be integral to business processes. The emergence of advanced integrated software platforms turns the MFP into a powerful productivity tool, effectively bridging the paper and digital divide.

At one level, it may be as simple as using the MFP to scan a document and store it in a cloud service such as Google Docs or Evernote. At an enterprise level, the MFP can be used to connect and route documents across existing enterprise content management (ECM) technology, improving productivity and efficiency of paper-intensive processes. Think scanning an expense receipt at an MFP which automatically routes the information to accounting for approval. Or the scanning and routing of new account applications from MFPs in remote bank branches to centralised repositories.

Smart MFPs not only enable increased productivity by speeding up paper-dependent business processes but also allow a distributed mobile workforce to collaborate and work more effectively. For instance, most smart MFPs enable mobile workers to print directly from their mobile device by authenticating at any MFP on the corporate network using either a user ID/password or ID card authorisation. As more employee-owned smartphones and tablets proliferate in an organisation, security and the tracking of documents printed from these devices become ever more important.

With the majority of vendors expanding their smart MFP portfolios - including Canon, Dell, HP, Konica Minolta, Kyocera, Lexmark, Ricoh and Xerox - what should businesses look for when evaluating a cloud connected Smart MFP?

  • Multi-layered security. MFPs should be treated like any other device connected to the IT network, subject to the same vulnerabilities as well as some specific to the device itself. Areas to evaluate include hardcopy security, ensuring documents are only released to authorised users and not left unattended in output trays; and hard disk security, such as hard disk drive (HDD) encryption and data overwrite security. Look for MFPs that are compatible with network security protocols such as Secure Sockets Layer (SSL), IPsec and SNMPv3. All printing, copying and scanning should be tracked and monitored for auditing purposes - typically available through either device logs or more comprehensive reporting tools (a short monitoring sketch follows this list).
  • Flexible mobile printing. Smart MFPs should offer flexible, user-friendly and secure ways to print directly from smartphones and tablets across all mobile operating systems.  Look for compatibility with AirPrint, Google Cloud Print and third party solutions such as Cortado and EFI PrintMe Mobile.
  • Cloud enabled. Most MFPs support scanning to multiple destinations, including scan to email, scan to FTP, scan to fax and scan to network. Smart MFPs will also offer the ability to share documents to the cloud. Look for MFPs that also connect to the most popular cloud services such as Google Docs/Drive, Evernote, SharePoint Online, Dropbox, Salesforce.com and Office 365. However, also make sure that this is fully audited and that the accounts used can be controlled - scanning direct to cloud in an uncontrolled manner is an ideal way for a disgruntled employee to copy intellectual property into their own environment outside of the organisation.
  • Customisation. Embedded software platforms enable customised workflow icons to be created on the MFP panel, making it easy for users to capture and route documents to the relevant enterprise application. These custom scan-to-workflows allow users to eliminate multistep, manual and time-consuming scanning-and-processing tasks.
  • Document conversion. Look for OCR scanning into native formats, where documents can be scanned into editable word processing documents, spreadsheets and searchable PDFs - for instance, capturing paper-based financial statements and converting them to Microsoft Excel format for sharing. Some MFPs perform OCR scanning in the cloud, eliminating the need to install additional server-based technology. This may raise security issues, as OCR in the cloud involves the document's content being moved into the cloud: the stream of data in both directions needs to be encrypted, and there must be guarantees around how the data is stored and securely erased.
  • Flexible remote administration. Look for an integrated solution that combines device management, print queue management and accounting in a single tool. Centralised web-based device management helps network administrators to control all network-enabled printing devices.
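
As an illustration of the monitoring point above, the sketch below polls an MFP's lifetime page count over SNMPv3 using the classic synchronous interface of the Python pysnmp library. The host names, user and passphrases are placeholder assumptions; a real fleet would more likely be monitored through the device vendor's own reporting tools:

```python
# Illustrative sketch: poll an MFP's lifetime page count over SNMPv3.
# Hosts, user and passphrases are placeholders, not a real configuration.
from pysnmp.hlapi import (
    SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
    usmHMACSHAAuthProtocol, usmAesCfb128Protocol,
)

PRT_MARKER_LIFE_COUNT = '1.3.6.1.2.1.43.10.2.1.4.1.1'  # Printer-MIB total page counter

def page_count(host: str) -> int:
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        UsmUserData('print-audit', 'auth-passphrase', 'priv-passphrase',
                    authProtocol=usmHMACSHAAuthProtocol,
                    privProtocol=usmAesCfb128Protocol),
        UdpTransportTarget((host, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity(PRT_MARKER_LIFE_COUNT)),
    ))
    if error_indication or error_status:
        raise RuntimeError(f"SNMP query to {host} failed: {error_indication or error_status}")
    return int(var_binds[0][1])

for mfp in ['mfp-floor1.example.local', 'mfp-floor2.example.local']:  # placeholder hosts
    print(mfp, page_count(mfp))
```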

Businesses operating a larger shared MFP fleet should consider using the expertise of a third-party managed print service (MPS) provider. MPS can help organisations to reduce costs by consolidating outdated, energy-inefficient devices in favour of MFPs, minimise downtime and improve productivity through advanced software solutions. Consider independent providers that can advise on the right combination of hardware, software and services to suit a particular business need.

Like it or not, paper documents will be around for some time, with many businesses continuing to rely on them to support certain business processes. Businesses should leverage the MFP to improve business processes that are paper dependent. With careful evaluation of how they can integrate with existing enterprise systems, MFPs can play a vital role in today's mobile workplace by improving productivity, increasing collaboration and reducing the burden on IT staff and budgets.

I am not a dog - FIDO, a new standard for user authentication

Bob Tarzey | No Comments
| More

Here's a dull-sounding question: can you imagine a world without SSL (secure sockets layer) or its successor TLS (transport layer security)? A security tech-head may find the whole thing quite interesting, but for the average IT user, despite relying on SSL day in, day out, it will not arouse much excitement. SSL just gets on and does the background task of ensuring we can all securely access web sites and applications over the public internet and keep the data we exchange with them private. Without SSL (or something similar) there would probably be no internet banking and no e-commerce; in short, no internet revolution.


That said, there are limits to the level of security offered. We trust resources accessed via secure protocols because things look right. The way URLs are displayed changes, padlocks appear, and constant reassurance is offered that all is well and sensitive information will be safe. We can still be duped by spoof sites, but these will not be giving the same security assurances, thanks to the due diligence of the authorities that issue SSL certificates. So, users feel confident to transact. But what about the other way around: how can providers of online services be confident that we are the users we say we are?


The truth is, often they cannot, beyond checking basic login credentials, usually just a username and password, which most agree is not a safe enough way of authenticating users. However, there is a growing range of other options we can choose to use to identify ourselves. Mobile phones can be used to issue one-time passwords and hardware tokens can be issued by service providers. The use of biometrics is becoming easier, and it is not just fingerprints (for which the availability of built-in readers is limited). Any biological or behavioural characteristic can potentially be used for identification, for example voice pattern recognition (most devices can already hear you), face recognition (most devices now have cameras), or even recognising the way you type on a keyboard.
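
To make the one-time password option concrete, here is a minimal sketch of a time-based OTP (the scheme behind most phone authenticator apps, defined in RFC 6238) using the Python pyotp library; the account and service names are invented for illustration:

```python
# Minimal TOTP sketch (RFC 6238) using the pyotp library - illustrative only.
import pyotp

# Enrolment: the service generates a shared secret and provisions it to the
# user's phone, usually as a QR code scanned into an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="user@example.com",
                                                 issuer_name="ExampleService"))

# Login: the phone shows a six-digit code that changes every 30 seconds;
# the service verifies it against the same shared secret.
code_from_phone = totp.now()          # stands in for the user typing the code
print("Code accepted:", totp.verify(code_from_phone))
```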


Providing a standard way for all these various methods of authentication to be used has been a long-standing goal, the aim being to give higher levels of reassurance to online service providers. The latest attempt to do so is a prototype industry standard dubbed FIDO (fast ID online). Here is how it works: you request a service and, as a session is established, the service seeks to authenticate you using a local credential. If you have a (free) FIDO client installed, it will ask for a means of authenticating you to the device you are using. This establishes a 'key pair' and unlocks a local private key to authenticate against a public key hosted on a server at the online service provider (i.e. it is all based on Public Key Infrastructure/PKI). Each time you use a new device you go through the process again. The key pair is a means of authentication to the service in question for the user on their current device. If FIDO is not installed, the service can fall back on weaker means of authentication, or insist that the FIDO client is installed. In other words, if the backers of FIDO succeed, over time it may become the dominant standard for secure authentication, just as SSL has for sharing data over the internet.
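
Stripped of the FIDO-specific message formats (which the sketch below does not attempt to reproduce), the underlying pattern is standard public-key challenge-response. A minimal illustration using Ed25519 keys from the Python cryptography library:

```python
# Generic public-key challenge-response - the pattern FIDO's UAF builds on.
# Illustrative only: this is not the actual FIDO protocol or message format.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the device creates a key pair; the private key stays on the
# device (unlocked locally by PIN or biometric), the public key goes to the server.
device_private_key = Ed25519PrivateKey.generate()
server_public_key = device_private_key.public_key()

# Login: the server issues a fresh challenge, the device signs it, and the
# server verifies the signature against the stored public key.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge)

try:
    server_public_key.verify(signature, challenge)
    print("User authenticated on this device")
except InvalidSignature:
    print("Authentication failed")
```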


To those in the know this may sound familiar; it is not the first such attempt. For example, Entrust's Identity Guard Platform, which can map 17 means of authentication to supporting services, and Symantec's Validation and ID Protection (VIP) Service (based on its 2010 Verisign acquisition) are both based on a reference architecture known as OATH (Open AuTHentication). OATH, which was primarily aimed at handling one-time passwords, uses several protocols depending on the means of authentication. FIDO is based on a different reference architecture known as UAF (universal authentication framework); all you need is the FIDO client, regardless of the means of authentication. The biggest step change that FIDO introduces is the simplicity and ease of use on the device; it is transparent to users, and all they need to know is how to create the credential (speak into the microphone, smile at the camera and so on).


For a protocol to succeed it needs backers, and the FIDO Alliance already boasts 100 paying members. Seventeen of them are top-level board members paying $50K per annum. They include online service providers such as Microsoft and Google, payment providers including Discover, MasterCard and PayPal, device manufacturers such as Lenovo and Blackberry, and security companies such as EMC/RSA (which, amongst other things, supplies hardware tokens). Non-board-level 'sponsors' include a spectrum of vendors involved in identity and access management. Others Quocirca has spoken to that are watching with interest, and may well join, include Symantec and ForgeRock. Further support has just emerged with the announcement of an agreement between FIDO and the Cloud Security Alliance (CSA).


Service providers are interested for all the reasons outlined already; they want to be sure of who their users are, and for their users to feel confident in making easy and secure use of services. Security companies want to be there if FIDO takes off, and likewise device manufacturers, which may be able to gain a short-term competitive advantage if they are FIDO-enabled (witness the iPhone 5s Touch ID).


Another board member not mentioned above is Nok Nok Labs, which has been the driving force behind FIDO. Whilst FIDO aims to become a free-to-use, open standard (currently you have to be a FIDO member to get commercial implementation rights), Nok Nok hopes to be rewarded for its effort by providing off-the-shelf software for linking online services with users and establishing key pairs, simplifying the use of FIDO for providers of internet services who would otherwise have to build their own FIDO servers. Nok Nok also hopes to work with partners who could provide on-demand FIDO servers based on its technology.


Way back in 1993, when the web was still a wild frontier, a New Yorker magazine cartoon famously quipped that 'on the internet no one knows if you are a dog'. If Nok Nok and its friends have their way, those days will seem even more distant, as FIDO will be on guard, making sure we all are who we say we are.


Co-location: Great idea; so easy to get wrong.

| No Comments
| More

Convergence.  Rationalisation.  Virtualisation.  Consolidation.  Cloud computing. Just a few of the changes to computing in the past few years that will have impacted your use of a data centre.

Many are now realising that trying to design and manage a data centre for a single company is becoming a great way to throw money away, and are looking at what other options are available.  For some, public cloud computing is an option; for the majority, there is still a need to be able to run some systems on their own equipment - somewhere.

This is where co-location comes in.  Co-location providers have the task of building, maintaining and running data centre facilities; their customers have the job of installing, maintaining and running their own IT equipment within them.  The economies of scale that a co-location provider enjoys mean that it should be able to keep up to date, by sharing costs across a large customer base, in areas such as high-density cooling, power distribution and auxiliary power - all high-capital-cost systems for an organisation looking at doing this just for itself.  The provider will also be able to put in place better levels of connectivity redundancy, leading to higher overall systems availability and often a better speed of response to end users.

Co-location is a booming business, and rightly so.  There is decreasing sense in attempting to build a facility when the horizon over which its required capabilities can be predicted is now measured in a few years, rather than the 10-20 years that used to be the case.

However, choosing a co-location provider is no small matter.  Even though the idea is that the data centre is just a shell in which the important IT equipment is held, the cost to a business of changing co-location provider can be horrendous; not just in physical cost, but also in impact on business continuity.

It has to be accepted that not all providers are the same.  Many are competing on price, and this generally means that they are having to cut corners somewhere.  Others are white-labelling space within a third party's facility.  Many larger providers cannot get enough direct large end-user customers themselves to make their facilities financially viable.  Therefore, they rent out parts to another provider, who can then rent out smaller parcels still to its customers.  This way, a large provider with a focus on large companies can still have a large number of SMEs in its facility - it is just that these accounts are being managed by an intermediary.

Such multi-tier agreements have their own problems.  When something goes wrong, your account manager may have no idea what the underlying problem is.  They are themselves a customer of the actual facility owner, and are just as much at the end of the phone wanting details on when things are going to be put right as you are.

Then there is the issue that Quocirca still finds is top of mind when it comes to outsourcing of any nature - security.  Quocirca firmly believes that the majority of service providers out there (whether they are co-location or cloud providers) will have better overall security than the majority of organisations can achieve in-house.  However, this is not to say that the outsourcing company's security is enough - if you have distinct worries about the security of your systems and the data held on them, you need to look for the right co-location partner that meets your needs.

Consider a facility that is built within a standard industrial park.  It may have all the security features that you would want within the facility itself - but what about around the facility?  Vans driving around or parked up on an industrial park will not raise much in the way of interest, as this will be happening all the time.  The fact that those vans could contain bad guys watching how security is managed at the facility should raise issues for you as the customer.

Now consider a facility that is purpose built within a secure park.  This secure park has solid perimeter security - no vehicles or people that are not expected are allowed onto the park at all.  All movement is monitored and visitors are logged - before they even get anywhere near the data centre.  Such facilities are few and far between, but they offer much better levels of security, meeting the needs of organisations with stricter requirements around legal reporting or intellectual property management.

Finally, look at how account management is carried out.  If all you are given is a general contact number for discussing anything with the provider, you are just a customer.  If you have a named service manager who has total responsibility for all dealings between your organisation and the provider, you stand far more chance of being viewed as a partner - and of having your organisation's needs and problems dealt with more effectively.

Getting the choice of co-location partner wrong can be catastrophic.  Quocirca has created a guide with Datum to the areas that anyone considering co-location should ensure are covered in discussions with any potential provider. The guide is available free of charge here.


Mobile security - there's something in the air

Rob Bamforth | No Comments
| More

In the physical world, 'security' for any business involves a whole spectrum of defensive measures: choosing a safer location, putting up warning signs and notices, monitoring and cameras, locks, identity passes, alarms and, ultimately, insurance. IT really should be no different, and many do apply security measures, although before mobility and the internet most organisations relied heavily on physical security to defend their IT, in addition to some form of system login, particularly on larger systems.


The easy availability of removable, low-cost media (floppy disks and their successors, such as memory sticks) heralded the start of the malware industry - viruses and their counterpart, anti-virus software - although some might remember earlier, less virulent infections such as the 'cookie monster' on Multics in the late 1970s.


Add networks, especially an open one like the internet, and all hell breaks loose, with a range of excellent delivery mechanisms for all forms of malware and packet-borne attacks. Even here there is still a physical defence relatively easily to hand: pull the connection cable.


Mobile is, however, a different matter. It is not just that the devices are small, easily secreted, lost or stolen, or that they are now powerful computers with massive storage capabilities (the vulnerabilities these attributes create do, of course, need to be addressed in some way). Nor is it that they are entirely personal devices, far more so than any 'personal' computer, in that for most people they are an extension (or even proof) of their identity - and protecting the identity perimeter, for individuals and for organisations, is becoming a huge issue, as has been noted in other Quocirca reports, such as "The identity perimeter".


No, the major problem is radio waves - they can't be seen.


NFC (near field communications, used in payment cards), RFID (radio frequency identification chips used for tagging), Bluetooth, Wi-Fi and even cellular connections all open up invisible vulnerabilities, and typically most, if not all, of them are present on smartphones and tablets.


There has been reasonable security awareness of the risks of Wi-Fi, and at one time many businesses took what they thought was the safe option and didn't support wireless networks on their premises. How naïve they were. Many early smartphones had Wi-Fi and could act as cellular-to-Wi-Fi routers - a fact that for a while encouraged operators to charge for 'tethering' services as users connected laptops and then tablets via a personal hotspot and their smartphone.


Around the same time the cost of Wi-Fi access points fell to a level affordable for many consumers, and even coffee shops initially thought they could make money out of connectivity rather than ground beans - connectivity that is now free in most places, of course. Enterprising employees might also have added an unofficial access point to make the office environment a bit more flexible (and vulnerable). Eventually, most organisations realised they needed to do something to protect themselves; with wireless security solutions they could identify rogue access points, and everyone once again felt safe.


Again, fine as far as it goes, but with new applications for IP-based wireless technologies appearing all the time - the internet of things, wearables and so on - more devices are appearing and communicating with each other, so wireless protection will need to be stepped up.


In addition to the challenges of Wi-Fi, there are also significant risks in apparently short-range wireless technologies, especially for those who do not understand the consequences of the term 'high-gain antenna'. Many would expect low-power wireless such as Bluetooth to be very short range, but experimental projects such as those using 'Bluesniping' demonstrate that this is not the case: a range thought to be only a few metres can quickly be extended to hundreds of metres (one claims a range of around a mile). Think you are secure? Next time you are in a crowded space like a train carriage, take a look at all the Bluetooth devices you can see and the names of people you can pick up - does your mobile broadcast your name? Again, with more devices such as wearable fitness bands adding Bluetooth as a way of pairing with smartphones, the airwaves are getting even more crowded.
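
To get a feel for how much is being broadcast, a few lines of Python using the bleak library will list the Bluetooth Low Energy devices advertising nearby and any names they expose - a sketch intended only for use against your own devices and environment:

```python
# List nearby Bluetooth Low Energy advertisers and any names they broadcast.
# Illustration only - run it against your own devices; range depends on the antenna.
import asyncio
from bleak import BleakScanner

async def main():
    devices = await BleakScanner.discover(timeout=5.0)
    for device in devices:
        print(device.address, device.name or "<no name advertised>")

asyncio.run(main())
```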


Other short-range wireless communications come with similar problems. NFC and RFID tags have such a short range - they are even marketed as 'tap and go' - that it seems they must be secure from snooping, but with the right equipment this range limitation can be overcome and information gathered from a distance. The protocols and systems that exploit the value of these technologies need to be robust, especially for applications such as payments.


Even with good systems there are other pitfalls, since wireless knows no bounds, as some London commuters will no doubt have discovered. Now that wallets and purses contain multiple NFC cards, who decides which one will 'win' the transaction? Transport for London has warned passengers they might need a 'change of behaviour' to ensure that, for example, their Oyster card is used and not one of their bank cards. Here the systems might be secure, but there are unintended consequences of the lack of radio barriers.


Does this mean surrounding everything in a Faraday cage, or putting it in a metal box or under a tinfoil hat?


No, at least not unless it is something very secret, but organisations and individuals need to be aware that radio waves can be intercepted or broadcast more widely than expected. This is a first point of vulnerability or defence that can and should be addressed. Many more should be thinking about managing connectivity better at the radio end, such as being much more careful about switching wireless services - Bluetooth, for example - on or off, or investigating protective technologies that push the control of Wi-Fi access right to the radio edge in the access point.


In any event, with employees bringing their own (mobile) devices, using public networks and broadcasting or listening to everything over the airwaves, organisations need to apply much more rigour to securing applications and data.  What was once simply seen as mobile device management should now be much more focused on mobile application, data and usage management.


How better CMO/CIO alignment can bridge the print/digital gap

Louella Fernandes | No Comments
| More

The traditional boundary between chief marketing officers (CMOs) and chief information officers (CIOs) is blurring as marketing and technology increasingly overlap. One area where better alignment can transform a business is improving customer engagement. An increasingly important way of achieving this is through effective multichannel communications - producing and delivering consistent and personalised messages across mobile, online, social and print channels.

Clear and relevant customer communications are critical to building trust and long-term customer relationships. Today's digitally empowered consumers expect consistent and relevant communication, regardless of the medium used. However, organisations can struggle to co-ordinate communications across the multitude of channels available.

In many cases different marketing and IT stakeholders create and manage print and digital communications, using multiple disparate systems. This means that many businesses may lack a commercial perspective on how integrated printed and digital communications can drive better customer engagement.

This siloed approach leads to ineffective communications characterised by poor brand consistency, higher customer care costs, compliance issues and, ultimately, customer dissatisfaction and attrition. For instance, today's savvy consumers are more likely to purchase from retailers that personalise across channels, and many will disconnect from a brand if they perceive its message to be irrelevant. In a recent CMO Council study, more than 90% of consumers polled admitted they had unsubscribed from brand communications because the message was irrelevant to them.

Examples of personalised and relevant communications include:

  •  cross-media campaigns that convey a consistent, personalised message across media such as postcards, emails, web and video
  • personalised offers based on purchasing history designed to re-engage previous customers
  • bank statements including messages tailored to the customer's banking habits or financial investments.

So how does print fit into the marketing mix? Certainly, the ubiquity of mobile and online communications means that print now has to fight harder to retain its place. However, the tangibility of print gives it a unique power to engage and, when integrated with online channels, it can be an effective element of multichannel communications.

Printed communications can be made dynamic and interactive through the use of QR codes, pURLs (personal URLs) or augmented reality (AR), driving customers to online and mobile channels. For example, a pURL can be added to a statement or invoice to cross-sell other financial services or products.
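
As a minimal sketch of that idea, the few lines below generate a personalised URL and a matching QR code for a printed statement using the Python qrcode library; the domain and customer ID are made-up examples, not any particular bank's scheme:

```python
# Generate a personalised URL (pURL) and matching QR code for a printed statement.
# The domain and customer ID are invented examples.
import qrcode

customer_id = "C0012345"
purl = f"https://offers.example-bank.com/{customer_id}"   # hypothetical pURL

img = qrcode.make(purl)            # returns an image of the QR code
img.save(f"statement_qr_{customer_id}.png")
print("Embed", purl, "and the QR image in this customer's printed statement")
```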

One approach to integrating print and online communications is through a customer communications management (CCM) platform. By leveraging existing customer data, a CCM platform makes it possible to create, manage and track communications consistently across each channel. Key components of a CCM platform include:

  • data integration - data access tools can be used for customer segmentation and marketing campaign analysis
  • document composition - content for a wide range of document types, for example letters, direct mail and statements, can be designed for delivery via paper, email, SMS or online channels
  • campaign management - marketing messages can be customised and co-ordinated across print and online channels, using business rules and predictive analytics to define target customer profiles (a toy example of such a rule follows this list).
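
As a toy illustration of the business-rules element, the sketch below picks a delivery channel for a message from a handful of customer attributes. The field names and thresholds are invented; a real CCM platform would drive this from integrated customer data and predictive models:

```python
# Toy business-rule sketch: choose a delivery channel per customer.
# Field names and thresholds are invented for illustration.
def choose_channel(customer: dict) -> str:
    if customer.get("opted_out_email"):
        return "print"                       # fall back to a printed mailing
    if customer.get("mobile_app_logins_30d", 0) >= 5:
        return "push"                        # active app users get push messages
    if customer.get("email_click_rate", 0.0) > 0.1:
        return "email"
    return "print"

customer = {"opted_out_email": False, "mobile_app_logins_30d": 2,
            "email_click_rate": 0.25}
print(choose_channel(customer))              # -> email
```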

Cross-media services that utilise CCM products are available from providers including Canon, HP, Pitney Bowes, Ricoh and Xerox.

Personalisation goes hand-in-hand with analysis of the big data captured through all customer interactions, including online purchase data, click-through rates, social media interactions and geo-location data.

By combining insight derived from big data with an integrated marketing strategy, organisations can develop the marketing nirvana of a 360-degree view of their customers. This helps to drive better engagement with customers, improves customer retention and loyalty and makes it easier to optimise marketing spend across multiple communication channels.

As marketing becomes more technology-driven and IT becomes more commercial, the future of effective integrated multichannel communications will rely on bridging the CMO/CIO divide. Print needs to remain part of the communications mix, but it must be integrated and relevant. The key is to avoid a disjointed approach to print and digital communications and to use print to complement and reinforce online communications.

CMOs and CIOs must both share the same strategic goal to be revenue generators, not cost centres. Similarly, print should be viewed not as a sunset technology that should be avoided at all costs, but as a potential generator of new revenue that can complement and integrate with online channels.

Organisations may feel unprepared for the integration of print and digital, but a CCM infrastructure is capable of supporting a consistent and dynamic interaction with every customer across every channel.


Progress at CA Technologies - Looking good

| No Comments
| More

At a recent CA Technologies (I'll just refer to the company as "CA" through the rest of this blog) analyst event, CEO Mike Gregoire gave a "state of the nation" address to the assembled industry analysts.  In an open (mostly under NDA) and wide-ranging talk, he addressed several issues head-on.  It is only a little over a year since Gregoire took on the mantle of CEO to redefine CA, reinvigorate it as a brand and change the way it develops, packages and sells its products.

With a background of working for large and small companies (EDS, PeopleSoft, Taleo), Gregoire brings in new insights - but still has a steep hill to climb.  One interesting aspect of his talking style was his unusually personal use of "I" and "me" in the discussion.  Not "We will be addressing..." or "My team will be...", but a personal commitment that the outcome for CA will be on his watch and largely of his doing.  Sure, the "team" will be a core part of implementing what happens, but without saying it quite so starkly, Gregoire made it apparent that, even though he joined CA because he saw a lot of good within the company, he has felt the need to grab CA by the back of its neck and give it a good shaking.  Looking to more of a SaaS and mobile focus and playing to the big data/analytics markets, Gregoire is also dealing with some of the issues that have dogged CA in the past.

Firstly, CA has been well known throughout the industry for stuffing deals.  Enterprise licence deals were pushed by large parts of its sales force in order to close deals.  This resulted in a lot of customer shelfware - and bad feeling about stuff paid for but not used, with contracts that could not be changed.  Gregoire is now focusing on making sure that a deal is a deal: only the software that a customer needs will be sold to them - and it will be sold on proper commercial terms.  Sure, negotiation is still possible - but massive sweetheart deals will be out the window. Sales cycles will be speeded up through greater adoption of SaaS-based products, better training (and culling) of its own salesforce and its channel as necessary, ensuring that CA gains the revenues that it requires to be continually viable in the market.

Secondly, the product portfolio has to be better managed.  This still seems to be an area under development.  There are two basic approaches: create offerings to the market that have some "umbrella" capability and then use professional services to put together all the different components from the portfolio to make the umbrella service work, or aggregate and consolidate the existing portfolio to have fewer but more functional offerings.  Both come with one massive issue that the financial markets struggle with - it is pretty difficult to get a customer to cough up the same price for a bundle as they would for all the component parts.  For example, if a "solution" for dealing with DevOps requires 30 different CA components, a prospect gets confused with all the permutations and will just go for the bits that they can understand - and afford.  If the components are replaced with a complete single monolith of a system, then the prospect still can't afford all 30 bits of functionality that are now within the system - but they cannot choose which bits to leave out.  If it is a bundled offering, then there is more opportunity for bits to be left out - but the bundle price will still be high - and the complexities of putting everything together still remain, even if it is CA that will pull everything together, rather than the customer.

But, if you reduce the price to make either of the packaged approaches more appealing, then your overall revenues and profit could take a hit, unless the amount that is sold ramps up appreciably.
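
The arithmetic behind that warning is simple enough; a worked example with invented numbers (not CA's actual pricing) makes the point:

```python
# Illustrative arithmetic only - the discount is an assumption, not CA's pricing.
component_sum = 100_000          # list price of buying the components separately
bundle_discount = 0.40           # assume the bundle is offered at 40% off

bundle_price = component_sum * (1 - bundle_discount)
volume_uplift_needed = component_sum / bundle_price - 1
print(f"Bundle price: {bundle_price:,.0f}")
print(f"Unit sales must rise by {volume_uplift_needed:.0%} just to hold revenue flat")
# -> roughly 67%
```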

 Here, Gregoire and other CA executives intimated that big changes should be expected.  The internal sales force is being reviewed as to its capability to manage these new approaches - and this will also be reflected in the channel.


Marketing around brand value and capabilities is being ramped up so that more people can see that CA is no longer just "the mainframe management company", but that it has a raft of new services and approaches for the modern world.  A high-profile ad campaign built around "CA at the Center" has been launched at several major airports around the world to drive awareness among major decision makers - after all, airports are where they tend to spend large parts of their lives. This will also be pushed out online to capture the eyeballs of as many people as possible.

Along with the other details presented to us, this would seem to be a new CA that we are looking at.  Whilst full marks must go to the last-but-one CEO, John Swainson, for putting the company into a position where it could be picked up by the scruff of the neck and shaken, Gregoire looks like the CEO CA now needs - a straight-talking, hard-headed person with strong ideas and the capability to push them through.

Obviously, the proof is in the pudding.  Only one year in, it is difficult to discern exactly how well Gregoire is managing to push ahead.  The discussions with the other executives sounded positive: Gregoire seems to be respected by those reporting to him.

Looking at positive actions to date, Gregoire has already pushed through a different approach for CA - CA Nimsoft Monitor Snap (not exactly the most mellifluous product name) is available on a freemium basis: try and use it, and buy if you need more storage or other resources.  This sales approach would have been anathema to CA before, and we can expect to see more in this mould as CA moves forward. Newer systems, such as CA DCIM, based on pulling together capabilities from several different packages to attack a different market, are already winning new sales - most notably with a major win at Facebook.

Overall, there is good progress under the new management at CA.  Another year will allow for a more in-depth review of the success - or otherwise - of Mike Gregoire.

