
Security Think Tank: Ensure incident response in the face of inevitable messaging leaks

What criteria should organisations use to assess the security of smartphone messaging apps and how can they ensure only approved apps are used by employees?

It is funny how history repeats itself. About 20 years ago, there was a similar debate about the use of instant messaging apps – such as ICQ, Sametime and AOL Instant Messenger – and the challenges they posed to security and compliance efforts.

Of course, the difference was that those instant messaging apps resided on desktops, so organisations could control their use through whitelisting or blacklisting the apps, closing ports on the firewall (although that may have had little impact) or running their own service.

Eventually, these apps were built into office productivity suites (such as Skype) and so fell into the orbit of desktop group policies, employee contracts and acceptable use policies. The explosion of smartphone instant messaging apps (and you could stretch this classification to include the humble SMS or text message) therefore presents similar, but also different, challenges.

The most difficult challenge to solve, in many cases, is controlling employee use. Typically, the smartphone is employee-owned and therefore out of the organisation’s control: users can install and use any software they like, and contact anyone they wish.

At this point, security awareness and education are the first line of defence: educate users about the risks of using these apps for work-related matters, backed up with a smartphone acceptable use policy and remote wipe functionality. Of course, if the device is owned by the organisation, such apps can be pre-installed, made available through an organisation app store, or banned outright.
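On corporate-owned Android devices, for example, this kind of app control is usually enforced through an enterprise mobility management (EMM) policy rather than user education alone. The sketch below is illustrative only: it assumes the Android Management API (called via the google-api-python-client library), and the enterprise ID and package names are hypothetical placeholders; most MDM/EMM products offer equivalent allow/block controls.

```python
# Illustrative sketch: force-install an approved messaging app and block a
# banned one using the Android Management API. Assumes google-api-python-client
# and Application Default Credentials are already configured; the enterprise ID
# and package names below are hypothetical placeholders.
from googleapiclient.discovery import build

ENTERPRISE = "enterprises/LC012345678"              # hypothetical enterprise ID
POLICY_NAME = f"{ENTERPRISE}/policies/messaging-baseline"

policy = {
    # Only explicitly listed apps are available in managed Google Play.
    "playStoreMode": "WHITELIST",
    "applications": [
        {   # The organisation's approved messaging app is pushed to devices.
            "packageName": "com.example.approved.messenger",   # hypothetical
            "installType": "FORCE_INSTALLED",
        },
        {   # A consumer messaging app the organisation has decided to ban.
            "packageName": "com.example.consumer.chat",        # hypothetical
            "installType": "BLOCKED",
        },
    ],
}

service = build("androidmanagement", "v1")
service.enterprises().policies().patch(name=POLICY_NAME, body=policy).execute()
```

On employee-owned devices no such technical control exists, which is why the policy and awareness measures above carry the weight.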


Examples of the criteria that can be used to choose such an app are given below; a simple scoring sketch follows the list. Can the organisation:

  • influence the provider to make changes to raise the security of the app and related infrastructure?
  • host the app itself or does it have to be supplied by the provider?
  • audit the security of the provider regularly?
  • negotiate contractual terms to include reporting of compromise or breach or other failure at the provider?
  • create and administer its own contacts, contact groups and lists through a management or similar console?
  • control the attachment of photos and documents to messages?
  • wipe the app and associated data from the smartphone remotely?
  • record messages, attachments, logs, use statistics or similar metrics for compliance, analytical and security use?
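A lightweight way to make such an assessment repeatable is to capture the criteria as a weighted checklist and score each candidate app against it. The Python sketch below is a minimal illustration: the criterion names mirror the list above, while the weights and the example answers are hypothetical placeholders an organisation would set for itself.

```python
# Minimal sketch: score a candidate messaging app against the criteria above.
# Criterion names mirror the list; weights and answers are hypothetical.
from dataclasses import dataclass

CRITERIA = {
    "influence_provider_security": 2,
    "self_hosting_option": 3,
    "regular_security_audits": 3,
    "breach_reporting_in_contract": 2,
    "managed_contacts_and_groups": 1,
    "attachment_controls": 2,
    "remote_wipe_of_app_and_data": 3,
    "compliance_logging_and_metrics": 2,
}

@dataclass
class Assessment:
    app_name: str
    answers: dict  # criterion -> True/False, gathered during the assessment

    def score(self) -> float:
        """Fraction of the weighted criteria the app satisfies."""
        earned = sum(w for c, w in CRITERIA.items() if self.answers.get(c))
        return earned / sum(CRITERIA.values())

# Example: a partially compliant candidate (answers are illustrative only).
candidate = Assessment(
    app_name="ExampleChat",
    answers={
        "self_hosting_option": True,
        "remote_wipe_of_app_and_data": True,
        "compliance_logging_and_metrics": False,
    },
)
print(f"{candidate.app_name}: {candidate.score():.0%} of weighted criteria met")
```

Whatever form the checklist takes, the point is to make the selection decision auditable rather than ad hoc.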

In all cases, the organisation should accept that confidential or sensitive information will appear in these apps, whether by accident or design, and should have incident management in place to deal with that information being compromised and made available in the public domain.

