
Integrating security with robotic process automation

RPA’s power is that it can mimic human behaviour. Enterprises must secure robotic automation with this principle in mind to avoid security failures

Robotic process automation (RPA) tools can handle sensitive enterprise data. This may involve copying and pasting account numbers and amounts from invoices to payment systems.

As a result, a bot has privileged access to enterprise systems and resources. Sensitive data can be exposed to attackers, and especially to insiders, unless proper security measures are in place.

For example, the bot's credentials, or the customer data it handles, could be exposed. In terms of fraud risk, insiders could take advantage of the RPA tool's access rights to insert fraudulent actions into the scripts that are run. So proper governance, including security, is essential.

Security leaders need to treat RPA as an approach to automating business processes, not just a recorder and launcher of scripts. Once deployed, RPA becomes an integral part of the enterprise infrastructure, and its security should also be integrated into enterprise security.

Price and functionality

Organisations do not select RPA tools based on security features, but rather on price and functionality. Only after the selection process, during implementation, do organisations ensure basic security is in place, such as encryption of the data the tool handles.

Gartner recommends that an assessment of the RPA tool by an application security testing supplier should be a requirement in the selection process. RPA suppliers often give assurance that their tools have been tested for vulnerabilities by such a supplier.

The assessment report itself should be required. If proper security vetting of the RPA tool is not performed, security holes can be left in the implementation. And whenever there is an RPA security failure, the security team will need to review the log files.

Full audit trail

Certain security features in RPA implementations cannot and should not be provided via third-party tools. While third-party auditing tools can be used, ideally the RPA tool should generate the log itself, since it has full visibility of the actions it has taken in the applications it has accessed.

Other enterprise tools could leave gaps where they do not have visibility or compatibility with applications.

The log, or audit trail, of RPA activity is paramount to ensure non-repudiation. Without it, it is not possible to conduct an investigation. The RPA tool must be able to provide a complete, system-generated and immutable log of its activity.

Enterprises typically feed RPA logging into a separate system, such as a central log management platform, where the logs are stored securely and in a forensically sound manner. The log must be complete, as gaps would hinder any investigation or cause the security team to miss important alerts.

The log should be system-generated and must also be integrity-protected to ensure it is immutable; one way to do this is by signing the log. To ensure script integrity, the log should also record changes made to scripts by developers or other parties.
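As an illustration only, the Python sketch below shows one way log entries can be made tamper-evident: each entry is signed with an HMAC computed over its content and the previous entry's signature, so altering, reordering or removing an entry breaks the chain. The key handling, field names and functions are hypothetical rather than features of any particular RPA or logging product.

import hmac
import hashlib
import json

# Hypothetical signing key; in practice it would be held by the log service, not the bot
SIGNING_KEY = b"replace-with-a-key-held-by-the-log-service"

log: list[dict] = []

def sign_entry(entry: dict, previous_signature: str) -> str:
    # Each signature covers the entry plus the previous signature, chaining the log
    payload = json.dumps(entry, sort_keys=True).encode() + previous_signature.encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def append_entry(entry: dict) -> None:
    previous = log[-1]["sig"] if log else ""
    entry["sig"] = sign_entry(entry, previous)
    log.append(entry)

def verify_chain(entries: list[dict]) -> bool:
    # Recompute every signature; a tampered or deleted entry makes verification fail
    previous = ""
    for entry in entries:
        expected = sign_entry({k: v for k, v in entry.items() if k != "sig"}, previous)
        if not hmac.compare_digest(expected, entry["sig"]):
            return False
        previous = entry["sig"]
    return True

append_entry({"bot": "invoice-bot", "action": "copied invoice values to payment system"})
print(verify_chain(log))  # True until any entry is modified or removed

In practice, enterprises would rely on the RPA tool's own logging and a central log management or SIEM platform rather than hand-rolled code, but the principle of chaining and signing entries is the same.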

Do not re-use human credentials with bots

Bot operators are employees responsible for launching RPA scripts and dealing with exceptions.

Sometimes, in the rush to deploy RPA and see immediate results, enterprises will not distinguish between the bot operators and the bot identities. The bots are run using human operator credentials.

This configuration makes it unclear whether a bot conducted a scripted operation or a human operator took an action. It becomes impossible to unambiguously attribute actions, mistakes and, most importantly, attacks or fraudulent activity.

The other issue that arises from re-using human operator credentials for bots is that administrators will tend to keep password complexity and the frequency of rotation to a minimum.

They are limited to what is a reasonable user experience for a human, rather than what a bot can handle. This makes brute-force attacks easier and increases the risk of consequent data leakage.
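Because a bot never needs to type or remember its password, there is no reason to keep it short or rotate it rarely. A minimal Python sketch of the difference, assuming credentials are generated and stored in an enterprise vault rather than set by an administrator, could look like this; the vault call is a hypothetical placeholder.

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_bot_password(length: int = 64) -> str:
    # A long, random credential that no human ever has to type or remember
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def store_in_vault(bot_id: str, password: str) -> None:
    # Hypothetical placeholder for the enterprise credential vault API
    raise NotImplementedError("integrate with your secrets vault here")

def rotate_bot_credential(bot_id: str) -> None:
    # Rotation can be scheduled far more aggressively than would be reasonable for a person
    store_in_vault(bot_id, new_bot_password())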


Instead, Gartner recommends assigning a unique identity to each RPA bot. Bots should have dedicated identification credentials whenever possible.

Identity naming standards should also distinguish between human and bot identities wherever possible. There is not one single right way to implement this in practice. One example could be assigning B-123 as an identity for the bot operated by an employee with the identity E-123.

Ultimately, audit trails (logs) should provide the information that user E-123 asked bot B-123 to carry out task X.
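To make the convention concrete, the Python sketch below shows the kind of audit record it enables. The B-/E- prefixes follow the example above; the field names and record format are illustrative assumptions, not any specific RPA tool's log schema.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class RpaAuditRecord:
    operator_id: str   # human identity, for example "E-123"
    bot_id: str        # dedicated bot identity, for example "B-123"
    task: str          # the scripted task the operator asked the bot to run
    timestamp: str

def record_task_launch(operator_id: str, bot_id: str, task: str) -> str:
    # One audit trail entry: user E-123 asked bot B-123 to carry out task X
    record = RpaAuditRecord(
        operator_id=operator_id,
        bot_id=bot_id,
        task=task,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))

print(record_task_launch("E-123", "B-123", "copy invoice values to payment system"))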

One exception to this rule comes from use cases, such as call centre operations, where human users leverage robotic automation technology on their own computers to automate specific operations that form part of a larger manual process.

This is sometimes called robotic desktop automation (RDA). In those cases, it may prove difficult to avoid reusing user credentials.

Tightening RPA data access

Some organisations have expressed concerns about allowing RPA to modify databases directly. This could lead to data tampering, but most importantly to data corruption.

Where a user interface is available for database access, it should be used, even though it may slow down the bot. Alternatively, tools such as database activity monitoring can be placed in front of the databases to provide oversight.

Most RPA tools provide role-based and resource-based access controls to restrict access to RPA functionality. RPA tools can also integrate with enterprise directory services, restricting access to enterprise resources and assigning account privileges correctly.

Gartner urges IT departments to avoid using free versions of RPA tools with production data. Often, free versions of RPA tools are intended only for trials and do not provide security functionality. These versions may render any data used with them public, so they should only be used as trial tools with test data.

Overall, Gartner recommends that security leaders restrict RPA access to what each bot strictly needs to conduct the assigned task – for example, an RPA script that copies certain values from a database and pastes them into an email.

The bot operating the script should only have read access to the database, rather than write access.
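As a sketch of what that least-privilege setup could look like, the Python snippet below provisions a dedicated database account for the bot with read access to a single table and nothing else. It assumes a PostgreSQL database reached via psycopg2; the connection string, role name, table name and password handling are hypothetical.

import psycopg2

# Hypothetical connection details; in practice they would come from configuration or a vault
conn = psycopg2.connect("dbname=erp user=dba host=db.internal")
conn.autocommit = True

with conn.cursor() as cur:
    # A dedicated, clearly named identity for the bot, never a re-used operator account
    cur.execute("CREATE ROLE bot_b123 LOGIN PASSWORD 'a-long-generated-secret'")
    # Read-only access to the one table the script needs; no write privileges anywhere
    cur.execute("GRANT SELECT ON invoices TO bot_b123")

conn.close()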

Line of business

Most RPA initiatives are led by the line of business, with IT and security teams consulted only sporadically during development, if at all. The initiatives are run by the business and may be handed over to IT later down the road.

We’ve found that establishing a common language and an ongoing dialogue between the security team and the line-of-business team that leads the RPA initiative is essential.

This may entail establishing a risk framework, whereby each RPA script is evaluated in terms of risk.
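A minimal Python sketch of what such a framework could look like in practice is shown below; the criteria, weights and review levels are illustrative assumptions rather than Gartner's.

from dataclasses import dataclass

@dataclass
class RpaScriptRisk:
    name: str
    handles_personal_data: bool
    writes_to_systems: bool
    moves_money: bool

def risk_score(script: RpaScriptRisk) -> int:
    # Crude additive scoring; every criterion and weight here is an assumption
    score = 3 if script.moves_money else 0
    score += 2 if script.handles_personal_data else 0
    score += 1 if script.writes_to_systems else 0
    return score

def review_level(score: int) -> str:
    # Map the score to how much security review a script receives before go-live
    if score >= 4:
        return "full security review"
    if score >= 2:
        return "lightweight review"
    return "standard change control"

invoice_bot = RpaScriptRisk("copy-invoice-values", handles_personal_data=True,
                            writes_to_systems=False, moves_money=True)
print(review_level(risk_score(invoice_bot)))  # full security review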

This article is based on an excerpt of Gartner’s Top four security failures in robotic process automation by analysts Dionisio Zumerle and Cathy Tornbohm.
