
Security Think Tank: Automating basic security tasks

How can organisations evolve their security operations teams to do more automation of basic tasks and cope with dynamic IT environments?

Previous Think Tank articles have covered subjects such as automated security testing, incident response plans and security analytics. This article will address how the automation of basic tasks can help security operations teams deal with ever more complex and dynamic environments, and in the process give security professionals time to address difficult or unusual incidents.

It has to be said that many companies are too small to have dedicated security operations teams, or even a dedicated security professional, and therefore will have to rely on external help. Even so, automation will prove to be a major benefit to the small to medium-sized enterprise (SME).

What are the “basics” and what do we mean by “automation”? 

The basics cover such things as:

  • Detecting problems with network access – for example, bad user log-ins, users accessing the network from unusual locations or at unusual times, user accounts trying to access resources they are not permitted to use;
  • Detecting resource problems – for example, servers with unusually high CPU and/or memory usage, disk performance issues, users experiencing poor performance or being unable to access files;
  • Unusual network activity – for example, unusual or unexpected data flows, overloaded Ethernet switches, systems or services “disappearing” from the network. 
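As an illustration, the first of these basics might be sketched in Python. The record layout, thresholds and working hours below are hypothetical, purely to show the shape of such a check:

```python
from collections import Counter
from datetime import datetime

# Hypothetical pre-parsed log records: (timestamp, user, event, source_ip)
events = [
    (datetime(2023, 5, 1, 3, 12), "alice", "login_failed", "203.0.113.9"),
    (datetime(2023, 5, 1, 3, 13), "alice", "login_failed", "203.0.113.9"),
    (datetime(2023, 5, 1, 3, 14), "alice", "login_failed", "203.0.113.9"),
    (datetime(2023, 5, 1, 9, 5),  "bob",   "login_ok",     "192.0.2.10"),
]

FAILED_THRESHOLD = 3       # alert after this many failures per user (assumed)
WORK_HOURS = range(8, 19)  # activity outside 08:00-18:59 is "unusual" (assumed)

# Count failed log-ins per user and flag repeat offenders
failures = Counter(u for _, u, e, _ in events if e == "login_failed")
alerts = [f"repeated failed log-ins for {u}" for u, n in failures.items()
          if n >= FAILED_THRESHOLD]

# Flag any activity outside normal working hours
alerts += [f"out-of-hours activity by {u} at {t:%H:%M}"
           for t, u, e, _ in events if t.hour not in WORK_HOURS]
```

In practice the events would come from parsed server and network-device logs rather than a hard-coded list, and the thresholds would be tuned to the organisation's own baseline.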

The corollary to event detection (of the basics) is event recording and reporting mechanisms. Generally, this will include:

  • Configuring a server’s operating system to capture essential events – for example, a failed log-on due to bad password;
  • Configuring where the log files are to be stored – on the server itself or on a separate log file server;
  • Setting up the reporting mechanism – for example, simple network management protocol (SNMP) V2, SNMPV3, community strings, Management Information Base (MIB).
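To illustrate forwarding events to a separate log server, Python's standard library offers a syslog handler. The sketch below stands up a throwaway local UDP socket in place of a real central log server, which in production would be a dedicated host listening on port 514:

```python
import logging
import logging.handlers
import socket

# Throwaway UDP socket standing in for the central log server (assumption:
# a real deployment would point at a dedicated syslog host, typically port 514)
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
port = srv.getsockname()[1]

# Forward security events from this server to the "log server" over syslog
logger = logging.getLogger("auth")
logger.addHandler(logging.handlers.SysLogHandler(address=("127.0.0.1", port)))
logger.warning("Failed log-on for user alice: bad password")

# The log server receives the event as a syslog datagram
data, _ = srv.recvfrom(1024)
```

Centralising logs this way matters for security as well as convenience: an attacker who compromises a server cannot quietly delete evidence that has already left the box.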

Don’t forget that a server itself may be virtualised, so the virtualisation software will be generating logs that need capturing along with the log files being generated by network devices such as firewalls, Ethernet switches, routers and load balancers. Also, some analytic products have agents that can be deployed to servers and applications.

Automation comes down to the use of analytic software – for example, a commercial or open source product that takes the log and audit files generated across the IT infrastructure, combines them to form a uniform view of its state, and issues alerts and reports.
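The core of that combining step is normalising differently formatted logs into one uniform record shape. The log lines, field names and patterns below are invented for illustration; a real product ships parsers for hundreds of formats:

```python
import re

# Hypothetical raw lines from two different sources
firewall_line = "2023-05-01T03:12:44 DENY tcp 203.0.113.9:4431 -> 10.0.0.5:22"
auth_line = "May  1 03:12:45 srv01 sshd[912]: Failed password for alice from 203.0.113.9"

def normalise(line, source):
    """Map a raw log line from a known source onto one uniform record shape."""
    if source == "firewall":
        m = re.match(r"(\S+) (DENY|ALLOW) \S+ (\S+?):\d+ -> (\S+?):\d+", line)
        return {"source": source, "time": m.group(1),
                "event": m.group(2).lower(), "peer": m.group(3),
                "host": m.group(4)}
    if source == "auth":
        m = re.search(r"Failed password for (\S+) from (\S+)", line)
        return {"source": source, "event": "login_failed",
                "user": m.group(1), "peer": m.group(2)}

records = [normalise(firewall_line, "firewall"), normalise(auth_line, "auth")]
```

Once every event shares one shape, cross-source correlation becomes a simple query – here, both records share the same peer address, which is exactly the kind of pattern an analyst wants surfaced automatically.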

Configuring what events are captured from within the IT infrastructure goes hand in hand with configuring the alert and reporting mechanisms on the analytics product, and it will typically take a few weeks to a few months to tune things to an effective, usable level.

To be effective, alerts need to have a priority from urgent through moderate to minor, and the alerting mechanism should reflect the urgency. For example, an urgent alert would be splashed up in red on a monitor screen and additionally issued as a pager or SMS alert, whereas a minor alert could be sent as an email. Daily, weekly and monthly emailed reports would summarise the activity.
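The severity-to-channel mapping described above could be captured in a simple routing table. The channel names here are placeholders; real deployments would wire them to a paging service, SMS gateway or mail relay:

```python
# Hypothetical routing table matching the priorities described above
SEVERITY_ROUTES = {
    "urgent":   ["console_red", "sms"],  # splash in red on screen, plus page/SMS
    "moderate": ["console"],
    "minor":    ["email"],
}

def route_alert(severity, message):
    """Return (channel, message) pairs for each delivery channel this severity uses."""
    # Unknown severities fall back to email rather than being dropped silently
    channels = SEVERITY_ROUTES.get(severity, ["email"])
    return [(channel, message) for channel in channels]

deliveries = route_alert("urgent", "Domain controller unreachable")
```

Keeping the routing in data rather than code makes it easy to adjust as the team learns which alerts genuinely warrant waking someone up.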

The chosen analytic product, in addition to issuing alerts and reports, would also give a security professional (who might be outsourced) the ability to use analytics tools to perform detailed analysis of the captured log and audit data, leading to quicker and more effective issue identification.  
