Penetration Tests Day Five: What vulnerabilities do Penetration Tests uncover?

In the last installment of our guide to Penetration Tests, Patrick Gray learns where testers find the most vulnerabilities.

TT: What are pen testers finding on networks these days and what are the top five mistakes organisations are making that lead to successful penetration?

Amit: What aren't we finding and seeing! Think of worst-case scenarios and we've generally seen them, and we see them frequently: default server builds, open administrative interfaces, little to no server hardening, and application development issues that allow for compromise of data stored in backend database servers.

Organisations' security mistakes stem from weak coding practices, weak policies and standards, and weak enforcement of good practices.

Technically, and this is just to name a few things, the mistakes are as follows:

  • Trusting input received from a user on the Internet (see the sketch after this list).
  • Bad security practices in storage of client and application data on servers.
  • Improper server hardening and patching.
  • Detailed error messages revealing valuable information to attackers.
  • Insecure software development practices.
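
To make the first point concrete, here is a minimal sketch of what trusting user input typically looks like in a web application and how a parameterised query avoids it. The example uses Python and SQLite purely for illustration; the function, table and column names are hypothetical and not drawn from any engagement described here.

```python
# Illustrative only: a classic "trusting input" mistake and its fix.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: the user-supplied value is concatenated straight into SQL,
    # so input like  ' OR '1'='1  changes the meaning of the query.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: a parameterised query keeps the input as data, never as SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```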

Brian: The absolute number one mistake is not upgrading. Some high-tech companies are still running software that is years out of date. Not installing patches, not upgrading when vulnerabilities are published, and not re-evaluating configurations are inexcusable, but all too common.

Adam: We always come across low-hanging fruit when assessing environments, but what we are finding in the more hardened environments are weaknesses introduced by "little hacks" here and there, which people have made just to get things working. Administrative interfaces are a common one. Many forget that shiny new Dell or HP hardware comes with a listening service that provides DRAC or similar low-level out-of-band control. We are also finding companies rushing out virtualised infrastructure, where the hosts within the virtual world are hardened, protected and to some extent secure, but the host housing the VMs sucks.
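
As a rough illustration of how those forgotten management interfaces get spotted, the sketch below sweeps a couple of hosts for common remote-management ports. The addresses and the port list are assumptions chosen for the example, not a definitive inventory of what DRAC, iLO or similar controllers expose.

```python
# Illustrative sweep for forgotten management interfaces (web consoles, VNC, RDP).
import socket

MGMT_PORTS = {22: "SSH", 23: "Telnet", 443: "HTTPS (often DRAC/iLO console)",
              3389: "RDP", 5900: "VNC"}

def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    # Returns True if a TCP connection to host:port succeeds.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ["192.0.2.10", "192.0.2.11"]:   # placeholder addresses
    for port, name in MGMT_PORTS.items():
        if probe(host, port):
            print(f"{host}:{port} open ({name})")
```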

Poor service-access filtering, little or no rate limiting and, usually, multiple paths into machines for maintenance are other problems. I once found a host that had VNC, RDP and Dameware all running. It was managed by three different groups within the company: tech support, developers and database administrators.

We also see more and more silly mistakes, like development environments accidentally staged in "hidden" directories on production environments: the age-old "security through obscurity".
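
Those "hidden" directories rarely stay hidden, because they can be found by simply guessing common names. The sketch below shows the idea; the base URL and wordlist are hypothetical examples, not paths from any real assessment.

```python
# Illustrative probe for leftover development directories on a production site.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

BASE = "https://www.example.com"
CANDIDATES = ["/dev/", "/staging/", "/test/", "/old/", "/backup/"]

for path in CANDIDATES:
    try:
        with urlopen(BASE + path, timeout=5) as resp:
            print(f"{path} -> HTTP {resp.status}")   # anything other than 404 is interesting
    except HTTPError as err:
        if err.code != 404:
            print(f"{path} -> HTTP {err.code}")
    except URLError:
        pass   # host unreachable or TLS problem; skip
```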

Not protecting switching and routing infrastructure is a common one, too. Organisations do a good job of protecting and hardening the hosts, but the switch or router connecting the environment can be compromised.

We also see newer technologies, like converged devices (all-in-one router/firewall/IDS/anti-spam/network-monitoring/switching, etc.), being put in insane spots.

And while a customer's servers might be resilient to attack, we're seeing the administrator's PC used as a "dumping ground" where the administrators share configuration files and the like. Those machines can be totally ownable. We often get provided "targets", which are servers, applications or services, but we frequently reach our goals through other means. The weakest link can often lead to the goldmine.
