5 Security Lessons From Real-World Data Breaches

Aug 28, 2009 (08:08 PM EDT)

The unwritten rule among companies is that the less said about security breaches, the better. For every public revelation of stolen data there are dozens of breaches that don't make the news.

This code of silence might avoid angering partners and customers, and sidestep a public relations mess, but it makes it harder for the industry as a whole to learn from mistakes and improve information security and risk management practices. That's why this article draws on direct observations from real-world security breaches on which we've performed forensic investigations, to help companies understand how breaches happen and what to do about them.

Neohapsis, the company we work for, has performed investigations on some of the largest thefts of sensitive data. After hundreds of cases, we can unequivocally state that attackers are more sophisticated than ever. They can adeptly exploit lax security controls and sloppy operational practices and are armed with weapons from common network management tools to custom malware. Information security tactics and technology also have advanced, but not at the same pace.

The good news is that there are reasonable, well-understood methods to mitigate many of the breaches we have seen; we just need to get these methods more widely implemented.

We'll start by describing three real-world breaches.

A company's Web sites often serve as a beachhead for attackers. In one investigation we performed at a financial services firm, the attackers exploited a vulnerability they found in a Web application on a public-facing Web server. The server didn't house any critical data and wasn't particularly important to the organization, and the exploit wasn't particularly impressive, either; the attackers found a SQL injection vulnerability and then used an "xp_cmdshell" function to pull down their tools to get a foothold onto the server. Because the organization didn't consider the server or the application particularly critical, there weren't many monitoring controls around them, and the exploit went unnoticed.
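The SQL injection at the heart of that breach is worth making concrete. The sketch below (table and data are illustrative, using Python's built-in sqlite3 rather than the SQL Server environment where xp_cmdshell lives) shows how concatenating user input into a query lets an attacker rewrite it, and how a parameterized query closes the hole:

```python
import sqlite3

# Toy table standing in for the Web app's backend (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 200)])

def lookup_vulnerable(username):
    # Anti-pattern: user input concatenated straight into the SQL statement.
    query = "SELECT username, balance FROM accounts WHERE username = '%s'" % username
    return conn.execute(query).fetchall()

def lookup_safe(username):
    # Parameterized query: input is bound as data and can never change
    # the structure of the statement itself.
    return conn.execute(
        "SELECT username, balance FROM accounts WHERE username = ?",
        (username,)).fetchall()

# A classic injection payload turns the WHERE clause into a tautology.
payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # dumps every row in the table
print(lookup_safe(payload))        # returns nothing
```

On SQL Server, the same class of flaw is what gave the attackers a path to xp_cmdshell and, from there, the operating system.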

The attackers used the compromised server as their home base. They deployed tools and scanners and spent several months meticulously mapping the network without being detected. Once they found the systems that contained the data that they were looking for, they simply copied the information, put it into a Zip file, and moved it out.

The organization had standard antivirus and firewall technology, but the only reason it became aware of the attack was the real-world use of the stolen data; if not for that, the organization likely would have remained ignorant of the breach.

In another investigation we conducted, the attackers worked from the same playbook, compromising a Web-based e-commerce server at an online retailer. However, once the attackers made their way to the database systems to look for credit cards, they discovered the database with the credit card numbers was encrypted. Chalk one up for the good guys, right? Unfortunately, the decryption keys were stored on the same systems, so the attackers literally had the keys to the kingdom.
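The key-placement mistake is easy to demonstrate. This toy sketch uses a trivial XOR "cipher" purely to illustrate the point (it is not real cryptography, and the file names are invented): if the key sits on the same host as the ciphertext, anyone who can read that disk gets the plaintext.

```python
from itertools import cycle

# Toy XOR "cipher" used only to illustrate key placement -- NOT real crypto.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Anti-pattern from the breach above: ciphertext and key on the same host.
same_host = {
    "cards.db": xor_crypt(b"4111-1111-1111-1111", b"s3cret"),
    "keyfile":  b"s3cret",  # an attacker reading this disk gets both
}

stolen = xor_crypt(same_host["cards.db"], same_host["keyfile"])
print(stolen)  # attacker recovers the card number with no extra work

# Better: keep the key elsewhere -- an HSM, a key server, or at minimum a
# separate security zone -- so disk access on the database host yields
# only ciphertext.
```

The design principle, not the cipher, is the point: encryption only raises the attacker's cost if compromising the data store doesn't also hand over the key.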

What Keeps Security Pros Up At Night?
Our 2009 security survey covers it all.
Finally, we've worked on several cases in which attackers gained entry through point-of-sale systems.

The point-of-sale system vendor's support team used common remote access applications such as VNC to gain access to the systems for support and troubleshooting. But the vendor used the same remote access password for every customer. The attackers knew the password and simply ran bulk scans for other systems matching a similar profile. The rest was easy.
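A periodic audit for default or vendor-shared credentials would have caught this. A minimal sketch, with an invented inventory format and an illustrative default-password list:

```python
# Hypothetical credential audit: flag systems still using known default or
# vendor-shared remote-access passwords. The inventory format and the
# password list are illustrative, not from any real product.
KNOWN_DEFAULTS = {"password", "admin", "vnc123", "support"}

inventory = [
    {"host": "pos-store-01", "service": "vnc", "password": "support"},
    {"host": "pos-store-02", "service": "vnc", "password": "X9!kq2#v"},
    {"host": "pos-store-03", "service": "vnc", "password": "support"},
]

def find_default_creds(systems):
    return [s["host"] for s in systems if s["password"] in KNOWN_DEFAULTS]

print(find_default_creds(inventory))
```

An obvious extension is to also flag the same password reused across many hosts, which is exactly the pattern the attackers exploited here.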

We've abstracted five essential lessons from these and other real-world intrusions: Get serious about Web application security; add layers of security controls; understand the limits of security technology; review third-party systems; and know that bad incident response is worse than no incident response.

1. Get Serious About Web Security

Web applications are often a jumping-off point for intruders. We continue to see IT teams that have kept systems patched and firewalls deployed but were blindsided by applications that had flaws that were trivial to exploit.

An organization's best defense is to integrate security into the application development life cycle. Building code with fewer security defects provides a greater return on security than slapping Band-Aids on live applications. Using Web application scanning technology such as IBM's AppScan or Hewlett-Packard's WebInspect in the quality assurance or review process is critical. Companies that buy instead of build Web applications should review these apps or demand that vendors perform security assessments verified by a third party.

Web application firewalls serve as a secondary security control. These products are designed to spot known attacks and identify suspicious behavior that might indicate an intrusion attempt. However, they're only an aid. They don't address the root cause: flawed development practices and vulnerable applications. A Web application firewall may buy you some time, but companies are foolish not to fix the underlying risk.
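At heart, most WAF rules are pattern matches against request data. This minimal sketch shows both why they help and why they're only a band-aid: the pattern list catches the payloads it knows about and nothing else. (Real products ship far larger, regularly updated rule sets; these few patterns are illustrative.)

```python
import re

# Minimal WAF-style check: match a few well-known SQL injection patterns
# in a request parameter. Real products use far larger rule sets.
SQLI_PATTERNS = [
    re.compile(r"('|\")\s*or\s*('|\")?1('|\")?\s*=\s*('|\")?1", re.I),
    re.compile(r";\s*exec\b", re.I),
    re.compile(r"\bxp_cmdshell\b", re.I),
    re.compile(r"\bunion\s+select\b", re.I),
]

def looks_malicious(param_value: str) -> bool:
    return any(p.search(param_value) for p in SQLI_PATTERNS)

print(looks_malicious("' OR '1'='1"))  # True: a known payload is flagged
print(looks_malicious("O'Brien"))      # False: legitimate input passes,
                                       # but novel payloads pass too
```

A pattern list can only block what it has seen before, which is precisely why fixing the vulnerable code remains the real solution.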

2. Add Secondary Controls

Secondary controls such as internal firewalls, encryption, or database monitoring software can tip off security personnel or thwart attacks when intruders bypass primary controls. Unfortunately, we rarely see effective implementation of secondary controls.

For instance, we see companies that have deployed an additional tier of firewalls inside the network to better isolate critical systems, a practice we strongly encourage. However, it's common to find internal firewalls with lax policy settings that simply allow all traffic, or with unwieldy rules that no one understands because of a lack of documentation. We've handled a few cases where internal firewalls would have thwarted the attack had they been properly configured.
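Auditing internal firewall policy for exactly those problems can be partly automated. A sketch, using an invented rule format, that flags allow-all rules and undocumented allow rules:

```python
# Sketch of an internal-firewall policy audit. The rule format is
# illustrative; real firewalls export policies in vendor-specific formats.
rules = [
    {"id": 1, "src": "10.1.0.0/16", "dst": "10.9.0.5", "port": 1433,
     "action": "allow", "comment": "app tier to card DB"},
    {"id": 2, "src": "any", "dst": "any", "port": "any",
     "action": "allow", "comment": ""},
    {"id": 3, "src": "10.2.0.0/16", "dst": "10.9.0.5", "port": 22,
     "action": "allow", "comment": ""},
]

def audit(rule_set):
    findings = []
    for r in rule_set:
        if r["action"] == "allow" and r["src"] == "any" and r["dst"] == "any":
            findings.append((r["id"], "allows all traffic"))
        if r["action"] == "allow" and not r["comment"]:
            findings.append((r["id"], "undocumented allow rule"))
    return findings

for rule_id, problem in audit(rules):
    print(f"rule {rule_id}: {problem}")
```

Even a simple check like this, run on a schedule, surfaces the allow-all and mystery rules that quietly neutralize an internal firewall.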

Smart organizations will identify where they can use segmentation to better isolate sensitive or critical systems and data, and create secondary and tertiary control systems based on that segmentation. Start by asking, "What would hurt us the most if it were compromised?" So, manufacturers might add security layers around systems that store product designs or control assembly lines. A utility may segment grid control systems. A payment processor or merchant should focus on the systems that process payments.

But don't stop at inserting those secondary controls and then leaving them alone. Beware of bonehead policies that ease operational overhead but utterly neutralize the control's value for reducing risk. Configure, document, and monitor these controls. Devote resources to examining the logs of control systems on a regular basis and watch for changes and anomalous activity. Done right, these additional controls can save you; done wrong, they complicate the environment while providing a false sense of safety.
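One cheap way to watch for the silent policy drift described above is to fingerprint each control's configuration and compare it against the last reviewed baseline. A sketch, with invented configuration content:

```python
import hashlib
import json

# Sketch: detect unreviewed changes to a control's configuration by hashing
# a canonical snapshot and comparing it to the last approved baseline.
def config_fingerprint(config: dict) -> str:
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

baseline = {"rules": [{"src": "10.1.0.0/16", "dst": "10.9.0.5", "action": "allow"}]}
current  = {"rules": [{"src": "any", "dst": "any", "action": "allow"}]}  # drifted

if config_fingerprint(current) != config_fingerprint(baseline):
    print("ALERT: control configuration changed since last review")
```

The alert doesn't say whether the change was legitimate; it forces a human to look, which is the point of monitoring a secondary control in the first place.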

3. Know Your Limits

The third lesson is to understand the limits of your security systems. We have antivirus, firewalls, network and host intrusion detection systems, authentication, PKI, VPNs, NAC, vulnerability scanners, data loss prevention tools, security information and event management platforms--and yet the breaches go on.

That's because controls aren't progressing as fast as the capabilities of attackers. We have worked several cases where systems with fully updated antivirus signatures failed to detect active Trojan horses, key loggers, and sniffers. Most signature development cycles still work from an outdated assumption that a successful piece of malware will be widespread, letting the vendor become aware of it and construct a signature. Also, attackers use packers to hide malware from virus scanners.

Vulnerability scanners also aren't keeping up with released vulnerabilities and can't currently do effective application scanning. Intrusion detection and prevention suffers many of the same shortcomings as antivirus products.

What's an IT team to do? For starters, place the appropriate level of trust in each technology--and no more. Don't expect your antivirus to spot custom malware. Use vulnerability scanners only as a secondary test to make sure your patch management system is working. Assume your firewall will block automated scans but that a skilled attacker will make it past the perimeter.

Intrusion detection and prevention systems can be useful at times, but we often find router Netflow data and firewall "allow" logs provide a better view of where attackers went, and help in measuring the extent of a breach.
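Reconstructing an attacker's path from those flow records is mostly bookkeeping. A sketch, with an invented record format: given NetFlow-style or firewall "allow" records and a known-compromised host, list every internal system it touched and when.

```python
from collections import defaultdict

# Sketch: given firewall "allow" or NetFlow-style records, reconstruct which
# internal hosts a known-compromised machine contacted. The record format
# and addresses are illustrative.
flows = [
    {"ts": "2009-03-01T02:14", "src": "10.1.5.20", "dst": "10.9.0.5", "dport": 1433},
    {"ts": "2009-03-01T02:20", "src": "10.1.5.20", "dst": "10.9.0.6", "dport": 1433},
    {"ts": "2009-03-01T09:00", "src": "10.2.7.7",  "dst": "10.9.0.5", "dport": 1433},
    {"ts": "2009-03-02T03:01", "src": "10.1.5.20", "dst": "10.9.0.5", "dport": 445},
]

def contacted_by(host, records):
    reached = defaultdict(list)
    for r in records:
        if r["src"] == host:
            reached[r["dst"]].append((r["ts"], r["dport"]))
    return dict(reached)

for dst, hits in contacted_by("10.1.5.20", flows).items():
    print(dst, hits)
```

This is exactly the question a breach investigation asks--"where did they go, and when?"--and it's answerable only if those flow and allow logs were retained in the first place.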

From an operational standpoint, consider deploying event management technology to get a picture of activity across multiple systems, or at least implement centralized log management to help search, review, and store logs.

Also consider the skill and motivation of your adversary and what controls might be necessary to detect their presence. Understanding their capabilities is becoming increasingly important.

4. Trust But Verify

The fourth lesson is simple but often forgotten: Review third-party systems. As our point-of-sale example shows, security due diligence should be performed by an internal team or a third-party application security firm. Don't miss low-hanging fruit such as changing default passwords.

5. Plan For Incidents

Finally, be aware that bad incident response can be worse than none at all. We often deal with organizations whose IT teams destroy evidence, intentionally or otherwise, by rebuilding systems, wiping drives, purging sections of databases, or giving third-party providers access to compromised systems. Such missteps make it tougher to trace the breach, and they can destroy or taint evidence that could be used in criminal prosecution.

Have basic procedures in place--even as basic as "do nothing until you check the incident response procedure." There's much free material, ranging from the NIST 800-61 guide to Visa's "If Compromised" guidelines. Both documents can easily be found with a Web search.

Security breaches are painful for companies and the IT pros involved but silence isn't always the best response. Pulling back the curtain on common mistakes helps businesses understand what they're up against.

Greg Shipley, Tyler Allison, and Tom Wabiszczewicz work in Neohapsis' Chicago office.

Illustration by Jupiterimages