Jan 21, 2013 (03:01 AM EST)
4 Steps For Proactive Cybersecurity
Read the Original Article at InformationWeek
In our dive into the theory behind offensive cybersecurity, Gadi Evron summarized the legal and ethical problems of fighting back against an attacker. There are also some purely tactical problems: How do you know you're not blasting some grandmother in Akron whose PC is a zombie? Are you prepared to come under the glare of organized criminals?
I share Evron's view that for most, if not all, nongovernmental entities, it's too soon to go down the path of all-out offensive counterattacks. Many other security professionals agree, and you can get a good summary of the academic and government research on cyber espionage, cyber deterrence and cyber offense by reading a recent post by Dave Dittrich, a member of the HoneyNet Project: "No, Executing Offensive Actions Against Our Adversaries Really Does Have High Risk (Deal With It)."
But you can do a lot more than read and hope. Here are some ways to take action now that will at least let your team start adopting a more offensive security mindset.
Step 1: Do active risk analysis to know what attackers may strike at, and how.
Intelligence gathering is an arduous task for even well-funded government agencies, so it is highly unlikely that your company can achieve the level of detail required for true cyber intelligence about attackers. Further complicating intelligence gathering is that private-sector chief information security officers don't share details of successful breaches, even though such collaboration would be critical to understanding and linking methods and attackers. But that's another article.
For now, focus your effort on the intelligence gathering you do control: knowledge of your own systems, networks and business.
Conventional cyber defense involves security engineers trying to figure out what attackers can do, how they might break in and what system holes could be exploited. But this is where IT could learn from traditional engineering disciplines, which take a more proactive approach. For example, mechanical engineers are taught to approach problems using failure analysis: identifying the conditions under which a failure can occur, rather than enumerating every failure that might occur. Think of an explosion caused by an oily rag. Without oxygen, oil, the rag and a spark to ignite them, an explosion can't happen. Yet most security engineers trying to keep their networks from being blown wide open look for flames in log data (the attack itself) rather than for the oxygen, oily rags and sparks -- the conditions that must be present before an explosion is possible.
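To make the analogy concrete, a minimal sketch of this failure-analysis mindset in Python: model an incident as a conjunction of required conditions, fault-tree style, rather than as an attack signature to detect. The condition names here are illustrative assumptions, not a real product's API or a definitive threat model.

```python
# Failure-analysis sketch: an incident is only possible when ALL of its
# enabling conditions are present; removing any one breaks the chain.
# Condition names below are hypothetical examples for illustration.

REQUIRED_CONDITIONS = {
    "reachable_service": True,       # attacker can reach the host
    "unpatched_software": True,      # exploitable flaw is present
    "weak_credentials": True,        # guessable password in use
    "admin_rights_for_users": True,  # malware can install itself
}

def incident_possible(conditions):
    """The incident can occur only if every enabling condition holds."""
    return all(conditions.values())

def candidate_mitigations(conditions):
    """Each condition currently present is a place to break the chain."""
    return [name for name, present in conditions.items() if present]

if __name__ == "__main__":
    print(incident_possible(REQUIRED_CONDITIONS))  # all conditions hold
    # Eliminate one condition under your control and re-evaluate:
    mitigated = dict(REQUIRED_CONDITIONS, weak_credentials=False)
    print(incident_possible(mitigated))  # chain is broken
```

The design point is the direction of the analysis: instead of asking "what attacks might hit us?" (unbounded), you ask "which of these enabling conditions can I remove?" (finite and under your control).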
Your intelligence gathering needs to focus on identifying hazardous conditions. You will find that each condition has its own enabling conditions, and that chain continues until you reach a condition you can address directly. For example, instead of trying to detect or prevent a zero-day exploit from installing malware on a machine, ensure that the conditions for a breach are not present. Eliminate easily guessed passwords, weak permissions on files and folders, and unnecessary administrative permissions -- all of which are under your control -- instead of trying to figure out where and how any given piece of malware, which you don't control, might strike.
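One of those addressable conditions can be audited directly. The sketch below, a hedged illustration rather than a complete audit tool, checks for world-writable files as a stand-in for "weak permissions on files and folders"; the paths and scope are assumptions you would adapt to your environment.

```python
# Audit one addressable breach condition -- world-writable files --
# instead of hunting for the malware that would abuse it.
import os
import stat

def world_writable(path):
    """Return True if 'path' grants write access to all users."""
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_IWOTH)

def audit_tree(root):
    """Walk a directory tree and collect world-writable files."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                if world_writable(full):
                    findings.append(full)
            except OSError:
                continue  # skip unreadable entries rather than abort
    return findings

if __name__ == "__main__":
    # Example scope; in practice you would target directories that
    # matter to your business (web roots, shared drives, home dirs).
    for finding in audit_tree("/tmp"):
        print("world-writable:", finding)
```

Each hit is a condition you can eliminate today with a `chmod`, regardless of which exploit might have taken advantage of it tomorrow.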
This approach requires that your security team know how attackers accomplish their mischief once inside, and that means spending time learning how exploits, penetration testing and the underlying applications work. This isn't easy, but it's why mechanical engineers spend years being trained to recognize such conditions.
While there are several failure-analysis methods, including Alex Hutton's Risk Fish, discussed recently in Dark Reading, here's how we recommend you go about it: