Mar 28, 2008 (08:03 PM EDT)
Risk Management: Do It Now, Do It Right
Read the Original Article at InformationWeek
Astrophysicists and information security officers have something in common: The universes they monitor are expanding at an inexorable pace, and turning back time is not an option. We're being bombarded with competing demands around regulatory compliance and the next big thing in security, while the breaches we combat are having a larger impact. Our adversaries have gone from hobbyists to organized criminals, disclosure and privacy laws continue to be passed, the cost to clean up after attacks is rising, and reactive information security has proved ineffective. The stakes are a lot higher on all fronts, and the time for major change is clearly upon us.
It doesn't take a rocket scientist to realize that, in a resource-strapped world, prioritization is the critical component of setting an IT security agenda. Define the organization's most critical systems and data sets. Assess the risks associated with those assets. Decide which risks can be accepted, which can be mitigated, and which can be transferred. Build a plan, and allocate resources appropriately.
If only it were that easy.
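In fact, the idealized loop is simple enough to sketch in a few lines of code. The asset names, loss figures, and acceptance threshold below are invented for illustration, not drawn from any real program:

```python
# Illustrative sketch of the idealized prioritization loop described above.
# Asset names, loss estimates, and thresholds are invented for the example.

ASSETS = [
    # (name, annualized loss estimate in $, annual mitigation cost in $)
    ("customer_database", 500_000, 120_000),
    ("public_web_site",    80_000,  30_000),
    ("test_lab",            5_000,  40_000),
]

ACCEPT_THRESHOLD = 10_000  # losses below this are simply accepted

def plan(assets):
    """Work through assets in descending loss order, picking a treatment."""
    decisions = []
    for name, loss, cost in sorted(assets, key=lambda a: a[1], reverse=True):
        if loss < ACCEPT_THRESHOLD:
            decisions.append((name, "accept"))
        elif cost < loss:
            decisions.append((name, "mitigate"))
        else:
            decisions.append((name, "transfer"))  # e.g., via insurance
    return decisions

for name, action in plan(ASSETS):
    print(f"{name}: {action}")
```

The hard part, of course, is nothing in the code: it's producing loss estimates and mitigation costs that anyone believes.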
There's no one-size-fits-all technology, process, or approach to security. But after analyzing successes and failures and talking to industry leaders, one trend stands out: Organizations are shifting from yesterday's binary, yes/no, good/bad information security thinking to a pragmatic approach of weighing risks and acting accordingly.
We must ensure a risk management approach is integrated into all processes, remain diligent about project selection, move beyond just firefighting, and get smarter with technology investments. For some organizations, this will require a wholesale transformation. Consider these critical factors when making the leap.
NEW WAY OF THINKING
So what's the unifying thread? Maturity. Glitzy hacking trend reports and fear-based proposals don't cut it with most of the C-level execs we work with. Without a common language to communicate risks (read: money), most security concerns go unheard.
But slinging the risk management mantra and actively managing risk aren't the same thing. The process and science behind the concept are critical. Areas of risk management vary in maturity, from Lloyd's of London and the domestic insurance industry to evolving IT risk frameworks such as AS/NZS 4360, NIST SP 800-30, and Factor Analysis of Information Risk (FAIR). Still, regardless of the depth and background of your understanding or the likelihood that you'll adopt a formal risk management framework in the IT environment, some concepts and necessary adjustments are critical.
For starters, when communicating risk, it's important to understand the audience and scope. "I learned the hard way that loosely throwing around risk terms when it came to IT projects in an insurance company was a bad practice," says Mike Murray, an information security practitioner in the financial services industry. "When the audience is used to looking at actuarial tables, you're going to look pretty stupid, pretty quickly, outside of the IT ranks if you're not careful."
The terms "vulnerability" and "threat" also are critical to the process, and they're often confused. Loosely defined, a vulnerability is a state or defect of an asset that could be exploited to create loss or harm; a threat is an entity or action that can cause loss or harm. Going into greater detail on the use of these terms in the IT and security contexts probably warrants an article all to itself, but suffice it to say that using language properly and consistently is essential when talking about risk. For a comprehensive discussion of IT risk terminology, check out FAIR's primer.
WHAT CAN HAPPEN
Identifying assets, vulnerabilities, and threats leads us to the infamous probability, or "likelihood," stage. Unfortunately, this is another area where IT risk management lags other industries, owing to, among other factors, subjectivity and a lack of historical data. A more mature approach replaces "likelihood," in most cases a qualitative guess, with "frequency," a quantitative and usually data-backed estimate. As a relatively young discipline within the broader field of risk management, IT obviously can't expect to have actuarial data for all of its security challenges soon, if ever. Frameworks like FAIR and the use of Bayesian methods can help fill some of these gaps, but even starting with internal efforts to identify critical metrics, events, and losses counts as progress.
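To make the likelihood-versus-frequency distinction concrete, here's a minimal sketch of a conjugate Gamma-Poisson update: start from a weak prior belief about incident frequency, fold in a few years of internal incident counts, and get a defensible expected frequency rather than a gut-feel rating. The prior, counts, and loss figure are purely illustrative:

```python
# Sketch: estimating incident frequency with a Gamma-Poisson (conjugate)
# Bayesian update, rather than guessing a qualitative "likelihood".
# The prior, observed counts, and loss figure below are illustrative only.

# Weak prior belief: about 2 incidents/year (Gamma in shape/rate form).
prior_shape, prior_rate = 2.0, 1.0   # prior mean = shape / rate = 2.0

# Internal data: incidents observed in each of three years.
observed = [1, 4, 2]

# Conjugate update: add total incidents to shape, years observed to rate.
post_shape = prior_shape + sum(observed)
post_rate = prior_rate + len(observed)

expected_frequency = post_shape / post_rate   # incidents per year
print(f"Posterior expected frequency: {expected_frequency:.2f}/year")

# Combine with a loss estimate to get an annualized loss expectancy (ALE).
loss_per_incident = 40_000  # illustrative
ale = expected_frequency * loss_per_incident
print(f"Annualized loss expectancy: ${ale:,.0f}")
```

Even this toy version has the property that matters: the estimate moves as internal incident data accumulates, instead of sitting frozen at "medium."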
Risk management basics aside, here's where the real leap comes in: learning to let more things go. When talking to information security traditionalists, the thought that comes right after "identified risk" is usually "mitigate," with "risk transference" a rare consideration and "risk acceptance" often hotly contested. To be fair, most transference is handled outside the technical realm through legal and insurance mechanisms, but it's a path that must be explored.
Acceptance, on the other hand, is something we certainly understand and perform frequently, albeit usually only in areas perceived as low risk. But should acceptance occur more frequently? Do we accept risk enough, and in the right areas?
Put into the context of tactical security initiatives, does preventing worm and virus outbreaks outweigh the need for encrypting personally identifiable information? Does detection of network-based intrusions reduce risk more than ensuring that all laptops have full disk encryption? Do we know which systems are the most critical and which are more disposable, and have we carried that into practice? Where does user awareness training fall into the mix, and have we balanced efforts in stopping stupid vs. stopping evil?
If given the choice between protecting everything poorly versus protecting a few assets well, which should IT choose?
History suggests we've preferred the former; the latter might prove a better path moving forward. Without an effective way to measure and communicate risks, we can't hope to have an effective conversation on the topic, much less make educated decisions. Gaining an understanding of risk terms, using them consistently in communications, developing a formal process to identify and quantify risks, and relating risks to assets in terms that make sense to both business and IT folks will give IT teams a better chance of prioritizing in ways that meet the needs of all levels of the organization.
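One simple way to ground such comparisons is to estimate expected loss avoided per dollar spent for each candidate control. Every number below, including frequencies, per-event losses, effectiveness, and costs, is an invented placeholder; the point is the shape of the comparison, not the figures:

```python
# Comparing candidate controls by estimated risk reduction per dollar.
# All frequencies, losses, effectiveness figures, and costs are
# illustrative guesses, not real data.

controls = [
    # (name, events/yr, loss per event $, fraction of loss prevented, annual cost $)
    ("laptop full-disk encryption", 0.5, 300_000, 0.95, 50_000),
    ("network intrusion detection", 6.0,  20_000, 0.30, 80_000),
]

def reduction_per_dollar(freq, loss, effectiveness, cost):
    """Expected annual loss avoided per dollar of control spend."""
    return (freq * loss * effectiveness) / cost

for name, freq, loss, eff, cost in controls:
    ratio = reduction_per_dollar(freq, loss, eff, cost)
    print(f"{name}: ${ratio:.2f} of expected loss avoided per $1 spent")
```

The inputs are debatable, and that's the point: the debate happens over explicit numbers rather than over whose fear is loudest.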
BE SMART ABOUT PROJECT SELECTION ...
Of course, reacting to real-world events isn't in itself a bad idea, and neither is business involvement--in fact, that should be encouraged. But beware of letting your security agenda be ruled by the tyranny of post-incident fear. Two notable risks are an imbalance between tactical and strategic efforts and the pain of consistently being a few steps behind evolving threats.
How do we break this cycle? Integrating more risk-management-centric approaches can certainly help weed out anomalous scenarios and bring a rational, methodical, and defensible process to decision making. Another tool is to make sure there's a plan to match all tactical firefighting exercises with a strategic counterpart. For example, mainstream operating system and service vulnerabilities--frequently the root cause of worms, defacements, and the occasional targeted data theft--can be addressed by the modern vulnerability management process, yet application security issues are less well understood. Many organizations have launched application security initiatives only in the past 12 to 24 months, and most are still in their infancy. Typical efforts include the use of Web app scanners; application "penetration tests" ... arguably a misnomer, but an effort nonetheless; code reviews performed by specialized consulting firms; and investments in Band-Aid technologies such as "application firewalls."
While these efforts are steps in the right direction, without strategic counterparts such as developer training, integrating security into the software development life cycle, and holding software vendors accountable for security flaws via contract clauses, security teams continue to treat the symptom while ignoring the cause.
The evolution of application security is just one example, albeit a timely one. Risk-aware organizations will take steps to ensure that all projects receive both tactical and strategic security investments.
... AND TECH DECISIONS, TOO
Technology certainly plays a central role in IT security, but unfortunately as a community we've gotten a bit lost in the process.
"There is an awful lot of lazy thinking in IT security. We even have a whole doctrine to prove it: 'Throw more tech at it,'" says Craig Balding, a technical security team lead at a global Fortune 500 company. "We need to get a lot more imaginative and apply critical thinking to problem solving rather than a product or product group mentality to everything."
Looking back, it's hard to believe our heads weren't in the sand on many levels when it came to technology selection. As a brief recap, during the early days of mainframes we placed a great amount of faith in user names and passwords as adequate access control mechanisms. Strangely enough, we made the same assumptions when IPX and IP-based networks and client-server computing took hold.
We all learned a few hard lessons--including that user names and passwords wouldn't deliver us from all evil. This led to adoption of firewalls as the new access control savior. Once again we put our faith in a technology, and once again we were let down. We then spent some time in denial about operating system vulnerabilities. Enterprise IT teams and vendors alike ignored the obvious until worms, spyware, and stock OS exploitation made the issue unavoidable. Huge investments in vulnerability scanning and patch management ensued.
The journey continues. We invested hundreds of millions of dollars in intrusion-detection systems without a solid understanding of their relative effectiveness and total cost of ownership. The IDS craze led to reinvestments in intrusion-prevention systems that even today are only partially enabled, and PKI is still a bad word in many IT circles. There's no shortage of disappointments on other product fronts. Host-based IPS rollouts were painful. Everyone seems frustrated with the lack of antivirus innovation. Security event information managers are evolving but expensive, and IPS products and "endpoint security solutions" rarely live up to the hype.
Our favorite comment from infosec pros we talked to for this article? "Our vulnerability management system worked great for six months, then it flushed itself down the crapper."
Should we pack it in and declare that all security technology stinks? No, and as a community we have learned from our failures: User names and passwords are still used, but only the foolish rely on them as a sole control mechanism. Patching/updating processes are now built into all operating systems, and even ignoring the network access control hype, stock networking devices are growing more security-capable. And security in the commercial software quality-assurance process has improved, if only within a select few vendors.
Moving forward, we must continue to learn from our mistakes and adopt innovative strategies. For starters, keep an eye on the consolidation of product sets. As security functionality becomes a differentiation point for mainstream IT products, the question "Is this a product, or is it a feature?" should be consistently raised. Take full disk encryption, or FDE. With a dizzying number of data disclosures resulting from lost or stolen laptops, it's no wonder FDE efforts have been in full swing. While most organizations have invested in standalone FDE suites, options are starting to appear in mainstream IT products. Two examples: A number of Lenovo ThinkPad models now ship with an option that embeds FDE using the crypto-enabled Seagate Momentus hard drives, and an FDE option known as BitLocker is available in select versions of Windows Vista. Given this consolidation, smart organizations will press their suppliers for insight into what they have planned in terms of baking security functionality into infrastructure devices and end-user systems.
Evolution Of The CISO: As companies move toward strong risk management, the chief information security officer's authority and oversight role increase while hands-on tech responsibility shrinks.
There's nothing inherently wrong with any of these technologies, but if you're not asking these questions you're likely to fall into the traps that have snared IT thus far. Looking ahead, all organizations must adopt more formal risk management processes. In fact, the role of a chief risk officer, or CRO, is already taking shape in more risk-aware organizations.
Other open questions: Which parts of information security might move under a CRO, and which parts will stay in IT? Are disaster recovery/business continuity and information security more closely related than we've previously treated them? We've started to see movement on these fronts, but the jury is still out on what will take hold, and when.
One thing is for sure: IT professionals will either evolve to become better risk managers, or someone else will step in and do it for us.
Greg Shipley is CTO of Neohapsis, an IT security and information risk management firm, and an InformationWeek contributor. Contact him at firstname.lastname@example.org.