We’ve all heard the old saying “Failure to plan is a plan for failure.” When I look back at 2014, two events stand out to me as examples of this truism: the Heartbleed vulnerability and the Sony compromise. Very few people could have predicted a vulnerability the size of Heartbleed, but just about anyone in security could have predicted that someone would be hit by an attack of this scale in 2014. In fact, we’ve been predicting a major compromise of one of the mega-corporations for over a decade, so it shouldn’t be considered a surprise at all. Which brings me back to the point: we need to plan for these types of events or we’re going to fail.
The honest truth about Heartbleed is that more of us probably should have seen it coming; if not an OpenSSL vulnerability specifically, then at least something of this scope and impact. The Internet is an organic construct built from billions of lines of interacting code, and we’re at a point in time when more people are looking at the old code, so it was inevitable that something had to give. But we’d had events of similar scale over the years, like Code Red and Nimda, so it would stand to reason that most companies would have an incident response plan in place to deal with something like this. Except that many companies didn’t, and almost no one had a plan that was well thought out enough or flexible enough to deal with Heartbleed and its consequences.
Some businesses have learned from Heartbleed and the other major vulnerabilities this year and taken the opportunity to modify their processes and procedures to deal with events of this magnitude. But I’m willing to bet those businesses are in the minority; most have probably discounted the impact of the events and continued as if nothing has changed. Response plans haven’t been updated, communication methods haven’t been modified to reflect who really gets things done, management still doesn’t understand the scope and depth of an emergency incident, and customers are still left wondering if their service providers are protecting them from an incident that could have severe consequences. Basically, failure to learn from these incidents means the same mistakes will be made again, and the stress on systems and people who are often already running on a fine edge will continue and grow worse. Proper planning can help alleviate these issues.
Then there’s the Sony compromise. This is a nightmare scenario for every business, the worst case that could happen: the entire network compromised, all intellectual property potentially stolen, and internal email out in the public eye. Who could have foreseen it? Actually, anyone could have, and should have planned accordingly. No one knows whether it’s going to be Sony or IBM or Akamai or their own organization, but we all have to realize this is going to happen and have plans in place to deal with it. We’ve been saying for years in security that it’s not a matter of if you’re going to be compromised but when, and how long it takes you to notice.
We almost always sound like alarmists in security, but if you’re not using Sony’s pain as an example to show your CEO why you need a contingency plan for when this happens to you, you’re missing an opportunity. Worse, you’re doing your own business a disservice by failing to prepare it for the worst-case scenario. You’ve probably designed your data centers for the worst cases of fire, flood, and earthquake, even if all of those are rare or unheard-of events where they’re located, so it makes sense to have arrangements of the same scope for when disaster strikes your information and your infrastructure.