Back in my DoD days, I administered an AT&T 3B2 UNIX box whose raison d’être was to provide multi-level security (MLS) for data ranging from unclassified to top secret. In that environment every Post-it note, interoffice memo, email, or WordPerfect doc had to be labeled with a classification level and a compartment name, and every one of us had a clearance level. You could only read data that was at or below your clearance level, and only if you had clearance for that compartment. And you could only write data at your level or above (you can’t demote the classification of data, which might let someone with a lower clearance read above their level), and again, only within the same compartment.
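Those read-down/write-up rules are simple enough to capture in a few lines of code. Here’s a minimal sketch, assuming a linear ordering of levels and set-based compartments; the level names, the Label class, and the can_read/can_write helpers are illustrative, not the actual 3B2 implementation, and the compartment handling is deliberately simplified to match the description above.

```python
# Simplified sketch of MLS read-down / write-up checks.
# Level names and ordering are illustrative only.
from dataclasses import dataclass

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

@dataclass(frozen=True)
class Label:
    level: str                      # classification (objects) or clearance (subjects)
    compartments: frozenset = frozenset()

def can_read(subject: Label, obj: Label) -> bool:
    """Read only at or below your clearance, and only in compartments you hold."""
    return (LEVELS[subject.level] >= LEVELS[obj.level]
            and obj.compartments <= subject.compartments)

def can_write(subject: Label, obj: Label) -> bool:
    """Write only at your level or above (no demoting data), same compartment rule."""
    return (LEVELS[subject.level] <= LEVELS[obj.level]
            and obj.compartments <= subject.compartments)

analyst = Label("secret", frozenset({"crypto"}))
memo = Label("top secret", frozenset({"crypto"}))
print(can_read(analyst, memo))   # False: can't read above your clearance
print(can_write(analyst, memo))  # True: writing up is allowed, writing down is not
```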
Fast forward to the present, and it occurs to me that the private sector is finally getting wise to the concept of protecting the data, not the environment. The network perimeter has been eroding for years now, and architecture extensions are being cobbled together from VPNs, private B2B networks, and SSL gateways. But the promise—or threat—of cloud computing is challenging the notion that we can protect systems and forcing organizations to accept the fact that they’ll have to protect the data, wherever it may roam.
But not all data is created equal, and some should never be allowed to roam in the first place. More and more, this (hopefully) common-sense concept is being codified through data protection and privacy laws and contracts. PCI DSS mandates protection of cardholder data, HIPAA prescribes protection of patient data, and the Massachusetts privacy law, 201 CMR 17.00, mandates the protection of personal information for residents of the Commonwealth. If you’re subject to any of these—or other data protection or privacy controls—you could take the minimalist approach and simply define two types of data, and protect only the sensitive information. That works if you only feel an obligation to satisfy the law or contract; but if you take seriously the responsibility of protecting the information owned by your company or organization, this is the perfect time to undertake a data classification effort.
Data classification is at the heart of security: it’s the foundation of assessing your risk. I don’t necessarily mean a full-blown risk assessment employing a comprehensive qualitative methodology; in the most basic sense, you just want to find out where your sensitive information is housed and how it flows. In mandatory access control terminology, the data is the object and people are the subjects, and your next step is to decide who should have access to the data at different levels of classification. Again, you could take a minimalist approach and brand every employee with a check mark or a big red X, but you know the right answer is role-based access control (RBAC), which is pretty much universally mandated in security frameworks anyway, so you might as well put in a little extra effort and put RBAC in place now, before a government agency or business partner forces the issue down the line.
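RBAC takes surprisingly little machinery. Here’s a minimal sketch: users map to roles, and roles map to permissions on classes of data. The role names, data classes, and the is_authorized helper are hypothetical, just to illustrate the shape of the check, not a prescription for any particular product or framework.

```python
# Minimal RBAC sketch: subjects get roles, roles get permissions on data classes.
# Role names, data classes, and users are illustrative only.
ROLE_PERMISSIONS = {
    "hr_clerk":       {("personal_info", "read")},
    "payments_admin": {("cardholder_data", "read"), ("cardholder_data", "write")},
    "auditor":        {("cardholder_data", "read"), ("personal_info", "read")},
}

USER_ROLES = {
    "alice": {"payments_admin"},
    "bob":   {"auditor"},
}

def is_authorized(user: str, data_class: str, action: str) -> bool:
    """Grant access only if one of the user's roles carries the permission."""
    return any((data_class, action) in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("alice", "cardholder_data", "write"))  # True
print(is_authorized("bob", "cardholder_data", "write"))    # False: read-only role
```

The payoff is that access decisions hang off roles tied to data classifications, so reassigning a person means changing a role membership, not re-auditing a pile of individual grants.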
When I look back at security clearances and data labels, it sure starts to look like where we’re heading today. Was it forward thinking? Or are security concepts like fashion: the wardrobe holds the same basic pieces, and the styles come back around every so often. Either way, the writing’s on the wall: you’re going to have to undergo a data classification effort sooner or later and start monitoring the access and flow of sensitive information. You may not need a military-intelligence-grade taxonomy, but I’m urging you to take the time and do it right: spend the extra effort to actually improve your information security program and controls. It will save you time and money in the long run.
Research Strategist, X-Force R&D, IBM