Who is responsible for determining who can access sensitive information? Is it the role of the database or system administrator, or of the data owners from lines of business (LOBs)? Perhaps oversight of permissions should vary when the data includes sensitive content. Should your privileged users and administrators have actual access to that content? If so, how much control do you have over preventing bad behavior?
Fighting Alert Fatigue
Organizations typically rely on volumes of logs to forensically identify who accessed what data at what time and assess whether the access was appropriate or constituted a policy violation. Administrators may consider feeding database or data access logs into the organization’s security information and event management (SIEM) solution to correlate events and help identify policy violations. The problem is that the large volumes of logs collected and evaluated by the SIEM cause significant overhead and performance degradation, and require extensive human oversight. Analysts tasked with quickly reviewing these massive logs tend to become desensitized, since many alerts turn out to be false positives or otherwise irrelevant. Unfortunately, this means real risks are often overlooked.
An effective approach to this challenge is to front-end the information landscape — including databases, mainframe data and files — and move the analysis overhead away from the critical systems. A database, for example, is considered structured data, since its contents are stored in structured tables, columns and rows. When calls to the data are evaluated away from the critical systems themselves, there is an opportunity for real-time evaluation based on appropriate permissions to block, redact and mask content before it is disseminated.
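As a minimal sketch of the redaction step described above: a front-end proxy could mask sensitive columns in a result row before it ever leaves for the requesting user. The column names, the masking rule and the `redact_row` helper here are illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch of real-time masking performed in front of the database,
# away from the critical system itself. SENSITIVE_COLUMNS, mask_value and
# redact_row are invented names for illustration only.

SENSITIVE_COLUMNS = {"ssn", "credit_card"}  # classifications set by policy


def mask_value(value: str) -> str:
    """Mask all but the last four characters of a sensitive value."""
    return "*" * max(len(value) - 4, 0) + value[-4:]


def redact_row(row: dict, user_can_view_sensitive: bool) -> dict:
    """Apply masking to sensitive columns before results are disseminated."""
    if user_can_view_sensitive:
        return row
    return {
        col: mask_value(val) if col in SENSITIVE_COLUMNS else val
        for col, val in row.items()
    }
```

For example, a user without sensitive-data permissions asking for `{"name": "Ada", "ssn": "123-45-6789"}` would receive the name untouched but the SSN masked down to its last four digits.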
It’s also possible to leverage out-of-the-box governance frameworks. Data privacy requires knowing who is accessing data, when, whether the access is appropriate and whether sensitive information was touched. Many governance controls also track the number of failed login attempts and whether any of those attempts eventually succeed.
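One such governance check can be sketched in a few lines: count consecutive failed logins per account and flag any account where a run of failures ends in a success, since that pattern can indicate a guessed or brute-forced password. The event shape and threshold below are assumptions for illustration.

```python
# Illustrative governance check, assuming login events arrive in time order
# as (user, outcome) tuples where outcome is "fail" or "success".
from collections import defaultdict


def suspicious_accounts(events, threshold=3):
    """Return users whose successful login immediately followed at least
    `threshold` consecutive failed attempts."""
    streak = defaultdict(int)  # current run of failures per user
    flagged = set()
    for user, outcome in events:
        if outcome == "fail":
            streak[user] += 1
        else:
            if streak[user] >= threshold:
                flagged.add(user)
            streak[user] = 0  # a success resets the failure streak
    return flagged
```

A real deployment would of course read these events from the access logs already being collected, rather than from an in-memory list.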
Controlling Access to Sensitive Information in Real Time
By conducting this monitoring seamlessly outside of the actual database server or system, security teams can eliminate the overhead and let the databases, data repositories and SIEM tools do what they do best. In fact, these systems can continuously scan and monitor the entire IT landscape and categorize information according to policies. These methods make it easy to filter outgoing data according to controls, and they can even terminate connections that attempt to violate policies.
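The enforcement decision itself can be thought of as a small policy lookup: each request resolves to allow, redact or terminate, with unknown combinations defaulting to the safest outcome. The policy table, role names and classifications below are purely hypothetical, not taken from any specific product.

```python
# Hedged sketch of a default-deny policy gate evaluated off the database host.
# POLICY maps (role, data classification) pairs to an enforcement action;
# all names here are invented for illustration.

POLICY = {
    ("analyst", "customer_pii"): "redact",
    ("admin", "customer_pii"): "allow",
    ("analyst", "public"): "allow",
}


def decide(role: str, classification: str) -> str:
    """Unknown role/classification pairs terminate the connection."""
    return POLICY.get((role, classification), "terminate")
```

A default of `"terminate"` reflects the fail-closed posture described above: a connection that attempts something the policy does not explicitly permit is cut off rather than quietly allowed.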
Best of all, this is relatively easy to incorporate, given the right tools. Solutions that include comprehensive out-of-the-box governance models are already equipped to look in the right places, and groups of users with varying levels of access permissions can be imported from the databases’ native groups or from external files and data structures. These groups can then be quickly aligned with the controlled data classifications and granted appropriate access and permissions. As for unstructured data on these servers, advanced data security solutions can perform the same monitoring and provide real-time controls to protect sensitive information.
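The group-to-classification alignment described above amounts to two mappings and a membership check, sketched below. The group names, user names and classifications are invented for illustration; in practice the group memberships would be imported from the database's own groups or an external directory.

```python
# Hypothetical sketch: imported user groups aligned with controlled data
# classifications. All group, user and classification names are assumptions.

GROUP_MEMBERS = {             # e.g. imported from the databases' native groups
    "hr_readers": {"ada", "grace"},
    "finance": {"alan"},
}

GROUP_CLASSIFICATIONS = {     # aligned with controlled data classifications
    "hr_readers": {"employee_pii"},
    "finance": {"payment_data"},
}


def may_access(user: str, classification: str) -> bool:
    """True if any group containing the user is granted the classification."""
    return any(
        user in GROUP_MEMBERS.get(group, set()) and classification in allowed
        for group, allowed in GROUP_CLASSIFICATIONS.items()
    )
```

Keeping membership and classification grants in separate mappings mirrors the workflow in the text: groups are imported as-is, then aligned with classifications in a second, quick step.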
The bottom line is that it’s crucial to understand where the organization stores its data, who is accessing it and whether that access aligns with established security policies. Without this visibility, threats are bound to slip past the weary eyes of overworked security analysts, and sensitive data is bound to slip into the wrong hands.
Listen to the podcast: Data Risk Management in 2018 — What to Look for and How to Prepare
Cyber Security Advisor, IBM