Hardly a week goes by without headlines detailing a data security breach. In 2014, the total number of records breached exceeded 1 billion, a 25 percent increase over the previous year, according to IBM research. Database servers continue to be the primary source of breached records. This comes as no surprise because databases are typically where organizations store sensitive data such as credit card numbers, Social Security numbers and other information of value to data thieves. In its study, the Ponemon Institute estimated the average cost of a data breach in the U.S. at $5.85 million, or $201 per breached record.
So what are some of the key issues behind data breaches? Attackers are bypassing traditional perimeter security by exploiting vulnerabilities in applications or by leveraging stolen administrative credentials for the back-end database. Additionally, misconfigurations such as poor authentication settings or dormant entitlements leave database servers vulnerable to attack.
Clearly, a comprehensive and proactive approach to data protection is required to allow organizations to identify risks and automatically adjust their data defenses before any serious damage can occur. In this blog, we will review the core principles for this approach and describe an example implementation.
Principles for Security and Data Protection
The following are the core principles of a comprehensive and proactive approach to data protection.
Discovery and Classification
Clearly, you can’t protect the data if you don’t know where it resides in the first place. Discovery would first need to identify the data repositories themselves. These can be structured repositories, such as relational databases, or unstructured repositories, such as Hadoop-based big data platforms.
The next step in discovery would be to identify sensitive data within a given repository so that it can be classified and adequate policies put in place to protect it.
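To make the classification step concrete, here is a minimal Python sketch that samples rows from a table and tags columns whose values match common sensitive-data patterns. The table, column names and regular expressions are illustrative assumptions; commercial discovery tools rely on much richer catalogs, sampling strategies and classification policies.

```python
# Minimal sketch of sensitive-data classification: sample rows from a table
# and tag columns whose values look like credit card numbers or U.S. Social
# Security numbers. Patterns and names are illustrative only.
import re
import sqlite3

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_columns(conn, table, sample_rows=100):
    """Return {column: [matched classifications]} based on a sample of rows."""
    cur = conn.execute(f"SELECT * FROM {table} LIMIT {sample_rows}")
    columns = [d[0] for d in cur.description]
    findings = {col: set() for col in columns}
    for row in cur:
        for col, value in zip(columns, row):
            for label, pattern in PATTERNS.items():
                if isinstance(value, str) and pattern.search(value):
                    findings[col].add(label)
    return {col: sorted(labels) for col, labels in findings.items() if labels}

# Example usage against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, ssn TEXT, card TEXT)")
conn.execute("INSERT INTO customers VALUES "
             "('Alice', '123-45-6789', '4111 1111 1111 1111')")
print(classify_columns(conn, "customers"))  # {'ssn': ['ssn'], 'card': ['credit_card']}
```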
Vulnerability Assessment
A misconfigured data repository is vulnerable to attack. For example, a database authentication setting that allows login credentials to be exchanged in cleartext is a security risk, since those credentials can be intercepted by an attacker. A vulnerability assessment compares the data repository's configuration settings against industry best practices, such as the STIG or CIS benchmarks, and alerts the administrator to any misconfigurations along with guidance on how to fix them.
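As a rough illustration, the following Python sketch compares a handful of database settings against a small hardening baseline. The parameter names loosely follow PostgreSQL conventions and the checks are assumptions chosen for brevity; a real assessment works from the full STIG or CIS benchmark for the platform in question.

```python
# Minimal sketch of a vulnerability assessment: compare a database's
# configuration against a small baseline of hardening checks.

BASELINE = {
    # parameter: (expected value, remediation hint)
    "ssl": ("on", "Enable TLS so credentials are not exchanged in cleartext."),
    "password_encryption": ("scram-sha-256", "Use SCRAM instead of md5/plain passwords."),
    "log_connections": ("on", "Log connections to support auditing."),
}

def assess(config):
    """Return a list of findings for settings that deviate from the baseline."""
    findings = []
    for param, (expected, hint) in BASELINE.items():
        actual = config.get(param)
        if actual != expected:
            findings.append(f"{param}: found {actual!r}, expected {expected!r}. {hint}")
    return findings

# Example: a misconfigured server.
current_config = {"ssl": "off", "password_encryption": "md5", "log_connections": "on"}
for finding in assess(current_config):
    print(finding)
```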
Encryption
Data stored or communicated in cleartext is also vulnerable to attacks. For example, storing sensitive data in cleartext creates a security risk should the physical media on which that data is stored be stolen or lost. Encryption with strong key management renders the data useless to attackers without the encryption keys.
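The sketch below shows the idea using the open-source Python cryptography package: a sensitive value is encrypted before it is stored, so losing the storage media does not expose the data. Generating the key inline is only to keep the example self-contained; in practice the key belongs in a key manager or HSM, never next to the data it protects.

```python
# Minimal sketch of encrypting a sensitive value before storing it, using the
# third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in production: fetch from a key manager or HSM
cipher = Fernet(key)

ssn_plaintext = b"123-45-6789"
ssn_ciphertext = cipher.encrypt(ssn_plaintext)   # this is what gets written to disk
print(ssn_ciphertext)

# Without the key the ciphertext is useless to an attacker; with it we recover
# the original value.
assert cipher.decrypt(ssn_ciphertext) == ssn_plaintext
```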
Authentication and Authorization
Users must be authenticated and authorized before they are allowed access to data, and they must not be given more privileges than they need to accomplish their jobs. This is known as the principle of least privilege.
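Here is a minimal, application-level sketch of the idea in Python. The roles, permissions and users are made-up examples; in a real database deployment the same principle is enforced with database roles and GRANT statements scoped to each job function.

```python
# Minimal sketch of authentication followed by a least-privilege authorization
# check. Role names, permissions and users are illustrative assumptions.

ROLE_PERMISSIONS = {
    "billing_clerk": {"read:invoices", "write:invoices"},
    "support_agent": {"read:customers"},          # no access to payment data
    "dba": {"read:schema", "alter:schema"},       # admins need no row-level access
}

# Cleartext password only to keep the sketch short; store salted hashes in practice.
USERS = {"alice": {"password": "s3cret", "role": "support_agent"}}

def authenticate(username, password):
    """Return the user's role if the credentials match, else None."""
    user = USERS.get(username)
    return user["role"] if user and user["password"] == password else None

def authorize(role, permission):
    """Allow only permissions explicitly granted to the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

role = authenticate("alice", "s3cret")
print(authorize(role, "read:customers"))   # True  - needed for the job
print(authorize(role, "read:invoices"))    # False - least privilege denies it
```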
Activity Monitoring, Correlation and Analytics
Continuously monitoring database activities and correlating such activities with other activities at the network, infrastructure and application layers is critical to identifying real-time threats and automatically adjusting the database protection policies. For example, a regular user accessing an unusual set of tables at an unusual time of day from an unusual IP address might be an indication of stolen credentials. It therefore warrants a real-time adjustment to the database protection policy, such as increasing the auditing level, alerting the administrator or blocking the access.
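A toy version of this kind of behavioral baselining might look like the following Python sketch, which counts how many attributes of an access (table, hour of day, source IP) are unusual for a given user and escalates the response accordingly. The fields, thresholds and responses are illustrative assumptions, not how any particular product scores anomalies.

```python
# Minimal sketch of activity monitoring against a per-user behavioral baseline.
from collections import defaultdict

baseline = defaultdict(lambda: {"tables": set(), "hours": set(), "ips": set()})

def learn(user, table, hour, ip):
    """Record normal activity for the user."""
    profile = baseline[user]
    profile["tables"].add(table)
    profile["hours"].add(hour)
    profile["ips"].add(ip)

def score(user, table, hour, ip):
    """Count how many attributes of this access are unusual for the user."""
    profile = baseline[user]
    return sum([
        table not in profile["tables"],
        hour not in profile["hours"],
        ip not in profile["ips"],
    ])

# Build a baseline from "normal" activity, then evaluate a suspicious access.
for h in range(9, 18):
    learn("bob", "orders", h, "10.0.0.12")

anomalies = score("bob", "payment_cards", 3, "203.0.113.50")
if anomalies >= 2:
    print("High-risk access: alert the admin, raise the audit level, consider blocking.")
elif anomalies == 1:
    print("Unusual access: increase auditing for this session.")
```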
Auditing
Keeping an audit trail is critical to hold users accountable for their actions and to meet the requirements of numerous compliance mandates.
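One way to make an audit trail harder to tamper with is to chain each record to the previous one with a hash, as in the Python sketch below. The record fields are illustrative; a compliance-grade audit store also needs secure, append-only storage, access controls and retention policies.

```python
# Minimal sketch of a tamper-evident audit trail: each record carries a hash
# that chains it to the previous record, so after-the-fact edits are detectable.
import hashlib
import json
import time

audit_log = []

def append_audit(user, action, obj):
    prev_hash = audit_log[-1]["hash"] if audit_log else "0" * 64
    record = {"ts": time.time(), "user": user, "action": action,
              "object": obj, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    audit_log.append(record)

def verify(log):
    """Recompute the chain; any tampered record breaks verification."""
    prev = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

append_audit("alice", "SELECT", "customers.ssn")
append_audit("bob", "UPDATE", "orders.status")
print(verify(audit_log))            # True
audit_log[0]["user"] = "mallory"
print(verify(audit_log))            # False - tampering detected
```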
Implementing a Comprehensive and Proactive Data Protection Approach
By combining industry-leading security intelligence capabilities with robust data protection capabilities, organizations can take an integrated approach to implementing a comprehensive and proactive data protection solution.
The integration between IBM QRadar Security Intelligence and IBM Security Guardium is fully bidirectional. For example, Guardium can be configured so that database alerts are automatically sent to QRadar. By correlating those alerts with events from other sources and applying deep analytics, the security intelligence platform can detect threats that would be missed by inspecting any single source in isolation.
Similarly, QRadar can be configured so that specific events of interest trigger an automatic update of a Guardium data protection policy. For instance, when QRadar detects that a particular machine has been compromised, Guardium is automatically updated to block access to the database from that machine.
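To give a flavor of the QRadar-to-Guardium direction, here is a hedged Python sketch in which a detection of a compromised host pushes the offending IP into a group referenced by a database blocking policy. The endpoint URL, group name and token handling are hypothetical placeholders rather than the documented Guardium REST API; see the integration paper referenced below for the supported configuration.

```python
# Hedged sketch: push the IP of a compromised host into a "blocked sources"
# group that a database blocking policy references. Endpoint, group name and
# token are hypothetical placeholders, not a documented API.
import requests

GUARDIUM_API = "https://guardium.example.com/restAPI"   # placeholder URL
API_TOKEN = "REPLACE_ME"                                 # obtain securely in practice

def block_compromised_host(ip_address):
    """Add the offending IP to the group used by the database blocking policy."""
    response = requests.post(
        f"{GUARDIUM_API}/group_member",                  # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"group": "Blocked Source IPs", "member": ip_address},
        timeout=10,
    )
    response.raise_for_status()

# A QRadar custom action (or any SIEM webhook) could invoke this with the
# source IP of the offense it detected.
block_compromised_host("203.0.113.50")
```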
To Learn More
For further details, read the Guardium and QRadar integration paper, and listen to the Guardium Tech Talk from Sept. 8.
Also, if you are attending the IBM Insight 2015 conference in Las Vegas, I encourage you to attend session ISP-1922 on Thursday, Oct. 29, “Bridging the Gap Between Security Intelligence and Data Protection.”
CTO for Data Security, IBM