May 4, 2015 By Jaikumar Vijayan 2 min read

A new survey by MeriTalk reveals that many government agencies may be overly optimistic about how long cyberthreats remain undetected on government networks.

Optimistic Estimates

MeriTalk, a public-private partnership focused on improving government IT security, surveyed a total of 302 cybersecurity professionals from federal, state and local government agencies to get an idea of their current state of cybersecurity preparedness. Conducted in March, the study found government IT security professionals estimate that cyberthreats, including intrusions, existed on their networks for an average of just 16 days before they were detected.

That figure is substantially lower than the numbers reported by government entities that actually suffered a recent data breach.

The Reality

Big data vendor Splunk, which underwrote the MeriTalk study, points to last year’s breach at government security clearance contractor USIS as one example. In that incident, personal records belonging to an estimated 25,000 employees at the U.S. Department of Homeland Security were exposed, but the contractor did not know about the intrusion for months.

Security vendor Mandiant, which has performed forensic investigations into numerous data breaches over the past few years, estimated in a report last year that threat actors remain undetected on a victim's network for a median of 229 days. The longest any attacker is known to have remained undetected on a victim's network is an astounding 2,287 days.

“There are a number of reports focused more broadly on commercial and public-sector organizations, suggesting that attackers are present on victim networks for an average of over 200 days before they were discovered,” a Splunk representative said in an email.

Lack of Visibility Into Government Networks

Against that background, the MeriTalk survey results seem startling.

“This shows that most public-sector agencies are far more optimistic than the reality,” according to Splunk.

Respondents in the MeriTalk survey reported collecting more threat-related data than ever before from sources such as vulnerability scans, mail logs, virtual private network logs and Dynamic Host Configuration Protocol logs. However, many are struggling to make sense of the data deluge, the report also showed.

Nearly 7 in 10 government cybersecurity professionals reported being overwhelmed by the volume of data collected by their security systems. Some 78 percent said at least some of the data they collect goes unanalyzed because they simply have neither the time nor the resources to analyze it.
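The scale problem the respondents describe can be made concrete with a small sketch. The snippet below is a hypothetical illustration, not from the survey: the per-source event counts and the analyst review capacity are invented figures, loosely mirroring the log sources the survey names, used only to show how quickly daily volume outruns review capacity.

```python
from collections import Counter

# Hypothetical daily event counts per log source (invented figures,
# loosely based on the source types named in the MeriTalk survey).
events = Counter({
    "vulnerability_scans": 120_000,
    "mail_logs": 450_000,
    "vpn_logs": 80_000,
    "dhcp_logs": 300_000,
})

# Assumed number of events an analyst team can review per day.
ANALYST_CAPACITY = 200_000

total = sum(events.values())
unreviewed = max(0, total - ANALYST_CAPACITY)

print(f"Total events/day: {total:,}")
print(f"Unreviewed share: {unreviewed / total:.0%}")
```

Even with generous assumptions, most of the collected data in this toy model goes unreviewed, which is consistent with the survey's finding that much of what agencies collect is never analyzed.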

Ignoring Alerts

This statistic is important. Organizations have deployed numerous security controls over the years, many of which are set up to deliver alerts on network intrusions and other malicious threats. However, such alerts are often ignored because of the sheer volume of data the systems generate and the lack of resources to inspect it. In the Target breach, for instance, the company admitted that one of its security alerting systems had warned of an intrusion; the alerts were never viewed or acted upon and were discovered only after the breach.

The survey found that 70 percent of government agencies can conduct a root-cause analysis of a security incident, yet that analysis succeeds only 49 percent of the time. Nearly 90 percent of the cybersecurity professionals surveyed said they are unable to tell a complete story with the security data they gather, according to Splunk.

“These findings validate the fact that most are not using a single platform to address their needs,” the company said. “Data is everywhere. It’s disconnected, siloed.”
