October 19, 2017 By Douglas Bonderud 2 min read

Insider threats are a huge problem for organizations. As noted by IT Pro, 74 percent of all cyber incidents happen “from within companies,” with 42 percent of threats coming directly from employees.

Specific industries, such as health care, have seen a marked uptick as more sensitive patient data is stored digitally — some insiders accidentally share this information, while others abuse it to open fake credit card accounts, according to Healthcare Informatics. Meanwhile, network security for business now faces a new challenge: snooping by security professionals and C-suite executives.

Sneaking a Peek

According to a recent report by Dimensional Research and One Identity, 92 percent of security professionals reported that employees access information that isn’t relevant to their day-to-day work, while 23 percent said it happens “frequently.”

Two-thirds of security professionals also admitted to accessing data they don’t need, while 71 percent of C-suite members admitted to snooping. Meanwhile, 36 percent of survey respondents said they specifically hunted down or accessed company performance information that wasn’t necessary for their work.

In most cases, curiosity, rather than criminal intent, drives these decisions. Security professionals might see irrelevant data as something that could provide a security advantage later on, while C-suite members can often justify these actions as necessary for complete corporate oversight.

The problem is that ease of access encourages repeated use, and the best intentions carry no weight if sensitive data is compromised or accidentally shared. As a result of those leaks, companies could face financial penalties from regulators, backlash from consumers and long-term audits that call current data access and permission policies into question.

Stopping the Snoop

So how can enterprises scale back snooping and ensure that security professionals are following the rules like everyone else? It starts with information and identity governance policies. Who has access to what, and when? Rank-and-file employees are often restricted to data within their scope of responsibility, while security professionals and C-suite executives tend to push back against these controls, citing a need for big-picture knowledge.

There’s a business case here for lowering access rights: The fewer people who have access to critical information, the easier it becomes to track chain of custody and spot data alteration. While admins and managers might chafe a bit at digital roadblocks, narrowing access immediately improves network security for business by limiting potential avenues of compromise.
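In practice, that scoping often takes the form of role-based access control: each role maps to the data categories it legitimately needs, and everything else is denied by default. A minimal sketch of the idea follows — the role names, categories and `can_access` helper are all illustrative assumptions, not drawn from any specific product.

```python
# Role-based access check: a role can only reach categories in its scope.
# Unknown roles get an empty scope, so access is denied by default.
ROLE_PERMISSIONS = {
    "hr_analyst": {"payroll", "benefits"},
    "security_analyst": {"audit_logs", "alerts"},
    "executive": {"financial_summaries"},
}

def can_access(role: str, category: str) -> bool:
    """Return True only if the role's scope includes the data category."""
    return category in ROLE_PERMISSIONS.get(role, set())

# A security analyst reading audit logs is in scope...
print(can_access("security_analyst", "audit_logs"))  # True
# ...but browsing payroll data is not, even for a security professional.
print(can_access("security_analyst", "payroll"))     # False
```

The deny-by-default lookup is the point: a snooping request simply fails rather than relying on the requester's good intentions.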

Of course, some users simply won’t follow new rules. GovTechWorks noted that advances in machine learning may pave the way for better detection. That’s because detecting insider threats isn’t “about a specific vulnerability being exploited or a specific malware that has a signature associated with it.” Instead, it’s about detecting and analyzing a pattern of behavior: Why are users accessing certain data? Under what circumstances? Do they have the proper permissions? If so, how old are these permissions?

Automated, intelligent security tools could help detect potential snoops and then notify IT. The trick here is to make sure that the snoopers themselves aren’t the ones getting these reports.
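Even before full machine learning enters the picture, the behavioral questions above can be approximated with a simple statistical baseline: compare each user's daily access volume to their own history and flag sudden spikes. This is a hedged sketch of that idea, not any vendor's detection method; the threshold of three standard deviations is an arbitrary illustrative choice.

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, k: float = 3.0) -> bool:
    """Flag access counts far above the user's own baseline.

    history: past daily counts of distinct resources the user accessed.
    A count more than k standard deviations above the mean is flagged.
    """
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mu, sigma = mean(history), stdev(history)
    # Floor sigma at 1.0 so a near-constant baseline doesn't flag tiny jumps.
    return today > mu + k * max(sigma, 1.0)

baseline = [12, 9, 11, 10, 13, 12, 10]  # a typical week for this user
print(is_anomalous(baseline, 11))  # an ordinary day -> False
print(is_anomalous(baseline, 60))  # a sudden trawl through records -> True
```

A real deployment would route such flags to an independent reviewer — which is exactly the article's point about keeping snoopers out of their own alert queue.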

Improving Network Security for Business

Digital snooping is inevitable. Most users do so out of curiosity, not malicious intent, but intentions and consequences aren’t always linked. Even well-meaning security employees or C-suite members could prompt a data breach or system compromise. The best bet is to boost network security for business by scaling back permissions and leveraging behavior-based security tools to sniff out snooping staff members.
