July 11, 2016 By Douglas Bonderud 2 min read

Corporations aren’t known for sharing. With so many employees, partners, providers and customers to manage, there’s always a chance for data compromise — so why risk it by sending more information around? And thanks to the rise of wearables, always-connected devices and the industrial IoT, these risks are growing.

The bigger problem? Malicious actors have no trouble swapping stories of compromise and successful attacks, putting the onus on companies to embrace security collaboration if they want to keep their networks safe. How do businesses trump the trust issue?

Is Security Collaboration Counterintuitive?

As noted by CIO, security firm Carbon Black is now “opening a line of communication” between companies with its new platform, the Detection eXchange. The idea here is to go beyond surface information such as virus signatures or IP addresses to share actual data about attack patterns and threat vectors. After all, it’s nothing for attackers to swap out a flagged IP address, but if they find typical attack patterns blocked at every turn, they’ll be left scrambling to change their ways.

Of course, security-savvy IT pros have raised a valid concern: If the goal of security firms is to protect key data, does it really make sense to share critical information? In the case of Carbon Black, for example, the government ultimately acts as a clearinghouse for shared data. It’s not a stretch to imagine this repository as a high-value target for cybercriminals, and once they have the inside track on how companies plan to deal with emerging threats, they can simply change tactics.

So while security collaboration sounds great, many companies balk at actually participating, or share only the bare minimum required to ensure their own critical processes can't be compromised.

Building a Better Mousetrap

The calls for national and global threat sharing frameworks are getting louder: As noted by SC Magazine, a recent cybercrime report from the U.K.’s National Crime Agency (NCA) argued that greater threat sharing is essential now that digital crime has outpaced traditional lawbreaking in the country. Additionally, TechCrunch made the case for a worldwide cyberthreat sharing program to help combat adaptive attackers.

Already, the Cybersecurity Information Sharing Act of 2015 (CISA) makes it possible for companies to share security information with the Department of Homeland Security without facing legal ramifications for reporting data breaches in good faith. According to Dark Reading, however, any type of threat sharing framework is effectively a gamble, since cybercriminal access to threat feeds negates any positive impact.

The piece does offer a few suggestions, however. For example, machine-to-machine-only threat feeds integrated with SIEM tools could be an option, along with completely anonymous reporting and the elimination of opt-in programs. Since corporations understandably value their privacy and freedom of action, this may be a case where anonymous, mandated reporting outweighs the benefit of opting to stay silent.
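To make the anonymous-reporting idea concrete, here is a minimal sketch of how an organization might strip identifying fields from a threat report before submitting it to a shared feed. All field names and the salted-hash deduplication scheme are illustrative assumptions, not any vendor's actual format:

```python
import hashlib
import json

def anonymize_report(report: dict, salt: str = "org-secret") -> dict:
    """Strip org-identifying fields from a threat report, keeping only the
    behavioral indicator so it can be shared anonymously. (Hypothetical
    schema for illustration only.)"""
    indicator = {
        "type": report["type"],            # e.g., the attack-pattern category
        "pattern": report["pattern"],      # behavioral signature, not an IP
        "first_seen": report["first_seen"],
    }
    # A salted hash lets a clearinghouse deduplicate repeat submitters
    # without learning who they actually are.
    indicator["submitter_id"] = hashlib.sha256(
        (salt + report["org"]).encode()
    ).hexdigest()[:16]
    return indicator

raw = {
    "org": "Example Corp",
    "type": "lateral-movement",
    "pattern": "remote service creation followed by credential dump",
    "first_seen": "2016-07-01T12:00:00Z",
    "internal_host": "10.0.4.22",  # org-identifying; never leaves the network
}

shared = anonymize_report(raw)
print(json.dumps(shared, indent=2))
```

A real machine-to-machine feed would also sign and transport these records (e.g., over a standard such as STIX/TAXII), but the key design choice shown here is that only attack behavior, never organizational identity, crosses the wire.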

Companies are right to be wary of large-scale security collaboration initiatives. What if attackers grab control of this emerging threat playbook and use it to run an entirely new game? But hunkering down behind supposedly secure digital walls does nothing to improve the outcome. Trumping the trust issue is a rough ride but — win or lose — a unified security front gives companies a fighting chance.
