November 5, 2018 By David Bisson 2 min read

A new report revealed that nearly one-third of cyber incidents reported in Q3 2018 were classified as “destructive attacks,” putting election security at risk in the lead-up to the 2018 midterms.

In its “Quarterly Incident Response Threat Report” for November 2018, Carbon Black found that 32 percent of election-season cyberattacks were destructive in nature — that is, “attacks that are tailored to specific targets, cause system outages and destroy data in ways designed to paralyze an organization’s operations.” These attacks targeted a wide range of industries, most notably financial services (78 percent) and healthcare (59 percent).

In addition, the report revealed that roughly half of cyberattacks now leverage island hopping, a technique that threatens not only the target company, but its customers and partners as well. Thirty percent of survey respondents reported seeing victims’ websites converted into watering holes.

Time to Panic About Election Security? Not So Fast

Despite these alarming statistics and the very real risks they signify, Cris Thomas (aka Space Rogue) of IBM X-Force Red told TechRepublic that since voting machines are not connected to the internet, a malicious actor would need physical access to compromise one. This could prove challenging for attackers, who must understand not only the vulnerabilities in each individual voting machine, but also each precinct’s policies.

Bad actors could theoretically stage an attack by obtaining an official voting machine before the election and gaining physical access to it on voting day, but these machines come with checks and balances that detect when votes are changed, decreasing the likelihood of a successful attack.

Attacks Are Growing Increasingly Evasive — and Expensive

Still, the rise in destructive attacks is particularly concerning given that, as reported by Carbon Black, attacks across the board are becoming more difficult to detect. In addition, 51 percent of cases involved counter-incident response techniques, and nearly three-quarters of participants specifically witnessed the destruction of logs during these incidents. Meanwhile, 41 percent observed attackers circumventing network-based protections.

These evasive tactics could prove costly for companies. According to Accenture, threat actors could set companies back as much as $2.4 million with a single malware incident, with cybercrime costing each organization an average of $11.7 million per year.

How to Defend Against Destructive Attacks

Security professionals can defend their organizations against destructive attacks by developing a dedicated framework to predict what steps an adversary might take once inside the network. Security teams should supplement this framework with AI tools that can use pattern recognition and behavior analysis to stay one step ahead of cyberthreats.
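The behavior analysis mentioned above often starts with something simple: baselining normal activity and flagging sharp deviations. As a minimal illustrative sketch (not any specific vendor's method, and with made-up telemetry values), a z-score check against a historical baseline looks like this:

```python
from statistics import mean, stdev

def is_anomalous(observed: float, history: list[float], z_cutoff: float = 3.0) -> bool:
    """Flag a reading that deviates sharply from the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # A flat baseline: any deviation at all is unusual.
        return observed != mu
    return (observed - mu) / sigma > z_cutoff

# Hypothetical hourly counts of outbound connections from one host;
# in practice this baseline would come from real historical telemetry.
baseline = [12, 15, 11, 14, 13, 16, 12, 15]

print(is_anomalous(90, baseline))   # sudden spike in outbound traffic -> True
print(is_anomalous(14, baseline))   # within the normal range -> False
```

Production tools layer far more sophisticated models on top of this idea, but the core principle is the same: learn what normal looks like, then alert on behavior that breaks the pattern.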

Sources: Carbon Black, Accenture, TechRepublic
