The rise of artificial intelligence (AI), large language models (LLMs) and IoT solutions has created a new security landscape. From generative AI tools that can be taught to create malicious code to the exploitation of connected devices as a way for attackers to move laterally across networks, enterprise IT teams find themselves constantly running to catch up. According to the Google Cloud Cybersecurity Forecast 2024 report, companies should anticipate a surge in attacks powered by generative AI tools and LLMs as these technologies become more widely available.

The result is a hard truth for network defenders: keeping pace isn't possible. While attackers benefit from a scattershot approach that uses anything and everything to compromise business networks, companies are better served staying on the security straight and narrow. This creates an imbalance. Even as malicious actors push the envelope, defenders must stay the course.

But it’s not all bad news. With a back-to-basics approach, enterprises can reduce risks, mitigate impacts and develop improved threat intelligence. Here’s how.

What’s new is old again

Attack vectors are evolving. For example, connected IoT environments create new openings for malicious actors: if they can infiltrate a single device, they may be able to gain unfettered network access. Meanwhile, as noted by ZDNET, LLMs are now being used to improve phishing campaigns by removing grammatical errors and adding cultural context, while generative AI solutions create legitimate-looking content, such as invoices or email directives that prompt action from business users.

For enterprises, this makes it easy to miss the forest for the trees. Legitimate concerns over the rise of AI threats and the expansion of IoT risk can create a kind of hyperfocus for security teams, one that leaves networks unintentionally vulnerable.

While there might be more attack paths, these paths ultimately lead to the same places: enterprise applications, networks and databases. Consider some predicted cybersecurity trends for 2024, which include AI-crafted phishing emails, “doppelganger” users and convincing deepfakes.

Despite the differences in approach, these new attacks still have familiar targets. As a result, businesses are best served by getting back to basics.

Focus on what matters

Value for attackers comes from stealing information, compromising operations or holding data hostage.

This creates a funnel effect. At the top are attack vectors, everything from AI to scam calls to vulnerability exploits to macro malware. As attacks move toward the network, the funnel begins to narrow. While multiple compromise pathways exist — such as public clouds, user devices and Internet-facing applications — they are far less numerous than their attack vector counterparts.

At the bottom of the funnel is protected data. This data might exist in on-site or off-site storage databases, in public clouds or within applications, but again, it represents a shrinking of the overall attack funnel. As a result, businesses aren’t required to meet every new attack toe-to-toe. Instead, security teams should focus on the shared end goal of disparate attack vectors: data.

Effectively addressing new attack vectors means prioritizing familiar operations such as identifying critical data, tracking indicators of attack (IoAs) and adopting zero trust models.


Back to basics

Consider an enterprise under threat from an AI-assisted attack. Using generative tools and LLMs, hackers have created code that’s hard to spot and designed to target specific data sets. At first glance, this scenario can seem overwhelming: How can companies hope to combat threats they can’t predict?

Simple: Start with the basics.

First, identify key data. Given the sheer amount of information now generated and collected by enterprises, it’s impossible to protect every piece of data simultaneously. By identifying essential digital assets — such as financial, intellectual property or personnel data — businesses can focus their protective efforts.
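This classification step can be sketched in a few lines. The asset names and sensitivity tiers below are purely hypothetical; in practice, the inventory would come from a data discovery or DLP tool:

```python
# Hypothetical inventory: each data asset mapped to a sensitivity tier.
ASSETS = {
    "quarterly_financials.xlsx": "critical",  # financial data
    "patent_drafts/": "critical",             # intellectual property
    "employee_records.db": "critical",        # personnel data
    "marketing_blog_drafts/": "low",
    "public_press_releases/": "low",
}

def protection_priorities(assets):
    """Return the assets that warrant focused protective effort."""
    return sorted(name for name, tier in assets.items() if tier == "critical")
```

Even a simple tiering like this lets a security team concentrate monitoring and access controls on the handful of assets that attackers actually want.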

Next is tracking IoAs. By implementing processes that help pinpoint common attack characteristics, teams are better prepared to respond when threats emerge. Common IoAs may include sudden upticks in specific data access requests, performance problems in widely used applications with no identifiable cause or an increased number of failed login attempts. Armed with this information, teams can better predict likely attack paths.

Finally, zero trust models can help provide a protective bulwark if attackers manage to compromise login and password data. By adopting an always-verify approach that uses a combination of behavioral and geographic data paired with strong authentication processes, businesses frustrate attackers at the final hurdle.
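The always-verify decision described above can be sketched as a simple risk score. Every field name, weight and threshold here is a hypothetical placeholder; a real zero trust deployment would draw these signals from identity, device and network telemetry:

```python
def zero_trust_decision(request):
    """Minimal always-verify sketch: score each request against
    behavioral and geographic signals plus authentication strength.
    All field names and weights are illustrative assumptions."""
    risk = 0
    if request.get("geo") not in request.get("usual_geos", set()):
        risk += 2  # login from an unusual location
    if request.get("hour") not in range(6, 22):
        risk += 1  # activity outside the user's normal hours
    if not request.get("mfa_passed", False):
        risk += 3  # strong authentication not completed
    if request.get("new_device", False):
        risk += 1  # first time seeing this device

    if risk >= 3:
        return "deny"
    if risk >= 1:
        return "step_up"  # challenge with additional verification
    return "allow"
```

The key property is that a stolen password alone is not enough: an attacker logging in from an unfamiliar location, at an odd hour, without completing MFA accumulates risk on every signal and is stopped at the final hurdle.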

Function over form: Implementing new tools

By focusing on the outcome of new attack vectors rather than the input, enterprises can reduce security risk. But there's also a case for implementing new tools such as AI and LLMs to help bolster cybersecurity efforts.

Consider generative AI tools. In the same way they can help attackers create code that's hard to detect and difficult to counter, GenAI can assist cybersecurity teams in analyzing and identifying common attack patterns, helping businesses focus their efforts on likely avenues of compromise. However, this identification isn't effective if companies don't have the endpoint visibility to understand where attacks are coming from and what systems are at risk.

In other words, new tools aren't a cure-all: they're only effective when paired with solid security hygiene.

For better security, work smarter, not harder

Just as attackers can leverage new technologies to increase compromise efficacy, companies can leverage AI security to help defend against potential threats.

Malicious actors, however, can act with impunity. If AI-enhanced malware or LLM-reviewed phishing emails don't work, they can simply return to the drawing board. For cybersecurity professionals, failure means compromised systems at best and stolen or ransomed data at worst.

The result? Security success depends on working smarter, not harder. This starts by getting back to basics: pinpointing critical data, tracking attacks and implementing tools that verify all users. It improves with the targeted use of AI. By leveraging solutions such as the IBM Security QRadar Suite, which features advanced AI threat intelligence, or IBM Security Guardium, which offers built-in AI outlier detection, businesses are better prepared to counter current threats and reduce the risk of future compromise.
