The metaverse, artificial intelligence (AI) run amok, the singularity … far-out scenarios like these have become dinner-table conversation. Will AI take over the world? Will you one day have a computer chip in your brain? These science fiction ideas may never come to fruition, but some of them point to security risks that already exist.

Nobody can predict the future, but should we worry about any of these issues? And how do we tell a real threat from hype?

The promise of the metaverse

If you asked 10 tech-minded people to define the metaverse, you might get 10 different answers. Some say it’s a digital place where advanced virtual reality (VR) technology creates an immersive experience. Others say it’s a life in which you could spend 24 hours a day online working, socializing, shopping and enjoying yourself.

The truth is that some people already spend far too much time online. In fact, the typical global internet user spends almost seven hours a day on some kind of connected device.

Metaverse meets reality

The problem with the metaverse is that a truly immersive experience requires more than just a fancy VR headset. How do you run or wander around in a digital space? You either need a lot of physical room or a highly advanced omnidirectional treadmill.

Another option might be a chip implanted in your brain that tricks your senses into living in another world. But we’re still a long way from that reality: some early experiments with brain chips in monkeys have proven fatal for the animals.

What unsettles us most about ideas like this? It might not be the physical intrusion. Perhaps it’s the fear of missing out on an event or opportunity, or the fear that the technology could spin out of control.

Before you go rushing out to buy virtual real estate, be aware that the average value of NFTs, the ‘unique’ digital objects that sold for millions in 2021, fell 83% from January to March 2022. Some predict that this kind of digital marketplace will never break out of its niche.

And out-of-control technology? Perhaps it’s already upon us.

The danger of AI

Elon Musk, who also funded the brain-implant experiments in monkeys mentioned above, has famously warned about the grave dangers of AI. The topic has kicked off a heated debate, but the reality is that threat actors are already using AI.

Take AI-driven phishing attacks. With AI, attackers can target phishing emails at specific segments of employees or at individual executives, a practice known as ‘spear phishing’. Attackers didn’t invent the technique, though; digital marketing pioneered it to capture more business. We’ve all received targeted emails from marketing engines for years.

Attackers show a keen interest in AI tools that speed up email creation and distribution. They can also use AI to identify high-value targets by mining data from online bios, emails, news reports and social media, just as legitimate marketers do. It’s simply automated marketing adapted for attack.
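To make the “automated marketing adapted for attack” idea concrete, here is a minimal, hypothetical sketch of the kind of lead-scoring logic involved. Every field name, keyword and weight is an illustrative assumption, not taken from any real tool; defenders can run the same logic against their own staff directory to see who an attacker’s automation would likely surface first.

```python
# Hypothetical sketch: marketing-style lead scoring repurposed for target prioritization.
# All fields, keywords and weights are illustrative assumptions, not from any real tool.

from dataclasses import dataclass

@dataclass
class PublicProfile:
    name: str
    job_title: str
    public_email: bool    # email address visible in an online bio
    press_mentions: int   # times quoted in news reports

# Job-title keywords a scoring model might weight highly
HIGH_VALUE_TITLES = {"chief", "vp", "director", "finance", "payroll", "admin"}

def score(profile: PublicProfile) -> int:
    """Return a crude priority score; higher means a more attractive target."""
    points = sum(3 for kw in HIGH_VALUE_TITLES if kw in profile.job_title.lower())
    points += 2 if profile.public_email else 0
    points += min(profile.press_mentions, 5)  # cap so press coverage does not dominate
    return points

profiles = [
    PublicProfile("A. Example", "VP of Finance", True, 4),
    PublicProfile("B. Example", "Junior Analyst", False, 0),
]

for p in sorted(profiles, key=score, reverse=True):
    print(p.name, score(p))
```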

AI-powered malware

Once an attacker tricks you into downloading an infected file, an AI-infused malware payload could be unleashed on your servers. In theory, such malware could analyze network traffic and blend in with normal communications. It could one day learn to single out high-value endpoints instead of grinding through a long list of targets, and it could carry a self-destruct or self-pause mechanism to evade anti-malware scanning and sandbox detection.

Who needs AI-powered malware anyway?

If you’re worried about AI-powered attacks, consider a recent case published by the UK National Cyber Security Centre. It reported that an organization paid a ransom of nearly £6.5 million ($8.6 million) to decrypt its files. But the company made no effort to discover the root cause of the breach. Less than two weeks later, the same attacker got into the network again using the exact same ransomware tactics, and the victim felt it had no option but to pay the ransom again.

If a company’s security standards are sub-par, threat actors don’t need highly sophisticated tools to break in.

Fight fire with fire

In the meantime, advanced security solutions already use AI to counter threats. The reasons are simple: attack surfaces are large and attack rates keep rising, so AI is the logical choice for monitoring and protecting massive amounts of data. Under-resourced security operations benefit the most, using AI to improve threat detection accuracy, accelerate investigations and automate response.
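As an illustration of how that monitoring might look in practice, here is a minimal sketch of unsupervised anomaly detection over network-flow telemetry. The synthetic data, feature set and contamination setting are illustrative assumptions, not how any particular product works.

```python
# Minimal sketch: unsupervised anomaly detection over network-flow telemetry.
# The feature set and synthetic data are illustrative assumptions; real tools use
# far richer telemetry, model tuning and analyst feedback loops.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend telemetry: [bytes_sent, bytes_received, connection_duration_seconds]
normal_flows = rng.normal(loc=[5_000, 20_000, 30],
                          scale=[1_500, 5_000, 10],
                          size=(10_000, 3))

new_flows = np.array([
    [900_000, 1_200, 2],   # large outbound transfer, short-lived: possible exfiltration
    [4_800, 19_500, 29],   # looks like ordinary traffic
])

# Fit on traffic assumed to be mostly normal, then flag outliers in new traffic
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

for flow, label in zip(new_flows, model.predict(new_flows)):
    verdict = "ANOMALY" if label == -1 else "normal"
    print(f"{flow} -> {verdict}")
```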

AI-driven security protection works now

AI-infused security tools already help defenders respond to cyberattacks faster. In some cases, AI assistance can accelerate threat investigation by up to 60 times.

According to IBM’s latest data breach cost report, the use of AI and automation is the single most impactful factor in reducing the time to detect and respond to cyberattacks. It also has the greatest impact on reducing the cost of a data breach.

Today’s security operators struggle to keep pace with malicious actors even without criminals using futuristic AI tools. The best strategy is to proactively close gaps and to equip security teams with machine learning and automation so they can level the playing field.
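As a taste of what that automation can look like, here is a hypothetical triage playbook in which automation makes the repetitive first pass so analysts only see alerts that clear a risk threshold. The field names, indicator lists and thresholds are assumptions for illustration, not any specific vendor’s API.

```python
# Hypothetical triage playbook: automation handles the repetitive first pass so
# analysts only see alerts that clear a risk threshold. All names and thresholds
# below are illustrative assumptions.

KNOWN_BAD_IPS = {"203.0.113.7"}          # e.g. synced from a threat-intel feed
CRITICAL_ASSETS = {"payroll-db", "domain-controller-01"}

def triage(alert: dict) -> str:
    """Return 'escalate', 'monitor' or 'close' for a single alert."""
    risk = 0
    if alert.get("source_ip") in KNOWN_BAD_IPS:
        risk += 50
    if alert.get("asset") in CRITICAL_ASSETS:
        risk += 30
    risk += 10 * alert.get("failed_logins", 0)

    if risk >= 60:
        return "escalate"   # hand to the on-call analyst with enrichment attached
    if risk >= 30:
        return "monitor"    # keep open, watch for related activity
    return "close"          # benign or low-value noise

alerts = [
    {"source_ip": "203.0.113.7", "asset": "payroll-db", "failed_logins": 3},
    {"source_ip": "198.51.100.2", "asset": "print-server", "failed_logins": 0},
]

for a in alerts:
    print(triage(a), a)
```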

What future do you want?

Beyond the current threats, we still wonder about the future. Chips in people’s brains are certainly a long way off. In the meantime, there are plenty of threats that exist today, but we also have the means to thwart them.

The metaverse may or may not come to pass as some envision it. Maybe it will just be another online destination where some people spend their time. Would you rather put on a complex sensor-laden suit, strap on headgear and connect with friends online, or get together with them at a real location, free from the trappings of tech?
