The metaverse, artificial intelligence (AI) run amok, the singularity … many far-out scenarios have become dinner-table conversation. Will AI take over the world? Will you one day have a computer chip in your brain? These science fiction ideas may never come to fruition, but some do point to existing security risks.

Nobody can predict the future, but should we worry about any of these issues? And how do we tell the difference between a real threat and hype?

The Promise of the Metaverse

If you asked 10 tech-minded people to define the metaverse, you might get 10 different answers. Some say it’s a digital place where advanced virtual reality (VR) technology creates an immersive experience. Others say it’s a life in which you could spend 24 hours a day online working, socializing, shopping and enjoying yourself.

The truth is some people already spend way too much time online. In fact, the typical global internet user spends almost 7 hours a day with some kind of device.

Metaverse Meets Reality

The problem with the metaverse is that a truly immersive experience requires more than just a fancy VR headset. How do you run or wander around in a digital space? You either need a lot of space or a highly advanced, multidirectional treadmill.

You might consider implanting a chip in your brain to trick your senses into perceiving another world. But we’re still a long way from that reality. Some early experiments with chips in monkey brains have proved fatal for the animals.

What unsettles us most about ideas like this? It might not be the physical intrusion. Perhaps we fear missing out on an event or opportunity. Or we fear that the technology could spin out of control.

Before you go rushing out to buy virtual real estate, be aware that the average value of NFTs, the ‘unique’ digital objects that saw sales in the millions in 2021, fell 83% from January to March 2022. Some predict that this kind of digital marketplace will never break out of its niche.

And out-of-control technology? Perhaps it’s already upon us.

The Danger of AI

Elon Musk, who also funded the experiments with brain implants in monkeys, has famously warned about the grave dangers of AI. While this topic has kicked off a heated debate, the reality is that threat actors are already using AI.

Take AI-driven phishing attacks. With AI, attackers can tailor phishing emails to certain segments of employees or to specific executives, a practice known as ‘spear phishing’. Attackers didn’t invent this technique, though; digital marketers pioneered it to capture more business. We’ve all received targeted emails from marketing engines for years.

Attackers show a keen interest in AI tools that speed up email creation and distribution. And just as legitimate marketers do, attackers can use AI to identify high-value targets from data in online bios, emails, news reports and social media. It’s simply automated marketing adapted for malicious ends.

AI-Powered Malware

Once attackers trick you into downloading an infected file, an AI-infused malware payload could be unleashed on your servers. In theory, such malware could analyze network traffic to blend in with normal communications. AI-powered malware could one day learn to target high-value endpoints instead of grinding through a long list of targets. Attackers could also equip the malware with a self-destruct or self-pause mechanism to evade anti-malware and sandboxing detection.

Who Needs AI-Powered Malware Anyway?

If you’re worried about AI-powered attacks, consider a recent case published by the UK National Cyber Security Centre. It reported that an organization paid a ransom of nearly £6.5 million ($8.6 million) to decrypt its files, yet made no effort to discover the cause of the breach. Less than two weeks later, the same attacker got into the network again using the exact same ransomware tactics. The victim felt there was no option but to pay the ransom a second time.

If a company’s security standards are sub-par, threat actors don’t need highly sophisticated tools to break in.

Fight Fire With Fire

In the meantime, advanced security solutions use AI to deter threats. The reasons are simple. Securing large attack surfaces against rising attack rates means monitoring and analyzing massive amounts of data, and AI is the logical tool for the job. Under-resourced security operations benefit greatly from AI to stay ahead of threats: it can improve threat detection accuracy, accelerate investigations and automate responses.
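To make that concrete, one common building block of AI-assisted monitoring is unsupervised anomaly detection over event logs. The sketch below is a minimal illustration using scikit-learn’s IsolationForest; the login features, toy data and contamination setting are invented for demonstration, not taken from any particular product.

```python
# Minimal sketch: flag anomalous login events with an Isolation Forest.
# The features, toy data and contamination value are illustrative
# assumptions, not a production detection design.
import numpy as np
from sklearn.ensemble import IsolationForest

# One row per login event:
# [hour_of_day, failed_attempts_last_hour, megabytes_downloaded]
normal_logins = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15], [16, 1, 10],
    [13, 0, 18], [15, 2, 9], [10, 0, 11],
])
new_events = np.array([
    [11, 0, 14],   # looks routine
    [3, 25, 900],  # 3 a.m., many failures, huge download: suspicious
])

# Train on (mostly) benign history; contamination is an assumed prior
# for how much of that history is actually bad.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_logins)

# predict() returns 1 for inliers and -1 for anomalies.
for event, label in zip(new_events, model.predict(new_events)):
    status = "ALERT" if label == -1 else "ok"
    print(status, event)
```

In practice, a model like this would train on far more history and feed flagged events into an analyst queue rather than acting on its own.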

AI-Driven Security Protection Works Now

AI-infused security tools help defenders speed up their response to cyber attacks. In some cases, with AI assistance, they can speed up threat investigation by up to 60 times.

According to IBM’s latest data breach cost report, the use of AI and automation is the single most impactful factor in reducing the time to detect and respond to cyberattacks. It also has the greatest impact on reducing the cost of a data breach.

Today’s security operators struggle to keep pace with malicious actors, even without criminals using futuristic AI tools. The best strategy is to proactively close gaps and equip security teams with machine learning and automation tools to level the playing field, as sketched below.
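As a companion to the detection sketch above, the following fragment illustrates what lightweight response automation might look like. It is hypothetical: disable_account() and open_ticket() are stand-ins for whatever SOAR, identity or ticketing APIs a given team actually uses.

```python
# Hypothetical response playbook: automate the first steps after an alert.
# disable_account() and open_ticket() stand in for a real SOAR, identity
# or ticketing API; this sketch assumes them rather than prescribing any.
from dataclasses import dataclass

@dataclass
class Alert:
    user: str
    score: float  # anomaly score from the detector; higher means worse

def disable_account(user: str) -> None:
    print(f"[identity] temporarily disabled {user}")

def open_ticket(alert: Alert) -> None:
    print(f"[ticketing] opened investigation for {alert.user} "
          f"(score {alert.score:.2f})")

def triage(alert: Alert, block_threshold: float = 0.9) -> None:
    # Contain high-confidence anomalies immediately, but route every
    # flagged event to an analyst so the automated call gets reviewed.
    if alert.score >= block_threshold:
        disable_account(alert.user)
    open_ticket(alert)

triage(Alert(user="jdoe", score=0.95))    # contained and ticketed
triage(Alert(user="asmith", score=0.62))  # ticketed for review only
```

The design choice worth noting is that automation contains only the highest-confidence alerts immediately, while every flagged event still reaches a human for review.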

What Future Do You Want?

Beyond the current threats, we still wonder about the future. Chips in people’s brains are certainly a long way off. Meanwhile, plenty of threats exist today, but we also have the means to thwart them.

The metaverse may or may not come to pass as some envision it. Maybe it will just be another online destination where some people spend their time. Would you rather put on a complex, sensor-laden suit, strap on headgear and connect with friends online, or get together with them at a real location, free from the trappings of tech?
