The metaverse, artificial intelligence (AI) run amok, the singularity … many far-out scenarios have become dinner-table conversation. Will AI take over the world? Will you one day have a computer chip in your brain? These science fiction ideas may never come to fruition, but some point to security risks that exist today.

While nobody can predict the future, should we worry about any of these issues? What’s the difference between a real threat and hype?

The promise of the metaverse

If you asked 10 tech-minded people to define the metaverse, you might get 10 different answers. Some say it’s a digital place where advanced virtual reality (VR) technology creates an immersive experience. Others say it’s a life in which you could spend 24 hours a day online working, socializing, shopping and enjoying yourself.

The truth is some people already spend way too much time online. In fact, the typical global internet user spends almost 7 hours a day with some kind of device.

Metaverse meets reality

The problem with the metaverse is that a truly immersive experience requires more than just a fancy VR headset. How do you run or wander around in a digital space? You either need a lot of space or a highly advanced, multidirectional treadmill.

You might consider planting a chip in your brain to trick you into living in another world. But we’re still a long way from that reality. Some early experiments with chips in monkey brains have turned out to be fatal.

What unsettles us most about ideas like this? It might not be the physical intrusion. We may fear missing out on an event or opportunity, or fear that the technology could spin out of control.

Before you go rushing out to buy virtual real estate, be aware the average value of NFTs, ‘unique’ digital objects that saw sales in the millions in 2021, fell 83% from January to March 2022. Some predict that this kind of digital marketplace will never break out of its niche nature.

And out-of-control technology? Perhaps it’s already upon us.

The danger of AI

Elon Musk, who also funded the experiments with brain implants in monkeys, has famously warned about the grave dangers of AI. While this topic has kicked off a heated debate, the reality is that threat actors are already using AI.

Take AI-driven phishing attacks. With AI, attackers can tailor phishing emails to certain segments of employees or to specific executives, a practice known as ‘spear phishing’. Attackers didn’t invent this approach, though: digital marketers pioneered it to capture more business. We’ve all received targeted emails from marketing engines for years.

Attackers show a keen interest in AI tools that speed up email creation and distribution. They can also use AI to identify high-value targets from data in online bios, emails, news reports and social media, just as legitimate marketers do. It’s simply automated marketing adapted for attackers.

AI-powered malware

Once an attacker tricks you into downloading an infected file, an AI-infused malware payload could be unleashed on your servers. In theory, such malware could analyze network traffic to blend in with normal communications, learn to target high-value endpoints instead of grinding through a long list of targets, and even carry a self-destruct or self-pause mechanism to evade anti-malware and sandbox detection.

Who needs AI-powered malware anyway?

If you’re worried about AI-powered attacks, consider a case published by the UK National Cyber Security Centre. It reported that an organization paid a ransom of nearly £6.5 million ($8.6 million) to decrypt files a ransomware attacker had encrypted, but made no effort to discover the root cause of the breach. Less than two weeks later, the same attacker got into the network again using the exact same ransomware tactics. The victim felt it had no option but to pay the ransom again.

If a company’s current security standards are sub-par, threat actors don’t need highly sophisticated tools for intrusion.

Fight fire with fire

In the meantime, advanced security solutions use AI to deter threats. The reason is simple: to secure large attack surfaces and defend against rising attack rates, AI is the logical way to monitor and analyze massive amounts of data. Under-resourced security operations teams benefit especially, using AI to improve threat detection accuracy, accelerate investigations and automate response.
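The baselining behind that kind of detection can be illustrated with a deliberately tiny sketch. This is a hypothetical example, not how any particular product works: real AI-driven tools build far richer behavioral models, but at heart they flag activity that deviates from an established baseline, as this robust outlier check on per-flow byte counts does.

```python
from statistics import median

def flag_anomalies(byte_counts, threshold=3.5):
    """Return the indices of flows whose volume is a robust outlier,
    using the median absolute deviation (MAD). A toy stand-in for the
    statistical baselining AI-driven security tools perform at scale."""
    med = median(byte_counts)
    mad = median(abs(b - med) for b in byte_counts)
    if mad == 0:
        # All flows are (nearly) identical; nothing stands out.
        return []
    # 0.6745 rescales MAD to be comparable to a standard deviation.
    return [i for i, b in enumerate(byte_counts)
            if 0.6745 * abs(b - med) / mad > threshold]

# Typical flows move about 1 KB; one flow exfiltrates 500 KB.
flows = [980, 1020, 1010, 990, 1005, 512_000, 1000, 995]
print(flag_anomalies(flows))  # -> [5]
```

The median-based measure is used here because, unlike a plain mean and standard deviation, it isn’t dragged toward the very outlier it is trying to catch.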

AI-driven security protection works now

AI-infused security tools help defenders speed up their response to cyber attacks. In some cases, with AI assistance, they can speed up threat investigation by up to 60 times.

According to IBM’s latest data breach cost report, the use of AI and automation is the single most impactful factor in reducing the time to detect and respond to cyberattacks. It also has the greatest impact on reducing the cost of a data breach.

Today’s security operators struggle to keep pace with malicious actors, even without criminals wielding futuristic AI tools. The best strategy is to proactively close gaps and equip security teams with machine learning and automation tools that level the playing field.

What future do you want?

Beyond current threats, we still wonder about the future. Chips in people’s brains are certainly a long way off. Plenty of threats do exist today, but so do the means to thwart them.

The metaverse may or may not come to pass as some envision it. Maybe it will just be another online destination where some people spend their time. Would you rather put on a complex sensor-laden suit, strap on headgear and connect with friends online or get together with them at a real location where you are free from the trappings of tech?
