The metaverse, artificial intelligence (AI) run amok, the singularity … many far-out scenarios have become dinner-table conversation. Will AI take over the world? Will you one day have a computer chip in your brain? These science fiction ideas may never come to fruition, but some of them do point to security risks that exist today.

Nobody can predict the future, but should we worry about any of these issues? How do we tell a real threat from hype?

The promise of the metaverse

If you asked 10 tech-minded people to define the metaverse, you might get 10 different answers. Some say it’s a digital place where advanced virtual reality (VR) technology creates an immersive experience. Others say it’s a life in which you could spend 24 hours a day online working, socializing, shopping and enjoying yourself.

The truth is, some people already spend far too much time online. In fact, the typical global internet user spends almost 7 hours a day on some kind of device.

Metaverse meets reality

The problem with the metaverse is that a truly immersive experience requires more than a fancy VR headset. How do you run or wander around in a digital space? You either need a lot of physical room or a highly advanced, multidirectional treadmill.

You might consider implanting a chip in your brain to trick your senses into living in another world, but we're still a long way from that reality. Some early experiments with chips in monkey brains have proven fatal.

What unsettles us most about ideas like this? It might not be the physical intrusion. Perhaps it's the fear of missing out on an event or opportunity, or the fear that the technology could spin out of control.

Before you rush out to buy virtual real estate, be aware that the average value of NFTs, the 'unique' digital objects that sold for millions in 2021, fell 83% from January to March 2022. Some predict that this kind of digital marketplace will never break out of its niche.

And out-of-control technology? Perhaps it’s already upon us.

The danger of AI

Elon Musk, who also funded the experiments with brain implants in monkeys, has famously warned about the grave dangers of AI. While this topic has kicked off a heated debate, the reality is that threat actors are already using AI.

Take AI-driven phishing. With AI, attackers can tailor phishing emails to specific segments of employees or to individual executives, a practice known as 'spear phishing'. Attackers didn't invent this technique, though. Digital marketing pioneered it to capture more business, and we've all received targeted emails from marketing engines for years.

Attackers show a keen interest in AI tools that speed up email creation and distribution. They can also use AI to identify high-value targets from online bios, emails, news reports and social media, just as legitimate marketers do. It's simply automated marketing adapted for attack.

AI-powered malware

Once attackers trick you into downloading an infected file, an AI-infused malware payload could be unleashed on your servers. In theory, such malware could analyze network traffic and blend in with normal communications. It could one day learn to target high-value endpoints instead of grinding through a long list of targets. Attackers could also equip the malware with a self-destruct or self-pause mechanism to evade anti-malware and sandboxing detection.

Who needs AI-powered malware anyway?

If you’re worried about AI-powered attacks, consider a recent case published by the UK National Cyber Security Centre. It reported that an organization paid a ransom of nearly £6.5 million ($8.6 million) to decrypt its files, yet made no effort to find the root cause of the breach. Less than two weeks later, the same attacker got into the network again using the exact same ransomware tactics, and the victim felt it had no option but to pay the ransom a second time.

If a company’s security standards are sub-par, threat actors don’t need highly sophisticated tools to break in.

Fight fire with fire

In the meantime, advanced security solutions already use AI to deter threats, and the reasons are simple. Attack surfaces are growing and attack rates are rising, so AI is the logical choice for monitoring and securing massive amounts of data. Under-resourced security operations benefit the most: AI improves threat detection accuracy, accelerates investigations and automates response.
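To make the defensive idea concrete, here is a minimal sketch of unsupervised anomaly detection on network-flow features, the kind of task these tools automate. The feature set, the simulated values and the model choice (scikit-learn's IsolationForest) are illustrative assumptions, not a description of any particular product.

```python
# Minimal, hypothetical sketch: flag anomalous network flows with an
# unsupervised model so analysts can review them first.
# Feature names, values and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flows: [bytes_sent, bytes_received, duration_seconds]
normal_flows = rng.normal(
    loc=[5_000, 20_000, 30],
    scale=[1_000, 4_000, 10],
    size=(1_000, 3),
)

# A few simulated suspicious flows (e.g., unusually large outbound transfers)
suspicious_flows = np.array([
    [900_000, 1_200, 600],
    [750_000, 2_000, 450],
])

# Train on normal traffic, then score everything
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

all_flows = np.vstack([normal_flows, suspicious_flows])
labels = model.predict(all_flows)  # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(all_flows)} flows for analyst review")
```

In practice, a model like this would be one signal among many, feeding an analyst queue and automated playbooks rather than acting on its own.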

AI-driven security protection works now

AI-infused security tools help defenders respond to cyberattacks faster. In some cases, AI assistance can speed up threat investigation by as much as 60 times.

According to IBM’s latest data breach cost report, the use of AI and automation is the single most impactful factor in reducing the time to detect and respond to cyberattacks. It also has the greatest impact on reducing the cost of a data breach.

Today’s security operators struggle to keep pace with malicious actors even without criminals wielding futuristic AI tools. The best strategy is to proactively close gaps and equip security teams with machine learning and automation tools to level the playing field.

What future do you want?

Beyond the current threats, we still wonder about the future. Chips in people’s brains are certainly a long way off. In the meantime, plenty of threats exist today, but we also have the means to thwart them.

The metaverse may or may not come to pass as some envision it. Maybe it will simply be another online destination where some people spend their time. Would you rather put on a complex, sensor-laden suit, strap on headgear and connect with friends online, or get together with them in a real location, free from the trappings of tech?
