Even if we’re not always consciously aware of it, artificial intelligence is now all around us. We’re already used to personalized recommendation systems in e-commerce, customer service chatbots powered by conversational AI and a whole lot more. In the realm of information security, we’ve been relying on AI-powered spam filters for years to protect us from malicious emails.

Those are all well-established use cases. However, since the meteoric rise of generative AI in the last few years, machines have become capable of so much more. From threat detection to incident response automation to testing employee awareness through simulated phishing emails, the AI opportunity in cybersecurity is indisputable.

But every new opportunity brings new risks. Threat actors are now using AI to launch ever more convincing phishing attacks at a scale that wasn’t possible before. To keep ahead of the threats, those on the defensive lines also need AI, but its use must be transparent and grounded in ethics to avoid stepping into the realm of gray-hat tactics.

Now is the time for information security leaders to adopt responsible AI strategies.

Balancing privacy and safety in AI-powered security tools

Crime is a human problem, and cyber crime is no different. Technology, including generative AI, is simply another tool in an attacker’s arsenal. Legitimate companies train their AI models on vast swaths of data scraped from the internet. Not only are these models often trained on the creative efforts of millions of real people — there’s also a chance of them hoovering up personal information that’s ended up in the public domain, intentionally or unintentionally. As a result, some of the biggest AI model developers are now facing lawsuits, while the industry at large faces growing attention from regulators.

While threat actors care little for AI ethics, legitimate companies can unwittingly end up mishandling personal data in much the same way. Web-scraping tools, for instance, may be used to collect training data for a model that detects phishing content. However, these tools might not make any distinction between personal and anonymized information, especially in the case of image content. Open-source data sets like LAION for images or The Pile for text have a similar problem. For example, in 2022, a Californian artist found that private medical photos taken by her doctor had ended up in the LAION-5B dataset used to train the popular open-source image synthesizer Stable Diffusion.
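As a purely illustrative sketch, a collection pipeline can at least scrub obvious personally identifiable information before scraped text enters a training corpus. The regex patterns, placeholder labels and sample text below are assumptions chosen for demonstration; a production pipeline would pair a dedicated PII-detection service with human review.

```python
import re

# Illustrative patterns for a few common PII types; a real pipeline would use
# a dedicated PII-detection library and human review rather than regexes alone.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace obvious PII with neutral placeholders before the text is
    added to a training corpus."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

# A scraped sample destined for a hypothetical phishing-detection dataset
raw = "Contact John at john.doe@example.com or 555-123-4567 to verify SSN 123-45-6789."
print(scrub_pii(raw))
# Contact John at [EMAIL] or [PHONE] to verify SSN [SSN].
```

Scrubbing like this reduces, but does not eliminate, the risk of personal data leaking into a model, which is why it belongs alongside careful sourcing of training data rather than in place of it.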

There’s no denying that the careless development of AI models purpose-built for cybersecurity can create greater risk than not using AI at all. To prevent that from happening, security solution developers must maintain the highest standards of data quality and privacy, especially when it comes to anonymizing or safeguarding confidential information. Laws like Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), though developed before the rise of generative AI, serve as valuable guidelines for shaping ethical AI strategies.


An emphasis on privacy

Companies were using machine learning to detect security threats and vulnerabilities long before the rise of generative AI. Systems powered by natural language processing (NLP), behavioral and sentiment analytics and deep learning are all well-established in these use cases. But they, too, present ethical conundrums where privacy and security can become competing disciplines.

For example, consider a company that uses AI to monitor employee browsing histories to detect insider threats. While this enhances security, it might also involve capturing personal browsing information — such as medical searches or financial transactions — that employees expect to stay private.
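To make the tension concrete, here is a minimal, hypothetical sketch of privacy filtering applied before browsing events ever reach an insider-threat model. The category labels and log fields are assumptions for illustration; a real deployment would rely on an established URL-classification service and a documented, employee-facing privacy policy.

```python
# Categories employees reasonably expect to stay private (assumed labels).
SENSITIVE_CATEGORIES = {"health", "personal_finance", "legal", "support_groups"}

def filter_browsing_events(events):
    """Drop events in sensitive categories and keep only the fields the
    detection model actually needs (no full URLs, no readable names)."""
    filtered = []
    for event in events:
        if event.get("category") in SENSITIVE_CATEGORIES:
            continue  # never forwarded to the threat-detection pipeline
        filtered.append({
            "user_id": event["user_id"],      # pseudonymous ID, not a name
            "timestamp": event["timestamp"],
            "category": event["category"],    # coarse category only
        })
    return filtered

events = [
    {"user_id": "u-1042", "timestamp": "2024-05-01T09:14:00",
     "category": "file_sharing", "url": "https://example.test/upload"},
    {"user_id": "u-1042", "timestamp": "2024-05-01T09:20:00",
     "category": "health", "url": "https://example.test/clinic"},
]
print(filter_browsing_events(events))  # only the file_sharing event survives
```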

Privacy is also a concern in physical security. For instance, AI-driven fingerprint recognition might prevent unauthorized access to sensitive sites or devices, but it also involves collecting highly sensitive biometric data, which, if compromised, could cause long-lasting problems for the individuals concerned. After all, if your fingerprint data is hacked, you can’t exactly get a new finger. That’s why it’s imperative that biometric systems are kept under maximum security and backed up with responsible data retention policies.
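Part of such a policy can be automated. The sketch below assumes a hypothetical enrollment record store and shows a simple retention sweep; a real system would also encrypt templates at rest and write an audit-log entry for every deletion.

```python
from datetime import datetime, timedelta, timezone

# Assumed policy for illustration: destroy biometric records one year after
# the subject is deactivated (e.g., leaves the organization).
RETENTION = timedelta(days=365)

def purge_expired(records, now=None):
    """Return (kept, purged_ids): records deactivated more than RETENTION ago
    are destroyed, keeping only their IDs for the audit trail."""
    now = now or datetime.now(timezone.utc)
    kept, purged_ids = [], []
    for record in records:
        deactivated_at = record.get("deactivated_at")
        if deactivated_at and now - deactivated_at > RETENTION:
            purged_ids.append(record["subject_id"])
        else:
            kept.append(record)
    return kept, purged_ids

records = [
    {"subject_id": "s-001", "deactivated_at": datetime(2022, 1, 10, tzinfo=timezone.utc)},
    {"subject_id": "s-002", "deactivated_at": None},  # still enrolled
]
kept, purged_ids = purge_expired(records)
print(purged_ids)  # ['s-001']
```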

Keeping humans in the loop for accountability in decision-making

Perhaps the most important thing to remember about AI is that, just like people, it can misstep in many different ways. One of the central tasks of adopting an ethical AI strategy is TEVV, or testing, evaluation, validation and verification. That’s especially the case in such a mission-critical area as cybersecurity.

Many of the risks that come with AI manifest themselves during the development process. For instance, the training data must undergo thorough TEVV for quality assurance, as well as to ensure that it hasn’t been manipulated. This is vital because data poisoning has become one of the leading attack vectors deployed by more sophisticated cyber criminals.
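As one illustrative TEVV-style screening step, candidate training samples can be checked for statistical outliers before they enter the corpus. The snippet below uses scikit-learn’s IsolationForest on TF-IDF features of a tiny, made-up email sample; outlier detection is only one signal and neither proves nor rules out poisoning, so flagged samples go to human review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import IsolationForest

# Toy candidate samples for a phishing-detection training corpus.
candidate_emails = [
    "Your invoice for April is attached, please review.",
    "Meeting moved to 3pm, see updated calendar entry.",
    "Reminder: submit your timesheet by Friday.",
    "WIN FREE $$$ click http://example.test/claim now!!!",   # intended outlier
    "Quarterly report draft is ready for comments.",
]

# Featurize and fit an unsupervised outlier detector on the candidates.
features = TfidfVectorizer().fit_transform(candidate_emails).toarray()
detector = IsolationForest(contamination=0.2, random_state=0).fit(features)

# -1 marks a statistical outlier; those samples are held for human review.
for email, label in zip(candidate_emails, detector.predict(features)):
    status = "REVIEW" if label == -1 else "ok"
    print(f"[{status}] {email}")
```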

Another issue inherent to AI, just as it is to people, is bias and fairness. For example, an AI tool used to flag malicious emails might flag legitimate messages because they use vernacular commonly associated with a specific cultural group. The result is unfair profiling of that group and, potentially, unjust actions taken against its members.
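A simple audit can surface this kind of disparity before it causes harm. The sketch below compares false positive rates across groups on an illustrative, made-up sample; the group labels stand in for whatever audit dimension applies, such as the sender’s primary language or dialect.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, is_actually_phishing, was_flagged).
    Returns the share of legitimate emails wrongly flagged, per group."""
    counts = defaultdict(lambda: {"fp": 0, "legit": 0})
    for group, is_phishing, flagged in records:
        if not is_phishing:
            counts[group]["legit"] += 1
            if flagged:
                counts[group]["fp"] += 1
    return {g: c["fp"] / c["legit"] for g, c in counts.items() if c["legit"]}

# Made-up audit sample: (group, actually phishing?, flagged by the model?)
audit_sample = [
    ("group_a", False, False), ("group_a", False, False), ("group_a", False, True),
    ("group_b", False, True),  ("group_b", False, True),  ("group_b", False, False),
]
print(false_positive_rate_by_group(audit_sample))
# {'group_a': 0.333..., 'group_b': 0.666...}
```

A large gap between groups is a signal to re-examine the training data and features, not a verdict on its own.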

The purpose of AI is to augment human intelligence, not to replace it. Machines can’t be held accountable if something goes wrong. AI does what humans train it to do, which means it inherits human biases and flawed decision-making processes. The “black-box” nature of many AI models can also make it notoriously difficult to identify the root causes of such issues, simply because end users are given no insight into how the AI arrives at its decisions. These models lack the explainability critical for achieving transparency and accountability in AI-driven decision-making.
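One partial mitigation is to prefer inherently interpretable models, or at least interpretable baselines, where the stakes allow it. The toy example below trains a simple linear phishing classifier and lists the tokens that push a message toward the “phishing” label; the data is made up, but the point is that an analyst can see what drove a decision, which a black-box model does not offer by default.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative dataset: 1 = phishing, 0 = legitimate.
emails = [
    "urgent verify your account password now",
    "click here to claim your prize money",
    "team meeting rescheduled to thursday",
    "please find the project report attached",
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(emails)
model = LogisticRegression().fit(X, labels)

# A linear model's weights double as a per-token explanation of its behavior.
tokens = vectorizer.get_feature_names_out()
weights = model.coef_[0]
for i in np.argsort(weights)[-5:][::-1]:
    print(f"{tokens[i]}: {weights[i]:+.3f}")
```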

Keep human interests central to AI development

Whether developing or engaging with AI — in cybersecurity or any other context — it’s essential to keep humans in the loop throughout the process. Training data must be regularly audited by diverse and inclusive teams and refined to reduce bias and misinformation. While people themselves are prone to the same problems, continuous supervision and the ability to explain how AI draws the conclusions it does can greatly mitigate these risks.

On the other hand, simply treating AI as a shortcut and a human replacement inevitably results in models being trained on their own outputs until they amplify their own shortcomings, a concept known as AI drift.

The human role in safeguarding AI and being accountable for its adoption and usage can’t be overstated. That’s why, instead of focusing on AI as a way to reduce headcount and save money, companies should invest any savings in retraining and transitioning their teams into new AI-adjacent roles. That means all information security professionals must put ethical AI usage (and thus people) first.
