Artificial intelligence (AI) excels at finding patterns, such as unusual human behavior or abnormal incidents. It can also reflect human flaws and inconsistencies, including more than 180 known types of cognitive bias. Biased AI is everywhere, and like humans, it can discriminate based on gender, race, age, disability and ideology.
AI bias has enormous potential to negatively affect women, minorities, people with disabilities, the elderly and other groups. Computer vision systems produce more false-positive facial identifications for women and people of color, according to research from MIT and Stanford University. A recent ACLU experiment found that nearly 17 percent of professional athlete photos were falsely matched to mugshots in an arrest database.
Biased algorithms are linked to discrimination in hiring practices, performance management and mortgage lending. Consumer AI products frequently contain microinequities that create barriers for users based on gender, age, language, culture and other factors.
Sixty-three percent of organizations will deploy artificial intelligence in at least one area of cybersecurity this year, according to Capgemini. AI can scale security and augment human skills, but it can also create risk. Cybersecurity AI requires diverse data and context to act effectively, which is only possible with diverse cyber teams that can recognize subtle examples of bias in security algorithms. The cybersecurity diversity problem isn’t new, but left unchecked, it’s poised to create huge issues with biased cybersecurity AI.
3 Ways That Cybersecurity’s Diversity Problem Is Linked to Biased AI
1. Biased Business Rules
“Put simply, AI has the same vulnerabilities as people do,” wrote Greg Freiherr for Imaging Technology News. Algorithms are built on sets of business logic — rules written by humans. AI can be developed to perpetuate deliberate bias or, more often, it mirrors unconscious human assumptions about security risks.
Everyone has unconscious biases that inform judgment and decision-making, including AI developers. Humans tend to have a shallow understanding of other demographics and cultural groups, and the resulting prejudices can shape AI logic for security in many areas, including traffic filtering and user authentication. Language biases can shape natural language processing (NLP) rules, including spam filtering.
Business logic is a permanent part of an AI’s DNA, no matter how much training data is used. Even deep learning models can’t escape the biases built into the rules that surround them. “Biased rules within algorithms inevitably generate biased outcomes,” wrote IBM Security VP Aarti Borkar for Fast Company.
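To see how easily this happens, consider a deliberately simplified, hypothetical Python sketch of a spam filter in which one hand-written business rule quietly encodes a language bias. Every name and threshold here is illustrative, not drawn from any real product:

```python
# Hypothetical sketch: a hand-written spam-filter rule that quietly
# encodes a language bias. Names and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class Message:
    sender_domain: str
    language: str      # ISO 639-1 code detected upstream
    spam_score: float  # 0.0 (clean) to 1.0 (spam) from an ML model

def is_spam(msg: Message) -> bool:
    """Combine an ML score with hand-written business rules."""
    threshold = 0.8
    # Biased rule: non-English mail is held to a stricter standard.
    # A developer may have added this after a wave of foreign-language
    # spam, but it now penalizes every legitimate non-English sender.
    if msg.language != "en":
        threshold = 0.5
    return msg.spam_score >= threshold

# The same model output is treated differently depending on language:
print(is_spam(Message("example.com", "en", 0.6)))  # False
print(is_spam(Message("example.com", "es", 0.6)))  # True
```

No amount of retraining fixes this: the bias lives in the rule, not the model.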
2. Narrow Training Data
An AI’s decision-making abilities are only as effective as its training data. Data is neutral only until it is filtered through human bias; by the time it reaches an algorithm, it usually carries strong traces of human prejudice. Preprocessing teams can introduce bias through a variety of choices, such as data classifiers, sampling decisions and the weights assigned to training data.
Biased training data can corrupt security outcomes. Bias-aware preprocessing is necessary to ensure adequate sampling, classification and representation.
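As a rough illustration of what that can look like, a preprocessing team might audit representation before any training run. The following Python sketch, with assumed group labels and an assumed 20 percent tolerance, flags groups that are over- or undersampled relative to a reference population:

```python
# Minimal sketch of a preprocessing audit: before training, compare how
# each demographic group is represented in the sample versus a reference
# population. Group labels and the tolerance are illustrative assumptions.

from collections import Counter

def representation_gaps(samples, population_shares, tolerance=0.2):
    """Flag groups whose share of the training sample drifts more than
    `tolerance` (relative) from their share of the reference population."""
    counts = Counter(record["group"] for record in samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) / expected > tolerance:
            gaps[group] = (observed, expected)
    return gaps

training_sample = [{"group": "men"}] * 820 + [{"group": "women"}] * 180
flagged = representation_gaps(training_sample, {"men": 0.51, "women": 0.49})
print(flagged)  # {'men': (0.82, 0.51), 'women': (0.18, 0.49)}
```

A check like this won’t remove bias on its own, but it turns a silent sampling decision into a visible, reviewable one.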
3. Similar Human Collaborators
Humans and technology have become cybersecurity collaborators. Cybersecurity contributors train AI to produce better security outcomes through a lens of personal knowledge and experience, but humans can quickly introduce algorithmic bias, especially on teams that lack diversity. Varied perspectives are needed to leverage cybersecurity AI in fair, balanced ways.
It’s Time to Address Cybersecurity Diversity and AI Bias
Diverse teams can recognize the specific risks of biased AI and minimize its impact. Cognitive diversity can contribute to the production of fair algorithms, help curate balanced training data and enable the supervision of secure AI.
CISOs need to create more internal diversity, but getting there isn’t going to be easy. It’s time to collaborate on the issues that perpetuate biased security culture and flawed AI. The problem is too complex for one person or strategy to solve alone.
Create the Right Conditions for Cognitive Diversity
Hiring and internal promotions don’t guarantee cognitive diversity; security leaders also need to create an inclusive workplace culture. Getting newly hired talent up to speed on AI can require training, and likely a reevaluation of existing learning strategies as well. Microinequities are prevalent in corporate learning programs, so maintaining a level playing field means implementing accommodations for learners with varied languages, cultures, ages and levels of reliance on assistive technologies.
Once newly hired or promoted talent is trained, it’s time to figure out how to retain women, minorities and other underrepresented employees, as well as how to remove any barriers to their success. Women in security are disproportionately likely to feel stressed at work and to leave the industry, and both women and minorities receive lower wages and fewer promotions.
Biased performance management practices are part of the problem, as workplace cultures and policies can be especially detrimental to women and minorities. For example, an absence of flex-time policies can disproportionately hurt women.
Equal pay and equal opportunity are needed to retain and engage diverse perspectives. The security industry desperately needs to double down on creating a culture of self-care and inclusion. Removing barriers can improve anti-bias efforts and produce other positive effects as well. After analyzing data from 4,000 companies, researcher Katica Roy found that organizations that “move the needle closer to gender equity” even experience an increase in revenue.
Create Cross-Functional Momentum and Change
Women in cybersecurity are dramatically underrepresented, especially in light of their overall workforce participation, and achieving true cognitive diversity may require significant changes to policy and culture. CISOs face the dual challenge of fixing cyber team culture and starting cross-functional conversations about equity. Collaboration between security, HR, risk, IT and other functions can create ripples of change that lead to more inclusive hiring, performance management and policies.
Govern AI Bias
“Bias is nothing new. Humans [have bias] all of the time,” Colin Priest, vice president of AI strategy at DataRobot, told InformationWeek. “The difference with AI is that it happens at a bigger scale and it’s measurable.”
It’s probably impossible to create artificial intelligence without any biases, so a risk-based approach to governing AI bias is the most practical solution. The first step is to create a clear framework for what’s “fair,” one that isn’t filtered through a narrow lens of experience. Individuals with diverse perspectives on AI, technology, data, ethics and diversity need to collaborate on governance.
Remember, “minimize” is not the same as “remove.” A risk-based framework is the only pragmatic way to put AI governance into practice. Resources should be directed toward mitigating biases in artificial intelligence with the greatest potential impact on security, reputation and users.
Priest recommends creating a “job description” for AI to assess risks. This isn’t a sign that robots are coming for human jobs; rather, position descriptions are a solid baseline for understanding the purpose of cybersecurity AI and creating performance metrics. Measurement against KPIs is an important part of any governance strategy, and monitoring AI against those KPIs can catch biases before they slowly degrade the performance of cyber algorithms.
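As an illustration of what such a KPI might look like in practice, the following Python sketch tracks one fairness metric, the false positive rate of a security alerting model broken out per user group, and flags drift beyond a tolerance band. The group names, data and thresholds are all hypothetical:

```python
# Sketch of one fairness KPI from an AI "job description": per-group
# false positive rate, checked against a tolerance band. Group names,
# the 5 percent band and the sample data are illustrative assumptions.

def false_positive_rate(outcomes):
    """outcomes: list of (predicted_positive, actually_positive) booleans."""
    false_pos = sum(1 for pred, actual in outcomes if pred and not actual)
    negatives = sum(1 for _, actual in outcomes if not actual)
    return false_pos / negatives if negatives else 0.0

def check_fpr_parity(outcomes_by_group, max_gap=0.05):
    """Return groups whose FPR sits more than `max_gap` above the best group."""
    rates = {g: false_positive_rate(o) for g, o in outcomes_by_group.items()}
    baseline = min(rates.values())
    return {g: r for g, r in rates.items() if r - baseline > max_gap}

weekly_outcomes = {
    "native_speakers":
        [(True, True)] * 40 + [(True, False)] * 2 + [(False, False)] * 58,
    "non_native_speakers":
        [(True, True)] * 40 + [(True, False)] * 9 + [(False, False)] * 51,
}
print(check_fpr_parity(weekly_outcomes))
# {'non_native_speakers': 0.15} -- escalate for review before retraining
```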
Balance Competing Perspectives
Checking personal biases is rarely comfortable. Industry leaders have a biased perspective on AI innovation, especially compared to regulators and researchers who focus on safety. True cognitive diversity can create uncomfortable friction between competing values and perspectives. However, a truly balanced solution to AI bias is going to require collaboration between industries, academia and the government.
Speaking on CNBC, IBM Chair and CEO Ginni Rometty recently called for “precision regulation” and better collaboration between AI stakeholders. For example, legislation could govern “how the technology is used” instead of AI capabilities or characteristics.
“You want to have innovation flourish and you’ve got to balance that with [AI] security,” said Rometty.
Alphabet CEO Sundar Pichai recently expressed a similar point of view, asking European regulators to consider a “proportionate” approach.
Creating more effective frameworks for AI fairness and safety means being open to conflicting ideas. Security leaders should prepare for productive friction and, more importantly, join the global efforts already underway. Industry perspectives are critical to supporting the IEEE, the European Commission and others as they develop suitable frameworks.
Manage Third-Party Bias
Third-party data can be a valuable tool for cybersecurity AI, but it’s not risk-free. Your organization could be absorbing the risk of third-party biases embedded in training data.
“Organizations will be held responsible for what their AIs do, like they are responsible for what their employees do,” wrote Lisa Morgan for InformationWeek. Knowing your data vendors’ methodologies and their efforts to mitigate training data bias is crucial. Anti-bias governance must include oversight of third-party data sources and partnerships.
Invest in a Diverse Talent Pipeline
It’s officially time to target the cybersecurity talent pipeline. Women are dramatically underrepresented in cybersecurity, and according to UNESCO, gender diversity rates drop even lower in cyber leadership and in roles at the forefront of technology, such as cybersecurity AI. Minorities, meanwhile, face fewer opportunities for equal pay and advancement.
The opportunity gap starts early: according to UNESCO, girls are 25 percent less likely than their male peers to have basic tech skills. Creating a fair future for artificial intelligence and a diverse talent pipeline requires that everyone pitch in, including industry security leaders. Everyone benefits from efforts to create a more skilled, confident pipeline of diverse cyber talent, and nonprofits, schools and educational groups need help closing the STEM skill and interest gap.
Look to Outside Resources
Creating more diverse cyber teams isn’t a goal that can be accomplished overnight. In the meantime, security teams can gain diverse new perspectives by collaborating with nonprofits like Women in Identity, CyberReach and WiCyS.
Frameworks, tools and third-party experts can help minimize bias as organizations work toward better talent diversity. Open-source libraries like AI Fairness 360 can identify, measure and limit the business impact of biased algorithms. AI implementation experts can also provide experience and context for more balanced security AI.
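For instance, here is a minimal sketch using AI Fairness 360’s Python API to measure and then mitigate disparate impact in a toy dataset. The column names, group encodings and data below are hypothetical, chosen only to make the example self-contained:

```python
# Minimal sketch using the open-source AI Fairness 360 (aif360) library.
# The columns ("sex", "label"), encodings and data are hypothetical.
# Requires: pip install aif360 pandas

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy training data: 1 = favorable security outcome (e.g., access granted).
df = pd.DataFrame({
    "sex":   [0, 0, 0, 0, 1, 1, 1, 1],   # 0 = unprivileged, 1 = privileged
    "label": [0, 0, 0, 1, 1, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"],
    favorable_label=1, unfavorable_label=0,
)
groups = dict(unprivileged_groups=[{"sex": 0}], privileged_groups=[{"sex": 1}])

# Measure bias: disparate impact below 0.8 is a common warning sign.
print(BinaryLabelDatasetMetric(dataset, **groups).disparate_impact())  # ~0.33

# Mitigate it: Reweighing adjusts instance weights before training.
reweighed = Reweighing(**groups).fit_transform(dataset)
print(BinaryLabelDatasetMetric(reweighed, **groups).disparate_impact())  # ~1.0
```

Reweighing is only one of the library’s preprocessing techniques; which mitigation fits depends on where in the pipeline the bias enters.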
Cybersecurity AI Should Never Cause Collateral Damage
Last fall, Emily Ackerman almost collided with a grocery delivery robot on her graduate school campus. She escaped unharmed, but only by forcing her wheelchair off a ramp and up onto a curb. AI developers hadn’t taught the robot to avoid wheelchairs, which, in her words, put “disabled people on the line as collateral.”
“Designing something that’s universal is an extremely difficult task,” said Ackerman. “But getting the shorter end of the stick isn’t a fun experience.”
Sometimes, AI bias can even reinforce harmful stereotypes. According to UNESCO research, until recently, a mobile voice assistant responded to vicious, gender-based insults not by pushing back, but by saying, “I’d blush if I could!” In more extreme instances of bias, like Ackerman’s experience, AI can be life-threatening.
Cognitive diversity can create better security. Diverse teams bring diverse ideas and a broader understanding of risk, and varied security perspectives can counterbalance AI bias and improve security posture. Investing in artificial intelligence alone isn’t enough to counter sophisticated machine learning attacks or nation-state threat actors; diverse human perspectives are the only way to prepare for the security challenges of today and tomorrow.