April 10, 2024 | By Jonathan Reed | 4 min read

While the race to adopt generative AI intensifies, the ethical debate surrounding the technology continues to heat up. And the stakes keep getting higher.

According to Gartner, “Organizations are responsible for ensuring that AI projects they develop, deploy or use do not have negative ethical consequences.” Meanwhile, 79% of executives say AI ethics is important to their enterprise-wide AI approach, but fewer than 25% have operationalized ethics governance principles.

AI is also high on the list of United States government concerns. In late February, Speaker Mike Johnson and Democratic Leader Hakeem Jeffries announced the establishment of a bipartisan Task Force on AI to explore how Congress can ensure that America continues to lead global AI innovation. The Task Force will also consider the guardrails required to safeguard the nation against current and emerging threats and to ensure the development of safe and trustworthy technology.

Clearly, good governance is essential to address AI-associated risks. But what does sound AI governance look like? A new Gartner case study featuring IBM provides some answers. The study details how to establish a governance framework for managing AI ethics concerns. Let’s take a look.

Why AI governance matters

As businesses increasingly integrate AI into their everyday operations, the ethical use of the technology has become a hot topic. The problem is that organizations often rely on broad corporate principles, combined with legal or independent review boards, to assess the ethical risks of individual AI use cases.

However, according to the Gartner case study, AI ethics principles are often too broad or abstract, leaving project leaders struggling to decide whether individual AI use cases are ethical. Meanwhile, legal and review board teams lack visibility into how AI is actually being used across the business. All of this opens the door to unethical use of AI (intentional or not) and to the business and compliance risks that follow.

Given the potential impact, the problem must first be addressed at the governance level. Organizational implementation, with the appropriate checks and balances, must then follow.

Four core roles of an AI governance framework

According to the case study, business and privacy leaders at IBM developed a governance framework to address ethical concerns surrounding AI projects. The framework is built around four core roles:

  1. Policy advisory committee: Senior leaders are responsible for determining global regulatory and public policy objectives, as well as privacy, data and technology ethics risks and strategies.

  2. AI ethics board: Co-chaired by the company’s global AI ethics leader from IBM Research and the chief privacy and trust officer, the Board comprises a cross-functional and centralized team that defines, maintains and advises about IBM’s AI ethics policies, practices and communications.

  3. AI ethics focal points: Each business unit has focal points (business unit representatives) who act as the first point of contact to proactively identify and assess technology ethics concerns, mitigate risks for individual use cases and forward projects to the AI Ethics Board for review. A large part of AI governance hinges upon these individuals, as we’ll see later.

  4. Advocacy network: A grassroots network of employees who promote a culture of ethical, responsible and trustworthy AI technology. These advocates contribute to open workstreams and help scale AI ethics initiatives throughout the organization.
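
To make the division of responsibilities concrete, here is a minimal Python sketch that models the four roles as plain data. The structure, field names and one-line summaries are illustrative assumptions drawn from the descriptions above, not artifacts of the case study itself.

```python
# Illustrative only: models the four governance roles as plain data.
# Role names follow the case study; fields and wording are assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class GovernanceRole:
    name: str
    scope: str           # where in the organization the role sits
    responsibility: str  # primary duty within the framework


AI_GOVERNANCE_FRAMEWORK = [
    GovernanceRole("Policy advisory committee", "senior leadership",
                   "set regulatory, public-policy and ethics-risk strategy"),
    GovernanceRole("AI ethics board", "central, cross-functional",
                   "define, maintain and advise on AI ethics policy and practice"),
    GovernanceRole("AI ethics focal points", "one or more per business unit",
                   "identify concerns, mitigate risk and escalate cases to the board"),
    GovernanceRole("Advocacy network", "grassroots, organization-wide",
                   "promote a culture of ethical AI and scale ethics initiatives"),
]

for role in AI_GOVERNANCE_FRAMEWORK:
    print(f"{role.name} ({role.scope}): {role.responsibility}")
```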


Risk-based assessment criteria

If an AI ethics issue is identified, the focal point assigned to the use case’s business unit initiates an assessment. Because the focal point executes this process on the front lines, low-risk cases can be triaged and cleared quickly. For higher-risk cases, a formal risk assessment is completed and escalated to the AI Ethics Board for review. (A short code sketch of this flow follows the guideline list below.)

Each use case is evaluated against guidelines that include:

  • Associated properties and intended use: Investigates the nature, intended use and risk level of a particular use case. Could the use case cause harm? Who is the end user? Are any individual rights being violated?

  • Regulatory compliance: Determines whether data will be handled safely and in accordance with applicable privacy laws and industry regulations.

  • Previously reviewed use cases: Provides insights and next steps from use cases previously reviewed by the AI Ethics Board. Includes a list of AI use cases that require the board’s approval.

  • Alignment with AI ethics principles: Determines whether use cases meet foundational requirements, such as alignment with principles of fairness, transparency, explainability, robustness and privacy.
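
Taken together, the triage process and these guidelines suggest a simple decision flow. The Python sketch below is a hypothetical illustration of that flow; the class names, fields and decision rules are assumptions for demonstration, not IBM’s actual implementation.

```python
# Hypothetical sketch of the focal-point triage flow described above.
# All names and decision rules are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    CLEARED = "cleared at business-unit level by the focal point"
    FAST_TRACKED = "resolved using guidance from previously reviewed use cases"
    ESCALATED = "formal risk assessment escalated to the AI Ethics Board"


@dataclass
class UseCase:
    name: str
    # Answers a focal point gathers against the guidelines above
    could_cause_harm: bool
    violates_individual_rights: bool
    regulatory_compliant: bool
    meets_ethics_principles: bool  # fairness, transparency, explainability, ...
    matches_reviewed_case: bool    # a similar case was already board-reviewed


def triage(case: UseCase) -> Outcome:
    """Clear low-risk cases on the front lines; escalate the rest."""
    low_risk = (not case.could_cause_harm
                and not case.violates_individual_rights
                and case.regulatory_compliant
                and case.meets_ethics_principles)
    if low_risk:
        return Outcome.CLEARED
    if case.matches_reviewed_case:
        return Outcome.FAST_TRACKED
    return Outcome.ESCALATED


chatbot = UseCase("internal HR chatbot", False, False, True, True, False)
print(f"{chatbot.name}: {triage(chatbot).value}")
```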

Benefits of an AI governance framework

According to the Gartner report, the implementation of an AI governance framework benefited IBM by:

  • Scaling AI ethics: Focal points drive compliance and initiate reviews in their respective business units, which enables an AI ethics review at scale.

  • Increasing strategic alignment of AI ethics vision: Focal points connect with technical, thought and business leaders in the AI ethics space throughout the business and across the globe.

  • Expediting completion of low-risk projects and proposals: By triaging low-risk services or projects, focal points enable faster reviews.

  • Enhancing board readiness and preparedness: By empowering focal points to guide AI ethics early in the process, the AI Ethics Board can review any remaining use cases more efficiently.

With great power comes great responsibility

When ChatGPT debuted in November 2022, the entire world was abuzz with wild expectations. Today, AI trends point toward more realistic expectations for the technology. Standalone tools like ChatGPT may capture the popular imagination, but effective integration into established services will engender more profound change across industries.

Undoubtedly, AI opens the door to powerful new tools and techniques to get work done. However, the associated risks are real as well. Elevated multimodal AI capabilities and lowered barriers to entry also invite abuse: deepfakes, privacy issues, perpetuation of bias and even evasion of CAPTCHA safeguards may become increasingly easy for threat groups.

While bad actors are already using AI, the legitimate business world must also take preventative measures to keep employees, customers and communities safe.

ChatGPT says, “Negative consequences might encompass biases perpetuated by AI algorithms, breaches of privacy, exacerbation of societal inequalities or unintended harm to individuals or communities. Additionally, there could be implications for trust, reputation damage or legal ramifications stemming from unethical AI practices.”

To protect against these types of risks, AI ethics governance is essential.
