December 30, 2024 By Mark Stone 3 min read

At the end of 2024, we’ve reached a moment in artificial intelligence (AI) development where government involvement can help shape the trajectory of this extremely pervasive technology.

In the most recent example, the Department of Homeland Security (DHS) has released what it calls a “first-of-its-kind” framework designed to ensure the safe and secure deployment of AI across critical infrastructure sectors. The framework could catalyze a comprehensive set of regulatory measures, as it brings into focus the significant role AI will play in securing key infrastructure systems.

As Secretary Alejandro N. Mayorkas put it, “AI offers a once-in-a-generation opportunity to improve the strength and resilience of U.S. critical infrastructure, and we must seize it while minimizing its potential harms. The framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, internet access and more.”

Mayorkas’ statement underscores the urgency of getting it right, as today’s decisions will profoundly shape how AI impacts vital systems in the future.

Key features of the DHS AI framework

The framework lays out clear roles and responsibilities for the parties involved in AI development and deployment for critical infrastructure.

Risk management guidance: DHS suggests an approach that incorporates ongoing risk management, advising stakeholders to continually identify, assess and mitigate potential AI risks. The recommendation includes adopting transparent mechanisms to track AI decisions that could impact essential services.
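The “transparent mechanisms to track AI decisions” the framework calls for can be as simple as an append-only decision log that records what a model decided and why, in a form reviewers can replay later. Below is a minimal, hypothetical Python sketch of such a log; the class and field names (`DecisionLog`, `DecisionRecord`, the example system names) are illustrative assumptions, not anything prescribed by the DHS framework.

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class DecisionRecord:
    """One auditable AI decision affecting an essential service."""
    timestamp: float
    system: str         # e.g., "pump-controller" (illustrative name)
    input_summary: str  # summarized input, not raw sensor data
    decision: str
    confidence: float


class DecisionLog:
    """Append-only log of model decisions, exportable for reviewers."""

    def __init__(self):
        self._records = []

    def record(self, system, input_summary, decision, confidence):
        rec = DecisionRecord(time.time(), system, input_summary,
                             decision, confidence)
        self._records.append(rec)
        return rec

    def export(self):
        # Serialize the full trail for auditors or incident responders.
        return json.dumps([asdict(r) for r in self._records], indent=2)
```

In practice, an operator might call `log.record("pump-controller", "flow=3.2 m3/s", "throttle", 0.91)` alongside each automated action, so that any decision impacting essential services leaves a reviewable trace.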

Ethical standards for developers: The guidelines stress the importance of incorporating ethical considerations into AI design, and make a push for responsible practices that minimize harm and ensure equitable treatment.

Collaboration across sectors: Recognizing the interconnected nature of infrastructure, DHS is promoting collaboration between public and private sectors to share best practices and vulnerabilities effectively. Information sharing helps minimize the risks posed by both deliberate attacks and unintended failures.

Incident response preparedness: The framework also outlines how AI developers and operators should prepare for potential incidents; clear protocols must be in place to quickly address issues before they escalate.


What are the responsibilities of AI developers?

One of the most notable aspects of the DHS report is the explicit focus on the responsibilities of AI developers.

The guidelines set a new precedent by outlining clear expectations, especially for those creating AI tools meant to operate in or interact with critical infrastructure.

This focus on developers is particularly important because they are at the forefront of creating technology that directly influences critical systems. The decisions made during the design, development and deployment phases can have significant consequences and impact everything from public safety to national security. By giving developers a structured set of responsibilities, DHS is hoping to create a culture of accountability and foresight in the AI community.

As such, AI developers are encouraged to take the following actions to align with the new guidelines.

Design with risk in mind: Developers are urged to build AI systems that prioritize safety and resilience from the ground up, especially when the technology is intended to interact with critical services like power grids or communication networks. This means integrating fail-safes, conducting stress tests and simulating potential failure scenarios during the design phase.
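One concrete way to “integrate fail-safes” is to wrap model inference so that a crash or an out-of-bounds output degrades to a known-safe default rather than propagating into a control system. The sketch below is a generic illustration of that pattern, assuming nothing about any specific vendor API; `with_failsafe` and its arguments are hypothetical names.

```python
def with_failsafe(model_fn, validate_fn, safe_default):
    """Wrap a model call so failures or invalid outputs fall back
    to a safe default instead of reaching a critical system."""
    def guarded(x):
        try:
            out = model_fn(x)
        except Exception:
            # Model crashed: degrade gracefully.
            return safe_default
        # Reject outputs outside the validated safe envelope.
        return out if validate_fn(out) else safe_default
    return guarded
```

For example, a setpoint model for a power system could be wrapped with a validator that only accepts values inside an engineered operating range, so a faulty prediction can never command an unsafe state.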

Adopt explainable AI practices: Transparency is crucial for AI developers. The framework urges the adoption of explainable AI techniques that allow human operators to understand why certain decisions were made. This practice boosts trust while also providing an audit trail that can be useful in identifying the root causes of any issues that arise.
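For simple models, explainability can be built in directly: a linear scorer can report each feature’s contribution to a decision, giving operators both the “why” and an audit trail. The following sketch illustrates that idea under stated assumptions; the function and feature names are invented for illustration and do not come from the DHS framework.

```python
def explain_linear_decision(weights, features, threshold=0.0):
    """Score a linear model and rank each feature's contribution,
    so an operator can see why the system flagged (or cleared) an input."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "alert" if score > threshold else "normal"
    # List the largest drivers first for the human reviewer.
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return decision, score, ranked
```

A returned tuple like `("alert", 5.5, [("pressure_dev", 6.0), ("temp_dev", -0.5)])` tells the operator not just that an alert fired, but that pressure deviation drove it, which is exactly the kind of record that helps identify root causes later.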

Collaborate for broader impact: Developers should not work in isolation but actively engage with a broader community of stakeholders, including policymakers, users and other tech creators. After all, collaboration helps ensure that AI tools are safe, reliable and ready to operate under real-world conditions.

By following these guidelines, developers can help build AI systems that meet technical standards and also align with societal values and safety requirements. The focus on explainable AI, risk-based design and collaboration creates a balanced approach that can maximize the benefits of AI and minimize its potential downsides.

Why does this matter now?

The release of the AI framework is a good reminder that AI technology is not evolving in a vacuum. Today, AI is more pervasive than ever before, but its use in critical infrastructure demands the highest level of care and responsibility. By positioning developers as key players in minimizing risks, DHS is creating an environment where AI can thrive without compromising essential public services.

It’s important to note that the responsibility for secure AI extends beyond the developer stage. Tech organizations will play a key role as well. Arvind Krishna, Chairman and CEO of IBM, says, “The DHS Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure is a powerful tool to help guide the responsible deployment of AI across America’s critical infrastructure, and IBM is proud to support its development. We look forward to continuing to work with the Department to promote shared and individual responsibilities in the advancement of trusted AI systems.”

Secretary Mayorkas echoes those sentiments, adding, “The choices organizations and individuals involved in creating AI make today will determine the impact this technology will have in our critical infrastructure tomorrow.”

The secretary’s words capture the essence of why this framework matters: We need to shape the future of AI in a way that protects and enhances the services that are foundational to our society.

