April 25, 2024 By Josh Nadeau 4 min read

Last year, the United States Secretary of Commerce announced that the National Institute of Standards and Technology (NIST) would be in charge of launching a new public working group on artificial intelligence (AI), one that builds on the success of the NIST AI Risk Management Framework to address this rapidly advancing technology.

However, recent budget cuts at NIST, along with a lack of strategy implementation, have called into question the agency’s ability to lead this critical effort. Ultimately, the success of this new government-mandated project will depend on NIST’s ability to overcome these unique challenges while relying on strong partnerships and new business funding initiatives.

The growing concern about AI-powered cyber threats

AI’s entry into the business world and our personal lives has been met with considerable optimism about its potential applications. More and more organizations are now adopting the technology to inject new levels of efficiency and automation into their operations.

However, this disruptive technology also has a much darker side, one that has continued to escalate in severity over the years. AI has become a core component of many modern cybersecurity threats, giving attackers highly adaptable and effective methods for orchestrating attacks.

The introduction of newer technologies like AI into cyber criminals’ arsenal has contributed to projections that cyberattack damages will reach $10.5 trillion annually by 2025. Much of this growing trend can be attributed to the sheer scale of AI-driven attacks now taking place.

Unlike traditional attack methods, which relied heavily on human intervention to plan and execute attacks, AI technology allows cyber criminals to operate far more autonomously and anonymously. This includes deploying automated vulnerability discovery software that speeds up the development of zero-day exploits and successful malware injections.

Deepfake threats are another product of AI technology that can carry significant political and economic consequences. Manipulated audio and video recordings are becoming more believable every day and can create a range of security issues, even playing a role in steering political elections or compromising critical infrastructure.

These growing concerns have pushed governments to prioritize new initiatives focused on exerting more control over how AI technology is used and regulated.


NIST’s tall order and what it entails

NIST, originally founded in 1901 as the National Bureau of Standards, has operated for over a century within the U.S. Department of Commerce, with a mission to promote standards in science and technology that improve security and quality of life for everyone.

In an effort to continuously improve on these initiatives, the Biden administration announced in June of last year that NIST would focus its efforts on a new project building off of the NIST AI Risk Management Framework to help address and regulate the rapid growth of AI technology.

As part of the new project, NIST will extend its investigatory scope beyond cybersecurity to address the global risks associated with the misuse of AI technology in society. This includes designing complex testing protocols to ensure the technology is used ethically and secured well enough to prevent misuse.

One of the main subjects NIST will focus on over the coming months is the rise of generative AI, given its fast adoption rate in business environments around the world. In support of this effort, NIST will work with other organizations to develop new standards and best practices for the responsible development and use of generative AI in commercial settings.

What challenges is NIST facing in the fight against AI-driven cyberattacks?

Although NIST has long been viewed as playing a critical role in ensuring better security practices across industries and sectors, the path ahead hasn’t been easy to walk. NIST is currently facing major financial issues that impact its ability to see its mission through.

For several years now, the government facilities that house the 123-year-old agency’s R&D work have been in a state of disrepair, with rain leaks and mold becoming major issues. Budget constraints have been the main culprit, and newly proposed government spending plans would cut the organization’s funding by a further 10%.

Given the ambitious plan set out by the Biden administration, the future of this initiative could be in jeopardy without some form of intervention. Insufficient funding will restrict the scope and scale of what NIST can undertake and may delay the introduction of essential new security tools and guidelines.

With AI technology gaining momentum, NIST is at a crossroads in finding the support it needs to keep pace with increasingly advanced security threats.

NIST looking forward

Addressing NIST’s funding issue is one of the most important challenges the organization is facing right now.

Increased financial support, whether through federal funding initiatives or public and private partnerships, can go a long way toward bringing in more qualified talent. This includes the ability to work with top security researchers and engineers who can help accelerate the discovery and mitigation of specific AI risks.

In addition to securing funding, NIST will benefit greatly from collaborating with industry leaders in the security and technology sectors. Organizations like Google and Amazon have led the way in AI adoption and have funded their own security initiatives around its use.

While the long-term success of NIST’s new mission will no doubt depend on improving its current funding situation, we may start to see some significant improvements in how AI technology is safely used across various industries as more organizations recognize the importance of NIST’s work and lend their support.
