April 25, 2024 By Josh Nadeau 4 min read

Last year, the United States Secretary of Commerce announced that the National Institute of Standards and Technology (NIST) had been put in charge of launching a new public working group on artificial intelligence (AI), building on the success of the NIST AI Risk Management Framework to address this rapidly advancing technology.

However, recent budget cuts at NIST, along with a lack of strategy implementation, have called into question the agency’s ability to lead this critical effort. Ultimately, the success of this new government-mandated project will depend on NIST’s ability to overcome these unique challenges while relying on strong partnerships and new business funding initiatives.

The growing concern about AI-powered cyber threats

AI’s notable entry into the business world and our personal lives has been met with great enthusiasm about its potential applications. More and more organizations are now adopting the technology to inject new levels of efficiency and automation into their operations.

However, this disruptive technology also has a much darker side, one that has continued to escalate in severity. AI has become a core component of many modern cybersecurity threats, enabling highly adaptable and effective methods for orchestrating attacks.

The introduction of newer technologies like AI into cyber criminals’ arsenals has contributed to projections that cyberattack damages will reach $10.5 trillion annually by 2025. Much of this growing trend can be attributed to the sheer scale of AI-driven attacks now taking place.

Unlike traditional attack methods, which relied heavily on human intervention to plan and execute various attack vectors, AI allows cyber criminals to operate far more autonomously and anonymously. This includes deploying automated vulnerability discovery tools that accelerate the development of zero-day exploits and successful malware injections.

Deepfakes are another product of AI technology that can have significant political and economic consequences. Manipulated audio and video recordings are becoming more believable every day and can lead to a range of security issues, even playing a role in steering political elections or compromising critical infrastructure.

These growing concerns have led governments to prioritize new initiatives focused on controlling how AI technology is used and regulated.


NIST’s tall order and what it entails

NIST, originally founded in 1901 as the National Bureau of Standards, has operated for over a century within the U.S. Department of Commerce, with a mission to promote higher standards in the use of science and technology to improve security and quality of life for everyone.

In an effort to build on these initiatives, the Biden administration announced in June of last year that NIST would focus its efforts on a new project extending the NIST AI Risk Management Framework to help address and regulate the rapid growth of AI technology.

As part of the new project, NIST will extend its investigatory scope beyond cybersecurity to address the global risks associated with the misuse of AI in society. This includes designing highly complex testing protocols to ensure the technology is used ethically and maintains the level of security needed to prevent misuse.

One of the main subjects NIST will focus on over the coming months is the rise of generative AI, given its fast adoption rate in business environments around the world. In support of this effort, NIST will work with other organizations to develop new standards and best practices for the responsible development and use of generative AI in commercial settings.

What challenges is NIST facing in the fight against AI-driven cyberattacks?

Although NIST is viewed by many as playing a critical role in ensuring better security practices across all industries and sectors, the path ahead hasn’t been easy. The agency is currently facing major financial issues that impact its ability to see its mission through.

For several years now, the 123-year-old government building that houses NIST’s R&D department has been in a state of disrepair, with rain leaks and mold becoming major issues. Budget constraints have been the main culprit, and newly proposed government budget plans call for a further 10% reduction in the agency’s funding.

Considering the ambitious plan instituted by the Biden administration, the future of this initiative could be in jeopardy without some form of intervention. Insufficient funding will restrict the scope and scale of what NIST can undertake and may delay the introduction of essential new security tools and guidelines.

With AI technology gaining momentum, NIST is at a crossroads in finding the support it will need to keep pace with increasingly advanced security threats.

NIST looking forward

Addressing NIST’s funding issue is one of the most important challenges the organization is facing right now.

Increased financial support, whether through federal funding initiatives or through public and private partnerships, can go a long way toward bringing in more qualified talent. This includes the ability to work with top security researchers and engineers who can help accelerate the discovery and mitigation of specific AI risks.

In addition to receiving funding, NIST would benefit greatly from collaborating with industry leaders in the security and technology sectors. Organizations like Google and Amazon have been leaders in AI adoption and have funded their own security initiatives around its use.

The long-term success of NIST’s new mission will no doubt depend on improving its current funding situation. Still, as more organizations recognize the importance of NIST’s work and lend their support, we may start to see significant improvements in how AI technology is safely used across various industries.
