April 25, 2024 By Josh Nadeau 4 min read

Last year, the United States Secretary of Commerce announced that the National Institute of Standards and Technology (NIST) would lead a new public working group on artificial intelligence (AI), building on the success of the NIST AI Risk Management Framework to address this rapidly advancing technology.

However, recent budget cuts at NIST, along with a lack of strategy implementation, have called into question the agency’s ability to lead this critical effort. Ultimately, the success of this new government-mandated project will depend on NIST’s ability to overcome these unique challenges while relying on strong partnerships and new business funding initiatives.

The growing concern about AI-powered cyber threats

AI’s entry into the business world and our personal lives has been met with considerable optimism about its potential applications. More and more organizations are now adopting the technology to inject new levels of efficiency and automation into their operations.

However, this disruptive technology also has a much darker side, one that has escalated in severity over the years. AI has become a core component of many modern cybersecurity threats, enabling highly adaptable and effective methods for orchestrating attacks.

The introduction of newer technologies like AI into cyber criminals’ arsenals has contributed to projections that cyberattack damages will reach $10.5 trillion annually by 2025. Much of this growth can be attributed to the sheer scale of AI-driven attacks now taking place.

Unlike traditional attack methods, which relied heavily on human intervention to plan and execute various attack vectors, AI allows cyber criminals to operate at a far more autonomous and anonymous scale. This includes deploying automated vulnerability discovery software that accelerates the development of zero-day exploits and successful malware injections.

Deepfake threats are another product of AI technology with potentially significant political and economic consequences. Manipulated audio and video recordings, which grow more believable every day, can create a range of security issues and may even play a role in steering political elections or compromising critical infrastructure.

These growing concerns have led governments to prioritize new initiatives focused on placing more control over how AI technology is used and regulated.

NIST’s tall order and what it entails

NIST, originally founded in 1901 as the National Bureau of Standards, has operated for over a century within the U.S. Department of Commerce, with a mission to promote standards in science and technology that improve security and quality of life for everyone.

Building on these efforts, the Biden administration announced in June of last year that NIST would focus on a new project extending the NIST AI Risk Management Framework to help address and regulate the rapid growth of AI technology.

As part of the new project, NIST will extend its investigatory scope beyond cybersecurity to address the global risks associated with the misuse of AI technology in society. This includes designing complex testing protocols to ensure the technology is used ethically and maintains a level of security that prevents misuse.

One of the main subjects NIST will focus on over the coming months is the rise of generative AI, given its fast adoption rate in business environments around the world. In support of this effort, NIST will work with other organizations to develop new standards and best practices for the responsible development and use of generative AI in commercial settings.

What challenges is NIST facing in the fight against AI-driven cyberattacks?

Although NIST is widely viewed as playing a critical role in ensuring better security practices across industries and sectors, the path in front of the agency hasn’t been easy to walk. NIST currently faces major financial issues that impact its ability to see its mission through.

For several years now, the 123-year-old government building that houses NIST’s R&D department has been in a state of disrepair, with rain leaks and mold becoming major issues. Budget constraints have been the main culprit, and newly proposed government budget plans would cut the organization’s funding by a further 10%.

Considering the ambitious plan instituted by the Biden administration, the future of this initiative could be in jeopardy without intervention. Insufficient funding will restrict the scope and scale of what NIST can undertake and may delay the introduction of essential new security tools and guidelines.

With AI technology gaining momentum, NIST is at a crossroads in finding the support it needs to keep pace with increasingly advanced security threats.

NIST looking forward

Addressing NIST’s funding shortfall is one of the most important challenges the organization faces right now.

Increased financial support — whether through federal funding initiatives or through public and private partnerships — can play a big part in bringing in more qualified talent, including top security researchers and engineers who can help accelerate the discovery and mitigation of specific AI risks.

In addition to funding, NIST would benefit greatly from collaborating with other industry leaders in the security and technology sectors. Organizations like Google and Amazon have been leaders in AI adoption and have funded their own security initiatives surrounding its use.

While the long-term success of NIST’s new mission will no doubt depend on improving its current funding situation, we may start to see some significant improvements in how AI technology is safely used across various industries as more organizations recognize the importance of NIST’s work and lend their support.
