April 25, 2024 By Josh Nadeau 4 min read

Last year, the United States Secretary of Commerce announced that the National Institute of Standards and Technology (NIST) would lead a new public working group on artificial intelligence (AI), building on the success of the NIST AI Risk Management Framework to address this rapidly advancing technology.

However, recent budget cuts at NIST, along with a lack of strategy implementation, have called into question the agency’s ability to lead this critical effort. Ultimately, the success of this new government-mandated project will depend on NIST’s ability to overcome these unique challenges while relying on strong partnerships and new business funding initiatives.

The growing concern about AI-powered cyber threats

AI’s entry into the business world and our personal lives has been met with enthusiasm about its potential applications. More and more organizations are now adopting the technology to inject new levels of efficiency and automation into their operations.

However, a much darker side of this new disruptive technology has continued to escalate in severity over the years. AI technology has become a core component of many modern cybersecurity threats, allowing for highly adaptable and effective methods for orchestrating attacks.

The addition of technologies like AI to cyber criminals’ arsenals has contributed to projections that cyberattack damages will reach $10.5 trillion annually by 2025. Much of this growth can be attributed to the sheer scale at which AI-driven attacks now take place.

Unlike traditional attack methods, which relied heavily on human intervention to plan and execute various attack vectors, AI allows cyber criminals to operate at a far more autonomous and anonymous scale. This includes deploying automated vulnerability discovery tools that accelerate the development of zero-day exploits and successful malware injections.

Deepfake threats are another product of AI technology that can have significant political and economic consequences. Manipulated audio and video recordings that are becoming more believable every day can lead to several security issues and can even play a role in steering political elections or compromising critical infrastructure.

These growing concerns have prompted governments to prioritize new initiatives aimed at exerting more control over how AI technology is used and regulated.


NIST’s tall order and what it entails

NIST, founded in 1901 as the National Bureau of Standards, has operated for over a century within the U.S. Department of Commerce, with a mission to promote standards in science and technology that improve security and quality of life for everyone.

In an effort to continuously improve on these initiatives, the Biden administration announced in June of last year that NIST would focus its efforts on a new project building off of the NIST AI Risk Management Framework to help address and regulate the rapid growth of AI technology.

As part of the new project, NIST will extend its investigatory scope beyond cybersecurity to address the global risks associated with the misuse of AI in society. This includes designing complex testing protocols to ensure that the technology is used ethically and maintains the level of security needed to prevent misuse.

One of NIST’s main focuses over the coming months will be the rise of generative AI, given its rapid adoption in business environments around the world. In support of this effort, NIST will work with other organizations to develop new standards and best practices for the responsible development and use of generative AI in commercial settings.

What challenges is NIST facing in the fight against AI-driven cyberattacks?

Although NIST is widely viewed as playing a critical role in promoting better security practices across industries and sectors, the path ahead hasn’t been easy. The agency currently faces major financial issues that threaten its ability to see its mission through.

For several years, the 123-year-old government building that houses NIST’s R&D department has been in a state of disrepair, with rain leaks and mold becoming major issues. Budget constraints are the main culprit, and proposed government plans would cut the agency’s funding by a further 10%.

Given the ambitious plan set out by the Biden administration, the initiative’s future could be in jeopardy without intervention. Insufficient funding will restrict the scope and scale of what NIST can undertake and may delay the introduction of essential new security tools and guidelines.

With AI technology gaining momentum, NIST is at a crossroads in finding the support it needs to keep pace with increasingly advanced security threats.

NIST looking forward

Addressing NIST’s funding issue is one of the most important challenges the organization is facing right now.

Increased financial support, whether through federal funding initiatives or public and private partnerships, can play a big part in bringing in more qualified talent. This includes the ability to work with top security researchers and engineers who can help accelerate the discovery and mitigation of specific AI risks.

In addition to securing funding, NIST would benefit greatly from collaborating with industry leaders in the security and technology sectors. Organizations like Google and Amazon have been leaders in AI adoption and have funded their own security initiatives around its use.

While the long-term success of NIST’s new mission will no doubt depend on improving its funding situation, we may start to see significant improvements in how AI is safely used across industries as more organizations recognize the importance of NIST’s work and lend their support.
