Machine learning researcher Irina Nicolae is here to dispel a common misconception: You don’t have to be a math whiz to end up working in technology.

Growing up in Bucharest, Romania, Irina had relatively little interest in mathematics. She was, however, captivated by machinery and how different parts fit together to perform a task. It was this fascination that eventually led her to programming.

Today, Irina is turning her longtime passion into action in her role as a research scientist at IBM Research – Ireland. She is studying one of today’s most pressing cybersecurity problems: What can be done to keep artificial intelligence (AI) safe from attacks? (And she still gets excited to see the models at work.)

Turning Theoretical Concepts Into Practical Applications

Although Irina graduated only five years ago, she has found herself at the forefront of IBM’s efforts to battle adversarial AI threats. After studying computer science and engineering in her native Bucharest and at the National School of Computer Science and Applied Mathematics in France, she joined the IBM Research team in Dublin to dive headfirst into the most cutting-edge security technology.

Her personal interests range from adversarial AI to the Mahalanobis distance and Rademacher complexity, topics she researched for her Ph.D. So, it’s not surprising to hear her say she would have stayed in academia had she not brought her research skills to the corporate world.
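
For readers unfamiliar with the first of her Ph.D. topics: the Mahalanobis distance measures how far a point lies from the center of a distribution, rescaled by that distribution’s covariance, which makes it a standard tool for spotting outliers. Here is a minimal NumPy sketch; the sample data are purely illustrative:

```python
import numpy as np

# Mahalanobis distance: d(x, mu) = sqrt((x - mu)^T * inv(Sigma) * (x - mu)).
# Unlike plain Euclidean distance, it accounts for how the data are spread.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))   # illustrative sample: 500 points, 3 features
mu = data.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))

def mahalanobis(x, mu, cov_inv):
    diff = x - mu
    return np.sqrt(diff @ cov_inv @ diff)

print(mahalanobis(data[0], mu, cov_inv))
```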

At IBM, Irina gets to see her research applied to real-world technology — and she loves that her work is guided by practical applications rather than theory alone.

“To me, it’s the relevance to the modern world,” she said of her role. “On the one hand, it’s a very interesting research problem because we don’t have the full answer. The problem itself has some very interesting properties that make it challenging and fun to analyze.

“On the other hand, to me, it has huge practical impact because, so far, we haven’t seen so many AIs out there — but we’re seeing more and more of them today. As soon as more decision processes are based on these AIs, of course, people are going to try to attack them for profit.”

AI Research: The Importance of Vulnerabilities

For Irina, researching the vulnerabilities in AI and machine learning is crucial. To demonstrate why, she raised the example of neural networks.

“We’ve known about neural networks for the last 30 years, but they were forgotten for a while by the community because they weren’t performing well enough, and have only regained traction in recent years,” Irina explained. “Now, imagine if we couldn’t use AI and deep learning in applications because of security vulnerabilities — if people said this technology has amazing performance, but it’s unreliable because it can always be attacked. To me, there’s this risk of AI, deep learning and machine learning being forgotten again by the community because they are unreliable or, even worse, being used in spite of the risks.”

That’s why Irina is in Dublin, working within a team of five to probe vulnerabilities in AI and machine learning so that we can all use these technologies safely. The same security concerns that affect any other computer-based system also apply to AI, Irina said.

To protect against these threats, security teams need insights specific to the medium at hand. While Irina said this is a “very active research field,” she also noted that researchers have so far been more successful at attacking AI systems and exploiting their vulnerabilities than at defending them effectively.
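
To make the attack side concrete, here is a minimal PyTorch sketch of one of the best-known evasion attacks, the fast gradient sign method (FGSM). The model, inputs and perturbation budget epsilon are placeholders; this illustrates the general idea rather than any specific method used by Irina’s team:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Fast gradient sign method: nudge each input feature by
    +/- epsilon in the direction that most increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # A single signed-gradient step is often enough to change the
    # model's prediction while remaining nearly imperceptible.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```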

Building Defenses Against the Unknown

The next step is building defenses.

“The problem currently is none of the existing defense methods actually solve the problem. All of them are partial answers. Most will only work in certain conditions, against certain attacks, only if the attack is not too strong, only if the attacker doesn’t have full access to the system, etc.,” Irina explained. “What we’re looking into is to solve the problem of what would be a good defense for AI against all types of attacks. We want to remove the vulnerabilities that we’re aware of and build a defense against the still-unknown ones.”
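
One widely studied example of such a partial answer is adversarial training, in which the model learns from attacked inputs alongside clean ones. The sketch below is hypothetical and reuses the fgsm_attack helper from the earlier example; true to Irina’s point, it mainly raises the bar against attacks resembling the one used during training:

```python
def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One optimization step on a mix of clean and attacked inputs.
    A partial defense: it hardens the model against FGSM-like
    perturbations but offers no guarantee against other attacks."""
    x_adv = fgsm_attack(model, x, y, epsilon)  # hypothetical helper above
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```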

Naturally, Irina wants to see AI and machine learning succeed so they can become a bigger part of our daily lives and free security teams to focus on more pressing tasks and big-picture strategies. It plays to her longtime interest in machinery and how it’s all put together.

As she continues her research, Irina gets to indulge her love of complex problems and take satisfaction in the fact that what was once a childhood fascination is today helping to make the modern world a safer place to live.
