Machine learning researcher Irina Nicolae is here to dispel a common misconception: You don’t have to be a math whiz to end up working in technology.

Growing up in Bucharest, Romania, Irina had relatively little interest in numbers. She was, however, captivated by machinery and how different parts fit together to perform a task. It was this fascination that eventually led her to programming.

Today, Irina is turning her longtime passion into action in her role as a research scientist at IBM Research – Ireland. She is studying one of today’s most pressing cybersecurity problems: What can be done to keep artificial intelligence (AI) safe from attacks? (And she still gets excited to see the models at work.)

Turning Theoretical Concepts Into Practical Applications

Although Irina graduated only five years ago, she has found herself at the forefront of IBM’s efforts to battle adversarial AI threats. After studying computer science and engineering in her native Bucharest and at the National School of Computer Science and Applied Mathematics in France, she joined the IBM Research team in Dublin to dive headfirst into the most cutting-edge security technology.

Her personal interests range from adversarial AI to Mahalanobis distance and Rademacher complexity (which she researched for her Ph.D.). So, it’s not surprising to hear her say she would have stayed in academia had she not brought her research skills to the corporate world.

At IBM, Irina gets to see her research applied to real-world technology — and she loves that her work is guided by practical applications rather than pure theory.

“To me, it’s the relevance to the modern world,” she said of her role. “On the one hand, it’s a very interesting research problem because we don’t have the full answer. The problem itself has some very interesting properties that make it challenging and fun to analyze.

“On the other hand, to me, it has huge practical impact because, so far, we haven’t seen so many AIs out there — but we’re seeing more and more of them today. As soon as more decision processes are based on these AIs, of course, people are going to try to attack them for profit.”

AI Research: The Importance of Vulnerabilities

For Irina, researching the vulnerabilities in AI and machine learning is crucial. To demonstrate why, she raised the example of neural networks.

“We’ve known about neural networks for the last 30 years, but they were forgotten for a while by the community because they weren’t performing well enough, and have only regained traction in recent years,” Irina explained. “Now, imagine if we couldn’t use AI and deep learning in applications because of security vulnerabilities — if people said this technology has amazing performance, but it’s unreliable because it can always be attacked. To me, there’s this risk of AI, deep learning and machine learning being forgotten again by the community because they are unreliable or, even worse, being used in spite of the risks.”

That’s why Irina is in Dublin, working with a team of five to probe vulnerabilities in AI and machine learning so that we can all use these technologies safely. The same security concerns that affect any other computer-based system also apply to AI, Irina said.

To protect against these threats, security teams need insights specific to the medium at hand. While Irina said this is a “very active research field,” she also noted that researchers have thus far been more successful at attacking AI to expose its vulnerabilities than at defending it effectively.
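To give a concrete sense of what such an attack can look like, below is a minimal sketch of the fast gradient sign method (FGSM), one of the best-known evasion attacks: it nudges an input in the direction that most increases the model’s loss, producing a perturbation small enough for a human to miss but often large enough to flip the model’s prediction. This PyTorch helper is illustrative only; the model, loss function and epsilon budget are placeholder assumptions, not details from Irina’s work.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    """Craft an adversarial example with the fast gradient sign method.

    Illustrative sketch: model, loss_fn, inputs x and labels y are
    assumed placeholders for any PyTorch image classifier setup.
    """
    # Work on a fresh leaf tensor so we can take gradients w.r.t. the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Take one step in the direction that increases the loss the most,
    # bounded by epsilon under the L-infinity norm.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed input inside the valid [0, 1] pixel range.
    return perturbed.clamp(0.0, 1.0).detach()
```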

Building Defenses Against the Unknown

The next step is building defenses.

“The problem currently is none of the existing defense methods actually solve the problem. All of them are partial answers. Most will only work in certain conditions, against certain attacks, only if the attack is not too strong, only if the attacker doesn’t have full access to the system, etc.,” Irina explained. “What we’re looking into is to solve the problem of what would be a good defense for AI against all types of attacks. We want to remove the vulnerabilities that we’re aware of and build a defense against the still-unknown ones.”
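Adversarial training, which retrains the model on the very examples that fool it, is a good illustration of the partial answers Irina describes: it hardens a model against the attack used during training but offers no guarantee against stronger or unseen attacks. The sketch below is a hypothetical example that reuses the fgsm_attack helper from earlier and assumes a standard PyTorch model, loss function and optimizer.

```python
def adversarial_training_step(model, loss_fn, optimizer, x, y, epsilon=0.03):
    """Run one step of adversarial training: fit the model on adversarial
    examples so it learns to resist the attack used to generate them.

    Builds on the fgsm_attack sketch above; all arguments are assumed
    placeholders for an ordinary PyTorch training loop.
    """
    model.train()
    # Generate an adversarially perturbed batch with the attack above.
    x_adv = fgsm_attack(model, loss_fn, x, y, epsilon)
    # Clear gradients accumulated while crafting the attack, then train
    # on the perturbed batch as if it were ordinary data.
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the defense is tied to the specific attack seen during training, it works “in certain conditions, against certain attacks,” exactly the limitation the quote points out.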

Naturally, Irina wants to see AI and machine learning succeed so they can become a bigger part of our daily lives and free security teams to focus on more pressing tasks and big-picture strategies. It plays to her longtime interest in machinery and how it’s all put together.

As she continues her research, Irina gets to indulge her love of complex problems and take satisfaction in the fact that what was once a childhood fascination is today helping make the modern world a safer place to live.
