Machine learning researcher Irina Nicolae is here to dispel a common misconception: You don’t have to be a math whiz to end up working in technology.

Growing up in Bucharest, Romania, Irina had relatively little interest in numbers. She was, however, captivated by machinery and how different parts fit together to perform a task. It was this fascination that eventually led her to programming.

Today, Irina is turning her longtime passion into action in her role as a research scientist at IBM Research – Ireland. She is studying one of today’s most pressing cybersecurity problems: What can be done to keep artificial intelligence (AI) safe from attacks? (And she still gets excited to see the models at work.)

Turning Theoretical Concepts Into Practical Applications

Although Irina only graduated five years ago, she has found herself at the forefront of IBM’s efforts to battle adversarial AI threats. After studying computer science and engineering in her native Bucharest and at the National School of Computer Science and Applied Mathematics in France, she joined the IBM Research team in Dublin to dive headfirst into the most cutting-edge security technology.

Her personal interests range from adversarial AI to Mahalanobis distance and Rademacher complexity (which she researched for her Ph.D.). So, it’s not surprising to hear her say she would have stayed in academia had she not brought her research skills to the corporate world.

At IBM, Irina gets to see her research applied to real-world technology, and she loves that her work is guided by practical applications rather than pure theory.

“To me, it’s the relevance to the modern world,” she said of her role. “On the one hand, it’s a very interesting research problem because we don’t have the full answer. The problem itself has some very interesting properties that make it challenging and fun to analyze.

“On the other hand, to me, it has huge practical impact because, so far, we haven’t seen so many AIs out there — but we’re seeing more and more of them today. As soon as more decision processes are based on these AIs, of course, people are going to try to attack them for profit.”

AI Research: The Importance of Vulnerabilities

For Irina, researching the vulnerabilities in AI and machine learning is crucial. To demonstrate why, she raised the example of neural networks.

“We’ve known about neural networks for the last 30 years, but they were forgotten for a while by the community because they weren’t performing well enough, and have only regained traction in recent years,” Irina explained. “Now, imagine if we couldn’t use AI and deep learning in applications because of security vulnerabilities — if people said this technology has amazing performance, but it’s unreliable because it can always be attacked. To me, there’s this risk of AI, deep learning and machine learning being forgotten again by the community because they are unreliable or, even worse, being used in spite of the risks.”

That’s why Irina is in Dublin, working within a team of five to probe vulnerabilities in AI and machine learning so that we can all use these technologies safely. The same security concerns that affect any other computer-based system also apply to AI, Irina said.

To protect against these threats, security teams need insights specific to the medium at hand. While Irina said this is a “very active research field,” she also noted that researchers have thus far been more successful at attacking AI systems to exploit vulnerabilities than at defending them effectively.
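To make the attackers’ advantage concrete, here is a minimal sketch of one classic evasion attack from the research literature, the fast gradient sign method (FGSM) of Goodfellow et al. This is an illustration only; the model and data below are placeholders invented for the example, not anything from IBM’s systems:

```python
# Illustrative only: the fast gradient sign method (FGSM), a classic
# evasion attack from the adversarial machine learning literature.
# The model and data here are placeholders, not IBM's actual systems.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Nudge input x in the direction that increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # small, bounded perturbation
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

# Hypothetical usage with a tiny stand-in classifier and fake images:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
images = torch.rand(8, 1, 28, 28)
labels = torch.randint(0, 10, (8,))
adversarial = fgsm_attack(model, images, labels)
```

A perturbation bounded by a small epsilon is often imperceptible to a human, yet it can be enough to flip a model’s prediction, which is part of why the attack side of the field has moved faster than the defense side.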

Building Defenses Against the Unknown

The next step is building defenses.

“The problem currently is that none of the existing defense methods actually solve the problem. All of them are partial answers. Most will only work in certain conditions, against certain attacks, only if the attack is not too strong, only if the attacker doesn’t have full access to the system, etc.,” Irina explained. “What we’re looking into is what would make a good defense for AI against all types of attacks. We want to remove the vulnerabilities that we’re aware of and build a defense against the still-unknown ones.”
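To illustrate the kind of partial defense Irina describes, here is a minimal sketch of adversarial training, again with a placeholder model and random stand-in data. Training on perturbed inputs hardens the model against the specific attack used during training, but it gives no guarantee against stronger or unseen attacks, which is exactly the limitation she points out:

```python
# Illustrative only: adversarial training, a partial defense. The model
# sees perturbed inputs during training, so it learns to resist the
# specific attack used to craft them (here, the FGSM sketch above).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def adversarial_train_step(x, y, epsilon=0.03):
    # Craft adversarial examples on the fly.
    x_req = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_req), y).backward()
    x_adv = (x_req + epsilon * x_req.grad.sign()).clamp(0, 1).detach()

    # Update the model on the perturbed batch instead of the clean one.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with random stand-in data:
for _ in range(3):
    adversarial_train_step(torch.rand(8, 1, 28, 28),
                           torch.randint(0, 10, (8,)))
```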

Naturally, Irina wants to see AI and machine learning succeed so they can become a bigger part of our daily lives and free security teams to focus on more pressing tasks and big-picture strategies. It plays to her longtime interest in machinery and how it’s all put together.

As she continues her research, Irina gets to indulge her love of complex problems and take satisfaction in the fact that what was once a childhood fascination is today helping make the modern world a safer place to live.
