Machine learning researcher Irina Nicolae is here to dispel a common misconception: You don’t have to be a math whiz to end up working in technology.

Growing up in Bucharest, Romania, Irina had relatively little interest in mathematics. She was, however, captivated by machinery and how different parts fit together to perform a task. It was this fascination that eventually led her to programming.

Today, Irina is turning her longtime passion into action in her role as a research scientist at IBM Research – Ireland. She is studying one of today’s most pressing cybersecurity problems: What can be done to keep artificial intelligence (AI) safe from attacks? (And she still gets excited to see the models at work.)

Turning Theoretical Concepts Into Practical Applications

Although Irina only graduated five years ago, she has found herself at the forefront of IBM’s efforts to battle adversarial AI threats. After studying computer science and engineering in her native Bucharest and at the National School of Computer Science and Applied Mathematics in France, she joined the IBM Research team in Dublin to dive headfirst into the most cutting-edge security technology.

Her personal interests range from adversarial AI to Mahalanobis distance and Rademacher complexity (which she researched for her Ph.D.). So, it’s not surprising to hear her say she would have stayed in academia had she not brought her research skills to the corporate world.

At IBM, Irina gets to see her research applied to real-world technology — and she loves that her work is guided by practical applications rather than pure theory.

“To me, it’s the relevance to the modern world,” she said of her role. “On the one hand, it’s a very interesting research problem because we don’t have the full answer. The problem itself has some very interesting properties that make it challenging and fun to analyze.

“On the other hand, to me, it has huge practical impact because, so far, we haven’t seen so many AIs out there — but we’re seeing more and more of them today. As soon as more decision processes are based on these AIs, of course, people are going to try to attack them for profit.”

AI Research: The Importance of Vulnerabilities

For Irina, researching the vulnerabilities in AI and machine learning is crucial. To demonstrate why, she raised the example of neural networks.

“We’ve known about neural networks for the last 30 years, but they were forgotten for a while by the community because they weren’t performing well enough, and have only regained traction in recent years,” Irina explained. “Now, imagine if we couldn’t use AI and deep learning in applications because of security vulnerabilities — if people said this technology has amazing performance, but it’s unreliable because it can always be attacked. To me, there’s this risk of AI, deep learning and machine learning being forgotten again by the community because they are unreliable or, even worse, being used in spite of the risks.”

That’s why Irina is in Dublin, working within a team of five to probe vulnerabilities in AI and machine learning so that we can all use the technology safely. The same security concerns that affect any other computer-based system also apply to AI, Irina said.

To protect against these threats, security teams need insights specific to the medium at hand. While Irina said this is a “very active research field,” she also noted that researchers have thus far been more successful at attacking AI systems to exploit vulnerabilities than at defending them effectively.
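To see why attacking tends to be easier than defending, consider the classic adversarial-example construction. The sketch below is a toy illustration (all names and values are hypothetical, not from IBM’s work): a linear classifier stands in for a trained model, and a perturbation is taken along the fast gradient sign direction with just enough budget to push the input across the decision boundary.

```python
import numpy as np

# Hypothetical setup: a linear classifier standing in for a trained model.
rng = np.random.default_rng(0)
w = rng.normal(size=20)   # "trained" weights
b = 0.0
x = rng.normal(size=20)   # a clean input the model classifies

def predict(x):
    return 1 if w @ x + b > 0 else 0

score = w @ x + b
# For a linear score, the fast gradient sign direction is +/- sign(w).
# Step against the current prediction with just enough budget (eps) to
# cross the decision boundary.
direction = -np.sign(w) if score > 0 else np.sign(w)
eps = (abs(score) + 1e-3) / np.abs(w).sum()
x_adv = x + eps * direction

print(predict(x), predict(x_adv))  # the predicted label flips
```

The asymmetry the quote describes shows up here: the attacker only needs one direction that crosses the boundary, while a defender must somehow rule out every such direction at once.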

Building Defenses Against the Unknown

The next step is building defenses.

“The problem currently is none of the existing defense methods actually solve the problem. All of them are partial answers. Most will only work in certain conditions, against certain attacks, only if the attack is not too strong, only if the attacker doesn’t have full access to the system, etc.,” Irina explained. “What we’re looking into is to solve the problem of what would be a good defense for AI against all types of attacks. We want to remove the vulnerabilities that we’re aware of and build a defense against the still-unknown ones.”
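One of the partial answers Irina alludes to is adversarial training: fitting the model on worst-case perturbed copies of its own training data. The sketch below (a toy logistic-regression example with made-up data, not IBM’s method) shows the idea, and also why it is only partial — robustness is bought against one attack at one budget, not against all attacks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data: two Gaussian blobs with labels 0 and 1.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fgsm(X, y, w, b, eps):
    # Input gradient of the logistic loss for a linear model is (p - y) * w.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w
    return X + eps * np.sign(grad_x)

w, b = np.zeros(2), 0.0
lr, eps = 0.1, 0.3
for _ in range(200):
    # Adversarial training: augment each step with perturbed copies of the data.
    X_adv = fgsm(X, y, w, b, eps)
    X_batch = np.vstack([X, X_adv])
    y_batch = np.concatenate([y, y])
    p = sigmoid(X_batch @ w + b)
    w -= lr * X_batch.T @ (p - y_batch) / len(y_batch)
    b -= lr * (p - y_batch).mean()

acc_clean = ((sigmoid(X @ w + b) > 0.5) == y).mean()
acc_adv = ((sigmoid(fgsm(X, y, w, b, eps) @ w + b) > 0.5) == y).mean()
print(acc_clean, acc_adv)
```

Note the caveat built into the code: the defense is trained against this one attack at this one `eps`, which is exactly the “only in certain conditions, against certain attacks” limitation from the quote.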

Naturally, Irina wants to see AI and machine learning succeed so they can become a bigger part of our daily lives and free security teams to focus on more pressing tasks and big-picture strategies. It plays to her long-time interest in machinery and how it’s all put together.

As she continues her research, Irina gets to indulge her love of complex problems and take satisfaction in the fact that what was once a childhood fascination is today helping make the modern world a safer place to live.
