What does a master IBM inventor who typically models brain activity have to do with enterprise security? If you ask James Kozloski, you won’t get a quick answer, but it will definitely be an interesting one.

Kozloski, who is a manager of computational neuroscience and multiscale brain modeling for IBM Research, is always coming up with new ideas. He was recently part of a team of IBMers that received a security patent for a cognitive honeypot — or, in patent parlance, “an electronic communication evaluating device [that] determines a suspicion level for an initial electronic communication.” That’s a lot of jargon, so let’s break down this clever invention step by step.

What Makes Humans Tick?

Most of us know what honeypots are. The concept of trying to trap malware authors by simulating an unsuspecting user who happens upon an infected site goes back more than a decade. Microsoft and Google have used honeypots and honeynets in this fashion for years, and they have been effective at uncovering new malware techniques. Open-source efforts such as the German Honeynet Project have also helped security professionals develop new honeypots.

Much of Kozloski’s background is in computational biology, where he uses high-performance computing clusters to simulate various neural components and ultimately build models to illustrate how the brain works. To that end, his team seeks to understand how the brain fails — specifically, how the failure of certain parts of the brain affects individuals suffering from various diseases. For example, the team has worked to model Huntington’s disease, a pernicious malady in which brain cells degenerate over time.

Several years ago, Kozloski was standing by his office printing station when he happened to engage another IBM employee, Clifford Pickover, in a discussion about reducing the wait time for their print jobs. That casual conversation inspired Kozloski to launch a tireless quest to understand what makes people tick.

The Cognitive Honeypot: A Sweet Solution to Spear Phishing

So what does any of this have to do with enterprise security? Security professionals must deal with the ever-present threat of spear phishing, in which a single fraudulent email, if opened by an unsuspecting user, can compromise the entire corporate network. For cybercriminals, this type of attack is a numbers game: If they send a sufficient volume of fraudulent emails, at least a few users are bound to open their malicious contents eventually.

But what if you could develop a honeypot to mimic a clueless user and respond to a spammer with the kind of email that would suggest that the spear phishing attempt succeeded? Better yet, what if you could overwhelm the spammer with hundreds of these false positive messages, thereby forcing him or her to spend valuable time distinguishing between actual human responses and those generated by automated bots? Turnabout is fair play, after all.
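The turnabout idea above can be sketched in a few lines of code. This is a toy illustration only, not the mechanism described in the patent: the keyword list, reply templates, and function names are all invented here to show how a honeypot might flag a suspicious message and answer it with a flood of varied decoy replies.

```python
import random

# Invented keywords that mark a message as a likely scam.
SCAM_KEYWORDS = ["lottery", "wire transfer", "inheritance", "urgent payment"]

# Invented fragments combined at random so the decoys are not identical.
OPENERS = ["Hello,", "Dear sir,", "Hi there,", "Good day,"]
BODIES = [
    "this sounds like a great opportunity. What should I do next?",
    "I'm very interested. Can you send more details?",
    "I tried to follow your instructions but got confused. Please advise.",
]

def is_suspicious(message: str) -> bool:
    """Flag a message if it contains any known scam keyword."""
    text = message.lower()
    return any(keyword in text for keyword in SCAM_KEYWORDS)

def decoy_flood(message: str, count: int = 200, seed: int = 0) -> list:
    """Return `count` varied decoy replies for a suspicious message."""
    if not is_suspicious(message):
        return []  # ordinary mail gets no decoys
    rng = random.Random(seed)
    return [f"{rng.choice(OPENERS)} {rng.choice(BODIES)}"
            for _ in range(count)]

replies = decoy_flood("Claim your lottery winnings via wire transfer now!")
print(len(replies))  # hundreds of decoys per phishing message
```

Even this crude version captures the economics of the idea: generating a decoy costs the defender nothing, while sorting real victims from bots costs the attacker time.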

That’s exactly what Kozloski had in mind with his invention.

“The trick is doing this in such a way that it isn’t distinguishable from a human subject’s response,” he said. “For example, it could mimic an elderly user who is responding to an email about winning a lottery or someone supposedly in trouble overseas with appropriate human responses.” The genius of the idea is that it consumes the attacker’s most critical resource: time.
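To make the mimicry concrete, here is a minimal persona sketch along the lines Kozloski describes. The scam categories, keyword matching, and reply templates below are invented for illustration; a real system would need far richer language generation to be indistinguishable from a human respondent.

```python
import random
from typing import Optional

# Invented reply templates keyed by apparent scam type.
PERSONAS = {
    "lottery": [
        "Goodness, I never win anything! How do I claim my prize?",
        "My grandson says these are scams, but this looks real. What next?",
    ],
    "stranded": [
        "That sounds awful, dear. How much do you need to get home?",
        "I can help, but I've never sent money abroad before. Walk me through it?",
    ],
}

def mimic_reply(message: str, rng: Optional[random.Random] = None) -> str:
    """Pick a persona reply matching the apparent scam type."""
    rng = rng or random.Random()
    text = message.lower()
    if "lottery" in text or "prize" in text:
        return rng.choice(PERSONAS["lottery"])
    if "stranded" in text or "overseas" in text:
        return rng.choice(PERSONAS["stranded"])
    # Fallback: a vague but plausible human response.
    return "I'm sorry, I don't quite understand. Could you explain again?"

print(mimic_reply("You have won the national lottery! Reply to claim."))
```

The randomized template choice hints at why this consumes attacker time: each reply looks slightly different, so the attacker cannot filter the bots with a single pattern match.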

The honeypot project is an active area of study for IBM Research, and while the patent has yet to produce an actual product, one could be just around the corner.

Changing the World, One Security Patent at a Time

Kozloski has been at IBM since 2001. Since then, he has written 10 papers and contributed to more than 100 patents. He is a member of a small group of several dozen master inventors at IBM, including Lisa Seacat DeLuca, the most prolific female inventor in IBM’s history. Last year, IBM inventors were granted more than 9,000 U.S. patents, leading the way in that category for the 25th consecutive year.

Master inventors typically serve for a three-year term before being evaluated to potentially serve an additional term. As part of his work, Kozloski leads regular workshops to teach other IBM employees to collaborate and come up with new inventions.

But does the master inventor himself feel any day-to-day pressure to dream up new ideas of his own?

“It is a cool title, to be sure,” he said, “but it’s more about the work with my team and being recognized for doing something innovative.” Kozloski’s team stretches around the globe, including colleagues in Israel and Hungary who helped formulate the honeypot idea. “Being a single inventor is hard, but when you’re a part of a team you can leverage each other’s skills and interests and be more productive,” he said.

It’s like that old proverb: “If you want to go quickly, go alone. If you want to go far, go together.”
