What does a master IBM inventor who typically models brain activity have to do with enterprise security? If you ask James Kozloski, you won’t get a quick answer, but it will definitely be an interesting one.

Kozloski, who is a manager of computational neuroscience and multiscale brain modeling for IBM Research, is always coming up with new ideas. He was recently part of a team of IBMers that received a security patent for a cognitive honeypot — or, in patent parlance, “an electronic communication evaluating device [that] determines a suspicion level for an initial electronic communication.” That’s a lot of jargon, so let’s break down this clever invention step by step.

What Makes Humans Tick?

Most of us know what honeypots are. The concept of trapping malware authors by simulating an unsuspecting user who happens upon an infected site goes back well over a decade. Microsoft and Google have used honeypots and honeynets in this fashion for years, and they have been effective at surfacing new malware techniques. Open-source efforts such as the German Honeynet Project have also helped security professionals develop new honeypots.

Much of Kozloski’s background is in computational biology, where he uses high-performance computing clusters to simulate various neural components and ultimately build models to illustrate how the brain works. To that end, his team seeks to understand how the brain fails — specifically, how the failure of certain parts of the brain affects individuals suffering from various diseases. For example, the team has worked to model Huntington’s disease, a pernicious malady in which brain cells degenerate over time.

Several years ago, Kozloski was standing by his office printing station when he happened to engage another IBM employee, Clifford Pickover, in a discussion about reducing the wait time for their print jobs. That casual conversation inspired Kozloski to launch a tireless quest to understand what makes people tick.

The Cognitive Honeypot: A Sweet Solution to Spear Phishing

So what does any of this have to do with enterprise security? Security professionals must deal with the ever-present threat of spear phishing, in which a single spam message, if opened by an unsuspecting user, can infect the entire corporate network. For cybercriminals, this type of attack is a numbers game: If they send a sufficient volume of fraudulent emails, at least a few users are bound to open their malicious contents eventually.

But what if you could develop a honeypot to mimic a clueless user and respond to a spammer with the kind of email that would suggest that the spear phishing attempt succeeded? Better yet, what if you could overwhelm the spammer with hundreds of these false positive messages, thereby forcing him or her to spend valuable time distinguishing between actual human responses and those generated by automated bots? Turnabout is fair play, after all.

That’s exactly what Kozloski had in mind with his invention.

“The trick is doing this in such a way that it isn’t distinguishable from a human subject’s response,” he said. “For example, it could mimic an elderly user who is responding to an email about winning a lottery or someone supposedly in trouble overseas with appropriate human responses.” The genius of the idea is that it consumes the attacker’s most critical resource: time.
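To make the idea concrete, here is a minimal sketch of how such a responder might work. The patent describes a device that "determines a suspicion level" for a message and then generates a human-like reply; the actual scoring model and reply generation are not public, so everything below — the phrase list, the threshold, the canned replies — is a hypothetical, simplified stand-in:

```python
import random
from typing import Optional

# Hypothetical keyword list; a real system would use a trained classifier.
SUSPICIOUS_PHRASES = ["lottery", "wire transfer", "urgent", "prince", "stranded overseas"]

# Hypothetical canned decoys mimicking a naive user; a real system would
# generate varied, context-aware replies so they aren't distinguishable.
DECOY_REPLIES = [
    "Oh my, this sounds wonderful! What do I need to do next?",
    "I'd love to help. Where should I send the money?",
    "This is so exciting! Please send me the details.",
]

def suspicion_level(message: str) -> float:
    """Return a 0..1 suspicion score based on suspicious-phrase hits."""
    text = message.lower()
    hits = sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    return min(1.0, hits / 3)

def decoy_reply(message: str, threshold: float = 0.3) -> Optional[str]:
    """If the message looks like spear phishing, answer like a gullible user.

    Benign mail gets no honeypot response; suspicious mail gets a decoy
    reply designed to waste the attacker's time.
    """
    if suspicion_level(message) >= threshold:
        return random.choice(DECOY_REPLIES)
    return None
```

The design point the sketch illustrates is the asymmetry Kozloski describes: generating each decoy reply costs the defender almost nothing, while sorting real victims from bots costs the attacker human attention.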

The honeypot project is an active area of study for IBM Research, and while the patent has yet to produce an actual product, one could be just around the corner.

Changing the World, One Security Patent at a Time

Kozloski has been at IBM since 2001. Since then, he has written 10 papers and contributed to more than 100 patents. He is a member of a small group of several dozen master inventors at IBM, including Lisa Secat DeLuca, the most prolific female inventor in IBM’s history. Last year, IBM inventors were granted more than 9,000 U.S. patents, leading the way in that category for the 25th consecutive year.

Master inventors typically serve for a three-year term before being evaluated to potentially serve an additional term. As part of his work, Kozloski leads regular workshops to teach other IBM employees to collaborate and come up with new inventions.

But does the master inventor himself feel any day-to-day pressure to dream up new ideas of his own?

“It is a cool title, to be sure,” he said, “but it’s more about the work with my team and being recognized for doing something innovative.” Kozloski’s team stretches around the globe, including colleagues in Israel and Hungary who helped formulate the honeypot idea. “Being a single inventor is hard, but when you’re a part of a team you can leverage each other’s skills and interests and be more productive,” he said.

It’s like that old proverb: “If you want to go quickly, go alone. If you want to go far, go together.”
