What does a master IBM inventor who typically models brain activity have to do with enterprise security? If you ask James Kozloski, you won’t get a quick answer, but it will definitely be an interesting one.

Kozloski, who is a manager of computational neuroscience and multiscale brain modeling for IBM Research, is always coming up with new ideas. He was recently part of a team of IBMers that received a security patent for a cognitive honeypot — or, in patent parlance, “an electronic communication evaluating device [that] determines a suspicion level for an initial electronic communication.” That’s a lot of jargon, so let’s break down this clever invention step by step.

What Makes Humans Tick?

Most of us know what honeypots are. The concept of trying to trap malware authors by simulating an unsuspecting user who happens upon an infected site goes back at least a decade, if not longer. Microsoft and Google have used honeypots and honeynets in this fashion for years, and they have been effective at locating new malware techniques. Open-source efforts such as the German Honeynet Project have also been useful in helping security professionals develop new honeypots.

Much of Kozloski’s background is in computational biology, where he uses high-performance computing clusters to simulate various neural components and ultimately build models to illustrate how the brain works. To that end, his team seeks to understand how the brain fails — specifically, how the failure of certain parts of the brain affects individuals suffering from various diseases. For example, the team has worked to model Huntington’s disease, a pernicious malady in which brain cells degenerate over time.

Several years ago, Kozloski was standing by his office printing station when he happened to engage another IBM employee, Clifford Pickover, in a discussion about reducing the wait time for their print jobs. That casual conversation inspired Kozloski to launch a tireless quest to understand what makes people tick.

The Cognitive Honeypot: A Sweet Solution to Spear Phishing

So what does any of this have to do with enterprise security? Security professionals must deal with the ever-present threat of spear phishing, in which a single spam message, if opened by an unsuspecting user, can infect the entire corporate network. For cybercriminals, this type of attack is a numbers game: If they send a sufficient volume of fraudulent emails, at least a few recipients are bound to open the malicious contents eventually.

But what if you could develop a honeypot to mimic a clueless user and respond to a spammer with the kind of email that would suggest that the spear phishing attempt succeeded? Better yet, what if you could overwhelm the spammer with hundreds of these false positive messages, thereby forcing him or her to spend valuable time distinguishing between actual human responses and those generated by automated bots? Turnabout is fair play, after all.

That’s exactly what Kozloski had in mind with his invention.
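To make the idea concrete, here is a minimal sketch, in Python, of what such a pipeline might look like. It is an illustration only, not the mechanism described in the patent: the keyword patterns, weights, threshold and canned replies are all invented for the example, and a real system would presumably rely on a trained classifier and far more convincing response generation.

```python
import random
import re
from typing import Optional

# Hypothetical keyword weights; invented for illustration. A real system
# would use a trained classifier rather than a hand-written pattern list.
SUSPICION_SIGNALS = {
    r"\blottery\b": 0.4,
    r"\bwire transfer\b": 0.5,
    r"\burgent\b": 0.2,
    r"\bverify your account\b": 0.6,
}

# Canned "gullible user" replies, again purely illustrative.
DECOY_REPLIES = [
    "Oh my, that sounds wonderful! What do I need to do to claim it?",
    "I'm so sorry to hear you're stranded. How can I send the money?",
    "I clicked the link but nothing happened. Could you resend it?",
]


def suspicion_level(message: str) -> float:
    """Return a crude 0..1 suspicion score for an inbound email body."""
    score = sum(
        weight
        for pattern, weight in SUSPICION_SIGNALS.items()
        if re.search(pattern, message, re.IGNORECASE)
    )
    return min(score, 1.0)


def decoy_response(message: str, threshold: float = 0.5) -> Optional[str]:
    """If the message looks like spear phishing, craft a believable reply
    from a fictitious victim; otherwise return None and deliver normally."""
    if suspicion_level(message) >= threshold:
        return random.choice(DECOY_REPLIES)
    return None


if __name__ == "__main__":
    bait = "URGENT: You have won the lottery! Verify your account to claim it."
    print(decoy_response(bait))
```

In this toy version, any message that trips enough signals never reaches a human inbox; the bot answers it instead.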

“The trick is doing this in such a way that it isn’t distinguishable from a human subject’s response,” he said. “For example, it could mimic an elderly user who is responding to an email about winning a lottery or someone supposedly in trouble overseas with appropriate human responses.” The genius of the idea is that it consumes the attacker’s most critical resource: time.
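One plausible way to keep those automated replies from standing out, and to stretch the attacker’s wasted time even further, is to add human-scale delays and small wording variations before each decoy goes out. The sketch below is an assumption about how that could be done rather than anything spelled out in the patent; send_fn is a hypothetical stand-in for whatever mail-delivery hook an implementation would use.

```python
import asyncio
import random


async def send_decoy(reply: str, send_fn) -> None:
    """Send a decoy reply after a human-scale delay, with minor wording
    jitter so bulk responses don't look machine-generated.

    `send_fn` is a placeholder async callable (hypothetical) that actually
    delivers the outgoing mail.
    """
    # Humans rarely answer within seconds; wait 5 minutes to 6 hours.
    await asyncio.sleep(random.uniform(5 * 60, 6 * 60 * 60))

    # Harmless variation in greetings and sign-offs across replies.
    greeting = random.choice(["Hi,", "Hello there,", "Dear sir,", ""])
    sign_off = random.choice(["Thanks!", "Many thanks.", "Regards.", ""])
    await send_fn(" ".join(part for part in (greeting, reply, sign_off) if part))
```

Every delayed, slightly different reply is one more message the attacker has to read and triage by hand, which is exactly the time cost Kozloski describes.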

The honeypot project is an active area of study for IBM Research, and while the patent has yet to produce an actual product, one could be just around the corner.

Changing the World, One Security Patent at a Time

Kozloski has been at IBM since 2001. Since then, he has written 10 papers and contributed to more than 100 patents. He is a member of a small group of several dozen master inventors at IBM, including Lisa Seacat DeLuca, the most prolific female inventor in IBM’s history. Last year, IBM inventors were granted more than 9,000 U.S. patents, leading the way in that category for the 25th consecutive year.

Master inventors typically serve for a three-year term before being evaluated to potentially serve an additional term. As part of his work, Kozloski leads regular workshops to teach other IBM employees to collaborate and come up with new inventions.

But does the master inventor himself feel any day-to-day pressure to dream up new ideas of his own?

“It is a cool title, to be sure,” he said, “but it’s more about the work with my team and being recognized for doing something innovative.” Kozloski’s team stretches around the globe, including colleagues in Israel and Hungary who helped formulate the honeypot idea. “Being a single inventor is hard, but when you’re a part of a team you can leverage each other’s skills and interests and be more productive,” he said.

It’s like that old proverb: “If you want to go quickly, go alone. If you want to go far, go together.”
