Today, IBM’s Diana Kelley has a special Q&A on attribution with Dr. Char Sample, a research scientist at Carnegie Mellon University. Sample is a 20-year veteran of IT security and software engineering with deep expertise in firewalls, Domain Name System Security Extensions (DNSSEC) and secure network architectures. She is also recognized as a leading expert in quantitative cultural cyberthreat intelligence.
Question: Hello, Char! So glad you’re here to share your expertise with the SecurityIntelligence.com audience. During the aftermath of the 2014 Sony attack, there were a lot of articles about attribution. So, to get us started, I was wondering if you could please explain a little bit about what attribution means.
Answer: Hello, Diana, and thank you for inviting me to this Q&A. Attribution is another one of those terms that are frequently used but not precisely defined. When I began research in this area, I used the definition “determining the source identity or location of the intruder.” This definition was paraphrased from “Techniques for Cyber Attack Attribution,” a 2003 paper by David A. Wheeler and Gregory N. Larsen, but it still seems to apply. However, even this definition lacks precision, so once again, everyone will have a different definition.
So, it’s essentially the cybersecurity equivalent of whodunit? It’s probably clear to many readers why companies would like to know whodunit for legal response purposes, but how can that knowledge help us respond more effectively from a cyber perspective?
There are many reasons for getting the whodunit right. The legal aspect is probably the most obvious; however, as the recent Sony attack and the subsequent response have shown, when countries get involved, the importance of accurate attribution increases dramatically. Unfortunately, as we all know, attribution to an IP address is a nice start but nowhere near enough. If we can correctly attribute our security events, we can customize our defenses and do a better job of anticipating the intruder’s next steps rather than simply reacting to events.
That’s fascinating. So, there’s an element of predictive preparedness involved. Going back to the postmortem on Sony, there was a lot of discussion about how tricky attribution can be. Can you explain why?
Presently, the only way we can accurately attribute attacks is to have direct access to the attacker’s computer. As we all know, attackers launch attacks off of devices that are several layers deep, involving both owned hosts and proxy servers translating addresses. Because of these well-known hiding methods, new approaches that take attribution beyond the IP address are now being considered; applying cross-discipline research to big data can fuel this. I know I just hit on two buzzwords in the same sentence, so I’ll take a moment to explain.
There are several areas of cross-discipline research in terms of cyberbehavior. One of the oldest is the use of linguistics with cyberbehaviors. One problem with this is that the sharing nature of attacks makes this method less than 100 percent reliable. Another area of cross-discipline research combines economics, politics and cyberevents. Finally, there’s the thought of using behavioral models in attribution. This is the area where I am focused. Knowing that everyone has thought habits, and that these habits are hard, if not impossible, to break, why not learn what those habits mean for cyberevents?
Especially since many of our interactions take place faster than we can consciously process them, we are already relying on automatic thought processes. My focus on culture is different because I not only combine social sciences with cyberevents but also apply statistical analysis, so my work is quantitative in nature.
In your doctoral and postdoctoral research, you’ve assessed how Dr. Hofstede’s six cultural dimensions may hold a key to effective attack and malware attribution. What are the cultural dimensions, and what drew you to Hofstede’s work?
Actually, another researcher introduced me to Hofstede’s work. Dr. Dominik Güss has been researching how culture influences decision-making, and while I suspected culture would influence cyberbehaviors, I had no framework in mind. Hofstede’s framework appealed because it was easy to understand and the data is quantitative.
The quantitative nature of the data provides two advantages. It limits researcher biases when defining a country (I simply examine scores), and it allows for statistical analysis so that findings are more objectively interpreted.
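To make the quantitative framing Sample describes a bit more concrete, a country under Hofstede's framework reduces to a record of six numeric scores. The sketch below is purely illustrative — the dimension names are Hofstede's, but the helper function and all scores shown are placeholders of our own, not actual Hofstede country data:

```python
# Hofstede's six cultural dimensions, each scored roughly 0-100 per country.
HOFSTEDE_DIMENSIONS = (
    "power_distance",          # acceptance of unequal power distribution
    "individualism",           # individualist vs. collectivist orientation
    "masculinity",             # achievement vs. cooperation orientation
    "uncertainty_avoidance",   # tolerance for ambiguity
    "long_term_orientation",   # pragmatic future focus vs. tradition
    "indulgence",              # gratification of desires vs. restraint
)

def country_profile(**scores):
    """Build a validated score record for one country.

    Unlisted dimensions default to 0; unknown names are rejected,
    which keeps the record limited to Hofstede's six dimensions.
    """
    unknown = set(scores) - set(HOFSTEDE_DIMENSIONS)
    if unknown:
        raise ValueError(f"unknown dimensions: {unknown}")
    return {dim: scores.get(dim, 0) for dim in HOFSTEDE_DIMENSIONS}

# Placeholder scores only -- not real Hofstede data.
profile = country_profile(power_distance=80, individualism=20)
print(profile["power_distance"])  # 80
```

Because every country is reduced to the same six numbers, comparisons become score lookups rather than subjective characterizations — which is exactly the bias-limiting property Sample points to.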
The objective approach is intriguing because risk assessments can get very subjective. In your paper, “Cyber Espionage: A Cultural Expression,” you performed a pilot study to determine if there is a link between cyber espionage and culture. Can you give us a high-level overview of what you discovered?
We took a look at the Verizon Data Breach Investigations Report and basically looked to see what cultural characteristics the top attackers had in common. Now, I will caution that the data is not raw and not plentiful, so the fidelity of that data is a concern that we called out. However, one theme that this paper hit on, and that has been supported by research done in various countries, is that collectivism, which typically occurs with high power distance, tends to negatively impact creativity. Dr. Güss noticed this in his microworld decision-making experiments, and even Yu and Yang in China wrote a paper on how collectivism and the “golden mean” negatively impact creativity, especially in technical fields. If creativity is not home-grown, the next choice is to either buy it or steal it.
So, not surprisingly, we discovered that collectivism and authoritarianism statistically correlated with higher rates of espionage. More recently, I have been looking at cyber and kinetic behaviors and the role of culture in that area. While the results are not yet published, there have been several interesting findings in terms of which countries have adopted cyber as a war domain and their usage patterns.
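The kind of statistical check described here — correlating a cultural dimension score with observed espionage activity — can be sketched in a few lines. The country scores and incident counts below are invented placeholders, not data from the paper or the Verizon report; the point is only to show the shape of the analysis:

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical rows: (Hofstede individualism score 0-100,
# notional count of attributed espionage incidents).
# Low individualism = more collectivist.
countries = {
    "A": (20, 42),
    "B": (35, 30),
    "C": (60, 12),
    "D": (80, 7),
    "E": (91, 3),
}

idv = [s for s, _ in countries.values()]
incidents = [n for _, n in countries.values()]

# A strongly negative r would suggest: the more collectivist the
# country, the more espionage incidents (in this toy data).
print(f"r = {pearson_r(idv, incidents):.2f}")
```

A real study would of course need significance testing and far more data than five rows — which is exactly the data-fidelity caveat Sample raises about the pilot study.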
For anyone who’s interested in reading more about this, I maintain a repository of papers I’ve written on the subject.
Going forward, what can end-user companies or software vendors do to take advantage of the insights from your research?
At this point, the research is very young and needs lots of funding in order to fulfill its promise. The real goal of this research is to be able to accurately predict an attacker’s next steps and to control the terms of engagement with the attackers.
The best I can hope for at this point is for end users to demand better from vendors and service providers. Currently, what passes for most threat intelligence is a collection of known attack steps taken by adversaries over the years. This sounds a lot like the “signature” model used in traditional antivirus, and we saw how well that worked. Threat intelligence needs to benefit from incorporating tools and frameworks from other disciplines; we need to utilize what works from other industries. Users need to push back on vendors and demand more and better products from them.
Thank you so much, Char. This has been extremely eye-opening.
Thank you, Diana, for this opportunity!
Have you looked into attribution and Sample’s research? What are your thoughts on how attribution can help with better cyberdefense and threat intelligence? Please let us know in the comments below.
Executive Security Advisor, IBM Security
Diana Kelley is an internationally recognized information security expert, speaker, strategic advisor, market analyst and writer. She has over 20 years of IT...