Authentication is a weak link in any enterprise security solution, primarily because it relies heavily on how people use it. It’s also one of the most important factors, and any flaws can lead to significant issues and costly cyberattacks. As just one example, earlier this year the IBM-discovered Dyre Wolf campaign stole over $1 million from targeted enterprise organizations by using sophisticated social engineering and other means to circumvent two-factor authentication.
That campaign came more than a year after the Heartbleed vulnerability dominated headlines. Heartbleed, a buffer over-read flaw in OpenSSL, allowed cybercriminals to bypass authentication on virtual private networks (VPNs).
Authentication Evolves
In the good old days (that is, before 1995), authentication simply meant providing a password. It then matured with the introduction of multifactor authentication (MFA), which requires users to present at least one of the following in addition to a password:
- Something they knew (e.g., a PIN or the answer to a secret question);
- Something they had (e.g., a magnetic card or a mobile phone; see the one-time password sketch after this list);
- Something they were (i.e., a biometric factor such as a fingerprint).
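To make the "something they had" factor concrete, here is a minimal sketch of a time-based one-time password (TOTP), the mechanism behind most phone-based authenticator apps. It illustrates RFC 6238 using only the Python standard library; the Base32 secret is a placeholder for this sketch, not a real credential.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval      # current 30-second window
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both sides share the secret: the phone displays totp(...), and the
# server recomputes it to verify the "something you have" factor.
print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret for illustration
```

Because the code changes every interval, a stolen password alone is no longer enough; an attacker also needs the device holding the shared secret.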
Although MFA vastly improved the security of authentication mechanisms, it has issues of its own, including:
- People keep forgetting what they know. Security services have to provide mechanisms for retrieving or resetting that information, which can undermine the factor's effectiveness.
- People keep losing the things they own, and legitimate users get locked out of systems. What you have is not a foolproof factor either.
- What you are fares better but is still not perfect. Every type of biometric attribute has been defeated in some way by cybercriminals, and until we develop a way to use DNA consistently as a factor, biometrics cannot claim to be foolproof either.
To make authentication even stronger, organizations started layering additional attributes on top of the traditional factors, such as what you do (behavioral patterns) and the context of the situation (device, location, time of day).
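As an illustration only (the signals, weights and threshold below are invented for this sketch and do not come from any particular product), contextual authentication can be modeled as a risk score over situational attributes that triggers a step-up challenge when a login looks unusual:

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool          # context: "something about the situation"
    usual_country: bool
    usual_hours: bool
    typing_pattern_match: bool  # behavioral: "what you do"

def risk_score(ctx: LoginContext) -> int:
    """Toy contextual score: each unusual signal adds risk."""
    score = 0
    score += 0 if ctx.known_device else 2
    score += 0 if ctx.usual_country else 3
    score += 0 if ctx.usual_hours else 1
    score += 0 if ctx.typing_pattern_match else 2
    return score

def requires_step_up(ctx: LoginContext, threshold: int = 3) -> bool:
    """Above the threshold, demand an extra factor before granting access."""
    return risk_score(ctx) >= threshold
```

The point of the design is that familiar logins stay frictionless, while anomalous ones face an additional factor.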
As authentication strength kept increasing, the time and effort required to crack it went up exponentially. It all sounded like good news, but not for long. Cybercriminals did what humans usually do when faced with an insurmountable obstacle: either go around it or find its weakest spot and drill through.
Attackers Break Through
That’s pretty much what happened with one of the strongest authentication schemes, which used Secure Sockets Layer (SSL). Attackers found a way to hijack certificates and fool clients into believing they were talking to a legitimate server. I term this “hacking without cracking,” and it is becoming more and more prevalent as authentication techniques keep beefing up their encryption schemes.
Note that while users were not directly involved in the authentication process here, there was still a vulnerability that was successfully exploited. There are various types of authentication based on the actors involved, including machine to machine (M2M), application to application (A2A), user to application (U2A) and others. Although authentication is weakest where users are involved, this is one instance where the authentication scheme itself was weak enough to be exploited even without the human element.
A better solution is an authentication scheme that does not depend on certificates and provides mutual authentication: the client authenticates with the server, and the server also authenticates with the client, so the client knows it is talking to the legitimate server. A foundational building block for such schemes, a way for two parties to establish a shared secret over an open channel, was developed by Whitfield Diffie and Martin Hellman back in 1976.
The goal of the Diffie–Hellman key exchange (D–H) was to establish a shared secret over an unsecured communication channel without the risk of a man in the middle stealing the secret. Theoretically, it is still possible to recover the secret by brute-forcing all possible key values, but with the key strengths used in practice, the effort required becomes prohibitively large. However, with newer technologies such as quantum computing, it may eventually become feasible to break such an algorithm within a practical time frame. There are ways to harden the classic D–H key exchange, but that is outside the scope of this article.
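A minimal sketch of the classic exchange follows. The prime here is deliberately tiny for readability; real deployments use vetted groups of 2,048 bits or more (e.g., the RFC 3526 groups) or elliptic-curve variants.

```python
import secrets

# Toy Diffie–Hellman parameters: public prime p and generator g.
# These values are far too small to be secure; they are illustrative only.
p = 0xFFFFFFFB  # largest prime below 2**32
g = 5

a = secrets.randbelow(p - 2) + 2   # Alice's private value, never sent
b = secrets.randbelow(p - 2) + 2   # Bob's private value, never sent

A = pow(g, a, p)   # Alice sends A over the open channel
B = pow(g, b, p)   # Bob sends B over the open channel

# Each side combines the other's public value with its own private value.
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)

assert alice_secret == bob_secret  # shared secret, never transmitted
```

An eavesdropper sees only p, g, A and B; recovering the shared secret from those is the discrete logarithm problem, which is what makes the exchange expensive to crack at realistic key sizes.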
Of course, D–H is primitive and does not by itself provide clean authentication, but it was the best we had back then. There are many modern schemes, like secure token service (STS) and Web token service (WTS), that provide cleaner authentication, and STS has been implemented by vendors such as IBM, Amazon and Microsoft.
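As one concrete example of the token-service pattern, here is a short sketch using Amazon's STS through the boto3 SDK. It assumes AWS credentials are already configured locally, and the duration value is just an example.

```python
import boto3  # AWS SDK for Python

# An STS exchanges long-lived credentials for short-lived session
# credentials, so applications never pass the long-lived secret around.
sts = boto3.client("sts")
response = sts.get_session_token(DurationSeconds=900)  # 15-minute session

credentials = response["Credentials"]
print(credentials["AccessKeyId"], credentials["Expiration"])
```

The short lifetime limits the damage if a token leaks, which is precisely the weakness that long-lived passwords and hijacked certificates expose.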
More Strides for Authentication
Another development that has made authentication more manageable and consistent is the FIDO Alliance. It takes multifactor authentication to the next level by setting MFA standards, promoting new authentication methods and reducing reliance on passwords alone. Companies such as Google have already endorsed its efforts.
Authentication is often the weak link in a security solution, and it is not sufficient to simply keep beefing up encryption and key sizes. Blindly adding more factors to the authentication process does not solve the core problem.
What will help is careful design using mutual authentication, the selective use of a second factor such as biometrics and collaboration with industry efforts like the FIDO Alliance. In addition, enterprise authentication applications need to go through regular, thorough penetration testing.