If you ask your IT staff about passwords, they’ll probably advise you to use long and complicated codes, never reuse passwords across different websites, change passwords regularly and never write them down anywhere. You may think of your IT staff as paranoid, but their fear of passwords getting stolen by cybercriminals is more than justified: More than 1 billion personal data records were reported stolen in 2014 alone, most of which contained user passwords. Statistically speaking, every second reader of this article has had his or her password stolen already.

Rather than making passwords more secure, typical password restrictions mainly make them less convenient. This is quite unfair: The main threat to password security today comes from server compromise, yet the burden of protecting login credentials and information is placed entirely on the user. By rethinking the overall setting, we can move the burden of password protection back where it belongs: the server. Short and simple passwords can be secure; they just need to be verified differently.

A Single-Server Point of Failure

When creating a password, we’re usually asked to make it as complicated as possible, combining uppercase and lowercase letters, numbers and special characters. But why is that actually necessary? Most services will block an account or require secondary authentication if an incorrect password is entered too many times. So wouldn’t it be sufficient to have a password that cannot be guessed in, say, a dozen attempts? In fact, it would be sufficient — if the login form of the service were the only way to verify whether a password attempt is correct.

The problem is that not a week goes by without another reported server breach. Passwords are rarely stored in the clear, but the server must store some derived piece of information (e.g., a password hash) that allows it to verify whether an incoming password is correct. When that piece of information falls into the wrong hands, attackers are no longer limited by any account blocking by the server and can try as many passwords as they want in a so-called offline or brute-force attack.
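To make the offline attack concrete, here is a minimal Python sketch. The hash function, the candidate list and the `offline_attack` helper are all illustrative (a real attacker would use GPU-accelerated tooling against salted hashes), but the core point survives: once the stored verification data leaks, guessing happens locally, with no server and no rate limit in the loop.

```python
import hashlib

def offline_attack(stolen_hash, candidates):
    """Try candidate passwords against a leaked (unsalted, for brevity)
    SHA-256 hash. No server is contacted, so no account blocking applies."""
    for pw in candidates:
        if hashlib.sha256(pw.encode()).hexdigest() == stolen_hash:
            return pw
    return None

# The attacker hashes guesses locally, as fast as the hardware allows.
leaked = hashlib.sha256(b"sunshine1").hexdigest()
print(offline_attack(leaked, ["password", "letmein", "sunshine1"]))
```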

So the real reason to choose a complicated password is to increase the number of possible combinations to make an offline attack more difficult. The truth is, though, that you’re unlikely to win this game. Humans are not good at memorizing random character sequences, so they choose derivations of words or phrases. According to the National Institute of Standards and Technology (NIST), a human-generated password of 16 characters contains only about 30 bits of entropy, which translates to about 1 billion possibilities. With modern password-cracking devices testing more than 300 billion passwords per second, even your 16-character password will be cracked in no time.
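The arithmetic behind that claim is easy to check. Using the figures quoted above (30 bits of entropy, 300 billion guesses per second), a quick back-of-the-envelope calculation shows the entire search space falls in milliseconds:

```python
# Back-of-the-envelope numbers from the paragraph above.
entropy_bits = 30             # NIST estimate for a 16-char human-chosen password
guesses = 2 ** entropy_bits   # ~1.07 billion possibilities
rate = 300e9                  # guesses per second on a modern cracking rig

seconds = guesses / rate
print(f"{guesses:,} candidates exhausted in {seconds * 1000:.1f} ms")
```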

Don’t Put All Your Eggs in One Basket

The problem is that if there’s a single server that can tell you whether your password is correct, then when that server gets hacked, your password is broken. The natural solution is to split the information to verify passwords over multiple servers so that all machines must work together to figure out whether a password is correct.

Under these circumstances, the attacker now has to break into all the servers to have a chance at recovering passwords. This can be made extremely difficult by letting the servers run different operating systems at different locations, all while being managed by different system administrators.

Cryptographic protocols for performing distributed password verification may not be included in every off-the-shelf crypto library, but they have been known in cryptographic literature and have even been offered in commercial products for more than a decade. A crucial feature for such protocols is not only to resist server compromise, but also to allow servers to refresh their keys so that they can securely recover after a compromise. Without a recovery mechanism, it’s only a matter of time until all servers have been hacked and the passwords are leaked.
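The refresh idea can be illustrated with a few lines of Python. This is a simplified sketch of proactive share re-randomization, not the paper's specific mechanism: each server adds a random delta to its share, with the deltas chosen to sum to zero, so every share changes while the combined key (and hence the stored verifier) stays the same.

```python
import secrets

# Order of the exponent group; any modulus works for the sharing arithmetic.
Q = 2**127 - 2

def refresh(shares):
    """Proactive refresh: add deltas that sum to zero mod Q. Every share
    changes, but the combined key does not, so stored verifiers remain
    valid. Shares stolen before a refresh are useless afterwards unless
    the attacker captured *all* of them within the same epoch."""
    deltas = [secrets.randbelow(Q) for _ in range(len(shares) - 1)]
    deltas.append(-sum(deltas) % Q)
    return [(s + d) % Q for s, d in zip(shares, deltas)]

old = [secrets.randbelow(Q) for _ in range(3)]
new = refresh(old)
assert new != old                        # every share re-randomized
assert sum(new) % Q == sum(old) % Q      # combined key unchanged
```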

Servers Can Learn to Recover Quickly From Lost Passwords

Until recently, recovery mechanisms for distributed password verification protocols were either not fully understood in terms of security or too inefficient for practice in high-volume settings. On Oct. 12, at the 22nd ACM Conference on Computer and Communications Security in Denver, we presented a new verification protocol that is highly efficient and at the same time adheres to some of the strictest provable security standards known in the field.

With only a single elliptic-curve multiplication per authentication per server, the protocol is essentially as efficient as one could hope for. The clever key refresh mechanism is so efficient that it can be done proactively at regular time intervals rather than waiting for an actual breach to occur. A prototype implementation already processes more than 100 login attempts per second on a single server core; we expect that our code can be further optimized to achieve a multiple of that.
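The "single multiplication per server" figure comes from a blinded-evaluation pattern, which can be sketched as follows. Again, this is a toy stand-in, not the paper's protocol: a multiplicative group modulo a Mersenne prime replaces the elliptic curve, and the key handling is simplified to a single server. The client blinds its password hash with a random exponent so the server learns nothing about the password, the server performs exactly one exponentiation with its secret key, and the client unblinds the result.

```python
import hashlib
import math
import secrets

# Toy stand-in for the protocol's elliptic-curve group: Z_p^* for the
# Mersenne prime p = 2^127 - 1. Exponents live mod p - 1.
P = 2**127 - 1
ORDER = P - 1

def hash_to_group(pw):
    return int.from_bytes(hashlib.sha256(pw.encode()).digest(), "big") % P

pw = "tulip"

# Client: blind the password hash with a random invertible exponent r.
while True:
    r = secrets.randbelow(ORDER)
    if math.gcd(r, ORDER) == 1:
        break
blinded = pow(hash_to_group(pw), r, P)

# Server: exactly one exponentiation with its secret key k, on a value
# that reveals nothing about the underlying password.
k = secrets.randbelow(ORDER)
response = pow(blinded, k, P)

# Client: unblind. ((h^r)^k)^(r^-1) = h^k, the value used for verification.
unblinded = pow(response, pow(r, -1, ORDER), P)
assert unblinded == pow(hash_to_group(pw), k, P)
```

Because the server's only per-login work is that one exponentiation, throughput scales with how fast the underlying group operation runs, which is why an elliptic-curve implementation can handle hundreds of logins per second per core.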

At this cost, there’s almost no excuse for companies to lose any more user passwords as a result of a server breach. Perhaps 12345 will never be a good password, but the days of cycling through your touch-screen keyboard to find that super-secure special symbol may finally be over.

Read the complete paper on Optimal Distributed Password Verification
