If you ask your IT staff about passwords, they’ll probably advise you to use long, complicated passwords, never reuse them across different websites, change them regularly and never write them down anywhere. You may think of your IT staff as paranoid, but their fear of passwords being stolen by cybercriminals is more than justified: More than 1 billion personal data records were reported stolen in 2014 alone, most of which contained user passwords. Statistically speaking, every second reader of this article has already had a password stolen.

Rather than making passwords more secure, typical password restrictions mainly make them less convenient. This is quite unfair: The main threat to password security today comes from server compromise, yet the burden of protecting login credentials is placed entirely on the user. By rethinking where and how passwords are verified, we can move that burden back where it belongs: the server. Short and simple passwords can be secure; they just need to be verified differently.

A Single-Server Point of Failure

When creating a password, we’re usually asked to make it as complicated as possible, combining uppercase and lowercase letters, numbers and special characters. But why is that actually necessary? Most services will block an account or require secondary authentication if an incorrect password is entered too many times. So wouldn’t it be sufficient to have a password that cannot be guessed in, say, a dozen attempts? In fact, it would be sufficient, provided the service’s login form were the only way to verify whether a password attempt is correct.

The problem is that not a week goes by without another reported server breach. Passwords are rarely stored in the clear, but the server must store some derived piece of information (e.g., a password hash) that allows it to verify whether an incoming password is correct. When that piece of information falls into the wrong hands, attackers are no longer constrained by the server’s account blocking and can try as many passwords as they want in a so-called offline or brute-force attack.
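To see why a leaked hash is so dangerous, consider this deliberately simplified sketch of an offline attack. The storage scheme, the salt and the wordlist are made up for illustration; real servers should at least use a slow hash such as bcrypt or Argon2, but that only slows the guessing down rather than stopping it.

```python
import hashlib

# Toy storage scheme for illustration: a salted SHA-256 hash of the password.
def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.sha256(salt + password.encode()).digest()

# What the attacker exfiltrated from the server's database:
salt = bytes.fromhex("a3f1c2d4e5b60718")
stolen_hash = hash_password("sunshine1", salt)   # the user's real password

# Offline guessing: no login form, no rate limiting, no account lockout.
wordlist = ["123456", "password", "qwerty", "sunshine1", "letmein"]
for guess in wordlist:
    if hash_password(guess, salt) == stolen_hash:
        print("Password recovered:", guess)
        break
```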

So the real reason to choose a complicated password is to increase the number of possible combinations to make an offline attack more difficult. The truth is, though, that you’re unlikely to win this game. Humans are not good at memorizing random character sequences, so they choose derivations of words or phrases. According to the National Institute of Standards and Technology (NIST), a human-generated password of 16 characters contains only about 30 bits of entropy, which translates to about 1 billion possibilities. With modern password-cracking devices testing more than 300 billion passwords per second, even your 16-character password will be cracked in no time.
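The arithmetic behind that claim is easy to check. A quick back-of-the-envelope calculation, using only the figures quoted above, shows how short such an attack really is:

```python
# Figures from above: ~30 bits of entropy for a human-chosen 16-character
# password, and hardware testing 300 billion guesses per second.
entropy_bits = 30
guesses_per_second = 300e9

search_space = 2 ** entropy_bits                   # ~1.07 billion candidates
worst_case_seconds = search_space / guesses_per_second

print(f"Search space: {search_space:,} passwords")
print(f"Worst case to exhaust it: {worst_case_seconds * 1000:.1f} ms")

# Even a truly random 8-character alphanumeric password (~48 bits of entropy)
# only buys about 62**8 / 300e9 seconds, roughly 12 minutes, at this rate.
```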

Don’t Put All Your Eggs in One Basket

The problem is that if a single server can tell you whether your password is correct, then when that server gets hacked, your password is broken. The natural solution is to split the information used to verify passwords across multiple servers, so that all machines must work together to determine whether a password is correct.

Under these circumstances, an attacker has to break into all of the servers to have any chance of recovering passwords. That can be made extremely difficult by running the servers on different operating systems, in different locations and under different system administrators.
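To make the idea concrete, here is a deliberately simplified toy sketch of two-server password verification. It is not the protocol from our paper: it splits a verification key additively between two servers and works in a multiplicative group modulo a prime rather than over an elliptic curve, and it omits per-user salts and the blinding that keeps the servers from learning anything about the password. All names are illustrative.

```python
import hashlib
import secrets

# Toy two-server verification: the key k = k1 + k2 is split between two
# servers, and the login server stores only the verifier H(pw)^k mod P.
# Neither server can check a password alone; each contributes H(pw)^k_i,
# and only the product of both contributions matches the verifier.

P = 2**521 - 1            # a Mersenne prime, used here only for brevity
Q = P - 1                 # order of the multiplicative group mod P

def hash_to_group(password: str) -> int:
    return int.from_bytes(hashlib.sha256(password.encode()).digest(), "big") % P

# Setup: create the key shares and store the verifier for the real password.
k1 = secrets.randbelow(Q)                      # held only by server 1
k2 = secrets.randbelow(Q)                      # held only by server 2
verifier = pow(hash_to_group("correct horse"), (k1 + k2) % Q, P)

def contribution(password_attempt: str, key_share: int) -> int:
    # Each server raises the hashed attempt to its own key share.
    return pow(hash_to_group(password_attempt), key_share, P)

def verify(password_attempt: str) -> bool:
    c1 = contribution(password_attempt, k1)    # computed by server 1
    c2 = contribution(password_attempt, k2)    # computed by server 2
    return (c1 * c2) % P == verifier

print(verify("correct horse"))   # True
print(verify("wrong guess"))     # False
```

Because the stored verifier is useless without the key shares, and each share is useless without the other, stealing any single machine’s data no longer allows offline guessing.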

Cryptographic protocols for distributed password verification may not be included in every off-the-shelf crypto library, but they have been known in the cryptographic literature, and even offered in commercial products, for more than a decade. A crucial feature of such protocols is not only to resist server compromise, but also to allow the servers to refresh their keys so that they can securely recover after a compromise. Without a recovery mechanism, it’s only a matter of time until all servers have been hacked and the passwords are leaked.
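One standard way to realize such a refresh, shown here as a toy continuation of the sketch above rather than the exact mechanism in the paper, is to re-randomize the servers’ key shares so that their sum stays the same: every stored verifier remains valid, but a share stolen before the refresh can no longer be combined with a share stolen after it.

```python
import secrets

# Toy proactive refresh: both servers jointly pick a random delta and shift
# their shares in opposite directions. The sum k1 + k2, and hence every
# stored verifier H(pw)^(k1 + k2), is unchanged, yet an old k1 captured
# before the refresh no longer pairs with the new k2.

Q = 2**521 - 2                       # group order from the sketch above
k1, k2 = secrets.randbelow(Q), secrets.randbelow(Q)
old_key = (k1 + k2) % Q

delta = secrets.randbelow(Q)         # fresh randomness for this refresh epoch
k1, k2 = (k1 + delta) % Q, (k2 - delta) % Q

assert (k1 + k2) % Q == old_key      # verifiers on disk remain valid
```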

Servers Can Learn to Recover Quickly From Lost Passwords

Until recently, recovery mechanisms for distributed password verification protocols were either not fully understood in terms of security or too inefficient for practical use in high-volume settings. On Oct. 12, at the 22nd ACM Conference on Computer and Communications Security in Denver, we presented a new verification protocol that is highly efficient and at the same time adheres to some of the strictest provable security standards known in the field.

With only a single elliptic-curve multiplication per authentication per server, the protocol is essentially as efficient as one could hope for. The clever key refresh mechanism is so efficient that it can be done proactively at regular time intervals rather than waiting for an actual breach to occur. A prototype implementation already processes more than 100 login attempts per second on a single server core; we expect that our code can be further optimized to achieve a multiple of that.

At this cost, there’s almost no excuse for companies to lose any more user passwords as a result of a server breach. Perhaps 12345 will never be a good password, but the days of cycling through your touch-screen keyboard to find that super-secure special symbol may finally be over.

Read the complete paper on Optimal Distributed Password Verification
