If you ask your IT staff about passwords, they’ll probably advise you to use long and complicated codes, never reuse passwords across different websites, change passwords regularly and never write them down anywhere. You may think of your IT staff as paranoid, but their fear of passwords being stolen by cybercriminals is more than justified: More than 1 billion personal data records were reported stolen in 2014 alone, most of which contained user passwords. Statistically speaking, every other reader of this article has already had a password stolen.

Rather than making passwords more secure, typical password restrictions mainly make them less convenient. This is quite unfair: The main threat to password security today comes from server compromise, yet the burden of protecting login credentials is placed entirely on the user. By rethinking how passwords are verified, we can move the burden of password protection back where it belongs: the server. Short and simple passwords can be secure; they just need to be verified differently.

A Single-Server Point of Failure

When creating a password, we’re usually asked to make it as complicated as possible, combining uppercase and lowercase letters, numbers and special characters. But why is that actually necessary? Most services will block an account or require secondary authentication if an incorrect password is entered too many times. So wouldn’t it be sufficient to have a password that cannot be guessed in, say, a dozen attempts? In fact, it would be sufficient — if the login form of the service were the only way to verify whether a password attempt is correct.

The problem is that not a week goes by without another reported server breach. Passwords are rarely stored in the clear, but the server must store some derived piece of information (e.g., a password hash) that allows it to verify whether an incoming password is correct. When that piece of information falls into the wrong hands, attackers are no longer constrained by the server’s account blocking and can try as many passwords as they want in a so-called offline or brute-force attack.
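To see why a leaked verifier is so dangerous, here is a toy sketch of an offline attack. The password, wordlist and use of plain SHA-256 are illustrative only (real services should at least use a salted, slow hash such as scrypt or Argon2); the point is that once the stored value leaks, nothing rate-limits the attacker.

```python
import hashlib

# What a breach might expose: the hash of the (hypothetical) password "sunshine".
leaked_hash = hashlib.sha256(b"sunshine").hexdigest()

def offline_attack(leaked, wordlist):
    """Test candidate passwords against the leaked hash at full speed —
    no login form, no account lockout, no server involved at all."""
    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() == leaked:
            return guess
    return None

print(offline_attack(leaked_hash, ["123456", "password", "sunshine"]))  # sunshine
```

A real attacker runs exactly this loop, just with billions of candidates on GPU hardware instead of a three-word list.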

So the real reason to choose a complicated password is to increase the number of possible combinations to make an offline attack more difficult. The truth is, though, that you’re unlikely to win this game. Humans are not good at memorizing random character sequences, so they choose derivations of words or phrases. According to the National Institute of Standards and Technology (NIST), a human-generated password of 16 characters contains only about 30 bits of entropy, which translates to about 1 billion possibilities. With modern password-cracking devices testing more than 300 billion passwords per second, even your 16-character password will be cracked in no time.
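A quick back-of-the-envelope check of the figures quoted above (the entropy estimate is NIST’s; the cracking rate is from published benchmarks, not a measurement of any particular rig):

```python
entropy_bits = 30              # NIST estimate for a 16-character human-chosen password
possibilities = 2 ** entropy_bits   # ~1.07 billion candidates to try
rate = 300e9                   # guesses per second on a modern cracking device
seconds = possibilities / rate
print(f"{possibilities:,} possibilities; exhausted in about {seconds * 1000:.1f} ms")
# 1,073,741,824 possibilities; exhausted in about 3.6 ms
```

At that rate, even doubling the entropy to 60 bits only buys a few months of cracking time against a single dedicated machine.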

Don’t Put All Your Eggs in One Basket

The problem is that if there’s a single server that can tell you whether your password is correct, then when that server gets hacked, your password is broken. The natural solution is to split the information needed to verify passwords across multiple servers, so that all machines must work together to figure out whether a password is correct.
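Here is a minimal sketch of the idea (an illustration of the multi-server principle, not the protocol from the paper): each server keeps its own secret key and contributes a keyed hash of the password, and the stored verifier combines all contributions. Stealing the verifier alone, or any proper subset of the keys, is not enough to mount an offline attack.

```python
import hashlib
import hmac
import secrets

# Three servers, each holding an independent secret key.
server_keys = [secrets.token_bytes(32) for _ in range(3)]

def combined_verifier(password: bytes) -> bytes:
    """XOR together one keyed hash per server. An attacker needs
    every server's key before password guesses can be tested offline."""
    out = bytes(32)
    for key in server_keys:
        part = hmac.new(key, password, hashlib.sha256).digest()
        out = bytes(a ^ b for a, b in zip(out, part))
    return out

stored = combined_verifier(b"12345")           # computed once at registration
print(combined_verifier(b"12345") == stored)   # True: all servers cooperated
print(combined_verifier(b"guess") == stored)   # False: wrong password
```

In a real deployment the servers would of course never ship their keys to one place; each would compute its contribution locally and only the combined result would be compared.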

Under these circumstances, the attacker now has to break into all the servers to have a chance at recovering passwords. This can be made extremely difficult by letting the servers run different operating systems at different locations, all while being managed by different system administrators.

Cryptographic protocols for performing distributed password verification may not be included in every off-the-shelf crypto library, but they have been known in the cryptographic literature — and have even been offered in commercial products — for more than a decade. A crucial feature of such protocols is not only to resist server compromise, but also to allow servers to refresh their keys so that they can securely recover after a compromise. Without a recovery mechanism, it’s only a matter of time until all servers have been hacked and the passwords are leaked.
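The refresh idea can be illustrated with additive secret sharing (again a schematic stand-in, not the paper’s exact mechanism): the servers jointly re-randomize their key shares so the combined key stays the same, but a share stolen before the refresh is useless when combined with shares stolen after it.

```python
import secrets

q = 2**255 - 19   # a large prime modulus (illustrative choice)

# Additive sharing of a master key: master = (k1 + k2 + k3) mod q.
shares = [secrets.randbelow(q) for _ in range(3)]
master = sum(shares) % q

def refresh(shares):
    """Each server adds a random delta to its share; the deltas sum to
    zero mod q, so the combined key is unchanged while every individual
    share becomes fresh. Old stolen shares no longer match the new ones."""
    deltas = [secrets.randbelow(q) for _ in range(len(shares) - 1)]
    deltas.append((-sum(deltas)) % q)
    return [(s + d) % q for s, d in zip(shares, deltas)]

new_shares = refresh(shares)
print(sum(new_shares) % q == master)   # True: same master key
print(new_shares != shares)            # True: all shares re-randomized
```

Because refreshing is cheap, it can run on a schedule, so an attacker must compromise all servers within a single refresh interval rather than over years.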

Servers Can Learn to Recover Quickly From Lost Passwords

Until recently, recovery mechanisms for distributed password verification protocols were either not fully understood in terms of security or too inefficient for practice in high-volume settings. On Oct. 12, at the 22nd ACM Conference on Computer and Communications Security in Denver, we presented a new verification protocol that is highly efficient and at the same time adheres to some of the strictest provable security standards known in the field.

With only a single elliptic-curve multiplication per authentication per server, the protocol is essentially as efficient as one could hope for. The clever key refresh mechanism is so efficient that it can be done proactively at regular time intervals rather than waiting for an actual breach to occur. A prototype implementation already processes more than 100 login attempts per second on a single server core; we expect that our code can be further optimized to achieve a multiple of that.
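The shape of such a protocol round can be sketched with a blinded-exponentiation (oblivious-PRF-style) exchange. This sketch uses exponentiation modulo a prime as a stand-in for elliptic-curve scalar multiplication, and it simplifies away most of the actual CCS 2015 protocol; it only shows why each server’s work is a single group operation.

```python
import hashlib
import math
import secrets

p = 2**61 - 1   # a Mersenne prime; stand-in for an elliptic-curve group

def hash_to_group(pw: bytes) -> int:
    """Map a password to a group element (illustrative construction)."""
    return int.from_bytes(hashlib.sha256(pw).digest(), "big") % (p - 2) + 2

server_key = secrets.randbelow(p - 2) + 1   # each server holds a key like this

def client_server_round(pw: bytes) -> int:
    h = hash_to_group(pw)
    while True:  # pick a blinding factor invertible mod the group order
        r = secrets.randbelow(p - 2) + 1
        if math.gcd(r, p - 1) == 1:
            break
    blinded = pow(h, r, p)                  # client -> server: hides the password
    response = pow(blinded, server_key, p)  # server: ONE exponentiation, its whole job
    r_inv = pow(r, -1, p - 1)
    return pow(response, r_inv, p)          # client unblinds: equals h^server_key

# The unblinded value is deterministic per password, so it can be
# compared against a verifier stored at registration time.
print(client_server_round(b"12345") == client_server_round(b"12345"))  # True
print(client_server_round(b"12345") == client_server_round(b"guess"))  # False
```

The server never sees the password or even its hash, and its cost per login is one exponentiation, which is the direct analogue of the single elliptic-curve multiplication mentioned above.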

At this cost, there’s almost no excuse for companies to lose any more user passwords as a result of a server breach. Perhaps 12345 will never be a good password, but the days of cycling through your touch-screen keyboard to find that super-secure special symbol may finally be over.

Read the complete paper on Optimal Distributed Password Verification
