December 17, 2020 By Eric Maass 6 min read

They come in the mail — ominous-looking envelopes that are devoid of branding and obvious marketing embellishments. Plain white, marked with often-unrecognizable return addresses, and relatively thin — the contents of which will spoil your day and cause you to lose yet a bit more faith in humanity. No, it’s not a tax bill; rather, it’s yet another notice that a company you’ve trusted to handle your personal information has been breached, and your data has been stolen.

As you read through the letter, you immediately recognize the format — short, devoid of any real explanation, potentially some attempt to downplay the seriousness of data that may now be in the hands of criminals, and a generally uncomfortable sense of routineness to it. Of course, you’ll be offered some form of free credit monitoring (for the next year or two), with the promise that some league of identity theft warriors will be there to defend your good name when, and if, the day comes that you need them.

However, you’re smarter than you’re given credit for — you understand that the damage is done, and there’s little you can do to put the genie back in the bottle. Sadly, once your personal information has been stolen, it’s difficult, if not impossible, to unwind the impacts. While passwords can be changed, your government-issued identifiers (e.g., Social Security number), date of birth, healthcare records, biometrics and various other forms of data are often with you for life. Even data that can be changed, such as banking and financial information, is far from simple to address, and the impacts ripple through the many often-longstanding relationships you may hold with umpteen institutions.

While you stare at the letter, the same thought goes through your head as the last time this occurred — can’t we do better?

But, to be completely fair about it, the organization issuing that letter is often going through a similar moment of reflection and contemplation. There’s been no shortage of spending on information security in the past decade, and unless this breach came down to negligence, it’s reasonable to assume that even the most astute organizations can find themselves in this position at some point. Why? Simply because there’s no such thing as a system, network or application that’s 100% secure, and as your personal data traverses those assets, there remains a risk of exposure — even in the hands of the most trustworthy organizations.

What About Encryption?

The average consumer is led to believe that once data is encrypted, it’s safe from that point forward. Clearly, this isn’t the case. Our best defense practices ensure that data is encrypted in-transit (over the wire) and at-rest (stored in a database). However, the most obvious exposure remains the lack of encryption while data is in use.

And therein lies one of the biggest challenges of secure computing — data must be decrypted to utilize it. Whether we want to analyze it, perform math upon it, update it, or train a machine learning model with it, any form of computation or manipulation of that data requires it to be in clear text — and therefore vulnerable.
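
To make that limitation concrete, here is a minimal sketch using the Python cryptography package’s Fernet recipe as a stand-in for any conventional symmetric scheme. The scenario and values are made up, but the pattern is the familiar one: to perform even trivial arithmetic, the data must first be decrypted.

```python
# A minimal sketch of the "decrypt to compute" gap, using conventional
# symmetric encryption (Fernet from the 'cryptography' package).
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # symmetric key held by the data owner
f = Fernet(key)

salary = 85000                               # made-up sensitive value
token = f.encrypt(str(salary).encode())      # protected at-rest and in-transit

# To apply a 3% raise, the application must decrypt first, so the value
# is momentarily exposed in memory as plaintext.
plaintext_salary = int(f.decrypt(token).decode())
new_salary = int(plaintext_salary * 1.03)

token = f.encrypt(str(new_salary).encode())  # re-encrypted only afterward
```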

For organizations that are working with sensitive data, there exists a careful balance between using data to the full extent of its value while also protecting it. In many instances, the two concepts are incompatible at their extremes. For example, a healthcare institution may be interested in sharing patient data with clinical researchers to help train machine learning models that can identify disease markers. A retailer may be interested in monetizing consumer profiles as part of targeted marketing campaigns. A financial services firm may be interested in using account data and user behavioral data to build better fraud detection algorithms. In each of these instances, the data steward needs to balance the reward of those activities with the risk of exposing this sensitive data.

Meanwhile, world governments have not been standing still on the issue of data privacy. In the past several years, we’ve witnessed some of the most aggressive shifts to date intending to improve the privacy rights of consumers while holding organizations that process their information more accountable. The introduction of GDPR in Europe is perhaps the most prominent example, albeit far from alone. Regulations are also placing steep fines on organizations that are non-compliant and fail to protect client data.

So, I go back to the question that naturally arises as we all stare into the depths of a seemingly inevitable breach notification letter that arrives in the mail — can’t we do better? We’re encrypting data in-transit and at-rest, but what about while in-use, where data is vulnerable today?

A Guide to Fully Homomorphic Encryption

What if organizations were able to compute upon sensitive data while the data itself remained in an encrypted state? As it turns out, it’s possible. The answer comes in the form of what’s referred to as fully homomorphic encryption (FHE).

Several decades ago, cryptographers asked the same question — what if data could remain encrypted while it’s computed upon, therefore preserving the confidentiality (and privacy) of the data as the computations are done? Theory became reality in 2009, when Craig Gentry demonstrated the first fully homomorphic encryption scheme and showed that FHE was possible.

However, processing even a single bit took massive computing power and far too much time. Over the past decade, researchers, such as those at IBM Research, have worked steadfastly to make FHE more practical and efficient. IBM Research has developed open source libraries, such as HElib and the FHE Toolkit, which have brought this game-changing domain of cryptography to more people.
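
As a brief, hedged illustration of what computing on encrypted data looks like in practice, the sketch below uses the open source TenSEAL library (a Python wrapper around Microsoft SEAL, a separate project from HElib) to add two vectors while both remain encrypted. The parameter choices follow TenSEAL’s introductory examples and are illustrative, not a recommendation.

```python
# A minimal sketch of homomorphic computation with TenSEAL
# (pip install tenseal). Parameters are illustrative only.
import tenseal as ts

# Create an encryption context for the CKKS scheme (approximate arithmetic
# over real numbers) and generate the keys the computation will need.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

enc_a = ts.ckks_vector(context, [1.0, 2.0, 3.0])   # encrypted by the data owner
enc_b = ts.ckks_vector(context, [4.0, 5.0, 6.0])

enc_sum = enc_a + enc_b     # the addition operates only on ciphertexts
print(enc_sum.decrypt())    # approximately [5.0, 7.0, 9.0]
```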

Preserving Privacy During Computing

The promise of fully homomorphic encryption is exciting, for sure. The ability to keep data encrypted during computation not only means we’re able to better address security of data while it is in use, but it also goes a step further to begin solving the problem of protecting the ‘intent’ or subject of a request to a third party.

Picture the various location-based applications running on your mobile phone. Each time you open one of these apps, it shares your location (GPS coordinates) with a third party. For example, you may be interested in seeing a list of nearby restaurants. In order to do so, your phone’s location is shared with a third-party cloud service to generate that list of eateries — a convenience that has cost you your privacy. But what if your location could be sent to this service in an encrypted state, never exposing where you are, with the resulting list of restaurants readable only by your device when returned? Using FHE, it becomes possible to protect the intent, or subject, of a query, thereby preserving the privacy of the search. In this instance, you no longer need to give away your location.
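
Here is a hedged sketch of that idea, again using TenSEAL’s CKKS vectors rather than any particular production service. The device encrypts its coordinates, the service computes squared distances to each restaurant without ever decrypting the location, and only the device can read the results; the restaurant names and coordinates are invented for illustration.

```python
# Hypothetical encrypted "nearby restaurants" query, sketched with TenSEAL.
import tenseal as ts

context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

# Client side: encrypt the phone's location (illustrative coordinates).
# In practice, only a public copy of the context, without the secret key,
# would be shared with the service.
enc_location = ts.ckks_vector(context, [40.7128, -74.0060])

# Service side: plaintext restaurant coordinates, but the user's location
# is never decrypted; squared distances are computed on ciphertexts.
restaurants = {
    "Diner A": [40.7300, -73.9900],
    "Cafe B": [40.6900, -74.0400],
}
enc_distances = {
    name: ((enc_location - coords) * (enc_location - coords)).sum()
    for name, coords in restaurants.items()
}

# Client side: only the holder of the secret key can read the distances.
nearest = min(enc_distances, key=lambda name: enc_distances[name].decrypt()[0])
print("Nearest:", nearest)
```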

FHE is poised to fundamentally change the way we compute upon sensitive data, allowing owners and processors of that data alike to maximize its value while better preserving privacy. Providing data to third parties for analytics, training machine learning models and even basic search and retrieval will all benefit from the ability to encrypt data while in-use.
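
As one more hedged sketch, this time for the analytics case: a service scores an encrypted feature vector against a simple plaintext linear model, so the model owner computes a result without ever seeing the raw inputs. The features, weights and the notion of a fraud score are invented for illustration, and the TenSEAL usage follows the same pattern as above.

```python
# Hypothetical encrypted scoring of a simple linear model with TenSEAL.
import tenseal as ts

context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

# Client side: encrypt a made-up transaction feature vector.
features = [120.50, 3.0, 0.0, 1.0]        # amount, txns today, flags...
enc_features = ts.ckks_vector(context, features)

# Service side: plaintext model weights applied to the encrypted features.
weights = [0.004, 0.3, 1.2, -0.5]
bias = -0.8
enc_score = (enc_features * weights).sum() + bias

# Client side: decrypt the score; the service never saw the inputs.
print(round(enc_score.decrypt()[0], 3))
```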

The State of Fully Homomorphic Encryption Today

So why is FHE not in broad use today? Simply put, various hurdles still exist for organizations to adopt this technology. The industry is still in the early days of shifting FHE from research labs into commercial applications. Some of the challenges we face today include:

  • The complexity of the technology. Developers without a thorough cryptography background may struggle to understand its concepts, let alone translate those concepts into effective coding practices.
  • Limitations. Familiar coding patterns, such as branching on a data value, may be restricted or behave differently when performing FHE-based computation, meaning providers may need to educate developers.
  • Computing power. While fully homomorphic encryption has become more efficient over the years, certain operations can require hundreds of times the compute resources of an equivalent operation on plaintext data (a rough timing sketch follows this list).
  • Infrastructure. Working with FHE may require non-traditional infrastructure capabilities, such as the ability to manage lattice encryption keys, not routinely available to developers.
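
To make the overhead point concrete, here is a rough, hedged timing sketch, again using the open source TenSEAL library with illustrative parameters; the actual ratio between plaintext and encrypted arithmetic varies widely with the scheme, parameter choices and hardware.

```python
# Rough, illustrative timing of plaintext vs. encrypted multiplication
# using TenSEAL. Absolute numbers depend heavily on parameters and hardware.
import time

import tenseal as ts

context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40

a = [float(i) for i in range(4096)]
b = [float(i) for i in range(4096)]

# Plaintext baseline: element-wise multiplication of two Python lists.
start = time.perf_counter()
plain = [x * y for x, y in zip(a, b)]
plain_seconds = time.perf_counter() - start

enc_a = ts.ckks_vector(context, a)
enc_b = ts.ckks_vector(context, b)

# The same element-wise multiplication, performed on ciphertexts.
start = time.perf_counter()
enc = enc_a * enc_b
enc_seconds = time.perf_counter() - start

print(f"plaintext: {plain_seconds:.6f}s  encrypted: {enc_seconds:.6f}s  "
      f"ratio: {enc_seconds / max(plain_seconds, 1e-9):.0f}x")
```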

But, there are ways to begin bridging this gap and preparing to tackle the early stages of an FHE journey.

How to Choose Homomorphic Encryption Services

Designed to help clients begin their FHE journey with fundamental education and prototyping capabilities, a best-in-class FHE service combines the expertise of live cryptography consultants with the convenience of fully managed services. This allows clients to begin experimenting with the technology in a frictionless way, ensuring developers can get up to speed on foundational elements of the technology while avoiding the need to build out FHE-ready application development and hosting environments.

You’ll want to seek out a service that offers an education program and the necessary tools to get started, such as development libraries, sample code and perhaps even a cloud-hosted integrated development environment (IDE). Such a comprehensive approach can help organizations come up to speed on the basics of FHE before getting hands-on to build their first FHE-enabled application. From there, you can start leveraging your new skills in a purpose-built prototyping environment — building, running and experimenting with FHE-enabled apps.

Likewise, a strong service provider and partner will look to leverage your use cases, scenarios and input to guide the evolution of products, software development kits, services and experiences that shape the future direction of FHE, helping to ensure the needs of tomorrow’s developers, data scientists and data stewards are met.

Into the Future

Without a doubt, FHE promises to transform one of the most important aspects of protecting privacy and confidentiality — the notion that we can compute upon data while it remains in an encrypted state. In these early days, it’s important for enterprises to become familiar with the technology and identify the use cases where FHE will be most impactful, especially as it will coexist alongside traditional forms of encryption that continue to serve their purposes efficiently. While FHE certainly won’t put an end to the ominous data breach notification arriving by mail, it promises to go a long way toward reducing the exposures that lead to these events. Moreover, for entities that walk the tightrope of extracting value from data while preserving privacy, putting FHE into practice may prove to be an impactful way to maximize both.

Consider the pilot release of first-of-its-kind services for FHE with IBM Security Homomorphic Encryption Services. Learn how IBM is at the forefront of bringing this game-changing technology to clients and helping to kick-start their journey into fully homomorphic encryption.
