Contact tracing apps are designed to help public health agencies connect the dots by linking confirmed carriers of novel coronavirus to recent, close-proximity interactions. In theory, this creates a protective safety net — a way for countries to manage the spread and mitigate the impact of COVID-19 at scale.

Despite good intentions, the push for third-party pairings raises a more pressing problem: privacy. In a June 2020 article about contact tracing apps, Forbes notes that flagging confidence in app security could significantly reduce user adoption and limit the efficacy of this approach. Norway has already been ordered to shutter its contact tracing program and stop all data collection unless its government can demonstrate measurable benefits that outweigh the privacy risks.

Deployment speed is also a challenge. According to news website TechHQ, public health agencies looking to roll out centralized solutions need time to build out sufficient data collection, storage and analysis infrastructure. However, decentralized options — led by an unprecedented partnership between Google and Apple — offer a workaround by providing the information technology (IT) backbone health organizations need to get tracing efforts off the ground. As enterprises reopen offices and put employees back in the field, there’s a case to be made for in-house app development to support workforce health and boost application uptake.

But data protection issues don't vanish just because organizations build apps in their own backyard. Here's a breakdown of the top five challenges enterprises must overcome to deliver on the potential of prescriptive privacy.

The Contact Conundrum

Contact tracing applications are designed to collect COVID-19 testing data (input by the user) along with location history to build a network of potentially exposed connections. When one user reports an infection, the app notifies those connections via their mobile devices to help stop the spread. But just like governments and tech giants, enterprises building their own tracing technology must address key challenges.
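At its core, that notification logic is a lookup over a proximity graph. The sketch below illustrates the idea with a hypothetical in-memory log of close-range encounters; the user names, log format and 14-day lookback window are illustrative assumptions, not details from any real app.

```python
from datetime import datetime, timedelta

# Hypothetical proximity log: (user_a, user_b, timestamp) recorded whenever
# two devices detect each other at close range.
proximity_log = [
    ("alice", "bob", datetime(2020, 6, 1, 9, 30)),
    ("bob", "carol", datetime(2020, 6, 2, 14, 0)),
    ("alice", "dave", datetime(2020, 5, 1, 11, 0)),  # outside the window
]

def contacts_to_notify(infected_user, log, window_days=14, now=None):
    """Return users who were near infected_user within the lookback window."""
    now = now or datetime(2020, 6, 3)
    cutoff = now - timedelta(days=window_days)
    contacts = set()
    for a, b, ts in log:
        if ts < cutoff:
            continue  # encounter too old to matter epidemiologically
        if a == infected_user:
            contacts.add(b)
        elif b == infected_user:
            contacts.add(a)
    return contacts
```

A real deployment would use rotating anonymous identifiers rather than names, but the graph traversal is the same.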

  • Authority: The National Institute of Standards and Technology (NIST) Special Publication 800-53, Security and Privacy Controls for Federal Information Systems and Organizations, states that agencies must establish and document legal authority that “permits the collection, use, maintenance and sharing of personally identifiable information (PII).” They must also clearly define the purpose of their application.

    The same applies to enterprises designing and deploying contact tracing apps. The current public health crisis may offer solid ground for initial authority, but it is still essential to define the scope of collected data and describe its intended purpose so employees can provide informed consent.

  • Anonymity: Time, location and testing status are critical inputs for contact tracing apps to function. But protecting user privacy requires de-identifying that data, and common anonymization techniques, such as adding noise and dataset sampling, aren’t always effective on their own. Emerging artificial intelligence (AI) and machine learning (ML) tools can help reduce the risk of accidental re-identification.
  • Access: If threat actors gain access to critical data, they could deliberately force the quarantine of entire departments by sending out false alerts. In turn, this can cause profit loss for enterprises and needless stress for staff. To harden app security, businesses should deploy intelligent defenses, such as web application firewalls (WAFs) and runtime application self-protection (RASP) tools, capable of detecting and deflecting potential attacks on demand.
  • Accuracy: NIST standards also identify data quality and integrity as critical components of any application that involves PII. Quality starts with accuracy. Enterprise contact tracing applications must be able to collect personal data directly from staff, rather than through electronic intermediaries. Meanwhile, integrity relies on the ability of applications to regularly check the reliability of current data and request new information as required.
  • Accountability: The foundation of successful contact tracing applications is consent, and accountability is key. Users must be fully informed before opting in and permitted to opt out on demand. Enterprises should develop policies and procedures that facilitate the swift removal of current and historic user data to ensure regulatory compliance, especially as privacy legislation evolves. Under the European Union’s GDPR, for example, organizations have one month to remove requested data or face potential fines.
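One concrete way to act on the anonymity point above is to perturb sensitive numeric readings (for example, a dwell-time or coarse location coordinate) with Laplace noise, the standard building block of differential privacy. This is a minimal sketch, not a production mechanism; the `epsilon` and `sensitivity` defaults are illustrative assumptions.

```python
import math
import random

def add_laplace_noise(value, epsilon=0.5, sensitivity=1.0):
    """Perturb a numeric reading with Laplace noise. Smaller epsilon means
    stronger privacy (more noise); sensitivity bounds how much one
    individual's data can change the true value."""
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of Laplace(0, scale) from a uniform draw.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return value + noise
```

Note that naive noise addition alone does not guarantee anonymity across repeated queries, which is exactly why the bullet above cautions that these techniques aren't always effective in isolation.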
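On the access point, one simple defense against the false-alert scenario is to authenticate every exposure notification with a message authentication code, so clients can reject alerts that did not originate from the legitimate server. The sketch below uses Python's standard `hmac` module; the key and payload format are hypothetical, and a real key would live in a secrets manager, never in source code.

```python
import hashlib
import hmac

# Hypothetical shared secret held by the enterprise health service.
SERVER_KEY = b"example-key-not-for-production"

def sign_alert(payload: bytes, key: bytes = SERVER_KEY) -> str:
    """Attach an HMAC-SHA256 tag so clients can verify an alert's origin."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_alert(payload: bytes, signature: str, key: bytes = SERVER_KEY) -> bool:
    """Reject alerts that were not produced by the legitimate server."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(expected, signature)
```

This complements, rather than replaces, perimeter controls like WAFs and RASP: it protects the integrity of the alert itself even if a network path is compromised.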
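The accountability requirement also lends itself to a simple operational sketch: queue erasure requests, delete the data, and flag anything that slips past the roughly one-month window the article cites for GDPR. The store layout, user IDs and 30-day threshold here are illustrative assumptions.

```python
from datetime import date

# Hypothetical user-data store: user_id -> list of collected records.
store = {"u1": ["loc-a", "test-neg"], "u2": ["loc-b"]}
erasure_queue = []  # (user_id, requested_on) pairs awaiting deletion

def request_erasure(user_id, requested_on):
    erasure_queue.append((user_id, requested_on))

def process_erasures(today, deadline_days=30):
    """Delete queued user data, returning any requests that exceeded the
    deadline so they can be escalated for compliance review."""
    overdue = []
    for user_id, requested_on in erasure_queue:
        store.pop(user_id, None)  # remove current and historic records
        if (today - requested_on).days > deadline_days:
            overdue.append(user_id)
    erasure_queue.clear()
    return overdue
```

In practice, erasure must also propagate to backups and analytics copies, which is why the policies and procedures mentioned above matter as much as the code.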

The Prescriptive Potential

Existing, integrated IT resources give enterprises a ready-made framework for contact tracing apps, and staff can benefit from the speed and specificity of connective, prescriptive protection. But the shift to community-focused computing also presents new privacy challenges. Success in the “new normal” may require a dual approach to security: combining soft skills in cybersecurity with hard lines on data protection to deliver solutions capable of balancing public good against personal privacy.
