March 1, 2018 By David Bisson 2 min read

Researchers from the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) and Harvard University proposed a new system that enforces privacy protections for users — without any help from their web browsers.

The system, known as Veil, allows developers to set up private browsing measures for their pages. According to the researchers’ paper, the system requires no assistance from the user’s web browser, yet can still reduce the likelihood of information leakages resulting from a browser’s privacy mode.

A Unique Approach to Private Browsing

After developers feed their HTML and CSS files through Veil’s compiler, the system searches for cleartext URLs in the data. It then applies the user’s secret key to the URLs it locates and converts them into blinded references: encrypted URLs that are cryptographically unlinkable, so attackers can’t trace them back to their original forms.

Blinding servers then receive the content uploaded by the compiler and collaborate with the page’s JavaScript to generate the blinded URLs. The compiler also mutates the syntax of a page’s content, which alters the client-side representation of the page for each user.
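The per-user blinding described above can be illustrated with a minimal sketch. This is not Veil’s actual construction; the `blind_url` function, the key values and the `/blinded/` path are all hypothetical. It simply shows how keying a one-way function with a user’s secret makes the same cleartext URL produce different, unlinkable references for different users.

```python
import hmac
import hashlib

def blind_url(secret_key: bytes, url: str) -> str:
    """Map a cleartext URL to an opaque, per-user blinded reference.

    Hypothetical stand-in for Veil's scheme: without secret_key, an
    attacker cannot link the output back to the original URL.
    """
    digest = hmac.new(secret_key, url.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"/blinded/{digest}"

# Two users with different secret keys (illustrative values)
key_alice = b"alice-secret-key"
key_bob = b"bob-secret-key"

# The same page blinds to different references for each user, so
# artifacts left on one machine cannot be correlated with another's.
ref_a = blind_url(key_alice, "https://example.com/page.html")
ref_b = blind_url(key_bob, "https://example.com/page.html")
assert ref_a != ref_b
```

Because the function is deterministic for a given key, the page’s JavaScript can recompute the same reference on every visit, while the cleartext URL never appears in what the browser stores.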

Where Web Browser Protections Fail

The use of blinding servers differentiates Veil from existing private browsing modes, which commonly use the file system or SQLite database to store a session’s data. However, these tools don’t completely delete that information when the session ends.

Curious individuals can also learn about a private browsing mode session by obtaining a webpage state using random access memory (RAM) reflections after the session’s termination. Such weaknesses make it difficult to fully protect users’ privacy when they’re using a private browsing mode.

In the paper, the researchers noted that the way browsers work also contributes to security gaps. “Web browsers are complicated platforms that are continually adding new features (and thus new ways for private information to leak),” they wrote. “As a result, it is difficult to implement even seemingly straightforward approaches for strengthening a browser’s implementation of incognito modes.”

Hope for the Future

The researchers asserted that Veil can have numerous practical applications for helping developers protect users’ digital privacy when browsing the web. For example, they envision developers of a whistleblowing service using the system to prevent employers from tracking visits to the site on employees’ workstations.

Veil can’t protect users’ privacy in every scenario, however. It only defends against local attackers who access a user’s computer after a private browsing session has ended. The system is powerless if a bad actor compromises the computer during a protected session and uses a keylogger to exfiltrate sensitive information. These limits highlight the importance of users and organizations taking appropriate steps to protect themselves against phishing attacks and other digital threats.
