March 14, 2017 By Douglas Bonderud 2 min read

Data is a valuable resource for corporations. Beyond the information generated by workstations, mobile devices and online transactions, companies now leverage social data to get a better sense of consumer buying habits, personal preferences and transaction histories.

But in many cases, the nature and purpose of this data collection isn’t made clear — and law enforcement agencies are now tapping third-party data-mining operations to purchase specific data related to potentially criminal activity and to design surveillance tools.

It’s no surprise that Facebook ranks among the most sought-after data destinations. According to TechCrunch, the company recently changed its policy to explicitly forbid developers from using social media data for this purpose. But will this really put the brakes on sneaky surveillance operations?

Social Data Is All Around

Corporations have the power to hurt or help consumers who want better protection of their own data. Law enforcement agencies don’t need custom-built tools to carry out surveillance; mobile apps often collect and aggregate consumer data that is then sold off to marketing agencies and could be used for other purposes as well.

Direct law enforcement requests are also on the rise. In February, the Bentonville, Arkansas, Police Department sought a warrant for Amazon Echo interactions initiated by a murder suspect. Additionally, the Financial Times reported that more than 200 million wearable devices were provided to employees by their organizations in 2016, often without any kind of user agreement about how, when and why tracking data could be shared.

Line in the Sand

Part of the problem here stems from consumers themselves, since many grant blanket permissions to mobile apps and don’t read user agreements before they sign off on wearable devices. But there’s another layer: the anonymous collection of data that is then repackaged and repurposed as valuable insight for marketing or police agencies.

Facebook, Twitter and other social sites are a veritable gold mine for third parties looking to grab information and make a quick buck. Engadget explained that the ACLU recently called out both Facebook and Twitter for not doing enough to combat this problem. Marketing firms had mined both sites for information about protesters’ posts, locations and identities, then sold that data off to law enforcement.

While Twitter already has a hard-and-fast rule in place, Facebook historically operated under a “wait and see” model — if problems were reported, the company clamped down on social data access. But thanks to increased pressure from the ACLU, Color of Change and the Center for Media Justice, the network has rewritten its policies to make it clear that developers cannot “use data obtained from us to provide tools that are used for surveillance,” The New York Times reported.

Moving Forward

It’s a solid first step, but now the real test begins: Will action follow words? While Facebook already uses manual and automated detection to track down unsanctioned data use, the ACLU argued that a more proactive approach is required. The social media site countered that it’s already doing just that, meaning there may be little impetus for change.

Social pressure has pushed one of the biggest social media sites in the world to explicitly forbid the use of data for surveillance tools. It’s a timely move since smartphones, wearables and other mobile devices are now being used to track everything from employee activity to protester activism.

With the change, however, comes increased surveillance of the social site itself — will it shine a light on surreptitious data collection or turn a blind eye when it comes to stopping surveillance?
