September 22, 2014 By Douglas Bonderud 2 min read

The marks are in, and they’re not good: According to Naked Security, a new app study by the Global Privacy Enforcement Network (GPEN) found that just 15 percent of all apps get a passing grade when it comes to data handling and privacy. Data from Gartner, meanwhile, predicts that over 75 percent of mobile applications will fail basic business-level security tests through 2015. So how do companies make sure their apps aren’t flunking out?

New App Study: D- For Privacy

The GPEN study looked at over 1,200 apps and found more than a few problems. First, 85 percent of those tested didn’t provide “clear information on how the app gathers, uses and shares private data on the user, to the extent that the user could feel confident in their understanding of how it works.”

What’s more, 30 percent of apps didn’t provide any kind of privacy warning or information, and more than three-quarters asked for at least one permission, such as device location or identification data.

A full 10 percent wanted access to the device’s camera, and almost as many tried to gain access to contact lists. Part of the problem is user expectation: “Free” apps come with the expectation that they’ll try to access some private information or make money through in-app advertising. As regulation of the paid app market increases, more free applications arrive to fill the gaps, making it harder for companies to separate “functional” from “fraudulent.”

More Work Needed

According to Gartner, 90 percent of enterprises already use third-party commercial applications for their mobile bring-your-own-device (BYOD) strategy, and “app stores are filled with applications that mostly prove their advertised usefulness.” The problem? Three-quarters of these apps also fail basic security tests, leading to the prediction that, by 2017, the bulk of endpoint breaches will target smartphones and tablets.

Consider the recent Android Browser app breach, as reported by IGN. A flaw allowed attackers to inject malicious JavaScript code into the browser itself, letting them steal passwords and other information — and this is just the beginning.

To combat these types of mobile app issues, Gartner says more work is needed in areas such as static and dynamic application security testing as well as behavioral analysis tools that look for suspicious background actions when apps are running. For example, tests might monitor a file-sharing application that is trying to access device identification data and send it to an unknown IP address.
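A behavioral rule of the kind described above can be sketched as a simple check. This is a minimal illustration only, not any vendor’s actual tooling; the action names, host list and event fields are all hypothetical:

```python
# Minimal sketch of a behavioral-analysis rule: flag an app whose runtime
# behavior does not match its declared purpose. All names are hypothetical.

# Actions a file-sharing app legitimately performs (assumed for illustration)
EXPECTED_BEHAVIOR = {"read_storage", "network_upload"}

# Destinations the organization already trusts (assumed)
KNOWN_HOSTS = {"sync.example-fileshare.com"}

def flag_suspicious(events):
    """Return events where the app performs an action outside its declared
    purpose or contacts an unknown host."""
    suspicious = []
    for event in events:
        unexpected_action = event["action"] not in EXPECTED_BEHAVIOR
        unknown_host = event.get("dest_host") not in KNOWN_HOSTS | {None}
        if unexpected_action or unknown_host:
            suspicious.append(event)
    return suspicious

# Example: a file-sharing app reads device identification data and sends it
# to an unknown IP address -- exactly the pattern described above.
events = [
    {"action": "read_storage", "dest_host": None},
    {"action": "read_device_id", "dest_host": "203.0.113.9"},
]
print(flag_suspicious(events))
# -> [{'action': 'read_device_id', 'dest_host': '203.0.113.9'}]
```

Real tools combine many such rules with static and dynamic analysis, but the core idea is the same: compare observed behavior against a declared purpose.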

A Better Report Card

So how can apps score higher on privacy and security report cards? In large part, change must come from companies and users. As it stands, free apps multiply at a ferocious rate because they are consumed just as quickly. In many cases, employees are willing to risk “slight” privacy violations in exchange for ease of use.

Companies are encouraged to have a zero-tolerance policy when it comes to both free and paid apps. Unless permissions directly relate to an app’s function, they must be rejected. Opting for paid apps can help minimize risk, but only if businesses commit to vetting and scanning these apps just as rigorously as if they were created in-house. Simply put, anything that looks like a security issue is a security issue and must be treated as such.
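A zero-tolerance policy like this can be expressed as a whitelist check during app vetting. The sketch below assumes a hypothetical permission model and app categories; it is not any real platform’s API:

```python
# Sketch of a zero-tolerance permission check: reject any app requesting
# permissions beyond what its function requires. All names are hypothetical.

# Permissions considered necessary for each app category (assumed)
ALLOWED = {
    "file_sharing": {"read_storage", "network"},
    "calendar": {"read_calendar", "network"},
}

def vet_app(category, requested_permissions):
    """Reject the app if it requests anything outside its category's
    functional whitelist; otherwise accept it."""
    extra = set(requested_permissions) - ALLOWED.get(category, set())
    return ("reject", sorted(extra)) if extra else ("accept", [])

# A file-sharing app that also wants camera and contact access fails.
print(vet_app("file_sharing", ["read_storage", "network", "camera", "contacts"]))
# -> ('reject', ['camera', 'contacts'])

# A calendar app requesting only functional permissions passes.
print(vet_app("calendar", ["read_calendar", "network"]))
# -> ('accept', [])
```

The design choice here is deliberate: the default answer for an unrecognized permission is rejection, which is exactly what “zero tolerance” means in practice.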

Gartner’s data and the new app study make it clear that applications get a failing grade when it comes to user privacy and security. It’s a massive market, however, which means any real change must come from within, as companies and users stop accepting privacy and security failures as the price of convenience.
