November 22, 2013 | By Diana Kelley | 3 min read

This is a weekly post where we address questions of interest to the Application Information Security Community. To that end, we’d love to hear your questions! Please Tweet us with the hashtag #ThinkAppSec or leave us a comment below and we’ll pick one or two questions from that list.

This week Jason Bellomy and I had the opportunity to address the CISO Executive Breakfast sessions in Washington, DC, and Pittsburgh, PA. These questions were inspired by discussions at those sessions.

 

1. Will the legal landscape change if software vendors can be sued without damages or loss being proven?

This question has no easy answer, and the answer will evolve as the case law does. “Software vendors have traditionally refused to take responsibility for the security of their software, and have used various risk allocation provisions of the Uniform Commercial Code (U.C.C.) to shift the risk of insecure software to the licensee.” Early cases that attempted to sue vendors for insecure software, like Chatlos Sys., Inc. v. Nat’l Cash Register Corp., failed. But the tide may be turning, and tort law may help customers prove damages against software vendors in the future if lack of due care and financial or economic damage can be proven.

Under Section 5 of the FTC Act, “unfair or deceptive acts or practices in or affecting commerce” are prohibited. And under Section 8 of the Federal Deposit Insurance Act, the Board has the authority to take appropriate action when unfair or deceptive acts or practices are discovered. One of the first sets of charges the FTC brought relating to unfair or deceptive acts was against BJ’s. In that case credit card data was stolen and fraudulent charges were made on the accounts, so there was loss or damage involved. But the recent settlement between the FTC and HTC America came about with no direct loss: the software on the HTC phones was quietly vulnerable, but no damages were claimed, just the potential for loss. And Cardinal Health sued software maker AllScripts because the medical records software was not compliant with new federal rules, not because actual patient information was lost.

It looks like the landscape is definitely changing for software vendors, but what the new rules will be depends, in part, on how the courts rule.

2. What is PII – how much can the definition expand?

Classifying some information as personal and sensitive is pretty straightforward – birthdate? Social Security number? mother’s maiden name? We know those are all considered personal and sensitive. In the healthcare world, any health-related personally identifiable data comes under the classification of PHI (protected health information).

But can this expand? Are shopping records kept by websites like Amazon and by brick-and-mortar grocery stores potentially PII? What about your Netflix viewing history?

The US General Services Administration updated the definition of PII in the appendix of OMB M-10-23 (Guidance for Agency Use of Third-Party Websites and Applications) to:

“Information that can be used to distinguish or trace an individual’s identity, either alone or when combined with other personal or identifying information that is linked or linkable to a specific individual. The definition of PII is not anchored to any single category of information or technology. Rather, it requires a case-by-case assessment of the specific risk that an individual can be identified.”

Which is very broad.

Now consider overlay analytics from a loyalty program that can assess insurance risk from shopping habits: “customers who drink lots of milk and eat lots of red meat are very, very good car insurance risks versus those who eat lots of pasta and rice, fill up their petrol at night, and drink spirits.” What else might one be able to discern from someone’s grocery list? Whether someone is vegan, and possibly even a religious affiliation – details that are pretty personal. And on a smart grid, electricity usage information can indicate when the customer is home and, perhaps, how late they stay up.

It’s possible that, in the future, more companies may need to apply PCI-type card data protections to a host of their analytics data.
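To make that idea concrete, here is a minimal sketch (in Python, with hypothetical field names such as loyalty_card_number and a placeholder secret key) of the kind of PCI-style tokenization step a company might apply to loyalty-program records before they land in an analytics store, so analysts work with pseudonymous tokens rather than raw identifiers. This is an illustration under stated assumptions, not a prescribed implementation.

```python
# Sketch: pseudonymize loyalty records before analytics storage.
# Field names and key handling are hypothetical placeholders.
import hmac
import hashlib

SECRET_KEY = b"rotate-and-store-this-in-a-vault"  # placeholder; use a managed secret in practice


def tokenize(value: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()


def prepare_for_analytics(record: dict) -> dict:
    """Drop direct identifiers; keep a token plus the behavioral fields analysts need."""
    return {
        "customer_token": tokenize(record["loyalty_card_number"]),
        "purchases": record["purchases"],  # behavioral data used for analysis
        "store_id": record["store_id"],
    }


raw = {
    "loyalty_card_number": "4417-1234-5678-9012",
    "name": "Jane Shopper",  # direct identifier, dropped before analytics
    "purchases": ["milk", "red meat", "spirits"],
    "store_id": "PIT-042",
}
print(prepare_for_analytics(raw))
```

The point of the sketch is the design choice, not the specific library calls: identifiers are swapped for keyed tokens at ingest, so the analytics data set on its own is harder to link back to a specific shopper.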

For now, best practice is to disclose to consumers what your company is gathering and how that data will be used and stored. Then make sure you adhere to your own policy.

 

Previous Weeks

Week 1 – What is the importance of software security in supply chain management?

Week 2 – Who Should be Responsible for Application Security Testing?

Week 2 – Can “generated code” be tested?

Week 3 – How do we secure application vulnerabilities and code development, particularly for mobile and social applications that are built by business units or reside on the cloud?

Week 3 – As a CISO, how can I control my organization’s testing methodologies, change management and deployment processes, without compromising on quality and project timelines?

 

Submit your questions via Twitter using #ThinkAppSec

 
