September 11, 2018 By Mark Stone 4 min read

Law firms tasked with analyzing mounds of data and interpreting dense legal texts can vastly improve their efficiency by training artificial intelligence (AI) tools to complete this processing for them. While AI is making headlines in a wide range of industries, legal AI may not come to mind for many. But the technology, which is already prevalent in the manufacturing, cybersecurity, retail and healthcare sectors, is quickly becoming a must-have tool in the legal industry.

Due to the sheer volume of sensitive data belonging to both clients and firms themselves, legal organizations are in a prickly position when it comes to their responsibility to uphold data privacy. Legal professionals are still learning what the privacy threats are and how they intersect with data security regulations. For this reason, it’s critical to understand security best practices for operations involving AI.

Before tackling the cybersecurity implications, let’s explore some reasons why the legal industry is such a compelling use case for AI.

How Do Legal Organizations Use AI?

If you run a law firm, imagine how much more efficient you could be if you could train your software to recognize and predict patterns that not only improve client engagement, but also streamline the workflow of your legal team. Or what if that software could learn to delegate tasks to itself?

With some AI applications already on the market, this is only the beginning of what the technology can do. For example, contract analysis automation solutions can read contracts in seconds, highlight key information visually with easy-to-read graphs and charts, and get “smarter” with each contract reviewed. Other tools use AI to scan legal documents, case files and decisions to predict how courts will rule in tax decisions.
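The key-term extraction these tools perform can be sketched in miniature. The snippet below is purely illustrative (it is not any vendor's API): it uses simple pattern matching to pull dates, dollar amounts and notice periods from a contract, the kind of information a commercial tool would surface with trained language models.

```python
import re

# Hypothetical contract text for illustration only
CONTRACT = """
This Agreement is entered into on January 5, 2018 between Acme Corp
("Supplier") and Widget LLC ("Customer"). The total fee is $25,000.00,
payable within 30 days. Either party may terminate with 60 days notice.
"""

def extract_terms(text: str) -> dict:
    """Return dates, dollar amounts and day-count periods found in the text."""
    return {
        "dates": re.findall(r"[A-Z][a-z]+ \d{1,2}, \d{4}", text),
        "amounts": re.findall(r"\$[\d,]+(?:\.\d{2})?", text),
        "periods": re.findall(r"\d+ days", text),
    }

terms = extract_terms(CONTRACT)
print(terms["dates"])    # ['January 5, 2018']
print(terms["amounts"])  # ['$25,000.00']
print(terms["periods"])  # ['30 days', '60 days']
```

A real contract-analysis product replaces these hand-written patterns with models that improve as more contracts are reviewed; the sketch only shows the shape of the output.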

In fact, the use of AI in the legal industry has been around for years, according to Sherry Askin, CEO of Omni Software Systems. Askin has deep roots in the AI field, including work with IBM’s Watson.

“AI is all about increasing efficiency, and is being touted as the next revolution,” she said. “We’ve squeezed as much as we can from human productivity through automation. The next plateau from productivity and the next threshold is AI.”

Why Machine Learning Is Critical

Law is all about words: natural language, the unstructured counterpart of structured, coded data, said Askin. While we know how to handle the coded versions, she explained, the challenge with legal AI is that outputs are tightly tied to the past results described by their inputs. That's where machine learning comes in, predicting how those inputs might change.

Askin compared machine learning to the process of intellectual development by which children soak up new words, paragraphs, long arguments, vocabulary and, most importantly, context. With deep learning, not only are you inputting data, but you're giving the machine context and relevance.

“The machine is no longer a vessel of information,” Askin explained. “It figures out what to do with that information and it can predict things for you.”

Although machines can't make decisions the same way that humans can, the more neural processing and training they undergo, the more sophisticated their learning and deliverables become. Some legal AI tools can process and analyze thousands of lease agreements, doing in seconds what humans would take weeks to accomplish.
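Learning from labeled examples is the core of that capability. As a hedged, toy-sized sketch (real legal AI relies on trained language models, not this), the snippet below sorts lease clauses by topic using nothing but word overlap with a handful of hypothetical training examples:

```python
# Toy clause classifier: label a new clause by how many words it shares
# with labeled example clauses. The labels and examples are invented
# for illustration; production systems learn far richer representations.
TRAINING = {
    "rent":        ["tenant shall pay monthly rent of", "rent is due on the first"],
    "termination": ["either party may terminate this lease", "notice of termination"],
}

def classify(clause: str) -> str:
    """Return the label whose examples share the most words with the clause."""
    words = set(clause.lower().split())
    scores = {
        label: sum(len(words & set(example.split())) for example in examples)
        for label, examples in TRAINING.items()
    }
    return max(scores, key=scores.get)

print(classify("Rent of $1,200 is due on the first of each month"))  # rent
print(classify("The landlord may terminate with 30 days notice"))    # termination
```

The point of the sketch is the workflow, not the method: feed the system labeled past results, and it generalizes to new documents at machine speed.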

How Do Privacy Regulations Impact Legal Firms?

For any industry, protecting privileged client data is a paramount concern. The American Bar Association, which requires practitioners to employ reasonable efforts to prevent unauthorized access to client data, has implemented periodic changes and updates to address the advances of technology. In addition, the Legal Cloud Computing Association (LCCA) issued 21 standards to assist law firms and attorneys in addressing these needs, including testing, limitations on third-party access, data retention policy, encryption, end user authentication and modifications to data.

Askin urged legal organizations to evaluate strategies impacting security and privacy in the context of what they modify or replace.

“I believe this is a major factor in legal because the profession has a deep legacy of expert-led art,” she said. “Traditional IT automation solutions perform best with systematized process and structured data. Unfortunately, systematization and structure are not historically compatible with the practice of law or any other professional disciplines that rely on human intelligence and dynamic reasoning.”

How to Keep Legal AI Tools in the Right Hands

Legal organizations are tempting targets for malicious actors because they handle troves of sensitive and confidential information. Rod Soto, director of security research for Jask, recommended several key strategies: employ defense-in-depth principles at the infrastructure level, train personnel in security awareness and use AI to significantly enhance overall security posture. To protect automated operations conducted by AI, Soto warned, we must understand that while these AI systems are trained to be effective, they can also be steered off course.

“Malicious actors can and will approach AI learning models and will attempt to mistrain them, hence the importance of feedback loops and sanity checks from experienced analysts,” he said. “You cannot trust AI blindly.”

Finally, it’s crucial for legal organizations to understand that AI does not replace a trained analyst.

“AI is there to help the analyst in things that humans have limitations, such as processing very large amounts of alarms or going through thousands of events in a timely manner,” said Soto. “Ultimately, it is upon the trained analyst to make the call. An analyst should always exercise judgment based on his experience when using AI systems.”
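The human-in-the-loop pattern Soto describes can be sketched simply: let the AI auto-handle only what it is confident about, and queue everything else for an analyst. The threshold and alert fields below are illustrative assumptions, not taken from any product.

```python
# Route low-confidence AI verdicts to a human analyst rather than
# trusting the model blindly. Threshold chosen arbitrarily for the sketch.
REVIEW_THRESHOLD = 0.85

def triage(alerts):
    """Split model-scored alerts into auto-handled and analyst-review queues."""
    auto, review = [], []
    for alert in alerts:
        (auto if alert["confidence"] >= REVIEW_THRESHOLD else review).append(alert)
    return auto, review

alerts = [
    {"id": 1, "verdict": "benign",    "confidence": 0.97},
    {"id": 2, "verdict": "malicious", "confidence": 0.62},  # needs a human
]
auto, review = triage(alerts)
print([a["id"] for a in auto])    # [1]
print([a["id"] for a in review])  # [2]
```

The analyst review queue is also where Soto's feedback loop lives: corrected verdicts can be fed back as training data, which guards against the mistraining he warns about.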

Because the pressure to transform is industrywide, profound changes are taking shape to help security experts consistently identify the weakest link in the security chain: people.

“It’s nearly impossible to control all data and privacy risks where decentralized data and human-managed processes are prevalent,” Askin said. “The greater the number of endpoints, the higher the risk of breach. This is where the nature of AI can precipitate a reduction in security and privacy vulnerabilities, particularly where prior IT adoption or data protection practices were limited.”
