September 11, 2018 | By Mark Stone

Law firms tasked with analyzing mounds of data and interpreting dense legal texts can vastly improve their efficiency by training artificial intelligence (AI) tools to complete this processing for them. While AI is making headlines in a wide range of industries, legal AI may not come to mind for many. But the technology, which is already prevalent in the manufacturing, cybersecurity, retail and healthcare sectors, is quickly becoming a must-have tool in the legal industry.

Due to the sheer volume of sensitive data belonging to both clients and firms themselves, legal organizations are in a prickly position when it comes to their responsibility to uphold data privacy. Legal professionals are still learning what the privacy threats are and how they intersect with data security regulations. For this reason, it’s critical to understand security best practices for operations involving AI.

Before tackling the cybersecurity implications, let’s explore some reasons why the legal industry is such a compelling use case for AI.

How Do Legal Organizations Use AI?

If you run a law firm, imagine how much more efficient you could be if you could train your software to recognize and predict patterns that not only improve client engagement, but also streamline the workflow of your legal team. Or what if that software could learn to delegate tasks to itself?

With some AI applications already on the market, this is only the beginning of what the technology can do. For example, contract analysis automation solutions can read contracts in seconds, highlight key information visually with easy-to-read graphs and charts, and get “smarter” with each contract reviewed. Other tools use AI to scan legal documents, case files and decisions to predict how courts will rule in tax decisions.
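
To make that concrete, here is a minimal sketch in Python of the kind of key-term extraction a contract analysis tool performs. The field names and regex patterns are illustrative assumptions; commercial tools learn such patterns from attorney-labeled examples rather than relying on hand-written rules.

```python
import re

# Toy patterns for a few common contract fields. These are illustrative
# assumptions; real products learn extraction rules from labeled contracts.
PATTERNS = {
    "effective_date": re.compile(r"effective as of ([A-Z][a-z]+ \d{1,2}, \d{4})"),
    "term_months": re.compile(r"term of (\d+) months?"),
    "payment": re.compile(r"\$[\d,]+(?:\.\d{2})?"),
}

def extract_key_terms(contract_text: str) -> dict:
    """Pull a handful of key fields out of a plain-text contract."""
    found = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(contract_text)
        if match:
            # Use the captured group if the pattern has one, else the whole match.
            found[field] = match.group(1) if pattern.groups else match.group(0)
    return found

sample = ("This Agreement is effective as of January 5, 2018 and continues "
          "for a term of 24 months at a fee of $12,500.00 per month.")
print(extract_key_terms(sample))
# {'effective_date': 'January 5, 2018', 'term_months': '24', 'payment': '$12,500.00'}
```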

In fact, the use of AI in the legal industry has been around for years, according to Sherry Askin, CEO of Omni Software Systems. Askin has deep roots in the AI field, including work with IBM’s Watson.

“AI is all about increasing efficiency, and is being touted as the next revolution,” she said. “We’ve squeezed as much as we can from human productivity through automation. The next productivity plateau, and the next threshold, is AI.”

Why Machine Learning Is Critical

Law is all about words: natural language is the unstructured counterpart of coded, structured data, said Askin. While we know how to handle the coded versions, she explained, the challenge with legal AI is that outputs are tightly tailored to the past results described by their inputs. That’s where machine learning comes in: it predicts how those inputs might change.

Askin compared machine learning to the process of intellectual development by which children soak up new words, paragraphs, long arguments, vocabulary and, most importantly, context. With deep learning, not only are you inputting data, but you’re giving the machine context and relevance.

“The machine is no longer a vessel of information,” Askin explained. “It figures out what to do with that information and it can predict things for you.”

Although machines can’t make decisions the same way that humans can, the more neural processing and training they conduct, the more sophisticated their learning and deliverables become. Some legal AI tools can process and analyze thousands of lease agreements, doing in seconds what would take humans weeks.
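
As an illustration of that training loop, the sketch below uses scikit-learn (assumed installed) to fit a small clause classifier. The clauses and labels are hypothetical; the point is the shape of the pipeline, where each newly reviewed document becomes another labeled example the model learns from.

```python
# A minimal sketch of the supervised-learning loop behind clause review.
# Real legal AI trains on far larger labeled corpora, but the shape is the
# same: text in, predicted label out, improving as reviewed examples arrive.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical training set: clauses already labeled by attorneys.
clauses = [
    "Tenant shall pay rent of $2,000 on the first of each month.",
    "Landlord may terminate this lease upon sixty days written notice.",
    "Tenant shall maintain liability insurance of at least $1,000,000.",
    "Either party may terminate for material breach after notice and cure.",
]
labels = ["payment", "termination", "insurance", "termination"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, labels)

# The trained model can now triage an unseen clause in milliseconds.
new_clause = ["This agreement may be terminated by either party on 30 days notice."]
print(model.predict(new_clause))  # e.g. ['termination']
```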

How Do Privacy Regulations Impact Legal Firms?

For the legal industry, protecting privileged client data is a paramount concern. The American Bar Association, which requires practitioners to make reasonable efforts to prevent unauthorized access to client data, periodically updates its rules to address advances in technology. In addition, the Legal Cloud Computing Association (LCCA) has issued 21 standards to help law firms and attorneys address these needs, covering testing, limitations on third-party access, data retention policy, encryption, end user authentication and modifications to data.

Askin urged legal organizations to evaluate security and privacy strategies in the context of the processes and systems those strategies modify or replace.

“I believe this is a major factor in legal because the profession has a deep legacy of expert-led art,” she said. “Traditional IT automation solutions perform best with systematized process and structured data. Unfortunately, systematization and structure are not historically compatible with the practice of law or any other professional disciplines that rely on human intelligence and dynamic reasoning.”

How to Keep Legal AI Tools in the Right Hands

Legal organizations are tempting targets for malicious actors because they handle troves of sensitive and confidential information. Rod Soto, director of security research for Jask, recommended several key strategies: employ defense-in-depth principles at the infrastructure level, train personnel in security awareness and use AI to significantly enhance security posture overall. To protect automated operations conducted by AI, Soto warned, we must understand that while these AI systems are trained to be effective, they can also be steered off course.

“Malicious actors can and will approach AI learning models and will attempt to mistrain them, hence the importance of feedback loops and sanity checks from experienced analysts,” he said. “You cannot trust AI blindly.”
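
One way to read that advice in code: before any new batch of samples is folded into the training set, retrain a copy of the model and compare it against an analyst-curated holdout set. The sketch below is an assumption about how such a feedback loop might look, not a description of any particular product; the function name and threshold are illustrative.

```python
# Sanity check against mistraining: retrain on a copy and compare accuracy
# on a trusted, analyst-curated holdout before accepting new samples.
# `model` is assumed to be an already-fitted scikit-learn pipeline.
from sklearn.base import clone

ACCURACY_DROP_LIMIT = 0.05  # tolerated degradation before a human reviews

def vet_new_samples(model, X_train, y_train, X_new, y_new, X_holdout, y_holdout):
    """Return True if retraining with the new samples is safe to accept."""
    baseline = model.score(X_holdout, y_holdout)

    candidate = clone(model)
    candidate.fit(list(X_train) + list(X_new), list(y_train) + list(y_new))
    retrained = candidate.score(X_holdout, y_holdout)

    if baseline - retrained > ACCURACY_DROP_LIMIT:
        # Possible mistraining: route the batch to an experienced analyst.
        return False
    return True
```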

Finally, it’s crucial for legal organizations to understand that AI does not replace a trained analyst.

“AI is there to help the analyst in areas where humans have limitations, such as processing very large amounts of alarms or going through thousands of events in a timely manner,” said Soto. “Ultimately, it is upon the trained analyst to make the call. An analyst should always exercise judgment based on his experience when using AI systems.”
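
A simplified sketch of that division of labor: the model scores and ranks alarms at machine speed, while the analyst reviews only the top of the queue and makes the final decision. All names and fields below are illustrative assumptions.

```python
# AI-assisted alert triage: a model scores every alarm, the queue is sorted,
# and only the riskiest items reach the analyst, who makes the final call.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    description: str
    risk_score: float  # produced by a trained model, 0.0 to 1.0

def triage(alerts: list[Alert], top_n: int = 20) -> list[Alert]:
    """Rank thousands of alerts in milliseconds; humans review the top slice."""
    return sorted(alerts, key=lambda a: a.risk_score, reverse=True)[:top_n]

queue = [Alert(f"A-{i}", "suspicious login", score)
         for i, score in enumerate([0.12, 0.97, 0.55, 0.88])]
for alert in triage(queue, top_n=2):
    # The analyst exercises judgment on each surfaced alert.
    print(alert.alert_id, alert.risk_score)
```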

Because the pressure to transform is industrywide, profound changes are taking shape to help security experts consistently identify the weakest link in the security chain: people.

“It’s nearly impossible to control all data and privacy risks where decentralized data and human-managed processes are prevalent,” Askin said. “The greater the number of endpoints, the higher the risk of breach. This is where the nature of AI can precipitate a reduction in security and privacy vulnerabilities, particularly where prior IT adoption or data protection practices were limited.”
