September 11, 2018 By Mark Stone 4 min read

Law firms tasked with analyzing mounds of data and interpreting dense legal texts can vastly improve their efficiency by training artificial intelligence (AI) tools to complete this processing for them. While AI is making headlines in a wide range of industries, legal AI may not come to mind for many. But the technology, which is already prevalent in the manufacturing, cybersecurity, retail and healthcare sectors, is quickly becoming a must-have tool in the legal industry.

Due to the sheer volume of sensitive data belonging to both clients and firms themselves, legal organizations are in a prickly position when it comes to their responsibility to uphold data privacy. Legal professionals are still learning what the privacy threats are and how they intersect with data security regulations. For this reason, it’s critical to understand security best practices for operations involving AI.

Before tackling the cybersecurity implications, let’s explore some reasons why the legal industry is such a compelling use case for AI.

How Do Legal Organizations Use AI?

If you run a law firm, imagine how much more efficient you could be if you could train your software to recognize and predict patterns that not only improve client engagement, but also streamline the workflow of your legal team. Or what if that software could learn to delegate tasks to itself?

With some AI applications already on the market, this is only the beginning of what the technology can do. For example, contract analysis automation solutions can read contracts in seconds, highlight key information visually with easy-to-read graphs and charts, and get “smarter” with each contract reviewed. Other tools use AI to scan legal documents, case files and decisions to predict how courts will rule in tax decisions.
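To make the idea concrete, here is a minimal, hypothetical sketch of what a first pass of automated contract review might look like: scan raw contract text for a handful of common provisions and surface dates and dollar amounts for a reviewer. Real products rely on trained models rather than hand-written patterns; the clause list and patterns below are illustrative assumptions only.

```python
# Illustrative sketch of automated contract review (not any vendor's product).
# It scans raw contract text for a few common provisions and pulls out dates
# and dollar amounts so a reviewer can see key terms at a glance.
import re

KEY_CLAUSES = ["termination", "indemnification", "confidentiality", "governing law"]

def review_contract(text: str) -> dict:
    """Return a simple summary of clauses, dates and amounts found in a contract."""
    lower = text.lower()
    return {
        "clauses_found": [c for c in KEY_CLAUSES if c in lower],
        "dates": re.findall(r"\b(?:January|February|March|April|May|June|July|"
                            r"August|September|October|November|December)\s+\d{1,2},\s+\d{4}\b", text),
        "amounts": re.findall(r"\$\d[\d,]*(?:\.\d{2})?", text),
    }

sample = ("This Agreement is effective January 1, 2019. Either party may invoke "
          "termination with 30 days notice. Fees shall not exceed $25,000.")
print(review_contract(sample))
```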

In fact, the use of AI in the legal industry has been around for years, according to Sherry Askin, CEO of Omni Software Systems. Askin has deep roots in the AI field, including work with IBM’s Watson.

“AI is all about increasing efficiency, and is being touted as the next revolution,” she said. “We’ve squeezed as much as we can from human productivity through automation. The next plateau from productivity and the next threshold is AI.”

Why Machine Learning Is Critical

Law is all about words and natural language, the unstructured counterpart to coded, structured data, said Askin. While we know how to handle coded, structured information, she explained, the challenge with legal AI is that outputs are tightly tied to the past results described by their inputs. That's where machine learning comes in: it can predict how those inputs might change.

Askin compared machine learning to the process of intellectual development by which children soak up new words, paragraphs, long arguments, vocabulary and, most importantly, context. With deep learning, not only are you inputting data, but you’re giving the machine context and relevance.

“The machine is no longer a vessel of information,” Askin explained. “It figures out what to do with that information and it can predict things for you.”

Although machines can’t make decisions the same way that humans can, the more neural processing and training they conduct, the more sophisticated their learning and deliverables can become. Some legal AI tools can process and analyze thousands of lease agreements, doing in seconds what humans would do in weeks.
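As a toy illustration of that learning loop, the sketch below trains a small text classifier on a few labeled lease clauses and asks it to label a clause it has never seen. The tooling (scikit-learn) and the example data are assumptions for illustration; production legal AI systems operate at a vastly larger scale.

```python
# Toy illustration of the machine learning idea described above: a model that
# learns from labeled examples of legal text and then predicts labels for new
# text. Requires scikit-learn (an assumption; the article names no tooling).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: lease clause text labeled by clause type.
texts = [
    "Tenant shall pay rent on the first day of each month",
    "Landlord may terminate this lease upon sixty days written notice",
    "Tenant shall maintain liability insurance of at least one million dollars",
    "This lease may be terminated if rent is unpaid for thirty days",
]
labels = ["payment", "termination", "insurance", "termination"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model generalizes from the examples it has seen to new, unseen language.
print(model.predict(["Either party may end this agreement with prior notice"]))
```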

How Do Privacy Regulations Impact Legal Firms?

For any industry, protecting privileged client data is a paramount concern. The American Bar Association, which requires practitioners to make reasonable efforts to prevent unauthorized access to client data, has periodically updated its guidance to keep pace with advances in technology. In addition, the Legal Cloud Computing Association (LCCA) issued 21 standards to help law firms and attorneys address these needs, covering areas such as testing, limits on third-party access, data retention policy, encryption, end-user authentication and modifications to data.
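For a concrete sense of one control on that list, the hedged sketch below shows encrypting a client document at rest using Python's third-party cryptography package. The LCCA standards do not prescribe specific tooling; this is purely illustrative.

```python
# Illustrative sketch of one control the LCCA standards mention: encrypting
# client documents at rest. Uses the third-party "cryptography" package
# (an assumption; the standards do not name specific tools).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keep this in a key management system
cipher = Fernet(key)

document = b"Privileged and confidential: client engagement memo"
encrypted = cipher.encrypt(document)   # ciphertext safe to write to disk or cloud storage
restored = cipher.decrypt(encrypted)   # only holders of the key can read it

assert restored == document
```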

Askin urged legal organizations to evaluate strategies impacting security and privacy in the context of what they modify or replace.

“I believe this is a major factor in legal because the profession has a deep legacy of expert-led art,” she said. “Traditional IT automation solutions perform best with systematized process and structured data. Unfortunately, systematization and structure are not historically compatible with the practice of law or any other professional disciplines that rely on human intelligence and dynamic reasoning.”

How to Keep Legal AI Tools in the Right Hands

Legal organizations are tempting targets for malicious actors because they handle troves of sensitive and confidential information. Rod Soto, director of security research for Jask, recommended several key strategies: employ defense-in-depth principles at the infrastructure level, train personnel in security awareness and use AI to significantly enhance the overall security posture. To protect automated operations conducted by AI, Soto warned, we must understand that while these AI systems are trained to be effective, they can also be steered off course.

“Malicious actors can and will approach AI learning models and will attempt to mistrain them, hence the importance of feedback loops and sanity checks from experienced analysts,” he said. “You cannot trust AI blindly.”
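One way to picture the feedback loops and sanity checks Soto describes is a simple human-in-the-loop gate: the model's high-confidence calls are acted on, while low-confidence calls are queued for an experienced analyst before they influence anything downstream. The threshold, field names and stand-in model below are illustrative assumptions, not a description of any particular product.

```python
# Hedged sketch of a feedback-loop sanity check: before a model's alert triage
# decisions are trusted (or fed back into retraining), low-confidence calls are
# routed to a human analyst. Threshold and stand-in model are assumptions.
CONFIDENCE_THRESHOLD = 0.90

def triage(alerts, model):
    """Split model-triaged alerts into auto-handled and analyst-review queues."""
    auto_handled, needs_review = [], []
    for alert in alerts:
        label, confidence = model(alert)   # model returns (label, confidence score)
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_handled.append((alert, label))
        else:
            needs_review.append((alert, label, confidence))
    return auto_handled, needs_review

# Example with a stand-in "model" that scores alerts by a simple rule.
def toy_model(alert):
    return ("malicious", 0.95) if "powershell" in alert else ("benign", 0.60)

auto, review = triage(["powershell download detected", "unusual login time"], toy_model)
print(auto)    # high-confidence calls the system acts on
print(review)  # low-confidence calls an experienced analyst verifies first
```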

Finally, it’s crucial for legal organizations to understand that AI does not replace a trained analyst.

“AI is there to help the analyst in things that humans have limitations, such as processing very large amounts of alarms or going through thousands of events in a timely manner,” said Soto. “Ultimately, it is upon the trained analyst to make the call. An analyst should always exercise judgment based on his experience when using AI systems.”

Because the pressure to transform is industrywide, profound changes are taking shape to help security experts consistently identify the weakest link in the security chain: people.

“It’s nearly impossible to control all data and privacy risks where decentralized data and human-managed processes are prevalent,” Askin said. “The greater the number of endpoints, the higher the risk of breach. This is where the nature of AI can precipitate a reduction in security and privacy vulnerabilities, particularly where prior IT adoption or data protection practices were limited.”
