
Risk Management for Legal AI Solutions

Every mature organization has a risk management program in place. What do these programs say about the use of AI solutions?

How can risk management be applied to legal AI solutions?

Last week, the e-discovery company Casepoint revealed that it had been the victim of a severe ransomware attack. The company, whose customers include numerous law firms and US government organizations with highly sensitive information, suffered the exfiltration of terabytes of its customers’ files. While data breaches are becoming increasingly common, it’s especially noteworthy when such sensitive or privileged information is involved.

Risk management, particularly with respect to information security, has long been an integral part of mature leadership. Within the legal industry, elements like privilege, ethical obligations, and firm reputation make attention to risk even more essential. High-profile events like the Panama Papers investigation that embroiled Mossack Fonseca have served as clear examples of what can happen to legal organizations when things do go wrong.

While AI solutions and tools introduce new risk vectors (and make existing ones more efficient for bad actors), the related risk management process remains, for the most part, the same.

There are numerous frameworks for risk management, such as ISO 31000, the Committee of Sponsoring Organizations of the Treadway Commission enterprise risk management (COSO ERM) framework, and the National Institute of Standards and Technology (NIST) Cybersecurity Framework. Regardless of the specific framework used, most risk management includes the following elements:

  1. Identification
  2. Assessment
  3. Treatment (avoidance, mitigation, transfer, acceptance)
  4. Response Plan
  5. Training
  6. Improvement

How does AI-enabled legal tech relate to this? Let’s take a look at how the procurement and use of these tools in the legal industry fits into a strong risk management framework.

Identification

The risk identification stage involves identifying potential risks that could impact a firm’s operations and reputation. With respect to legal AI solutions, risk identification should consider both risks specific to a particular tool (e.g., hallucinations from a generative AI product) and risks that apply more broadly (e.g., processing of privileged documents). Firms should consider risks across a range of categories, including ethical risks, data quality, model explainability and transparency, legal and regulatory compliance, and financial risks.

Some particularly salient risks related to legal AI solutions include violation of ethical obligations (such as the Model Rules of Professional Conduct in the United States), loss of privilege, unauthorized access to client information, reputational damage, and financial loss.

Data Flow Mapping

An important step in the risk identification process is data flow mapping. The data flow mapping process involves identifying and documenting the movement of data through an organization’s network, from its origin to its final destination. When considering legal AI tools, this process can help identify situations where sensitive data is being collected, stored, or transmitted in an insecure manner (or to an inappropriate third party), potentially exposing it to unauthorized access or data breaches.

Data flow mapping can also help organizations comply with data protection and privacy laws, such as the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), by demonstrating that they are appropriately managing personal data and maintaining data privacy. The mapping process also allows firms to see what jurisdictions are involved in the flow of data, giving them the ability to better understand regulatory requirements and the related risks to be considered.
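To make this concrete, the sketch below shows one minimal way a data flow map could be represented in code and checked for red flags. The record fields, system names, and approved-jurisdiction list are all hypothetical; a real inventory would live in a dedicated data governance tool.

```python
from dataclasses import dataclass

# Hypothetical record for one hop in a data flow map.
@dataclass
class DataFlow:
    source: str          # system where the data originates
    destination: str     # system or vendor receiving the data
    data_category: str   # e.g., "client documents", "public filings"
    encrypted: bool      # encrypted in transit?
    jurisdiction: str    # where the destination stores/processes the data

# Illustrative entries only; real ones would come from an inventory exercise.
flows = [
    DataFlow("DMS", "ai-vendor.example.com", "client documents", True, "US"),
    DataFlow("DMS", "analytics.example.com", "client documents", False, "SG"),
]

APPROVED_JURISDICTIONS = {"US", "EU", "UK"}  # assumption for illustration

# Flag hops where sensitive data moves insecurely or to an unvetted jurisdiction.
for f in flows:
    if f.data_category == "client documents":
        if not f.encrypted:
            print(f"RISK: unencrypted transfer {f.source} -> {f.destination}")
        if f.jurisdiction not in APPROVED_JURISDICTIONS:
            print(f"RISK: {f.destination} processes data in {f.jurisdiction}")
```

Even a simple inventory like this makes the questions in the identification stage answerable: where client data goes, how it travels, and which jurisdictions touch it.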

Assessment

Once risks have been identified, the next step is to assess their likelihood and potential impact on the firm. This can be done through risk modeling and analysis, rating each risk on how likely it is to occur and how severe the consequences would be if it did.
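One common way to operationalize this is a likelihood-impact matrix, where each risk’s score is the product of its two ratings. The sketch below illustrates the idea; the 1–5 scales, example risks, and prioritization threshold are assumptions, not a prescribed methodology.

```python
# Hypothetical 1-5 (likelihood, impact) ratings for illustration only.
risks = {
    "hallucinated citations in work product": (4, 4),
    "privileged documents sent to third-party model": (2, 5),
    "vendor data breach": (2, 5),
}

THRESHOLD = 12  # assumption: reflects a firm-specific risk appetite

# Score = likelihood x impact; rank and flag anything above the threshold.
for name, (likelihood, impact) in sorted(
    risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True
):
    score = likelihood * impact
    flag = "PRIORITIZE" if score >= THRESHOLD else "monitor"
    print(f"{score:>2}  {flag:<10} {name}")
```

Scores like these feed directly into the treatment decisions that follow.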

Treatment

After assessing the risks related to AI solutions, firms need to determine what type of treatment is appropriate. Many factors contribute to this decision, including the firm’s overall risk tolerance, the results of the risk assessment process, and the existing control environment. The possible treatments of risk are:

  1. avoidance
  2. mitigation
  3. transfer
  4. acceptance

Avoidance

To address risks related to AI solutions, a firm might decide to avoid the tool or product entirely. While this addresses the risks related to using AI, it introduces risks related to not using the tools, such as loss of market share, increased costs relative to competitors, and human error.

Firms might instead take a partial avoidance approach, limiting the use of AI solutions to non-client data or public data only. For example, a firm may opt to use AI solutions with public court filings, but prohibit use of the product with any client data or documents. This approach also falls under the “mitigation” treatment to the extent that it involves establishing internal policies related to the tools or developing compensating controls.

Mitigation

Mitigating risks is the most common treatment, as it allows firms to reduce the likelihood and/or severity of risks to an acceptable level. Mitigation measures will likely include both technical controls (such as firewalls) and governance controls (such as policies and procedures or board oversight).

Many organizations, including law firms, have seen the importance of developing internal guidance for the use of AI tools. In the early days of ChatGPT, Samsung employees disclosed trade secrets through their use of the tool; in response, Samsung banned all generative AI tools until it could establish appropriate controls.

Company policies relating to appropriate use of AI may fall anywhere on the “allowance” spectrum, from strict prohibition to permissive use. Bright-line rules might be used, such as rules governing the types of documents that can be uploaded. For example, a policy may state that only public documents may be used, while client documents are strictly off-limits. Alternatively, firms may decide to limit the use of AI-enabled systems to a certain subset of firm personnel, such as trained users or specific groups (e.g., library services), to mitigate related risks.

Relying solely on policies is not enough, as people are often the weakest link when it comes to security. Therefore, implementing technical guardrails can help ensure that the desired outcome is achieved. Limiting the use of AI solutions to those that operate entirely within the firm’s control environment can significantly mitigate information security risks, as the storage, transmission, and processing of client or firm data occurs within the firm’s existing technical infrastructure. We developed Kelvin Legal Data OS to be fully functional on a firm’s own infrastructure (or on approved cloud infrastructure within the firm’s own environment) to avoid introducing additional security and data protection risks.
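As a minimal sketch of such a guardrail, the check below enforces the bright-line document rule described above in software rather than on paper. The sensitivity labels and function name are hypothetical, assuming documents already carry a classification label from the firm’s document management system.

```python
# Hypothetical sensitivity labels, assumed to come from the firm's DMS.
ALLOWED_FOR_AI = {"public"}              # e.g., public court filings
BLOCKED_FOR_AI = {"client", "privileged"}

def may_upload_to_ai_tool(label: str) -> bool:
    """Bright-line check: only explicitly allowed labels pass."""
    if label in BLOCKED_FOR_AI:
        return False
    # Default-deny: anything unlabeled or unrecognized is also blocked.
    return label in ALLOWED_FOR_AI

assert may_upload_to_ai_tool("public")
assert not may_upload_to_ai_tool("client")
assert not may_upload_to_ai_tool("unknown")  # default-deny in action
```

The default-deny design choice matters: a guardrail that permits anything not expressly blocked will fail in exactly the cases the policy was written to prevent.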

Transfer

Transfer of risk often occurs through the use of insurance or contractual obligations, such as indemnification. Indemnification and limitation-of-liability terms vary by AI tool, ranging from indemnification of the AI company only with no limitation of liability, to no indemnification at all, to bilateral indemnification with a liability cap based on a set number of months’ fees. Given this variability, it’s important to understand whether transfer of risk related to the use of a specific technology is a viable risk treatment option.

It’s well-accepted that cyber insurance coverage has become increasingly limited. Coverage limitations and exclusions are on the rise, and following a cyberattack, many companies have found that their policy doesn’t actually cover the event. Given the uncertainty surrounding many legal aspects of AI (including copyright, data protection, and explainability and transparency), it may be difficult for firms to find insurers who are willing to offer policies that transfer this risk.

Acceptance

In some cases, firms may choose to accept the risk and its consequences. This may be because the cost of mitigating the risk or transferring it to another party is too high, or because the potential loss from the risk is too low to justify mitigation efforts.

Response Plan

In some cases, despite implementing mitigation strategies, risks may still materialize. A well-defined response and contingency plan is necessary to manage these situations effectively and minimize the impact on the firm.

In order to meet data protection obligations and reduce reputational harm, it’s essential that firms have mapped their data flows (particularly for sensitive and/or client data) across their own systems and all external systems and vendors. Having a mapped flow of data allows firms to know what information has been implicated and to more easily determine how the compromise occurred.
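Continuing the data flow mapping sketch from the identification stage, a mapped inventory directly supports incident response: given a compromised system or vendor, the firm can query which categories of data were implicated. The simplified records and function below are hypothetical illustrations, not a real schema.

```python
# Simplified flow records: (source, destination, data_category).
flows = [
    ("DMS", "ai-vendor.example.com", "client documents"),
    ("billing", "ai-vendor.example.com", "invoices"),
    ("DMS", "research-tool.example.com", "public filings"),
]

def implicated_data(compromised: str) -> set[str]:
    """Return the data categories that touched a compromised system."""
    return {cat for src, dst, cat in flows if compromised in (src, dst)}

# e.g., what must be scoped into breach notifications after a vendor incident?
print(implicated_data("ai-vendor.example.com"))
# e.g. {'client documents', 'invoices'}
```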

Another important element for firms to consider when using AI-enabled legal tools is their business continuity plan in case use of a tool becomes limited or is terminated entirely. The regulatory environment around AI is constantly evolving; the European Council recently approved an updated draft of the AI Act, which would regulate AI systems (and would apply extraterritorially to non-EU companies whose AI output is used in the EU). Growing legislation and regulation could lead AI providers to limit the jurisdictions in which their products are available, which could have a direct or indirect impact on the legal AI solutions available to firms, particularly those with a global footprint.

A current hurdle for many firms using legal AI tools is availability: even firms that have spent hundreds of thousands or even millions of dollars with certain cloud providers have found that some tools are only available intermittently, are rate-limited, or are unavailable for months. This type of availability uncertainty should be considered and addressed as part of a firm’s AI strategy.

Training

The best policies and practices won’t help if firm personnel are not trained to use AI solutions appropriately. All members of the organization should be aware of the risks and of their responsibilities in managing them. Training should give personnel who use AI-enabled tools an understanding of how each tool works, the risks associated with it (as well as ways to mitigate or treat those risks), the limitations of the tool and its output, and all relevant contractual, ethical, and legal/regulatory obligations. In addition, training should include a review of the policies and procedures that relate to legal AI solutions.

Improvement

Effective risk management is a continuous process, not a one-time exercise. While this is true of risk management in general, it’s even more essential for AI, given the rate of change (both technical and regulatory). Firms should regularly evaluate their risk management process, identify areas for improvement, and implement changes to enhance its effectiveness. Changes in legal AI solutions, obligations, and known security risks, as well as the results of the firm’s internal control effectiveness reviews, should all be accounted for in making continuous improvements to controls, policies, and the overall AI strategy.

What’s Next?

The use of AI-enabled legal tools is becoming more prevalent, and the benefits of these tools are clear. However, firms must be aware of the risks associated with these tools and take steps to mitigate them.

Thankfully, we don’t need to re-invent the wheel. By applying the same risk management frameworks that we use for other technologies, we can effectively manage the risks associated with AI-enabled legal tools.



Jillian Bommarito, CPA, CIPP/US/E

Jillian is a Co-Founding Partner at 273 Ventures, where she helps ensure that Kelvin is developed and implemented in a way that is secure and compliant.

Jillian is a Certified Public Accountant and a Certified Information Privacy Professional with specializations in the United States and Europe. She has over 15 years of experience in the legal and accounting industries.

Would you like to learn more about risk management for AI-enabled legal tools? Send your questions to Jillian by email or LinkedIn.
