Last week’s blog post looked at several reasons why litigators should approach generative artificial intelligence tools with caution. They have an unsettling capacity for error. Decisions made by automated tools can be biased in ways that are not readily apparent. AI tools can compromise client confidential information if carelessly deployed. And, let’s face it, only a few people on the planet can confidently explain how artificial intelligence works. When it comes to generative AI, the watchwords are “handle with care.”

With these problems in mind, most of us justifiably believe that artificial intelligence regulation is on the horizon. It's important not to forget, however, that a substantial body of law relevant to the use of artificial intelligence in the delivery of legal services is already on the books. This week's blog post examines, at a high level, some of those laws.

Ethical Dimensions of Artificial Intelligence

For litigators, there are two broad areas of interest relating to artificial intelligence: the ethical use of AI tools when delivering legal services, and the use in judicial proceedings of evidence created in whole or in part by artificial intelligence.

In the area of professional ethics, lawyers using artificial intelligence technology should be concerned about their duties of competence, client communication, protecting client information, supervision, and avoidance of bias.

A quick review of the American Bar Association’s Model Rules of Professional Conduct provides a useful introduction to the ethical issues raised by artificial intelligence in the delivery of legal services.

The duty of competence (Rule 1.1). Lawyers must provide competent representation to their clients. This duty encompasses having the knowledge and skill to use new technologies such as artificial intelligence competently on the client's behalf. According to legal commentator David Lat, "[i]f a lawyer uses a tool that suggests answers to legal questions, he must understand the capabilities and limitations of the tool and the risks and benefits of those answers."

One limitation of current generative AI tools is their tendency to "hallucinate," that is, to invent facts in response to a query. Just last week the New York Times reported the case of a lawyer who used ChatGPT for legal research on a disputed statute of limitations question in a personal injury case. ChatGPT cited several entirely fictional court rulings, which the lawyer included in his brief without checking their authenticity. When opposing counsel were unable to locate the rulings, they brought the matter to the trial judge's attention.

The duty to communicate (Rule 1.4). Lawyers are obliged by Rule 1.4 to discuss with clients the means by which legal services will be provided – including the decision to use (or not use) artificial intelligence when providing legal services.

The duty of confidentiality (Rule 1.6). Lawyers have an ethical obligation to make reasonable efforts to prevent the inadvertent or unauthorized disclosure of confidential client information. In the context of artificial intelligence tools, this duty implies an obligation to ask technology vendors about the data security safeguards in place to protect client information, and to train users so that the technology is not deployed in a manner that creates an unreasonable risk of compromising client confidentiality.

The duty to supervise (Rules 5.1, 5.3). Lawyers must supervise both lawyers and non-lawyers to ensure that legal services are delivered in conformity with the Rules of Professional Conduct. A 2012 amendment to Rule 5.3 changed "nonlawyer assistants" to "nonlawyer assistance," clarifying that the rule reaches non-human assistance such as artificial intelligence technologies.

The duty to avoid bias (Rule 8.4). In jurisdictions that have adopted it, Rule 8.4(g) forbids lawyers from engaging in conduct that discriminates on the basis of race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status, or socioeconomic status. Lawyers must consider whether their use of artificial intelligence technologies is discriminatory.

Artificial Intelligence in Court

The second area of concern for litigators with respect to artificial intelligence is the use of AI-generated evidence in judicial proceedings. Artificial intelligence can create both "good" and "bad" material that litigators will someday offer into evidence in court.

An example of "bad" AI-generated evidence is the so-called deepfake image, which may prove difficult for jurors to distinguish from an authentic photograph. And someday soon litigators may routinely offer "good" AI-generated evidence in civil and criminal matters. Artificial intelligence can create compelling summaries of raw data, or even make assessments of a witness's credibility based on video recordings. No rule of evidence (as yet) specifically addresses these types of AI-generated evidence. However, several relevant evidentiary rules are already on the books.

Decisions relating to the admission of AI-generated evidence would seem to require the resolution of these foundational questions:

  1. Is the evidence relevant? AI-generated evidence will have to meet the threshold consideration of relevance, as well as the requirement that its probative value not be substantially outweighed by the danger of unfair prejudice or confusion.
  2. Can the evidence be authenticated? Authentication may prove to be a stumbling block for proponents of AI-generated evidence. Testimony explaining how the supporting data were collected and processed, how the algorithms that underpin the AI operate and were trained, and how the AI avoids bias and error may all be necessary to authenticate AI-generated evidence.
  3. Does the evidence meet the threshold for admission of expert evidence? In federal courts, Rule of Evidence 702 and the U.S. Supreme Court’s decision in Daubert v. Merrell Dow Pharmaceuticals Inc., 509 U.S. 579 (1993), require that evidence dealing with specialized knowledge meet a rigorous admissibility test. Litigators seeking to introduce AI-generated evidence will have a difficult time meeting Daubert’s admissibility threshold.

Lawyers are only now beginning to work through these (and other) legal issues relating to AI-generated evidence. One early scholarly effort is Artificial Intelligence as Evidence, 19 NW. J. TECH. & INTELL. PROP. 9 (2021), written by noted electronic evidence expert and federal trial judge Paul W. Grimm together with Maura R. Grossman and Gordon V. Cormack.

Looking Ahead

The recent emergence of generative artificial intelligence tools like ChatGPT has captured the imagination of the legal community, stimulating considerable discussion about how these technologies can be used in litigation and other legal services. It's only a matter of time before AI-enhanced legal services become the norm. Until state bar regulators enact AI-specific ethical obligations for lawyers and courts weigh in on the admissibility of AI-generated evidence, lawyers will proceed as they always have: by applying scholarship and professional judgment to make old rules govern new circumstances.