Egregious Errors in Expert Evidence: Ethical Oversight for Experts Who Use Generative AI

By Tyler J. Brewster

Introduction

“The irony,” proclaimed Judge Laura M. Provinzino, United States District Judge for the District of Minnesota, as she ruled on a motion to exclude expert testimony in Kohls v. Ellison.[1] One can only imagine that lawyers in the Minnesota Attorney General’s Office shuddered as they read those words, knowing then that the court would exclude their expert’s testimony.[2] As Judge Provinzino aptly identified, the situation is ironic; the expert—“a credentialed expert on the dangers of AI [Artificial Intelligence] and misinformation”—tainted his own testimony with non‑existent, AI‑generated citations.[3]

In the eyes of the court, the expert’s credibility was shattered; no correction or apology would be enough to redeem his testimony.[4] Judge Provinzino’s admonishment, however, was not limited to the offending expert; the court scolded the proffering attorneys as well.[5] Despite the Minnesota Attorney General’s Office sincerely apologizing for its good-faith failure to catch the “unintentional fake citations in the [expert’s] Declaration,” the court warned of the “personal, nondelegable responsibility” imposed by Federal Rule of Civil Procedure 11.[6] Per Rule 11, lawyers must “validate the truth and legal reasonableness of the papers filed.”[7] Clearly, in the age of AI, blindly accepting and submitting an expert’s report or testimony will not suffice.[8]

As the use of generative AI in expert testimony grows more prevalent, courts and lawyers must develop robust oversight mechanisms and enforcement strategies to address the complex ethical, professional, and practical challenges this technology presents. Recent opinions like Kohls v. Ellison highlight the potential risks and consequences of unchecked AI use in litigation.[9] A balanced approach combining lawyer‑led safeguards and updated legal standards can help ensure experts responsibly integrate generative AI into their testimony.

I. A Lawyer’s Ethical and Professional Responsibilities

Generally, lawyers litigating in federal court have several ethical and professional responsibilities regarding generative AI.[10] Both the Federal Rules of Civil Procedure and the rules of professional conduct impose duties on lawyers who retain experts that utilize generative AI.[11] When derived from the rules of professional conduct, the exact nature and interpretation of these responsibilities vary from state to state, but common principles transcend jurisdictional bounds.[12] To meet their ethical and professional responsibilities, lawyers must consider, among other obligations, the duties of competence, verification of expert testimony, candor to the tribunal, and communication with clients about generative AI use.[13]

A. Duties of Competence & Diligence

One of the primary ethical duties lawyers must uphold is competence, particularly when overseeing expert testimony involving AI.[14] Per Model Rule 1.1, attorneys must “provide competent representation to a client,” including “the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”[15] This duty extends to understanding technologies their experts employ, evaluating the technology’s reliability, and preventing errors.[16] With generative AI use becoming more pervasive, lawyers must understand and forestall potential AI hallucinations and fabricated citations in expert testimony.[17] As Kohls demonstrates, courts increasingly expect lawyers to implement reasonable safeguards against AI‑related errors, regardless of an expert’s credentials.[18]

B. Duty to Verify Expert Testimony

Federal Rule of Civil Procedure 11 imposes verification obligations on attorneys submitting materials to federal courts.[19] As Judge Provinzino emphasized in Kohls, Rule 11 creates a “personal, nondelegable responsibility” for attorneys to “validate the truth and legal reasonableness of the papers filed” in an action.[20] This obligation likely now extends beyond the attorney’s own work product to encompass materials prepared by retained experts.[21] While courts have traditionally permitted attorneys to rely on expert opinions without extensive verification, generative AI’s prevalence may have altered this custom.[22]

The “inquiry reasonable under the circumstances” standard established by Rule 11(b) may be evolving in response to AI‑related challenges.[23] As Kohls suggests, this inquiry “may now require attorneys to ask their witnesses whether they have used AI in drafting their declarations and what they have done to verify any AI-generated content.”[24] This represents a significant shift in verification expectations: attorneys must proactively investigate rather than assume reliability based on an expert’s credentials.[25]

C. Duty of Candor

Beyond verifying expert testimony, lawyers must also adhere to strict standards of candor, ensuring the court receives truthful and accurate representations.[26] Several rules in the Model Rules address a lawyer’s obligation of truthfulness.[27] Because lawyers must “make reasonable efforts to ensure that the [employed nonlawyer’s] conduct is compatible with the professional obligations of the lawyer,” each of the applicable rules must be considered when dealing with an expert’s generative AI use.[28] Rule 3.1 prohibits a lawyer from “bring[ing] or defend[ing] a proceeding, or assert[ing] or controvert[ing] an issue therein, unless there is a basis in law and fact for doing so that is not frivolous.”[29] Likewise:

A lawyer shall not knowingly: (1) make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal . . . or (3) offer evidence that the lawyer knows to be false. If a lawyer, the lawyer’s client, or a witness called by the lawyer, has offered material evidence and the lawyer comes to know of its falsity, the lawyer shall take reasonable remedial measures, including, if necessary, disclosure to the tribunal.[30]

While Model Rule 3.3 is triggered only when a lawyer knowingly offers a falsehood, Model Rule 1.3 separately requires the lawyer to “act with reasonable diligence . . . in representing a client.”[31] Applying these rules to the generative AI context, attorneys must exercise diligence and verify that expert testimony does not contain hallucinated citations or facts.

D. Duty to Communicate with Clients About the Use of AI

Model Rule 1.4 imposes a duty on lawyers to communicate with their clients.[32] Specifically, “a lawyer shall reasonably consult with the client about the means by which the client’s objectives are to be accomplished.”[33] Depending on the circumstances, a lawyer may need to inform clients of their generative AI practices “or obtain informed consent to use a particular generative AI tool.”[34] However, at a minimum, lawyers likely must inform clients when experts rely heavily on generative AI so that the client may make informed decisions.[35]

II. Potential Disciplinary Consequences

In Kohls, Judge Provinzino ultimately excluded the expert’s testimony and denied leave to file an amended expert declaration.[36] Considering that Judge Provinzino repeatedly highlighted the penalty of perjury and alluded to a lawyer’s obligation to verify submissions to the court, the Minnesota Attorney General’s Office and its expert emerged relatively unscathed.[37] Lawyers who fail to meet their ethical obligations to supervise experts, verify submissions, or inform their clients might also face a slew of other consequences, ranging from Rule 11 sanctions and contempt to disciplinary actions and even malpractice claims.[38]

A. Rule 11 Sanctions

As discussed supra, Federal Rule of Civil Procedure 11 imposes an obligation on lawyers to perform a reasonable inquiry under the circumstances as to whether introduced expert testimony is credible and based in law or fact.[39] When attorneys fail to meet this standard, Rule 11 permits courts to “impose an appropriate sanction on any attorney, law firm, or party that violated the rule or is responsible for the violation.”[40] The court may also hold a law firm “jointly responsible for a violation committed by its partner, associate, or employee.”[41] Given the attorney’s responsibility to perform a reasonable inquiry before certifying that “the factual contentions [submitted to the court] have evidentiary support,” the court is most likely to employ this enforcement mechanism.[42]

B. Contempt

Increasingly, federal district courts have been implementing rules regulating generative AI use.[43] Both state and federal court systems have taken varying approaches—some merely require disclosure to the court, others impose restrictions on its use, and some prohibit its use altogether.[44] When lawyers violate these local rules or ignore other directives from federal judges, judges may hold them in contempt of court.[45] Even where local rules do not explicitly extend to an expert’s use of AI, lawyers would be wise to apply those directives to the experts they retain, thereby ensuring compliance.[46]

C. Disciplinary Actions

As every lawyer is well aware, courts with the authority to admit attorneys may also exercise their disciplinary power against any lawyer who violates the rules of professional conduct.[47] Lawyers must pay particular attention to evolving ethics rules about generative AI—new advisory and disciplinary opinions will continue to shape the interpretation of a given state’s rules.[48] A failure to adequately apply existing ethics rules to emerging technology like generative AI may subject lawyers to varying degrees of discipline.[49]

Legal news outlets are rife with examples of lawyers using hallucinated citations and courts disciplining them for it.[50] These examples serve as a pointed reminder: a lawyer must check citations—or else.[51] Despite recurrent warnings, lawyers across the nation continue to earn themselves the “ChatGPT lawyer” label.[52]

For now, improper generative AI use by lawyer‑retained experts may be flying under the radar because of a lack of attorney discipline for such misconduct.[53] But as generative AI use becomes more pervasive across every industry, opposing counsel may scrutinize—and courts may grow less lenient toward—expert testimony and reports created with generative AI.[54] Until bar guidance indicates otherwise, lawyers ought to mitigate disciplinary risk by rigorously reviewing their experts’ work product, including verifying all cited sources.

D. Malpractice

If the threat of attorney discipline is not enough, the threat of a malpractice damage award may be the last deterrent.[55] Where a lawyer’s irresponsible employment of a generative AI-using expert leads to a lost or diminished verdict, the lawyer’s clients may be able to recover for these harms.[56] As in other malpractice suits, the court must determine whether the lawyer breached a duty to the client and caused harm.[57] A lawyer whose case depends on admissible expert testimony must take reasonable steps to ensure that the testimony remains admissible.[58] Should the lawyer unreasonably fail to safeguard against their expert’s generative AI use, the expert testimony may be excluded, as in Kohls.[59] Such an error could jeopardize an entire case, particularly one relying on highly specialized experts or on a single expert because of client budget concerns.[60] Lawyers would be wise to take all reasonable measures to ensure that courts view their experts as credible by preventing their experts from making generative AI blunders.[61]

III. Practical Solutions for Responsible AI Use

Despite potential risks, generative AI is revolutionizing industry.[62] Even when lawyers swear off the technology themselves, they cannot guarantee that their experts will not use it while preparing for litigation. Therefore, lawyers must incorporate measures that protect client outcomes and ensure compliance with all ethical and professional responsibilities. By taking simple precautions, lawyers can adhere to their professional responsibilities, which serve to protect their clients and the integrity of the judicial process.

A. Direct Verification by Lawyers and Staff

The simplest way to fulfill the implicated duties when submitting AI‑generated or ‑assisted expert materials is to independently verify information.[63] To meet their ethical obligations, lawyers should—at a minimum—inquire whether their experts used generative AI while preparing their statements to the court and check the existence and accuracy of their expert’s citations before submitting such testimony.[64] This method, while straightforward, has its logistical downsides. The time and manpower necessary to verify the accuracy of expert reports may be cost‑prohibitive for already cash‑strapped clients and overworked lawyers. The responsible litigator must not take shortcuts either; asking a generative AI tool to double check AI‑created expert testimony may create a circular problem. If a lawyer is unwilling to verify the testimony, they may need to explore other options.

B. Use of Secondary or Tertiary Experts

For lawyers and clients with deep pockets, it may make sense to hire one or more additional experts to verify expert testimony before submission to the court. Although this creates what some may consider duplicative costs, such a practice would take the verification onus off lawyers and their staff while fulfilling the duty to reasonably verify.[65] These backup experts would likely verify accuracy more easily, given their familiarity with common sources and subject matter. There is a risk, however, that the same experts tasked with mitigating generative AI risk will misuse the technology themselves.

C. Contractual Agreements

Contracting may be another way to limit risk when employing experts who use generative AI. By addressing generative AI use in their engagement contracts, lawyers open the door to conversations with experts about their expectations, ethical obligations, and client‑accepted risk level. Lawyers may also vary terms to prohibit, limit, or expressly permit an expert’s generative AI use. As with all contracts, lawyers should properly and adequately define all technical terms while keeping definitions broad enough to contemplate further technological advancement.

Such agreements could provide lawyers with liquidated damages or contractual causes of action if the expert fails to meet their contract‑defined duties regarding generative AI.[66] Through thoughtful contracting, lawyers can (1) impress the importance of responsible generative AI use upon their experts; (2) reassure clients about potential generative AI‑related risks; and (3) provide themselves a basis of recovery if their experts’ mistakes or improper AI use are imputed to them. After all, experts charge fees for a reason—should lawyers not be able to expect a credible work product?

Conclusion

Although the American Bar Association has issued recent guidance on ethical generative AI use, more state and nationwide guidance is necessary to ensure lawyers adequately supervise their experts.[67] At this time, lawyers can rely on their experts’ opinions when they have no reason to doubt their experts’ credibility, unless the expert uses generative AI.[68] As Kohls illustrates, even the most facially credible experts may fail to responsibly use generative AI.[69] There may even be a point where generative AI use becomes so pervasive that a “reasonable inquiry” into an expert’s work product requires lawyers to verify the factual basis for their expert’s opinions, as well as the opinions themselves.[70]

However, the future may also bring new and credible AI‑based tools for lawyers to deploy against potential AI‑hallucinated expert testimony.[71] Time will tell—with any emerging technology or industry, the legal community faces new challenges and opportunities to adapt.[72] While lawyers adjust to the generative AI landscape, they must be ever cognizant of rapidly evolving technology, law, and ethical guidelines.[73] Their licenses may depend on it.

 

[1] Kohls v. Ellison, No. 24-cv-3754, 2025 WL 66514, at *3 (D. Minn. Jan. 10, 2025).

[2] See id. at *3–5.

[3] Id. at *3.

[4] Id. at *4.

[5] Id.

[6] Id. (citing Pavelic & LeFlore v. Marvel Ent. Grp., 493 U.S. 120, 126–27 (1989)).

[7] Id.

[8] Id. at *4–5 (“The Court suggests that an ‘inquiry reasonable under the circumstances,’ may now require attorneys to ask their witnesses whether they have used AI in drafting their declarations and what they have done to verify any AI-generated content.”); see also Coffey v. Healthtrust, Inc., 1 F.3d 1101, 1104 (10th Cir. 1993) (citing Bus. Guides, Inc. v. Chromatic Commc’ns Enters., Inc., 498 U.S. 533 (1991)) (“The attorney has an affirmative duty to inquire into the facts and law before filing a pleading. His inquiry must be reasonable under the circumstances.”).

[9] Kohls, 2025 WL 66514, at *8; see also Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023).

[10] See generally ABA Comm. On Ethics & Pro. Resp., Formal Op. 512 (2024) (describing Model Rules of Professional Conduct that may be implicated when lawyers use generative AI tools).

[11] See Fed. R. Civ. P. 11; see also ABA Comm. On Ethics & Pro. Resp., Formal Op. 512.

[12] See 8 Federal Procedure, Lawyer’s Edition § 20:215 (2025) (The Model Rules of Professional Conduct “serve as models for the ethics rules of most states.”). This writing uses the Model Rules of Professional Conduct as many states adopt similar rules. Id. Lawyers should carefully review the rules of professional conduct for the states in which they practice.

[13] See generally ABA Comm. On Ethics & Pro. Resp., Formal Op. 512.

[14] Id. at 3.

[15] Model Rules of Pro. Conduct r. 1.1 (Am. Bar Ass’n 2023) (emphasis added).

[16] See ABA Comm. On Ethics & Pro. Resp., Formal Op. 512, at 3.

[17] See id. at 3–4.

[18] Kohls v. Ellison, No. 24-cv-3754, 2025 WL 66514, at *3–5 (D. Minn. Jan. 10, 2025).

[19] Fed. R. Civ. P. 11.

[20] Kohls, 2025 WL 66514, at *4 (citing Pavelic & LeFlore v. Marvel Ent. Grp., 493 U.S. 120, 126–27 (1989)).

[21] See id. at *4–5.

[22] See Coffey v. Healthtrust, Inc., 1 F.3d 1101, 1104 (10th Cir. 1993) (“There would seem to be no problem for the attorney to rely on the expert’s opinion as the basis of his client’s position.”).

[23] Fed. R. Civ. P. 11. The “inquiry reasonable under the circumstances” standard is “the subject of considerable argument and disagreement,” but generally “the attorney must have made a serious enough inquiry into the substance of the filing so that the filing is not made for an improper purpose.” 47 Am. Jur. 3d Proof of Facts § 7 (2024).

[24] Kohls, 2025 WL 66514, at *4.

[25] See id. at *4–5.

[26] Model Rules of Pro. Conduct r. 3.1.

[27] See, e.g., id. r. 3.1, 3.3, 3.4, 4.1.

[28] Id. r. 5.3.

[29] Id. r. 3.1.

[30] Id. r. 3.3.

[31] Id. r. 3.3, 1.3.

[32] Id. r. 1.4.

[33] ABA Comm. On Ethics & Pro. Resp., Formal Op. 512, at 8 (2024); see also Model Rules of Pro. Conduct r. 1.4(a)(2).

[34] ABA Comm. On Ethics & Pro. Resp., Formal Op. 512, at 8–9 (providing examples of where a lawyer’s use of generative AI would need the client’s informed consent); see also Model Rules of Pro. Conduct r. 1.4.

[35] See ABA Comm. On Ethics & Pro. Resp., Formal Op. 512, at 8–9.

[36] Kohls v. Ellison, No. 24-cv-3754, 2025 WL 66514, at *5 (D. Minn. Jan. 10, 2025).

[37] Id. at *4–5. Rather than imposing any penalty or sanction, Judge Provinzino only excluded the expert’s testimony and denied leave to file an amended expert declaration. Id. at *5. Fortunately for the attorneys in the Minnesota Attorney General’s Office, the court denied the plaintiff’s Motion for a Preliminary Injunction despite the exclusion of expert testimony. Kohls, 2025 WL 66765, at *5.

[38] See Fed. R. Civ. P. 11; 18 U.S.C. § 401; 7A C.J.S. Attorney & Client § 101 (2024); Harold A. Weston, Lawyers’ Professional Liability Insurance, 92 A.L.R. 5th 273 (2001) (“Lawyers, like other professionals, face liability exposures arising from their acts, errors or omissions in their rendering professional services on behalf of others.”); see also Kohls, 2025 WL 66514, at *5 (The “consequences [of citing AI‑hallucinated citations for attorneys] should be no different for an expert offering testimony to assist the Court under penalty of perjury.”).

[39] Kohls, 2025 WL 66514, at *4 (citing Fed. R. Civ. P. 11(b)).

[40] Fed. R. Civ. P. 11(c)(1).

[41] Id.

[42] Fed. R. Civ. P. 11(b), (b)(3); see also Kohls, 2025 WL 66765, at *4.

[43] Andrew M. Perlman, The Legal Ethics of Generative AI, 57 Suffolk U. L. Rev. 345, 355–60 (2024).

[44] Id.

[45] Id. at 356 (quoting Michael J. Newman, Standing Order Governing Civil Cases, S.D. OHIO 11 (Dec. 18, 2023), https://www.ohsd.uscourts.gov/sites/ohsd/files//MJN%20Standing%20Civil%20Order%20eff.%2012.18.23.pdf [https://perma.cc/8CXY-Q6KK]; Michael J. Newman, Standing Order Governing Criminal Cases, S.D. OHIO 5 (Aug. 19, 2024), https://www.ohsd.uscourts.gov/sites/ohsd/files//MJN%20Standing%20Criminal%20Order%208.19.2024.pdf [https://perma.cc/7N8T-XCXM] (“Parties and their counsel who violate this AI ban may face sanctions including, inter alia, striking the pleading from the record, the imposition of economic sanctions or contempt, and dismissal of the lawsuit.”)).

[46] See Model Rules of Pro. Conduct r. 5.3 (imposing a duty on lawyers to ensure that nonlawyers are supervised in conformity with a lawyer’s ethical obligations under the Model Rules).

[47] 7A C.J.S. Attorney & Client § 101 (2024).

[48] See ABA Comm. On Ethics & Pro. Resp., Formal Op. 512, at 2 (“It is anticipated that this Committee and state and local bar association ethics committees will likely offer updated guidance on professional conduct issues relevant to specific [generative AI] tools as they develop.”).

[49] 7A C.J.S. Attorney & Client § 101 (“As a general rule, any court authorized to admit an attorney has inherent jurisdiction to discipline, suspend, or disbar the attorney for sufficient cause.”); see Model Rules of Pro. Conduct r. 5.3 (“[A] lawyer shall be responsible for conduct of [a nonlawyer employed or retained by or associated with a lawyer] that would be a violation of the Rules of Professional Conduct if engaged in by a lawyer if: (1) the lawyer orders or, with the knowledge of the specific conduct, ratifies the conduct involved; or (2) the lawyer is a partner or has comparable managerial authority in the law firm in which the person is employed, or has direct supervisory authority over the person, and knows of the conduct at a time when its consequences can be avoided or mitigated but fails to take reasonable remedial action.”).

[50] See, e.g., Michael A. Mora, Lawyer’s Use of Artificial Intelligence Leads to Disciplinary Action, Daily Bus. Rev. (May 15, 2024, 6:37 PM), https://www.law.com/dailybusinessreview/2024/05/15/lawyers-use-of-artificial-intelligence-leads-to-disciplinary-action/?slreturn=20250316-23238; Sara Merken, Another NY Lawyer Faces Discipline After AI Chatbot Invented Case Citation, Reuters (Jan. 30, 2024, 2:42 PM), https://www.reuters.com/legal/transactional/another-ny-lawyer-faces-discipline-after-ai-chatbot-invented-case-citation-2024-01-30/ [https://perma.cc/XAT7-78XE].

[51] Thy Vo, Colo. Atty Suspended for Using ‘Sham’ ChatGPT Case Law, Law360 (Nov. 27, 2023, 4:38 PM), https://www.law360.com/legalethics/articles/1770085 [https://perma.cc/4MTL-C6WW].

[52] The term “ChatGPT lawyer” entered the zeitgeist when an attorney was sanctioned in Mata v. Avianca, Inc. See 678 F. Supp. 3d 443 (S.D.N.Y. 2023).

[53] Currently, AI-related attorney discipline predominantly involves an attorney’s improper AI use. See, e.g., Mora, supra note 50; Merken, supra note 50.

[54] See generally Alysa Taylor, How Real-World Businesses Are Transforming with AI—with 261 New Stories, Off. Microsoft Blog, https://blogs.microsoft.com/blog/2025/03/10/https-blogs-microsoft-com-blog-2024-11-12-how-real-world-businesses-are-transforming-with-ai/ [https://perma.cc/L9YB-THD8] (last updated Apr. 22, 2025).

[55] Weston, supra note 38 (“Lawyers, like other professionals, face liability exposures arising from their acts, errors or omissions in their rendering professional services on behalf of others.”).

[56] Generally, courts reject legal malpractice cases where “the defendant attorney was negligent in failing to effectively present evidence or testimony which would have produced a verdict favorable to the client.” W.E. Shipley, Attorney’s liability for negligence in preparing or conducting litigation, 45 A.L.R. 2d 5 § 16 Evidence or witnesses (1956). However, in certain circumstances, such as in Brock v. Fouchy, courts have stated that when an attorney’s failures deprive their client of the ability to present evidence, an attorney is liable for their negligence if damage results. Id. (citing Brock v. Fouchy, 172 P.2d 945 (Cal. Ct. App. 1946)).

[57] Id. (“The factor which has occasioned most difficulty to clients attempting to charge attorneys with liability for negligence in connection with litigation has been the necessity of proving that the damages claimed resulted from the alleged misconduct.”).

[58] Id. (“[A]n attorney undertaking litigation will be required to exercise that degree of knowledge, care, and diligence which is commonly possessed and exercised by what might be called the ‘reasonably prudent’ attorney under the circumstances.”).

[59] See Kohls v. Ellison, No. 24-cv-3754, 2025 WL 66514, at *5 (D. Minn. Jan. 10, 2025) (excluding an expert’s testimony when it contained hallucinated AI‑generated citations).

[60] See Noel v. Martin, 21 F. App’x 828 (10th Cir. 2001) (holding that the plaintiff’s attorney’s negligence in submitting a late expert disclosure provided grounds to exclude the expert and that summary judgment was proper where plaintiff did not have an expert).

[61] See Kohls, 2025 WL 66514, at *4.

[62] See generally Taylor, supra note 54.

[63] See Kohls, 2025 WL 66514, at *4.

[64] Id. at *4; see Fed. R. Civ. P. 11. Where a client’s informed consent is required to use a generative AI tool—such as when confidentiality may be breached—lawyers must also discuss the expert’s desired use of generative AI tools and potential risks with the expert so that they may get a client’s informed consent. See ABA Comm. On Ethics & Pro. Resp., Formal Op. 512, at 8–9 (2024).

[65] Expert fees comprise a considerable portion of litigation expenses, with expert fees ranging, on average, from $200 to $1,500 per hour. LITILI Grp., The Financial Implications of Expert Witnessing: Unraveling Earnings, Expenses, and Key Considerations (Dec. 4, 2024), https://litiligroup.com/the-financial-implications-of-expert-witnessing-unraveling-earnings-expenses-and-key-considerations/ [https://perma.cc/34L7-ALCP].

[66] See Restatement (Second) of Contracts § 339 cmt. a (Am. L. Inst. 2024).

[67] The guidance does not directly address a lawyer’s ethical responsibilities when working with experts who use generative AI. See generally ABA Comm. On Ethics & Pro. Resp., Formal Op. 512 (2024).

[68] Coffey v. Healthtrust, Inc., 1 F.3d 1101, 1104 (10th Cir. 1993) (“There would seem to be no problem for the attorney to rely on the expert’s opinion as the basis of his client’s position. As long as reliance is reasonable under the circumstances, the court must allow parties and their attorneys to rely on their experts without fear of punishment.”).

[69] Kohls v. Ellison, No. 24-cv-3754, 2025 WL 66514, at *3 (D. Minn. Jan. 10, 2025) (identifying the offending expert as one with expertise in generative AI and misinformation).

[70] Fed. R. Civ. P. 11. For example, a lawyer might be called upon to verify calculations, check factual statements that form the basis for the expert’s opinions, or research the subject matter such that they can make judgments about whether an expert opinion was reasonable.

[71] Cimphony, a legal AI company, already offers “AI-Powered Legal Citation Analysis.” See AI-Powered Legal Citation Analysis: 2024 Guide, Cimphony, https://www.cimphony.ai/insights/ai-powered-legal-citation-analysis-2024-guide [https://perma.cc/KAH6-4BKT]. While it does not currently offer a means of checking citations that would likely satisfy a lawyer’s ethical and professional obligations, the technology appears to be moving in that direction. Id.

[72] See ABA Comm. On Ethics & Pro. Resp., Formal Op. 512, at 15 (2024).

[73] Id.