#LegalEthics Tidbit: Can I rely on AI to draft a remedial AI policy for my firm?

After the Ohio Court of Appeals affirmed a murder conviction, defense counsel applied to reopen the matter on the grounds that the prosecutor had made improper statements at trial. But the prosecutor never made those statements: they were fabricated by #AI when defense counsel's paralegal fed the case records into ChatGPT. ... (cont.)

https://lnkd.in/egn7kWUh
#law

... After being notified of the fabrications, defense counsel doubled down and appealed to the Supreme Court based on the same fabricated statements. After the state requested sanctions, defense counsel told the Court he had solved the problem by creating a new AI policy for his firm. The Court was not impressed, noting that “the AI policy proffered by respondent bore the hallmarks of having itself been generated by an AI platform. The policy contained unfilled bracketed placeholders ...(cont.)
... such as “[Insert Date]” where respondent's own firm-specific information should have appeared, inconsistent formatting typical of AI-generated templates, redundant language, and a scope that precisely mirrored the issues in this case while omitting other critical AI governance considerations. Respondent did not appear to take the minimal step of substituting his firm's actual data where the AI tool had placed brackets indicating customization was required. ... (cont.)
... The proffering of an AI-generated AI policy as a remedial measure in a case involving the submission of AI-generated fabrications to this court is, at best, ironic. It suggests that respondent's engagement with the consequences of his misconduct has been superficial.” The sanctions included $2,000 in attorneys’ fees, referral to disciplinary authorities, and a written apology to the defamed prosecutor.