Florida and New Jersey ponder responsible AI use, consider guidelines tailored to rules of professional conduct

What responsibilities do attorneys have when it comes to artificial intelligence? We’ve already asked whether there is Room for AI on Your Legal Team. Spoiler alert—the answer is a resounding yes. But we also recognized that there’s more to the question than finding comfort and inspiration through legal technology. You may find the technology intriguing, but the discourse surrounding artificial intelligence, particularly generative AI, comes with its own set of cautionary tales. Considering its steady growth and influence, completely avoiding AI is both a poor business proposition and an unrealistic hill to die on. But the other extreme can be just as costly. Having complete faith that generative AI will competently do your job for you can also have serious consequences.

Generative artificial intelligence requires a responsible balancing act, weighing the benefits of AI against the risks posed by technology without proper safeguards. States throughout the country continue to wrestle with the idea of responsible artificial intelligence. Two states in particular, Florida and New Jersey, have recently broached the sensitive intersection between legal technology and appropriate care by tying the responsible use of artificial intelligence to rules of professional conduct.

Cautionary tales highlight need for responsible use of artificial intelligence

You may have read horror stories of people relying on AI hallucinations to their own detriment. A scary thought for sure. However, the decisions referenced in this blog post call for attorneys to use AI responsibly, not to avoid it altogether. The petition before the Florida Supreme Court (discussed later in this blog post) raises two instances highlighting attorneys’ overreliance on generative AI. 

In Mata v. Avianca, a case out of the Southern District of New York, the court imposed Rule 11 sanctions on two attorneys and, by extension, their law firm for submitting a brief created with ChatGPT that contained fake case law. The judge overseeing the matter, District Judge P. Kevin Castel, wrote the following regarding an attorney’s role with respect to artificial intelligence, referencing Rule 11 of the Federal Rules of Civil Procedure:

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings. Rule 11, Fed. R. Civ. P.”

An important distinction to consider in this case is the fact that the court imposed sanctions, not due to the mere use of artificial intelligence, but because it determined that Respondents had engaged in “acts of conscious avoidance and false and misleading statements to the Court.” In other words, the court issued sanctions upon a finding of bad faith from Respondents who failed to admit to and correct the error at issue in a timely manner.

A separate matter, People v. Crabill, decided by the Colorado Supreme Court, Office of the Presiding Disciplinary Judge, also highlighted an attorney’s failure to verify information received via generative AI. This was another case where the attorney in question submitted a motion without verifying case law he found through ChatGPT. As with Mata, what concerned the court was the attorney’s failure to properly utilize AI, not the AI itself. The disciplinary judge noted the following: 

“Crabill did not read the cases he found through ChatGPT or otherwise attempt to verify that the citations were accurate. In May 2023, Crabill filed the motion with the presiding court. Before a hearing on the motion, Crabill discovered that the cases from ChatGPT were either incorrect or fictitious. But Crabill did not alert the court to the sham cases at the hearing. Nor did he withdraw the motion. When the judge expressed concerns about the accuracy of the cases, Crabill falsely attributed the mistakes to a legal intern.”

The lesson in all of this is that attorneys must neither shirk their professional responsibilities nor neglect their role as “gatekeepers” when utilizing artificial intelligence in legal practice. It’s an important lesson that states like Florida and New Jersey are seeking to codify.

Florida proposal cautions attorneys to consider the benefits and risks of generative AI in their legal practices

A pending Florida proposal draws on the decisions in Mata and Crabill to support a call for greater attention to the effects of generative artificial intelligence in legal practice. If adopted, legal practitioners would be expected to have greater awareness of generative artificial intelligence pursuant to Chapter 4 of Florida’s Rules of Professional Conduct.

The proposal calls for Florida lawyers to include generative artificial intelligence as part of their understanding of the benefits and risks associated with the use of technology. This principle is designed to impact areas of a lawyer’s professional conduct, including matters of competency, responsibilities of partners, managers, and supervisory lawyers, confidentiality, and responsibilities of nonlawyer assistants.

Specifically, the proposal highlights both Mata and Crabill to stress the importance of using generative artificial intelligence responsibly:

“Generative artificial intelligence is becoming more widespread. Lawyers have improperly used generative AI to their detriment. For example, a lawyer has been sanctioned in New York for filing a legal document generated by AI (ChatGPT) that included citations that were made up by the generative AI application. . . A Colorado lawyer was recently suspended for 90 days for using ChatGPT in preparing a motion to set aside judgment without checking any of the citations, later determined that some citations were fictitious but did not alert the court, and blamed a legal intern when the court inquired about the fictitious citations . . . The rules themselves are broad enough principles to address AI, but commentary will alert Florida lawyers to their responsibilities regarding AI.”

Such proposals show that courts expect attorneys to maintain an active role in the legal process, despite the added convenience that generative artificial intelligence can provide. 

New Jersey issues preliminary guidelines for attorneys using artificial intelligence

New Jersey is another state to push for responsible use of artificial intelligence in recent months, issuing its Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers. The guidelines align responsible use of artificial intelligence with existing rules of professional conduct in the areas of accuracy and truthfulness; honesty, candor, and communication; confidentiality; misconduct; and oversight. These guidelines call for attorneys to balance use of artificial intelligence with ethical considerations:

“The ongoing integration of AI into other technologies suggests that its use soon will be unavoidable, including for lawyers. While AI potentially has many benefits, it also presents ethical concerns. For instance, AI can ‘hallucinate’ and generate convincing, but false, information. These circumstances necessitate interim guidance on the ethical use of AI, with the understanding that more detailed guidelines can be developed as we learn more about its capacities, limits, and risks.” 

The guidelines further state that, “AI tools must be employed with the same commitment to diligence, confidentiality, honesty, and client advocacy as traditional methods of legal practice.” The state has since issued an anonymous survey to New Jersey attorneys “to better understand the knowledge, perception, and use of artificial intelligence, specifically generative artificial intelligence, within the New Jersey legal profession.”

A legal technology that values the benefits and responsibilities of AI

Technology has evolved. The legal profession has a responsibility to engage with technology, not avoid it. But engagement means more than mere use of artificial intelligence. Engagement calls for active participation in, and understanding of, the changing landscape and how it affects your client. Just as attorneys are expected to evolve, so too should their legal tools. You deserve legal technology that not only values the benefits of artificial intelligence, but also understands the responsibilities that come with it.

In the world of deposition reporting, legal technology is driving the industry in new, exciting directions. The COVID-19 pandemic forced legal professionals to seek newer, more innovative ways to do business. Remote depositions, once an exception to the rule, are now much more common in daily practice. Attorneys are no longer restricted to stenographic means to conduct depositions and create truthful, accurate transcripts of witness testimony. Newer approaches such as electronic recording and speech-to-text technology provide alternative, non-stenographic paths to depositions. The technology is there. What can your deposition service do with it?

Readback is a deposition service that both appreciates new technology and understands that human interaction and professionalism remain essential pieces to the equation. Rather than rely exclusively on artificial intelligence, Readback utilizes its very own Multi-Intelligence Service Team (“MIST”) to provide your deposition experience. The MIST consists of Readback’s patented speech-to-text technology creating the record alongside a human Guardian to conduct the deposition experience, and a team of human transcribers to verify the truthfulness and accuracy of the record as it’s being produced. Readback is the first to offer Active Reporting, a flagship category of service that offers premium deliverables at flat rates. With Active Reporting, Readback provides certified transcripts in one day, rough drafts in one hour, and near-time text during the proceeding.

Readback not only understands that artificial intelligence offers great opportunity for legal professionals, it also appreciates that responsible use of artificial intelligence must be safeguarded by human direction. Visit Readback’s Frequently Asked Questions page to learn more about this game-changing service for forward-thinking attorneys and their clients.

* Disclaimer:  Readback is neither a law firm nor a substitution for legal advice. This post should not be taken as legal opinion or advice.

  • Jamal Lacy, Juris Doctor

    Jamal Lacy serves as the law clerk to InfraWare, Inc., a tech-enabled parent company to Readback. In addition to content creation, Mr. Lacy provides legal research and analysis with particular focus on matters of contract, civil procedure, regulatory compliance, and legislative policy. Mr. Lacy received his Bachelor of Arts in Political Science with departmental honors from Trinity College in Hartford, Connecticut, and his Juris Doctor degree from Suffolk University Law School in Boston, Massachusetts.


