California Judicial Council examines appropriate role for generative AI in state courts.

California remains an interesting case study for legal technology, demonstrating what could be characterized as an intrigued yet noncommittal approach since the COVID-19 pandemic. Discourse surrounding the embrace of legal technology tends to focus on 1) the extent to which Californians might trust legal technology, and 2) the speed with which new technology should be adopted, if at all. On one hand, the state adopted remote technology for civil and criminal court proceedings, providing greater flexibility compared to in-person appearances. On the other hand, California resisted taking matters one step further when it rejected efforts to expand the use of digital recordings in court proceedings, a proposal supporters viewed as a cost-effective response to the state’s documented court reporter shortage.

Although critics doubt the accuracy and efficacy of more advanced, nontraditional means of conducting business, the California Judicial Council remains relatively forward-thinking in its approach to legal technology. The Council not only supported legislative bills for remote civil and criminal proceedings but also advocated for the expansion of digital recordings. Faced with the reality of an overburdened judicial system, the Council has looked toward legal technology as a way to increase efficiency and improve access to justice. The Council has taken a similar position with respect to generative AI, a topic California Supreme Court Chief Justice Patricia Guerrero highlighted in her State of the Judiciary Address. Acknowledging the sense of inevitability and forward progress often associated with artificial intelligence, Chief Justice Guerrero discussed the need to adapt to changing times:

“One goal has particular salience these days. And that is modernization of management and administration. With respect to this goal, some modernization is voluntary, some is disruptive and thrust upon us. And you can guess where AI falls in that spectrum. Either way, change is the inevitable result. I don’t recommend that anyone shout into their phone, ‘Win my case’ or ‘Write my brief.’ But society, government, and, therefore, our court system must address the many issues and questions presented by the developing field of artificial intelligence. We must do this in a careful and deliberative fashion.”

Presentation lists key questions on generative AI

This “careful and deliberative” approach included a recent presentation to the California Judicial Council on generative AI and its impact on the state judiciary. The presentation distinguished traditional AI, described as using rules and algorithms to solve specific tasks, from generative AI, which presenters characterized as allowing for the creation of new content without the need for reprogramming.  

Generally speaking, as the public becomes more versed in legal technology, stakeholders have come to appreciate distinctions in artificial intelligence. For more perspective, consider the following table comparing traditional to generative AI:

Comparison of Traditional AI and Generative AI

  • Definition: Traditional AI uses rules and algorithms to solve specific tasks; generative AI creates new content without the need for reprogramming.
  • Functionality: Traditional AI performs predefined tasks based on programmed rules; generative AI generates new data, text, images, and more.
  • Applications: Traditional AI powers fraud detection, recommendation systems, and search engines; generative AI powers content creation, chatbots, AI art, and language translation.
  • Programming: Traditional AI requires extensive programming for specific tasks; generative AI learns patterns and generates outputs based on input data.
  • Examples: Traditional AI includes spam filters, chess programs, and voice recognition; generative AI includes GPT-4, DALL-E, ChatGPT, and DeepArt.

Much of the California Judicial Council’s May 17, 2024 meeting focused on potential risks and benefits of generative AI, outlining a framework to address the subject while incorporating key principles such as transparency, ethics, and due process. The presentation, titled Generative Artificial Intelligence and California’s Judicial Branch, included both administrative and ethical considerations stemming from a list of key questions posed to the Council. Presenter and California Sixth District Judge Mary J. Greenwood offered these “foundational questions” to guide the conversation and focus the Council’s attempt to determine an appropriate approach to generative AI. The first question simply asked whether artificial intelligence should even have a place in the judicial branch. The answer perhaps set the cautiously optimistic tone for the rest of the discussion: “Yes, with limitations and safeguards.”

Other questions left open for further discussion were:  

  • “In what ways can (or should) generative AI be used in California’s judicial branch?”
  • “How can public trust and confidence in the courts be preserved or enhanced?”
  • “If generative AI is used by the branch, how will confidentiality and privacy be maintained?”
  • “How do generative AI and judicial ethics intersect?”

Council revisits cautionary tale to warn against blind faith in generative AI

The ability of generative AI to create new content without additional programming places a certain level of responsibility on those who use it, requiring human engagement to act as a safeguard, a topic of considerable discussion at the Council’s presentation on generative AI. To drive this message home, the Council focused its attention on Mata v. Avianca, a case previously discussed in April’s Proposals and Public Comment blog post, where an attorney and his firm were sanctioned for submitting a brief containing citations to fake case law hallucinated by ChatGPT. It’s an example that demonstrates a responsibility that applies to attorneys and judges alike. Applying this lesson to the judiciary, Judge Greenwood clarified that the use of generative AI is no excuse to shirk judicial discretion:

“[F]or all the excitement about it, for all the money that’s being poured into it, billions of dollars, generative AI is a tool. It is not an end. And it is not a substitute for judicial discretion. And it’s not a substitute for due process.”  

Chief Justice accepts recommendations as legislators and council members sound off on generative AI

The generative AI presentation also broached subjects of public trust, judicial ethics, transparency, discriminatory bias, and data privacy. Regarding data privacy, presenters suggested that court users would need to consent to having their information put into an AI training model, a requirement that could be complicated if users aren’t aware that they are engaging with artificial intelligence to begin with. Additionally, since courts routinely deal in vast amounts of private information, presenters suggested establishing security measures for generative AI to protect court work product and the personal information of court participants. Such considerations may prove crucial in keeping pace with technology. As described by Judge Greenwood, the influence and ubiquity of generative AI calls for proactive measures:  

“For one thing, the practitioners who appear in front of us are going to be using it. They’re already starting to teach it in law schools. LexisNexis and Westlaw now have products which use generative AI. They’ve been adopted in some federal circuits, not for drafting purposes, but for research purposes. And so we don’t want to be behind the curve on this if it’s going to be inevitable. What we want to do is to be able to control its direction in a deliberate way in terms of its use within the branch.”  

The conversation then turned toward recommendations and considerations on how generative AI might be used in the judicial branch. Anticipated benefits included increased administrative efficiency, improved access to justice for courtroom participants (particularly self-represented litigants), and a reduction in human bias. At the conclusion of the presentation, Chief Justice Guerrero agreed to move forward on the following proposals:  

  • “Create Artificial Intelligence Task Force to oversee consideration, coordination, and development of branch actions”
  • “Work with the Supreme Court committees on judicial ethics to consider amendments to the Code of Judicial Ethics or otherwise address issues concerning the use of generative AI”
  • “Direct the Center for Judicial Education & Research to promptly begin preparing educational materials and programs on generative AI”

But the conversation concerning generative AI extends well beyond the judicial branch. Debates are also taking place in the state legislature, a point raised by council member and state senator Thomas Umberg following the presentation. Having sponsored bills for remote civil and criminal proceedings, Senator Umberg is no stranger to the contentious back-and-forth that often comes with proposing tech-driven legislation. With a host of proposals making their way through legislative committees, Senator Umberg urged council members to provide input to the legislative branch on generative AI:

“One of the challenges is what does the legislature do vis-à-vis the other branches of government in terms of bias, transparency, privacy, all the other issues that have been raised. . . [T]here were 55 pieces of legislation that had been introduced. . . In the next 45 days we’ll be deciding most of those issues. In the next 104 days we’ll be deciding them absolutely. . . We have some real challenges. But I suppose to underline the time imperative to meeting some of those challenges is that I would invite the Council to look at the legislation that’s pending right now to see how it impacts the courts, where [the legislative branch] should assert itself or whether the courts should basically tell us to back off.”

The Council’s open discussion reflected a cautiously optimistic approach to generative AI, with judges voicing concerns but ultimately acknowledging that adopting generative AI means preserving space for the judicial principles that sustain public trust. Echoing earlier sentiments on judicial discretion, Judge Jonathan B. Conklin (Superior Court, Fresno County) brought the conversation back to the matter of due process, stating “[I] don’t know how we put it into the system. But I think what gives the public confidence in what comes out of our branch is the thought that they were heard and due process.”

Judge Erica R. Yew (Superior Court, Santa Clara County) likened generative AI to another familiar trial tool, the expert witness, to stress the need for proper notice:

“[O]ne of the tools we use are expert witnesses. . . that is a tool that we use to educate ourselves and our jurors. And so I can envision that AI, like the use of experts, would require some notice . . . that there’s notice, disclosure, and an opportunity to question the AI that’s being presented or the expert information that’s being presented. So I think that some of this is something that we know how to do, that we can do this, and we just have to be thoughtful about it.”

A sampling of California Senate proposals on generative AI

On the legislative front, discussions of generative AI have assumed a larger presence in committee hearings at the state capitol. One proposal currently making its way through the California legislature is Senate Bill 896, titled the “Artificial Intelligence Accountability Act.” If adopted, this legislation would require certain agencies and departments “to produce a State of California Benefits and Risk of Generative Artificial Intelligence Report” to examine the most significant and potentially beneficial uses of generative AI throughout the state. If necessary, the bill would also require a joint risk analysis from California’s Director of Emergency Services, Cybersecurity Integration Center, and Threat Assessment Center. As for transparency, Senate Bill 896 would require state agencies and departments to notify people when they are directly communicating with artificial intelligence. 

Another California proposal is Senate Bill 970, which addresses AI deepfakes through evidentiary requirements for the Judicial Council. During an April 9, 2024 hearing before the state senate judiciary committee, the bill’s sponsor, Senator Angelique Ashby, warned that, despite AI’s promise, “the lack of comprehensive legal framework for addressing the challenges posed by AI is troubling.” Senate Bill 970 would mandate the Judicial Council:

“. . . to review the impact of artificial intelligence on the introduction of evidence in court proceedings and develop any necessary rules of court to assist courts in assessing claims that evidence that is being introduced has been generated by or manipulated by artificial intelligence.” 

The message is clear. Generative AI requires human direction to properly serve the public. It appears the Judicial Council will remain an integral part of the conversation moving forward.

A deposition service that values safeguards for generative AI

The use of artificial intelligence has become a fascinating development in the legal profession. But as attorneys and judges search for proper perspective, a hands-on, common-sense approach remains necessary for serious engagement with legal technology. Legal professionals should seek service providers that appreciate the need to guide technology rather than blindly trust its results. When it comes to deposition reporting, Readback believes in combining artificial intelligence with human safeguards. Readback’s approach to deposition reporting is driven by its Multi-Intelligence Service Team, where a human guardian conducts the deposition while a team of human transcribers works alongside patented speech-to-text technology to produce the record. Readback’s guided approach to deposition reporting offers attorneys the opportunity to leverage technology for quick turnaround times at flat rates. Readback’s flagship level of service, Active Reporting, provides certified transcripts in one day, rough drafts in one hour, and access to near-time text during the proceeding.

Judges and attorneys recognize the importance of harnessing generative AI. Your deposition reporting service should share that same sense of urgency. Readback is not currently accepting requests for California jurisdiction cases, but it’s always good to hear what readers have to say. Visit our Frequently Asked Questions page to learn more about this game-changing service and feel free to share your thoughts in the comments section below.

* Disclaimer:  Readback is neither a law firm nor a substitution for legal advice. This post should not be taken as legal opinion or advice.

  • Jamal Lacy, Juris Doctor

    Jamal Lacy serves as the law clerk to InfraWare, Inc., a tech-enabled parent company to Readback. In addition to content creation, Mr. Lacy provides legal research and analysis with particular focus on matters of contract, civil procedure, regulatory compliance, and legislative policy. Mr. Lacy received his Bachelor of Arts in Political Science with departmental honors from Trinity College in Hartford, Connecticut, and his Juris Doctor degree from Suffolk University Law School in Boston, Massachusetts.

Tags

active reporting, AI, california, court reporter, Court Reporting, deposition, generative, Generative AI, legal news, legal tech, Legal Technology, News, remote deposition, remote depositions, Tech
