A few days ago, we concluded one of the largest Relativity Fests ever. As legal technology practitioners from around the world gathered in Chicago, it wasn’t surprising that artificial intelligence was a ubiquitous theme for many—from the nuts and bolts of e-discovery to the expanding field of Legal Data Intelligence.
As we’ve done every year since 2016, we’ve tried to capture a few moments that, to paraphrase the great American philosopher, Jeffrey Lebowski, tie the whole thing together. We call it “some” of our favorite commentary because, with almost 2,000 people gathering together to create the magic that is Relativity Fest, it’s impossible to include it all.
Nevertheless, here’s a sample of what makes Fest special. With a focus on AI and some of those perspectives from around the world, we bring you The Best of Relativity Fest: Some of Our Favorite Commentary.
Jonathan Armstrong, Partner, Punter Southall Law, On the GDPR and the EU AI Act:
“Fairness and transparency are the Laurel and Hardy of GDPR. They're core principles and they've led to the majority of fines. And they will be central principles of the EU AI Act, too.”
And on AI and the law:
“AI is an opportunity; lawyers can’t just opt out, particularly in areas like e-discovery. You owe it to your client to work out whether AI could produce better results. So we end up with a phone-a-friend system, where the professional duties of lawyers require an understanding of the tech, or bringing in the people who do understand it, or disclosing the failure of both of these to the client.”
To Jonathan’s point on the opportunity of AI nevertheless demanding the need to understand the tech, Bob Ambrogi, Attorney at Law, and Editor, LawSites, on legal ethics and the misuse of generative AI in Mata v. Avianca, Inc., and other matters:
“Mata v. Avianca was covered in The New York Times, The Washington Post, the legal media, the legal tech media—you would think that would be the end of it, but no. It’s happened over and over and over again. What’s really important this year is legal ethics, particularly the duty of technology competence.”
Meribeth Banaschik, Partner, Forensic & Integrity Services and EMEIA Innovation Leader, EY, on common global themes in AI:
“Put yourself in the role of a lawmaker having to write new laws about AI. You know the terminology is tricky—even just defining the term ‘AI.’ It’s not going to look the same five years from now. Not every country is common law or civil law; so, looking across the globe, you can imagine different countries seek to regulate AI differently. However, you see some topics that emerge no matter where you are: concerns around human rights, data protection, transparency, bias, how to handle hallucinations.”
E.J. Bastien, Sr. Director, Discovery Programs, Microsoft:
“You really should lean in and understand how these things are going to work. Understanding that data persists is one thing, but where it exists is different, and it’s sometimes in fragmented pieces—transcripts may live separate from the chat and files shared during meetings. Pulling it all together doesn’t have an easy button.”
Rob Beard, Chief Legal and Global Affairs Officer, Mastercard:
“CLOs are thinking about AI because their bosses and bosses’ bosses (boards) are asking about it. Every single one is being asked how they’re going to use AI to make their legal team better, be more efficient, become cheaper. I like to push back on that last one—it’s about getting better, not cheaper.”
Jerry Bui, Founder & CEO, Right Forensics, with advice on launching a career as a forensic examiner working on deepfakes:
“Jump in. The water’s warm. What’s fun about being a digital forensics examiner is you get a chance to think like a criminal—without actually being a criminal.”
Fiona Campbell, Director, Dispute Resolution, and Head of Electronic Disclosure, Fieldfisher, on the fears of robots taking over the law and the deployment of AI resources:
“Legal matters that aren’t straightforward or consented to—where you need human interaction and a judge’s direction—will never be doable without humans. We’ll have a triage system with chatbots and foundation models to help litigants, but the High Court won’t have this approach. Criminal courts face huge backlogs, but there has been little AI investment in this area—just civil courts, where there is money. That seems dissociative.”
Stephen Dooley, Director of Electronic Discovery and Litigation Support, Sullivan & Cromwell:
“We’re seeing an increase in voice, video, and synthetic identity fraud overall—and explosive growth in AI-powered fraud. There are 100 apps out there where you can convert your images into ultimate versions of yourself that look really good, and those are just the commercially available ones.”
Hon. James Francis, Neutral, JAMS, and U.S. Magistrate Judge (Ret.) on a potential access to justice advantage of mediation:
“You are more likely in mediation to get the assistance of the mediator. A court is going to be very reluctant to guide one party or the other for risk of appearing non-neutral. Mediators are also neutral, but they may be more willing to help, at least in terms of the procedures.”
Manfred Gabriel, Partner, Holland & Knight LLP, on disclosing the use of generative AI in litigation:
“I don’t think I have an obligation to disclose these tools. On the other hand, would I recommend not disclosing them in a meet and confer? No. I don’t think that would be smart; I take cooperation seriously.”
David Gaston, Chief AI Officer (CAIO) and Lead Technology Counsel, National Labor Relations Board:
“Your agency and/or enterprise is littered with people who are excited about AI. There are champions waiting to be let loose. The empowerment, interest, novelty, opportunity—you can build fast if you want to. Those champions are there throughout your organization, and it won’t be that hard to find them.”
U.S. Magistrate Judge Allison Goddard (S.D. Cal.) with (educational) advice to litigants and legal teams:
“Be reasonable. Be practical. Be the side that comes with a solution.”
Associate Justice Tanya R. Kennedy, New York State Supreme Court, Appellate Division, First Judicial Department, on the increase in threats to the judiciary:
“A lot of what has taken place is due to the lack of civic education in the schools and the community. I can only speak for myself, but it seems to me that, when judges go out into the community, that’s a form of civic engagement and education. You remove the mystique of a judge, what a judge looks like, and you educate the public on what a judge does and the proper role of a judge.”
U.S. Magistrate Judge William Matthewman (S.D. Fla.) on attacks on the judiciary:
“The problem is that judges are being attacked personally as opposed to their rulings being evaluated and attacked. ... What’s exacerbated the problem is the rise of social media ... now, within seconds, they can send out a threat, and we see it all the time. I do blame a lot of it on social media, but lawyers and bar associations need to step up because we can’t defend ourselves—we can’t get on that blog and respond, ‘Did you actually read the opinion? Did you actually analyze the findings and the rulings?’”
Dr. Victoria McCloud, Associate Member Gatehouse Chambers, and Master of the King’s Bench (Ret.), on judges having to make difficult and unpopular decisions in the United Kingdom:
“I had to make an unpopular decision related to Donald Trump. It was about free speech. Trump was sued for libel, and I had to give a decision in favor of Trump ... without being seen as agreeing with what Donald Trump was saying. ... I think in that case, it just has to be telling it like it is: saying what the law is, saying why you have to dismiss it, and making it clear that you are not intending to mean that you have any view about that or that you agree with one side or the other in any personal sense.”
Andrew Myers, Discovery Counsel, Bayer, on ESI protocols:
“We are at a stage where we’re focused on the wrong part of it. We’re focused on all the steps we’re going to take, instead of whether or when we get there. Meeting the needs of a particular report card isn’t as important as meeting the requirements you’re under obligation to meet.”
Gina O'Neill, Head of eDiscovery, Herbert Smith Freehills, on the importance of AI validation:
“Guidelines for the use of AI in court are generally calling out, ‘validate, validate, validate.’ Using generative AI and LLMs can involve complications like using out of date models, jurisdictional issues, incomplete information. So make sure you validate, validate, validate.”
Hon. Andrew Peck, Senior Counsel, DLA Piper, and U.S. Magistrate Judge (Ret.), on an important rule one will not find in standard texts of the Federal Rules of Civil Procedure:
“That rule is Rule 1.1: Don’t p*ss off the judge.”
Judge Michelle Rick, Michigan Court of Appeals, and President-Elect, National Association of Women Judges (NAWJ), on the justice gap:
“As data from the Legal Services Corporation has indicated, low-income Americans receive little or no legal help for over 90 percent of their significant legal problems.”
U.S. District Judge Xavier Rodriguez (W.D. Tex.), referencing Marbury v. Madison and offering a historical perspective on attacks on the judiciary:
“These attacks on the judiciary have been going on since this country was created. ... I agree with my colleagues that the severity of the attacks is very problematic now—part of it’s due to social media, part of it’s the 24-hour news coverage, and I think mental illness, which we haven’t discussed yet, is definitely an issue.”
Joy Heath Rush, CEO, International Legal Technology Association (ILTA), on the renewed and expanded importance of information governance:
“Frankly, you can’t talk AI without talking information governance. The new information governance mindset really came to the fore two years ago with the new ISO standard (ISO 24143:2022), because it gives you a broader responsibility; it goes beyond PII. It’s not only about personally identifiable information.”
Benjamin Sexton, Vice President of eDiscovery, JND, on negotiations on data sources:
“You have a spectrum of availability: email is generally collected on every case, Teams data often is, OneDrive often, desktops less so. Volatile memory is very rare, as are Google searches in civil litigation. All these things are collected sometimes; some a lot. Now, where are your chatbot interactions going to fall on that spectrum? I’m curious. ... Google searches aren’t on the table because they’re very expensive to extract. But with these chatbots, it’s a simple export. So the brave new world might be TBD. We’ll have to see how firms are using the data.”
Linda Sheehan, Head of IntelligENS, ENS, on an emerging solution to court backlogs around the world:
“If we talk about ‘robo-courts,’ the investment that needs to be made to make them work is worthwhile; if we can try and put a fresher perspective on making courts more efficient, globally, it’ll be beneficial.”
Stephanie Wilkins, Editor-in-Chief, Legaltech News, on the legal ramifications of generative AI prompts:
“Some e-discovery professionals are contemplating a scenario in which prompts may be privileged under the work product doctrine, to the extent they convey attorneys’ thought processes in anticipation of litigation.”
And with thoughts on the prompt privilege issue, Lea Bays, Partner, Robbins Geller Rudman & Dowd:
“It seems suspicious when folks treat [prompt criteria] as a secret. If it’s reflective of the request for production, why is it a secret?”
