
Let's Talk About Regulating Generative AI: ACC Australia's In-house Legal National Conference.

Phoebe Cracknell

Relativity recently partnered with the Association of Corporate Counsel (ACC) at their annual In-House Legal National Conference in Canberra, Australia. This forward-thinking event serves as a vital hub for in-house legal professionals, facilitating engaging conversations around timely topics like generative AI and other emerging technologies.

A primary topic of discussion at this year’s event was AI capabilities and how the technology is poised to transform the legal industry. As Keith Morris, a Relativity senior account executive for our government customers, pointed out during the conference: “in-house counsel’s objective is to do more with less—using AI as an enabler but not as a substitute for the human touch.”

For our team at Relativity, keeping a pulse on the current and future concerns, insights, and perspectives of in-house legal professionals is crucial in comprehending the dynamics of the Australian legal industry and how we can play a pivotal role in moving the technology needle forward. We’re excited to be a part of this community, which is always evolving to keep up with the pace of change in our industry.

Here’s a recap of what our team learned onsite.

Gaining Unique Perspective

This year, ACC's session and keynote lineup was notably impressive. One standout session—presented by Relativity One Silver Partner FTI Consulting and titled “How Data Privacy and Cybersecurity Intersect for In-house Counsel”—placed a spotlight on future privacy laws. The discussion suggested that significant amendments to Australia's Privacy Act are likely to materialise in the upcoming year, aligning with the evolving cyber and privacy landscape.

John Wallace, Relativity’s government adviser and a previous chief data officer for law enforcement agencies in Australia and overseas, emphasised for attendees that, "While nothing is certain, transparency and enhanced accountability on organisations in relation to their cyber resilience and data protection programs appear to be the top priorities. Privacy regulators could be granted greater enforcement powers with these, which could mean harsher penalties for breaches."  

The University of Sydney's session on "A Practical Roadmap for Generative AI" provided a comprehensive overview of AI's role in governance processes and explored legal office use cases. This session effectively conveyed that integrating AI into daily legal practice is not a question of "if," but rather a matter of "when" and "how." That message underscores the critical need for futureproofing and effective regulation, especially in the context of AI—and generative AI in particular.

John Wallace commented: “There is much we can learn from the legal profession, in terms of the application of technology, generative AI, and the guardrails needed to govern it to ensure it is used appropriately and can be trusted. It’s a space we should continue to watch.”

Relativity's Take on Regulating Generative AI

Relativity hosted a roundtable session, led by John Wallace and prominent industry figures Schellie-Jayne (SJ) Price, partner at Stirling & Rose; Lee Hickin, Asia AI policy and governance lead at Microsoft; and Mark Alexander, EGM general counsel, treasury, corporate and technology at Commonwealth Bank. The discussion centered on "Regulating Generative AI,” a critical topic deserving of multiple expert perspectives.

The remarkable advancements in artificial intelligence, particularly in the realm of generative AI, have ushered in a new era of possibilities for legal teams, reshaping embedded processes. However, this transformation brings with it a host of questions: How do we regulate generative AI, and what are the impacts?

This immersive conversation delved into the most pressing questions surrounding AI regulation, offering real-world examples from the panel members, unique use cases pertinent to Australian law, and the evolving role of human intelligence.

Responsible use of generative AI proved an enduring point of discussion. Here were our top three takeaways from the panel:

  • Guardrails for AI: It's crucial to ensure that AI systems are not discriminatory. To achieve this, there should be mechanisms or "off-ramps" in AI systems that allow for human intervention to maintain ethical outcomes.
  • Trust in Generative AI: One attendee commented: "Remember AI is a tool, it isn’t taking anyone’s role. We don’t delegate the moral responsibility to AI; we should not trust it to that extent." Additionally, Schellie-Jayne Price encouraged legal counsel to "Think about this in the full systematic way. The system is more than just AI. It also involves humans."
  • Ethical Accountability: Big businesses are taking the initiative to develop their own ethical guidelines. Mark Alexander from Commonwealth Bank emphasized that it is crucial to “continue keeping tabs on the red flags while awaiting new regulations. In the interim, we have developed our rules for guiding how we use AI.”

Expanding on the session, I asked John Wallace to weigh in with a few final questions to share with all of you. Here’s what we discussed:

Phoebe: How do we regulate generative AI? Are the existing regulations keeping up?

John: This is a big question. I think it's still very early days of really understanding how we can safely use generative AI. ChatGPT has whetted our appetites for what a game changer it could be; that said, we are also seeing some 'red flags’ reminding us not to completely trust generative AI without some assurance and quality control. In terms of regulations keeping up, we have a range of cybersecurity and data protection legislation and policy that already provide effective guardrails. Also, in the absence of specific AI regulation, I think the principles-based approach being adopted by many organisations, including incorporating AI governance as part of their enterprise risk management frameworks, provides effective fit-for-purpose control and risk mitigation.

What rules may be coming into play?

Governments around the world all seem to be looking closely at this. The Australian Government’s Digital Transformation Agency has recently established an AI Taskforce which is working on whole-of-government AI application, policies, standards, and guidance. This work will help Australian Government agencies to engage with and deploy AI in a way that is safe, ethical, and responsible. Generative AI brings into focus the need for good quality data, and data governance to ensure the application, results, and outputs are credible and can be trusted.

How can lawyers help identify where AI adds value versus where it can be unrealistic?

Many professionals in the legal industry were early adopters and recognized the power of AI and machine learning using predictive coding and technology-assisted review back in 2014-2015. For this reason, data-intensive industries, like the legal profession, have more exposure to the technology and understand where AI can be optimised and where it may be unrealistic or can’t be trusted. The important thing is to understand how the technology works and ensure transparency into the outcomes, so you can experiment while also making a defensible call on accuracy and risk.

What is the role of human intelligence in AI?

AI is not fail-safe; over-reliance is one of its biggest risks. Behind AI and every data-informed decision is a person who needs to be accountable for assuring the quality of the data and the AI algorithm.

What are the next steps for regulation?

It’s an exciting time for the legal profession. Everyone should continue to engage with emerging technology. AI especially is making many aspects of our lives easier, more interesting, and fun—for example, our own personal information management, shopping, and music choices. Generative AI will be the same; ChatGPT is creating a lot of interest in generative AI technology and showing us the power of what it can accomplish. That said, we should not rush into applying it before ensuring it can be trusted, and is legal, ethical, and safe—especially where the result may inform decisions that could have major implications for individuals or organisations.

Continue the Conversation

Stay informed and keep the AI conversation going. Check out our General Counsel Guide to AI to learn more about how in-house counsel can understand and responsibly apply AI to their work.

Graphics for this article were created by Sarah Vachlon.


Phoebe Cracknell is a member of Relativity's marketing team in Australia.