
What Legal Leaders Should Know About Shadow AI

Sam Bock

What’s dwelling in the dark corners of your tech stack? As CIOs and other business leaders have long since learned, what you don’t know can hurt you. For them, shadow IT is nothing new, but familiarity hasn’t made it any less frustrating.

In fact, it’s been a persistent thorn in legal leaders’ sides for more than a decade—since at least the early 2010s, when employees’ unsanctioned use of cloud storage tools like Dropbox sent files outside approved systems and began complicating data security, privacy, and discovery questions.

Over time, shadow IT slithered well beyond file storage. Today, messaging platforms, productivity apps, collaboration tools, and personal devices used for work proliferate at enterprises of pretty much every industry and scale. Users aren’t always bad actors; often, they’re just trying to work more efficiently and balance their lives productively. Still, IT, legal, compliance, and security teams, entrusted with the safety of their teams’ and clients’ most sensitive data, are left on high alert.

And lately, businesses are seeing the next evolution of that challenge: shadow AI.

Whether leaders have noticed it or not, shadow AI is here. Gartner predicted last year that, by 2030, more than 40 percent of businesses will suffer security or compliance incidents due to the use of shadow AI. Already, 69 percent of cybersecurity leaders surveyed reported having suspicions or evidence of employees using prohibited, public generative AI tools for work.

What are legal leaders to do?

What Is Shadow AI?

Shadow AI refers to the use of artificial intelligence tools—including generative AI chatbots like ChatGPT, Google Gemini, Grok, and similar technologies—by employees without organizational approval, visibility, or governance. These tools are often used outside sanctioned environments and without the awareness of core stakeholders such as IT, legal, compliance, or security teams.

In most cases, employees are not trying to undermine their organizations.

Quite the opposite, actually. They hear the hype about generative AI, and they’re eager to be more productive. They’re conducting research, drafting communications, summarizing documents, brainstorming ideas, or simply trying to understand how AI might fit into their work.

Every day, they’re inundated with examples of how AI can accelerate workflows and help professionals of every ilk stand out from their peers. Why, they might ask, shouldn’t they try it for themselves?

The result is that employees across industries are, right this minute, experimenting, learning, and optimizing—sometimes thoughtfully, sometimes recklessly, sometimes quietly.

Alas, even well-intentioned use can introduce risk. And in the context of legal data intelligence work, including e-discovery, that risk can be significant.

Lacking clear guidance or a sound AI fluency program, team members may try to figure things out for themselves. Indeed, this disconnect between individual AI adoption and organizational readiness is already visible: last year, 46 percent of lawyers reported actively using AI, while only 32 percent of firms said they provided AI-powered tools to staff.

It just goes to show that, when employees don’t have access to approved tools or clear guidance on how to move forward, they often fill the gap themselves.

The Legal and Compliance Risks of Shadow AI

For legal leaders, the key risks of shadow AI will feel familiar.

  • Data privacy and confidentiality are at the top of the list. Feeding sensitive or regulated information into unsanctioned AI tools may expose that data to unknown retention protocols, third-party training processes, and other vulnerabilities.
  • Regulatory compliance is another major concern. Data protection laws, emerging AI regulations, and industry-specific requirements all raise questions about how AI systems are used, what data they touch, and how outputs are validated.
  • Then there are privilege and intellectual property risks. What happens when privileged communications or proprietary information are entered into a public-facing AI tool? Could that compromise privilege or weaken IP protections? These are real questions increasingly raised by courts, regulators, and clients alike.
  • Contractual obligations add another layer of complexity. Many organizations have explicit commitments to customers or partners regarding how data is handled, stored, and shared. Shadow AI use may unintentionally violate those commitments.
  • And, of course, poor-quality outputs present significant reputational and matter-specific risks for legal teams. Humans have cognitive biases that predispose us to placing outsized trust in the answers AI provides. Unsanctioned AI usage and a dearth of AI education increase the risk that employees will skip essential quality-control or selectivity considerations when choosing and using new tools.

Human accountability is inherent to the legal profession: no matter what tools and workflows are used on a matter, the people on that matter are responsible for the outcomes. This helps explain why enthusiasm for AI often outpaces formal adoption. In short, legal teams are right to be cautious.

Still, while caution constitutes a smart mindset, it isn’t a strategy on its own.

The good news? The technology and data governance frameworks you already have in place make an excellent starting point for building AI policy. Even if these tools feel new, the muscle memory already exists to help you support and implement them responsibly.

Building a Culture of Trust (Instead of Fear)

Most employees want to use AI responsibly, and many don’t even realize that what they’re doing is unsanctioned. They don’t need legal teams to be cheerleaders for every new tool, nor do they need blanket prohibition. They just need guidance—and perhaps a little trust that legal leadership understands both the risks and the potential of AI—so they feel equipped to do their best work as safely as possible.

To that end, creating effective AI policies requires engagement across your organization: listening to real use cases, evaluating tools thoughtfully, and understanding where and how various teams’ work actually happens. Clearly defining what “trustworthy AI” means in your organization—and why certain tools are approved and others aren’t—helps employees develop their own critical lens.

Plus, sound AI policies protect legal teams in the long term. Even if some unsanctioned use continues, documented guidance demonstrates good-faith governance, risk awareness, and proactive oversight from leadership.

This sounds like a big undertaking, and it is: there’s a lot to keep in mind. But the payoff extends beyond the four proverbial walls of your organization. Clients are increasingly asking hard questions about AI use, data protection, and risk management. Legal leaders who are informed and prepared to answer those questions quickly and confidently make stronger impressions and build trust faster.

You can’t build that trust by pretending AI doesn’t exist. But engaging with it thoughtfully? Showing some enthusiasm tempered by due caution? That’s the ticket.

How to Begin Addressing Shadow AI Now

Now, rest assured: you don’t need a perfect AI strategy yesterday to take meaningful action today. You just have to start moving forward. Focus on building the momentum that will help you find the right track.

  • Begin by collaborating closely with IT and security teams. Ensure interim safeguards are in place while longer-term governance frameworks are developed. Visibility and outreach matter a lot here; a rough sketch of what a first pass at that visibility might look like follows this list.
  • Work with HR and internal communications to clearly articulate expectations and responsibilities. Employees should know that policies exist or are being built. Practically speaking, they need to know what’s allowed, what isn’t, and where to go with questions (without fear of punishment).
  • Most importantly, talk to your people. Your shadow AI users are often your most valuable source of insight. Ask what tools they’re using or want to use, why, what problems they’re trying to solve—all the good stuff. Their enthusiasm and frustration can help inform smarter policy decisions.
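
To make that first bullet’s call for visibility a bit more concrete, here is a minimal, hypothetical sketch of the kind of first pass an IT or security partner might run: counting requests to well-known public generative AI domains in a proxy log export. The file name, column names, and domain list are illustrative assumptions rather than a prescribed approach, and any real monitoring should be designed with privacy and employee-trust considerations in mind.

```python
# Hypothetical sketch only: tally outbound requests to well-known public
# generative AI domains in a proxy log export, to give legal and IT a first
# view of shadow AI use. The file name, column names, and domain list are
# assumptions for illustration; adapt them to what your proxy actually exports.

import csv
from collections import Counter

# Domains associated with public generative AI tools named in this article (illustrative, not exhaustive).
PUBLIC_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "grok.com",
}

def summarize_shadow_ai(log_path: str) -> Counter:
    """Count requests per user to known public AI domains in a CSV proxy log.

    Assumes columns named 'user' and 'destination_host' (an assumption about
    the export format, not a standard).
    """
    hits: Counter = Counter()
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            host = row.get("destination_host", "").strip().lower()
            if host in PUBLIC_AI_DOMAINS:
                hits[row.get("user", "unknown")] += 1
    return hits

if __name__ == "__main__":
    # Hypothetical export file name; replace with your own log location.
    for user, count in summarize_shadow_ai("proxy_log_export.csv").most_common():
        print(f"{user}: {count} requests to public AI tools")
```

The point isn’t surveillance for its own sake; even a rough tally like this can show leaders where the conversations described above should start.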

Ultimately, addressing shadow AI should not be an exercise in control for its own sake—and you can’t expect to shut it all down overnight.

Focus instead on how you can enable responsible innovation, protect your organization and your customers, and empower people to work better (and maybe enjoy the work a little more, while they’re at it).

Graphics for this article were created by Caroline Patterson.

Sam Bock is a member of the marketing team at Relativity, and serves as editor of The Relativity Blog.
