One of the sharpest lines delivered at LegalWeek 2026 came during a session on AI Acceptable Use policies: "If you can't monitor it, you can't enforce it. If you can't enforce it, you don't have a policy—you have a suggestion." That framing captured a challenge that most law firms and legal departments are now grappling with: the distance between what their AI policies say and what actually happens in practice.

The Maturity Roadmap

The session presented a Governance Maturity Roadmap that progresses through five stages, from outright AI bans to enterprise-level governance, and identified five critical enforcement controls, listed below. Each control sounds straightforward on paper. In practice, most organizations have implemented one or two at best.

Five Critical Enforcement Controls distinguish mature governance from aspirational policy:

1. Define an approved AI ecosystem
2. Implement monitoring and logging
3. Establish AI tool intake processes
4. Train staff on real-world scenarios
5. Create clear escalation pathways

Organizations that have implemented three or more of these controls see 3.2x better compliance outcomes than those with fewer.
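To make the first two controls concrete, here is a minimal sketch in Python of an approved-ecosystem gateway that logs every request. The tool names, log format, and function are hypothetical illustrations, not any vendor's API.

```python
# Minimal sketch of controls 1 and 2: a firm-approved tool allowlist
# plus request logging. All names here are hypothetical illustrations.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_usage_audit.log", level=logging.INFO)

# Control 1: the approved AI ecosystem, maintained by the governance team.
APPROVED_TOOLS = {"firm-copilot", "research-assistant"}

def route_ai_request(user: str, tool: str, matter_id: str) -> bool:
    """Allow the request only if the tool is approved; log it either way."""
    allowed = tool in APPROVED_TOOLS
    # Control 2: every request, allowed or blocked, leaves an audit entry.
    logging.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "matter": matter_id,
        "allowed": allowed,
    }))
    return allowed

# A request to an unapproved tool is blocked and logged, which gives the
# escalation pathway (control 5) something concrete to act on.
route_ai_request("associate01", "public-chatbot", "2026-0042")
```

The point of the sketch is the coupling: the allowlist decision and the audit entry sit in the same code path, so neither can be skipped without the other.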

Common Violations to Watch For

The enforcement conversation also surfaced five key violations to watch for: uploading privileged documents into public models, AI drafting filings without review, shadow AI usage outside firm-approved tools, AI-generated client advice delivered without verification, and "work slop"—the subtle downstream effect of AI-generated work that shifts the verification burden to others without their knowledge. That last category is especially insidious because it's hard to detect and easy to rationalize.

"Work slop" is the hidden cost of uncontrolled AI use. Someone generates a draft. Someone else inherits it. No one knows it's AI-generated. The liability follows.

A separate session added urgency to the enforcement discussion by posing a deliberately provocative question: if you're not using AI, are you committing malpractice? The panel examined the intersection of the duty of technological competence, professional responsibility requirements, and AI's rapidly expanding capabilities. The implication was clear: inaction is no longer a safe harbor. Firms that refuse to engage with AI may face the same scrutiny as firms that use it irresponsibly.

Incident Response Protocol

One session outlined a practical incident response protocol for AI-related violations: investigate, preserve evidence, assess exposure, determine disclosure obligations, and integrate lessons into governance. This kind of operational infrastructure is exactly what separates mature governance programs from aspirational ones.
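As a hedged sketch, the five steps can be encoded as an ordered workflow so the response itself leaves a record; the class and step names below are illustrative framing, not the panel's implementation.

```python
# Illustrative sketch of the five-step protocol as an ordered workflow.
from dataclasses import dataclass, field
from enum import Enum

class Step(Enum):
    INVESTIGATE = 1
    PRESERVE_EVIDENCE = 2
    ASSESS_EXPOSURE = 3
    DETERMINE_DISCLOSURE = 4
    INTEGRATE_LESSONS = 5

@dataclass
class Incident:
    description: str
    completed: list = field(default_factory=list)

    def advance(self, step: Step, note: str) -> None:
        """Enforce step order and record each action, so the response
        to the incident is itself auditable."""
        expected = Step(len(self.completed) + 1)
        if step is not expected:
            raise ValueError(f"out of order: expected {expected.name}")
        self.completed.append((step, note))

incident = Incident("privileged document uploaded to a public model")
incident.advance(Step.INVESTIGATE, "interviewed associate; scoped the upload")
incident.advance(Step.PRESERVE_EVIDENCE, "exported prompt and usage logs")
```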

The protocol emphasizes early detection and swift response. When violations are discovered quickly, the exposure window narrows. When they're discovered only after discovery requests, the complications multiply. The firms that build enforcement infrastructure before they need it hold a structural advantage over the ones improvising containment after an incident has already surfaced.

The Integration Problem

The tensions between innovation and risk, speed and control, and automation and accountability ran throughout these sessions. There are no easy answers, but the direction is clear: governance must be embedded in the tools themselves, not layered on top as an afterthought. Platforms designed for high-stakes legal work need to build enforcement into their architecture through audit trails, source attribution, data isolation, and transparent reasoning.

When governance is a feature of the system rather than a policy document sitting on a shelf, enforcement becomes structural rather than aspirational. An audit trail isn't created because policy requires it; it's created because the architecture demands it. AI outputs aren't traceable because lawyers hope they will be; they're traceable because the system records every step.
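A short sketch of what structural enforcement can look like, assuming a single governed entry point to the model; the function names, stubbed model call, and hashing scheme are illustrative assumptions, not a real platform's API.

```python
# Sketch: a wrapper that cannot return an output without also writing
# the audit record. The model call is stubbed; all names are illustrative.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = []

def call_model(prompt: str) -> str:
    return "stubbed model output"  # stand-in for the real model call

def governed_completion(user: str, prompt: str) -> str:
    """The only path to an output runs through the audit record."""
    output = call_model(prompt)
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    })
    return output

# Because the log entry is written inside the only call path to the
# model, an unlogged output is impossible by construction.
governed_completion("associate01", "summarize the deposition transcript")
print(json.dumps(AUDIT_LOG, indent=2))
```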

The Competitive Reality

The firms that are moving fastest on AI adoption aren't the ones with the most permissive policies. They're the ones with the most enforceable ones. Why? Because enforceable governance reduces risk, which reduces insurance costs, which makes AI investment defensible to partners and clients. An AI tool that creates an auditable trail of its reasoning is more trustworthy than one that generates a black-box output, no matter how impressive the output might be.

The message from LegalWeek was that 2026 is the inflection point. Firms that invest in enforcement infrastructure now will operate faster and with greater confidence. Firms that treat enforcement as a post-implementation afterthought will spend years cleaning up the mess.

If you can't enforce it, you don't have a policy. And if you don't have a policy you can enforce, you have exposure.


This article draws on reporting from LegalWeek 2026, held March 9–12, 2026 in New York City. The views expressed are those of Advocacy.