Company founders, attorneys and other professionals working in the legal tech space share their journeys into the industry, challenges they face when working with law firms and legal departments, and common misconceptions about technology in the legal industry.
I never planned to end up here, but in retrospect, every step was preparation for it.
My career has been a series of translations. I was born and raised in Paris, earned a civil law degree, and spent years in cross-border M&A in China before pursuing an LLM at UC Berkeley. The transition from transactional work to litigation came later, but when it did, I found myself working on complex federal court cases and Delaware Court of Chancery litigation at Robbins Geller.
Transactional work and litigation are fundamentally different beasts — the prose, the pace, the rules, the pitfalls, the etiquette. Virtually everything.
Litigation is adversarial from the start. You're working across millions of documents, depositions, motions, and a strategy that evolves as case law shifts. There's a third party, too: the court. And no AI-native solution existed for this space. Most tools were bolted onto legacy platforms or designed for transactional workflows that don't fit the work litigators actually do.
Every room has a different relationship with AI. Some see it as an existential threat. Others view it as an opportunity they can't afford to miss. Both perspectives are real, and both shape how firms adopt new technology.
Many attorneys are experimenting with general-purpose AI tools without institutional guidance or governance. That creates risk. Litigators are trained to be cautious and suspicious — and those are professional virtues. Everything they produce gets scrutinized by opposing counsel, the court, and partners on review. That scrutiny should extend to their tools.
Knowledge management professionals and litigation support specialists are often tasked with finding, vetting, and rolling out new tools — usually without dedicated budget or buy-in from the partnership. The most successful implementations I've seen treat these professionals as genuine partners in the process, not order-takers.
The first misconception: AI is arcane. You don't need to be an automotive engineer to understand what a car does, and the same applies to AI tools. A litigator needs to understand what the system does, how it processes information, and where it might fail — but not necessarily how the underlying algorithm works.
The second: once you buy the tool, the problem is solved. Technology is a tool, not a replacement for judgment. It won't solve your problems if you don't have a strategy for using it, governance around its deployment, or clarity on what you're trying to achieve.
There's also a persistent expectation that AI works like a search engine — you type something in and get an answer. Litigation doesn't work that way. Litigation starts with analyzing documents and ends with drafting work product. Everything in between is building context. The quality of the output is tied directly to the quality of the context the system has about the case.
The biggest hurdle is uncertainty. Lawyers are risk averse. When the rules aren't clear, the default is "no."
ABA Formal Opinion 512, issued in 2024, established baseline ethical obligations: competence, confidentiality, communication, candor, supervisory responsibilities, and reasonable billing. But the rules are still open to interpretation, and they're evolving state by state. California has issued detailed guidance. Some judges now require explicit AI disclosure.
At Advocacy, our position is that rules shouldn't restrict innovation; they should create predictable frameworks. The fundamental obligation doesn't change because the tool changed. What changes is how lawyers verify the output.
Lawyers and providers need better ways to validate AI-generated research and analysis — the equivalent of Shepardizing, adapted for how work is actually being done now.
That's not a problem unique to AI. It's a problem we've been solving in different forms for centuries. The medium changes; the obligation remains.
LexisNexis Intelligize stands out. It makes the entire EDGAR database searchable, benchmarks disclosures, and tracks SEC comment letters in ways that weren't possible before. It's a tool built by people who clearly understand their user's work. That's what matters.
The opinions expressed herein are the author's own and do not necessarily reflect the views of Law360 or LexisNexis Legal & Professional. This article is provided for informational purposes only and should not be construed as legal advice.