Trust is an essential ingredient in human-computer interaction, and even more so when the output could have a substantive impact on a lawyer's work product. Not surprisingly, while awareness of AI's potential in legal settings is high, actual adoption and trust lag significantly behind. According to the LexisNexis Investing in Legal Innovation Survey, 37% of senior lawyers reported that their firms use AI tools, yet only 25% trusted the technology to handle legal work.
This trust gap creates a significant barrier to adoption. In our work with legal teams implementing AI, we've consistently found that initial excitement gives way to skepticism when tools don't immediately meet expectations.
Legal professionals operate in a world of precision. Contracts must be exact, analysis must be thorough, and advice must be reliable. Yet generative AI systems are probabilistic in nature, creating an inherent tension: lawyers want certainty from tools that cannot provide it. Lawyers can also be unforgiving after a negative experience. Their time is precious, and they don't want to spend it learning systems that fail to deliver immediate benefit.
As one legal operations leader told me: "I tried it, I didn't like the outcome, it was too hard to log into a new platform. So then I stopped." This common experience highlights how quickly trust can be broken, and how difficult it is to rebuild.
Based on our extensive research with legal teams, trust in AI systems operates along three key dimensions:
Lawyers need to know the system's outputs are reliable. This goes beyond superficial checks to deeper concerns about hallucinations and the accuracy of cited sources.
Unlike traditional software, which offers predictable outcomes from set inputs, AI systems introduce variability—and sometimes, hallucinations. While these aren’t necessarily “bugs,” they challenge the legal mindset that expects consistent, audit-ready outputs.
Transparency mechanisms that can help build trust include:
This transparency builds trust by enabling attorneys to verify recommendations in real time without disrupting their workflow, creating the confidence needed for sustained adoption.
Acquiring a general-purpose tool for your organization is unlikely to go far unless it is tailored to specific needs and use cases. Different practice areas have different needs: a litigator uses AI differently than a transactional lawyer, and procurement specialists have different requirements than IP attorneys. This doesn't mean you should be overrun with point solutions, but rather that the solutions you have should be tailored to the needs of each user type. Tailoring also allows you to bring more context into a system, resulting in generally more satisfying outputs.
This lack of user segmentation may explain why, despite significant investment (25.3% of organizations have spent between $100,000 and $500,000 on legal AI tools), many of the legal professionals we have interviewed remain unsatisfied with commercial solutions.
Legal professionals must see a clear return on their investment of time and attention. In a profession where time is literally money, tools that create more work than they save are quickly abandoned.
This value proposition is undermined when:
As AI systems improve, we face a paradoxical challenge: highly accurate systems can create a different kind of trust problem. Legal professionals may develop over-reliance on AI recommendations, diminishing their critical evaluation.
This creates a tension unique to professional services: how do we design systems that are trusted enough to be used, but not so trusted that professional judgment becomes secondary?
The solution requires what I call "calibrated trust," where the level of trust matches the actual capabilities of the system. In our UX research with multiple firms, we've found that the most effective implementations include:
By calibrating trust appropriately, we avoid both the skepticism that prevents adoption and the over-reliance that undermines professional value.
Based on our work with legal organizations to date, here are some things to consider in order to build trust:
Many of the barriers to adoption center on trust in the system. Careful design, both of the user experience and of the underlying system, can vastly increase user trust and confidence, resulting in better ROI on your team's technology spend.
While building internal capabilities is essential, many legal departments find that partnering with specialized AI-first service providers can complement and accelerate their AI strategy. These partners bring not only deep expertise in specific legal functions but also pre-built solutions that already incorporate the user-centered design principles discussed here. This hybrid approach can help legal teams overcome resource constraints and respond more quickly to business demands while still maintaining the focus on user experience that drives successful adoption.
Nicole Bradick is Head of Global Innovation at Factor Law and former CEO of Theory and Principle, a legal technology design and development firm. Join her at the CLOC Global Institute on Tuesday, May 6th at 8am for a panel discussion "Beyond AI Implementation: Leveraging UX to Accelerate Adoption".