The Trust Journey: Why Some Legal AI Tools Get Used

Nicole Bradick
April 9, 2025

Trust is an essential ingredient in human-computer interaction, and even more so when the output can have a substantive impact on a lawyer's work product. Not surprisingly, while awareness of AI's potential in legal settings is high, actual adoption and trust lag significantly behind. According to the LexisNexis Investing in Legal Innovation Survey, while 37% of senior lawyers reported that their firms use AI tools, only 25% trusted the technology to handle legal work.

This trust gap creates a significant barrier to adoption. In our work with legal teams implementing AI, we've consistently found that initial excitement gives way to skepticism when tools don't immediately meet expectations.

Building Trust in Probabilistic Systems

Legal professionals operate in a world of precision. Contracts must be exact, analysis must be thorough, and advice must be reliable. Yet generative AI systems are probabilistic in nature, creating an inherent tension—lawyers want certainty from tools that cannot provide it. They can also be unforgiving after a negative experience: their time is precious, and they don't want to spend it learning systems that don't immediately deliver benefit.

As one legal operations leader told me: "I tried it, I didn't like the outcome, it was too hard to log into a new platform. So then I stopped." This common experience highlights how quickly trust can be broken, and how difficult it is to rebuild.

The Three Dimensions of Trust in Legal AI

Based on our extensive research with legal teams, trust in AI systems operates along three key dimensions:

1. Trust in Accuracy

Lawyers need to know the system's outputs are reliable. This goes beyond superficial spot checks to deeper concerns about hallucinations, source citations, and consistency.

Unlike traditional software, which offers predictable outcomes from set inputs, AI systems introduce variability—and sometimes, hallucinations. While these aren’t necessarily “bugs,” they challenge the legal mindset that expects consistent, audit-ready outputs.

Transparency mechanisms that can help build trust include:

  • Source provenance and citations
  • Confidence scores
  • Reasoning traces
  • Clear documentation of limitations

This transparency builds trust by enabling attorneys to verify recommendations in real time without disrupting their workflow, creating the confidence needed for sustained adoption.
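
To make this concrete, here is a minimal sketch of how an AI response might carry its own verification metadata. The structure and field names are illustrative assumptions, not any particular product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    source_id: str   # e.g., case or document identifier
    excerpt: str     # the passage the answer relies on
    location: str    # page, section, or paragraph reference

@dataclass
class AIAnswer:
    text: str
    confidence: float                                   # 0.0-1.0, ideally calibrated
    citations: list[Citation] = field(default_factory=list)
    reasoning_trace: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)

def render_for_review(answer: AIAnswer) -> str:
    """Format an answer so a reviewing attorney can verify it in place."""
    lines = [answer.text, f"Confidence: {answer.confidence:.0%}"]
    for i, c in enumerate(answer.citations, start=1):
        lines.append(f'[{i}] {c.source_id} ({c.location}): "{c.excerpt}"')
    if answer.limitations:
        lines.append("Known limitations: " + "; ".join(answer.limitations))
    return "\n".join(lines)
```

Carrying citations and stated limitations alongside the answer lets the attorney check sources in place rather than re-researching from scratch.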

2. Trust in Relevance

A general-purpose tool is unlikely to get far in your organization unless it is tailored to specific needs and use cases. Different practice areas have different needs—a litigator uses AI differently than a transactional lawyer, and procurement specialists have different requirements than IP attorneys. This doesn't mean you should be overrun with point solutions, but rather that the solutions you have should be tailored to the needs of each user type. Tailoring also allows you to bring more context into a system, resulting in generally more satisfying outputs.

The lack of user segmentation may explain why, despite significant investment (25.3% of organizations have spent between $100,000 and $500,000 on legal AI tools), many legal professionals we have interviewed remain unsatisfied with commercial solutions.

3. Trust in Value

Legal professionals must see a clear return on their investment of time and attention. In a profession where time is literally money, tools that create more work than they save are quickly abandoned.

This value proposition is undermined when:

  • Tools require excessive prompt engineering
  • Interfaces are disconnected from existing workflows
  • Learning curves are steep without corresponding benefits
  • Outputs require substantial rework

The Calibrated Trust Challenge

As AI systems improve, we face a paradoxical challenge: highly accurate systems can create a different kind of trust problem. Legal professionals may develop over-reliance on AI recommendations, diminishing their critical evaluation.

This creates a tension unique to professional services: how do we design systems that are trusted enough to be used, but not so trusted that professional judgment becomes secondary?

The solution requires what I call "calibrated trust" – where the level of trust matches the actual capabilities of the system. In our UX research with multiple firms, we've found that the most effective implementations include:

  • Deliberate presentation of alternative interpretations for important analyses
  • System design that presents AI as a collaborative partner rather than an authority
  • Visibility into the AI's reasoning process rather than just its conclusions
  • Depending on the circumstance, confidence indicators that accurately and clearly represent reliability levels

By calibrating trust appropriately, we avoid both the skepticism that prevents adoption and the over-reliance that undermines professional value.
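
As one hypothetical illustration of calibration in the interface itself, a system might vary how assertively it presents an answer based on a calibrated confidence score and the stakes of the task. The thresholds below are illustrative assumptions, not validated values:

```python
def presentation_mode(confidence: float, high_stakes: bool) -> str:
    """Map a calibrated confidence score to a UI treatment.

    Thresholds are illustrative assumptions; in practice they should be
    tuned per use case against evaluation data.
    """
    if high_stakes or confidence < 0.6:
        # Invite scrutiny: show alternative interpretations side by side
        return "alternatives_and_full_reasoning"
    if confidence < 0.85:
        # Collaborative framing: surface the reasoning trace with the answer
        return "answer_with_reasoning"
    # Even at high confidence, keep reasoning one click away rather than
    # presenting the output as unquestionable authority
    return "answer_with_expandable_reasoning"
```

The design intent of such a mapping is that the lawyer's judgment stays in the loop at every confidence level; the system only varies how loudly it asks for scrutiny.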

Strategies for Building Trust in Legal AI

Based on our work with legal organizations to date, here are several considerations for building trust:

  1. Start small and build credibility with high-accuracy use cases
  2. Integrate AI where lawyers already work rather than creating new destinations
  3. Provide appropriate levels of transparency based on use case criticality
  4. Invest in training beyond basic functionality to build true confidence
  5. Create feedback loops that allow continuous improvement (see the sketch below)
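
On the last point, even a lightweight capture mechanism beats ad hoc complaints. The sketch below assumes a simple JSONL log; the fields and file-based storage are illustrative, not any vendor's API:

```python
import json
import time

def record_feedback(tool: str, matter_type: str, rating: int,
                    rework_minutes: int, comment: str = "",
                    log_path: str = "ai_feedback.jsonl") -> None:
    """Append one structured feedback entry to a JSONL log.

    A minimal sketch: real deployments would route these entries into
    analytics to spot which tools and use cases are losing user trust.
    """
    entry = {
        "timestamp": time.time(),
        "tool": tool,
        "matter_type": matter_type,
        "rating": rating,                   # e.g., 1-5 usefulness score
        "rework_minutes": rework_minutes,   # time spent correcting the output
        "comment": comment,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Tracking rework time alongside ratings matters because, as noted above, tools that create more work than they save are quickly abandoned.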

The Path Forward

Many of the barriers to adoption center on trust in the system. Careful design, both of the user experience and of the underlying system, can vastly increase user trust and confidence, resulting in better ROI on your team's technology spend.

While building internal capabilities is essential, many legal departments find that partnering with specialized AI-first service providers can complement and accelerate their AI strategy. These partners bring not only deep expertise in specific legal functions but also pre-built solutions that already incorporate the user-centered design principles discussed here. This hybrid approach can help legal teams overcome resource constraints and respond more quickly to business demands while still maintaining the focus on user experience that drives successful adoption.

Nicole Bradick is Head of Global Innovation at Factor Law and former CEO of Theory and Principle, a legal technology design and development firm. Join her at the CLOC Global Institute on Tuesday, May 6th at 8am for a panel discussion "Beyond AI Implementation: Leveraging UX to Accelerate Adoption".