Insights

Where Legal Tech Actually Lives: Mapping Your Team’s Technology Touchpoints

Nicole Bradick
April 9, 2025

The legal industry is at a crucial inflection point with AI adoption. While most legal departments have AI tools in place, only 12.1% report "leading the way" in GenAI adoption, according to Factor's GenAI in Legal Benchmarking Report. Even more telling: 61.2% of legal departments provide AI access to most or all team members — yet just 18.9% of legal professionals feel very confident using those tools.

This gap between access and actual confidence in use reveals the true challenge: adoption isn’t about availability — it’s about experience. A lack of user confidence is often a design problem. To move from tools being present to being useful, we have to look more closely at where and how legal professionals interact with technology in their daily workflows.

The “Over-Platformification” Problem

Legal professionals don't want another platform. In our user testing and research with lawyers, one of the biggest barriers to AI adoption is what I call "over-platformification": the proliferation of separate tools that require legal professionals to log into yet another system, learn another interface, and change how they work.

In a recent workshop we hosted, a partner at an AmLaw 100 firm put it this way: "The ultimate dream is not having a separate AI platform interface—taking advantage of what AI has to offer without having to log into somewhere else."

Even the most powerful AI struggles to gain traction when it's layered onto the workflow instead of embedded within it. Because a lack of user confidence is a design problem, this disconnect between access and utilization has design solutions.

Meeting Users Where They Are

To drive successful AI adoption, we need to start by understanding that different user types will use the technology you’re offering in different ways, based on the value they receive from it. This requires you to segment your users into different buckets based on their needs (and not on other superficial segmentation points, like practice group or seniority).

Two critical user segments to consider:

  1. Power Users vs. Casual Users: Power users are more likely to understand how the technology works and how to get what they need from it. As a result, they might benefit from advanced features and fewer guardrails. They may also be sufficiently motivated to access a separate platform, creating an exception to the over-platformification problem. Meanwhile, casual users might need AI that's completely invisible to them or that appears only in specific contexts, such as when creating certain types of documents.
  2. Task-Specific Users: Segment users by the actual work they're doing. For instance, an intellectual property lawyer who is otherwise a casual user might need intensive use of an application, but only during specific periods of activity. This requires a completely different interaction model than one designed for someone who uses the technology consistently.

Beyond the Chatbot

While chatbots remain the predominant way to engage with AI (our GenAI in Legal Benchmarking Report found that 47.5% of legal teams have built internal AI interfaces and chatbots), they're also the most cognitively challenging interface, creating significant friction points:

  • Users don't know what they can ask or do
  • Users struggle with how to prompt effectively
  • Users have no clear way to gauge the accuracy of responses

These are all problems that can be designed around, but they represent core challenges with such an open and flexible UI. 

Consider the full spectrum of UI options, from highly flexible (chat interfaces) to highly constrained (point-and-click interfaces). The more constrained the interface, the easier it typically is for users. In many cases, the most effective AI implementation might have no visible interface at all—simply providing the benefit of AI behind a button or process users already use.

Addressing Trust Through Design

Another key barrier to adoption is mistrust. Legal professionals often desire certainty, which is challenging with probabilistic systems. In our research, we’ve found that it’s very common for attorneys to abandon a system altogether if they receive outputs that they feel they can’t trust. 

The key is in setting appropriate user expectations. Smart design decisions can help bridge this gap:

  • Consider whether GenAI is even the right tool to solve the problem. If certainty in output is required, GenAI is likely not the right tool to deploy.
  • Be transparent about sources, capabilities, and limitations. The touchpoints for this transparency may differ depending on the type of activity and user proficiency. 
  • Determine whether/how to provide confidence scores for the specific use case.

There's an interesting tension here: if a system has poor accuracy, users will abandon it regardless of how present it is in their workflow (think Clippy). However, if a system is highly accurate and ever-present, there's a risk of "autopilot" behavior: blindly accepting AI recommendations without proper review. The ideal design strikes a balance between high accuracy and selective presence, ensuring lawyers remain engaged in critical thinking while benefiting from AI assistance.

Path Forward: User-Centered AI Implementation

As the Factor GenAI in Legal Benchmarking Report reveals, organizations achieving the highest ROI consistently prioritize high-impact use cases addressing specific pain points rather than implementing AI for its own sake. This means:

  1. Start with workflow mapping before AI implementation
  2. Identify natural integration points in daily legal work
  3. Match interfaces to user needs, not the other way around
  4. Build trust through transparency appropriate to the use case

Our research with The Sense Collective—Factor's community where legal teams from Microsoft, Adobe, and other companies share AI strategies—tells an interesting story. While only 18.9% of legal professionals feel confident with enterprise AI tools, 27.3% of Sense Collective members do. Better yet, these teams have eliminated the extreme skills gap affecting nearly a quarter of the market (those who "really need help" with AI).

What's their secret? The numbers point to a few key differences: they build 46% more internal AI tools, adopt 22.4% more specialized legal AI solutions, and collaborate 23.8% more with IT. Bottom line: when teams focus on both user needs and tech integration, adoption improves across the board.

AI adoption is not just about having the sexy (or easy to build) tool. As with every technology that preceded it, adoption hinges on understanding your users and their workflows, and on designing experiences that meet them where they are.

While developing in-house expertise is vital, many legal teams find the most efficient path forward includes partnering with AI-first service providers who specialize in specific legal functions. These partnerships can complement internal efforts and accelerate implementation, helping teams respond to growing business demands while still adhering to the user-centered design principles that drive successful adoption.

Nicole Bradick is Head of Global Innovation at Factor Law and former CEO of Theory and Principle, a legal technology design and development firm. Join her at the CLOC Global Institute on Tuesday, May 6th at 8am for a panel discussion "Beyond AI Implementation: Leveraging UX to Accelerate Adoption".