March 16, 2026

In the rapidly evolving field of legal technology, the prevailing focus has been on developing more sophisticated AI models. The general belief is that the more advanced the technology, the more useful it will be. However, empirical evidence, particularly from a series of classroom pilots run by Product Law Hub, suggests otherwise.
Legal AI has not fallen short due to a lack of advanced models, but rather because the fundamental approach is flawed. The most successful legal AI systems don't just automate tasks; they act more like mentors than mere tools.
The Product Law Hub pilots used an AI-based legal coach named Frankie to study how users develop judgment-based legal skills. The findings, supported by both quantitative data and qualitative interviews, revealed that effective learning outcomes stemmed from collaborative interactions rather than from the AI's ability to deliver answers quickly.
The pursuit of automation in legal AI aims to reduce human effort and deliver fast results. However, this approach proves inadequate when dealing with tasks that require judgment. Judgment demands context, prioritization, and explanation, elements that are often lost when AI systems prioritize delivering quick conclusions.
The pilots demonstrated that when the AI assumed an authoritative role, user engagement and learning decreased. Users deferred to the AI’s conclusions without developing their reasoning skills. Conversely, when the AI engaged users by asking questions and prompting them to articulate their reasoning, the results were significantly better. Users not only engaged more deeply but also retained information more effectively.
The distinction between authoritative and collaborative AI is crucial. Authoritative AI, which delivers quick and definitive answers, might seem safer and more efficient, but it actually fosters dependency and stifles deeper learning. On the other hand, a collaborative AI, which mimics the mentorship model found in traditional legal training, encourages active learning and independence.
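To make the contrast concrete, here is a minimal sketch in Python. It is entirely hypothetical, since the pilots do not publish Frankie's implementation, but it illustrates how the two postures might differ at the level of system instructions and turn policy:

```python
# Hypothetical sketch; the pilots do not publish Frankie's implementation.
# Much of the authoritative-vs-collaborative difference can be expressed in the
# instructions and turn policy handed to an underlying language model.

from dataclasses import dataclass

# These prompts would be passed to the underlying model as system instructions.
AUTHORITATIVE_PROMPT = (
    "You are a legal assistant. Give the user a concise, definitive answer "
    "to their question, citing the relevant rule."
)

COLLABORATIVE_PROMPT = (
    "You are a legal coach. Before offering any conclusion, ask the user to state "
    "their own analysis: the facts they think matter, the rule they would apply, "
    "and the risks they see. Engage with that reasoning, question the gaps in it, "
    "and only then explain your own view, step by step."
)

@dataclass
class Turn:
    user_text: str
    user_gave_reasoning: bool  # did the user commit to an analysis, or just ask?

def next_coach_move(turn: Turn, mode: str) -> str:
    """Choose the coach's next move for a single turn (illustrative only)."""
    if mode == "authoritative":
        return "answer"                 # always jump straight to a conclusion
    if not turn.user_gave_reasoning:
        return "ask_for_reasoning"      # prompt the user to articulate an analysis first
    return "probe_then_explain"         # engage with their reasoning before concluding

if __name__ == "__main__":
    turn = Turn("Can we enforce this non-compete?", user_gave_reasoning=False)
    print(next_coach_move(turn, mode="authoritative"))   # -> answer
    print(next_coach_move(turn, mode="collaborative"))   # -> ask_for_reasoning
```

The point of the sketch is that the collaborative posture is mostly a design decision about when the system concludes, not a question of model capability.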
This shift, from AI that automates to AI that mentors, aligns with how lawyers naturally learn and develop expertise. Legal AI should not aim to replace the functions of a lawyer but to enhance the cognitive processes lawyers use.
The implications of these findings are clear for both developers and buyers of legal AI. Developers should focus less on raw model capability and more on how the system interacts with users, especially at the moments when a user is uncertain or wrong. Buyers should look beyond a system's automation capabilities and evaluate whether it actually improves lawyers' thinking over time.
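As a hypothetical illustration of the kind of check a buyer or developer might run, the sketch below feeds a coach a confidently wrong statement and asks a blunt question of the reply: did the system probe the user's reasoning, or simply hand down a correction? The `coach_reply` interface and the heuristic are assumptions made for illustration, not features of any shipping product.

```python
# Hypothetical acceptance test; `coach_reply` stands in for whatever interface a
# product exposes and is an assumption, not a published API.

def looks_like_a_probe(reply: str) -> bool:
    """Crude heuristic: does the reply ask the user to think, rather than just assert?"""
    return "?" in reply and not reply.lower().startswith("the answer is")

def evaluate_when_user_is_wrong(coach_reply) -> dict:
    """Give the coach a confidently wrong statement and inspect how it responds."""
    wrong_claim = (
        "The limitation period doesn't matter here, so I'll advise the client "
        "to sue whenever they like."
    )
    reply = coach_reply(wrong_claim)
    return {
        "asked_a_question": looks_like_a_probe(reply),
        "corrected_without_engaging": reply.lower().startswith("no,"),
    }

if __name__ == "__main__":
    # Stand-in coach for demonstration; a real evaluation would call the product.
    def stub_coach(user_text: str) -> str:
        return "What makes you confident the limitation period doesn't apply here?"

    print(evaluate_when_user_is_wrong(stub_coach))
    # -> {'asked_a_question': True, 'corrected_without_engaging': False}
```

A real evaluation would use many such scenarios and human review rather than a string heuristic, but the principle stands: test the system's behavior at the user's weakest moments, not just its accuracy on clean questions.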
In conclusion, the future of legal AI lies not in its ability to automate but in its capacity to relate and adapt to the learning needs of lawyers. By embracing a mentorship role, legal AI can become a transformative tool in the legal industry, ensuring that it not only fits into but also enhances the way legal professionals work and learn.