May 11, 2026

Legal AI tools are often marketed as accelerators for training, promising to make junior lawyers more proficient through quicker answers and clearer summaries. Recent findings, however, suggest these tools may do more harm than good in the earliest stages of a lawyer's career.
In a series of empirical classroom pilots conducted by Product Law Hub, an AI-based product law coach named Frankie was tested to examine its impact on law students and early-career lawyers. The study aimed to analyze how these groups interact with AI when learning judgment-based legal skills. The results, which were derived from both quantitative data and qualitative interviews, painted a worrying picture for law firms that rely heavily on AI for training.
Junior lawyers often struggle with confidence and with framing problems effectively. They tend to search for the "right" answer rather than learning to navigate uncertainty and assess risk across different scenarios. By providing immediate solutions, AI tools short-circuit the critical thinking needed to develop these skills, cutting short their engagement with complex legal issues and diminishing their depth of understanding.
The pilot study showed that when the AI delivered quick answers without engaging students in reasoning, engagement dropped: students spent less time per session and followed up less frequently. Many participants reported feeling less confident and more dependent on the AI's outputs, which they accepted without fully understanding the underlying reasoning.
Conversely, when the AI required students to articulate their reasoning before providing answers, engagement increased significantly. Students were more likely to revise their thinking and defend their conclusions, demonstrating that the timing and method of AI interaction are crucial in educational settings.
The broader implication for law firms is clear: if AI tools discourage deep thinking in low-stakes educational settings, they are likely to replicate that effect in high-pressure, billable environments. The concern is that junior lawyers become overly reliant on AI, eroding their later ability to articulate their reasoning to partners, clients, or regulators.
The issue is not AI itself but how it is deployed. AI can be beneficial when it acts more like a mentor than an oracle, asking probing questions and requiring junior lawyers to engage actively with the material. Such interactions keep lawyers cognitively involved and help build the judgment that is essential to the legal profession.
The findings from the classroom data point to a needed shift in how law firms use AI in training. The efficiency of AI is appealing, but that efficiency must not come at the expense of the judgment skills lawyers need to develop. Firms should reconsider what they are optimizing for: quick results, or the cultivation of skilled, thoughtful legal professionals.
In conclusion, while AI has the potential to transform legal training positively, its benefits depend significantly on thoughtful implementation that prioritizes cognitive engagement and skill development over mere speed.