When AI does the grunt work, how do we learn to lead?

In the rush to adopt artificial intelligence and automate routine legal or engineering tasks, we may be winning the race for efficiency, but losing the marathon of expertise. If junior lawyers never do the "grunt work" (document review, contract drafting, error spotting) because AI takes it, who becomes the partner of tomorrow? More importantly, how will they develop the judgment, intuition and domain knowledge that clients pay for?

Clients aren’t interested in how you train future talent; they care about outcomes, risk control and seamless service. But if you bypass the developmental pathway that produces those outcomes, the risk isn’t only career stagnation: it’s the erosion of your firm’s capability. This is not a secondary issue; it is foundational.


The Work That Taught Us

Historically, people early in their legal or engineering careers learned through high‑volume, repetitive tasks. These tasks weren’t glamorous, but they were essential. They exposed the junior to patterns: clause language, negotiation styles and risk triggers for the lawyer; architecture trade‑offs, bug types and dependency issues for the engineer.

Now, AI is absorbing many of those tasks. Research from Saga International notes that when document review, drafting and due diligence are handled by AI, the "traditional learning structure is disrupted" (sagalegal.io), and The Law Society describes a similar trend: more automation means fewer opportunities for beginners to engage with the raw material of their career (lawsociety.org.uk).

The outcome is that an associate may review outputs rather than generate them. Good for speed, poor for depth.


The Case for Controlled Inefficiency

Here’s the counter‑intuitive truth: introducing some inefficiency today may yield greater capability tomorrow. If you design automation purely to maximise output, you risk flattening the learning curve. But if you accept a small drag on efficiency in exchange for embedded learning, you build longer‑term structural resilience.

Consider a second‑seat associate reviewing a contract type for the first time. Instead of letting an AI spit out the marked‑up version, the system could scaffold the task: ask the associate to identify key risk clauses and propose edits, then compare their work with the AI’s. That comparison forces the learning loop.
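To make that concrete, here is a rough sketch of the scaffolding step in Python. The names and structure are illustrative assumptions, not a description of any existing tool:

```python
from dataclasses import dataclass

@dataclass
class ReviewAttempt:
    """What the associate produced before seeing the AI's markup."""
    flagged_clauses: set   # clause ids the associate identified as risky
    proposed_edits: dict   # clause id -> suggested wording

def scaffolded_review(attempt: ReviewAttempt, ai_flagged: set) -> dict:
    """Reveal the AI comparison only after the associate commits an attempt."""
    missed = ai_flagged - attempt.flagged_clauses    # risks the associate overlooked
    extra = attempt.flagged_clauses - ai_flagged     # flags worth a mentor discussion
    return {
        "missed_by_associate": sorted(missed),
        "flagged_only_by_associate": sorted(extra),
        "reflection_prompt": (
            f"You missed {len(missed)} clause(s) the baseline flagged. "
            "Note why each one matters before accepting the markup."
        ),
    }
```

The point is the ordering: the associate’s own attempt is captured first, and the AI output is used as a comparator rather than a shortcut.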

This is not about turning back the clock (nobody needs to relearn photocopying); it’s about designing assistance that modulates based on user maturity. Efficiency remains important, but it is not the sole metric: skill growth, judgment training and institutional knowledge must sit alongside speed.


Designing Proactive Personal AI That Learns YOU

What if the AI doesn’t just automate, but adapts to you? Here are design principles for an AI that recognises your experience level and adjusts its support accordingly (a minimal sketch follows the list).

  • Experience tracking: the system knows this is your first review of a share‑purchase agreement vs your 1,400th.
  • Learning nudges: in early tasks, the AI might withhold full automation and instead ask guiding questions: "Which clause in this agreement most likely triggers post‑closing indemnity? Why?" Only after the user responds does it show the full answer.
  • Reflection prompts: after task completion the AI says: "You missed the vendor‑warranty carve‑out you flagged last time. Compare your edit with the expert‑standard version and reflect on why the difference matters."
  • Alert thresholds: if repeated oversight patterns emerge (say mis‑spotting a material risk clause), the system recommends a short mentor‑check or review session.
  • Data capture for skill‑progression: the system logs your interaction history (with appropriate privacy/ethics safeguards) so you can revisit how your judgment evolved and mentors can tailor coaching.
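Here is that minimal sketch of how an assist mode might be chosen from a user’s history. The mode names, thresholds and function signature are illustrative assumptions, not a description of any existing product:

```python
from enum import Enum

class AssistMode(Enum):
    GUIDED = "guided"     # withhold full automation; ask guiding questions first
    COMPARE = "compare"   # user drafts first, then compares with the AI output
    FULL = "full"         # experienced user: AI drafts, human reviews

def choose_assist_mode(prior_reviews: int, recent_error_rate: float) -> AssistMode:
    """Pick a support level from the user's history with this document type.

    prior_reviews: how many times the user has handled this document type.
    recent_error_rate: share of recent first drafts needing material correction.
    The thresholds below are placeholders a real system would tune per task.
    """
    if prior_reviews < 3:
        return AssistMode.GUIDED
    if prior_reviews < 15 or recent_error_rate > 0.2:
        return AssistMode.COMPARE
    return AssistMode.FULL
```

The same signals that select the mode can also drive the alert thresholds above: a rising error rate pushes the user back towards a more supervised mode.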

From a governance standpoint, this approach preserves human‑in‑the‑loop accountability, supports audit‑trail visibility (how did the junior engage; what did the AI suggest; what did they accept or reject) and aligns with regulatory expectations around legal service quality.
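Concretely, each interaction only needs a handful of logged fields to support that audit trail. The record below is a hypothetical sketch; the field names are assumptions, not a regulatory standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InteractionRecord:
    """One auditable step in a junior's AI-assisted task."""
    user_id: str
    task_id: str
    assist_mode: str            # e.g. "guided", "compare", "full"
    ai_suggestions: list        # clause ids or edits the AI proposed
    accepted: list              # suggestions the user kept
    rejected: list              # suggestions the user overrode
    human_reviewed: bool        # did a senior sign off on the output?
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```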


Why Clients Indirectly Benefit

It may seem that reducing billable hours for juniors is disadvantageous, but consider the inverse: if you have a cohort of lawyers who never did the foundational work, the firm may still deliver today, but at what cost of capability tomorrow?

  • A future partner who lacks exposure to early‑stage review may fail to spot subtle risk triggers or counterparty negotiation patterns.
  • Clients don’t pay to develop your talent; they pay for sound judgment, and the quality of that judgment is the differentiator.
  • Firms that embed these "learning‑aware" AI systems build deeper benches of talent, reduce variability in delivery quality and preserve institutional memory, because training then becomes a competitive edge, not a cost centre.

Here are practical steps to operationalise this:

  1. Audit the junior‑task matrix: map tasks currently used for training (like due diligence review, contract red‑lining) and identify which are now outsourced to AI.
  2. Define experience bands: articulate user levels (new associate, mid‑level associate, senior), combined with experience in the relevant area of law, and define desired outcomes for each.
  3. Select AI tools with adaptive modes: ensure your vendor or in‑house solution allows toggling support level and measuring interaction.
  4. Embed review‑points and reflection check‑ins: build prompts, dashboards or mentor signals into workflows rather than just running the AI.
  5. Track capability metrics: don’t just measure hours saved. Measure things like time to independent review, error rate on first drafts and oversight escalation frequency (a sketch of this appears after the list).
  6. Governance & transparency: make sure the system logs decisions, flags where human review occurred (or didn’t) and retains a revision‑history for audit and training.
  7. Change management: communicate to both trainees and senior lawyers that this is not slower work; it’s structured development. Mentors must remain committed to coaching rather than outsourcing oversight entirely.
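On step 5, here is a small illustrative sketch of how those capability metrics might be computed from logged task records; the field names and data shape are assumptions made for the example:

```python
from statistics import mean

def capability_metrics(records: list) -> dict:
    """Summarise skill progression from logged task records (oldest first).

    Each record is assumed to hold: 'independent' (bool, completed without
    escalation), 'first_draft_errors' (int) and 'escalated' (bool).
    Hours saved is deliberately not the only lens.
    """
    if not records:
        return {}
    first_independent = next(
        (i + 1 for i, r in enumerate(records) if r["independent"]), None
    )
    return {
        # how many supervised tasks before the first fully independent review
        "tasks_to_independent_review": first_independent,
        "first_draft_error_rate": mean(r["first_draft_errors"] > 0 for r in records),
        "escalation_frequency": mean(r["escalated"] for r in records),
    }
```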

Risks and Mitigations

Of course, the risks here can’t be ignored:

  • Over‑automation could hollow out entry‑level roles entirely. The bottom of the pyramid disappears, creating a void in the talent pipeline.
  • If learning‑aware modes are not implemented, or are switched off, junior staff simply see the system as "the AI does it" and disengage from thinking.
  • AI outputs remain imperfect: hallucinations, incomplete reasoning and missing context persist.
  • If you rely on AI to teach without human mentorship, you risk replacing one blind‑spot with another.

These risks can be mitigated in a few ways: preserve structured human mentoring, enforce review protocols, design AI assist modes with learning scaffolds rather than just automation toggles, and measure skill progression alongside output quality.


Efficiency will always matter, but if speed becomes the only target, the profession risks ageing its skill base prematurely. What we need instead is automation paired with apprenticeship. AI should not just do; it should teach. The next generation of lawyers must learn how to think, not only how to click.

Firms that embed proactive personal AI (systems that adapt to the user’s experience level, prompt reflection and track progression) will not only deliver value today but build the capability of tomorrow. That capability is what clients will notice, perhaps long before the next hire gets their first promotion.

If you’re leading legal‑tech strategy, engineering for legal workflows or talent development, now is the moment to ask this question: Are we automating away learning, or are we automating with learning in mind?