From Proof of Corn to proof of judgement in legal AI
Proof of Corn started with a deliberately blunt claim: AI cannot grow corn.
The response was not to build a robot farmer, but to test something more subtle. What if AI did not try to do the physical work at all? What if it acted as a manager instead?
In the experiment, the AI is given a single, real-world objective: get corn grown in a field. It researches options, plans actions, drafts outreach, tracks costs, and decides when to move and when to wait. Humans and services handle anything physical, and every decision and delay is logged publicly.
The important part is not the novelty; it's the constraint. At the end of the season, either corn exists or it does not. There is no reframing the outcome, beyond perhaps quibbling about quality.
Very early on, the AI made a decision that demos never really need to make. It chose to do nothing: the planting window was months away, so starting activity immediately would only create noise, cost, and forgotten context.
That single decision makes Proof of Corn relevant well beyond farming.
Timing matters more than speed
Most AI systems today are judged on speed: how quickly they can summarise, draft, and execute.
Proof of Corn rejects that framing. Speed does not matter if the timing is wrong; corn does not care that you are ready early.
Legal work behaves in much the same way.
There is no point kicking off a consent process months before it becomes actionable, because the discussion will decay and have to be restarted. There is no value in sending correspondence the moment data arrives if it lands during a client’s busiest week and disappears into an inbox. Acting early often feels productive, but it rarely produces progress.
Judgement, not responsiveness, is what actually moves things forward.
When faster becomes worse
This is where many legal AI use cases start to wobble.
If an AI can review documents faster, the temptation is to send more, sooner. Four hundred documents dropped into someone's inbox on a Friday afternoon is technically impressive but practically disastrous. Nobody thanks you for it. It creates stress, resentment, and a back-of-the-mind sense of "why did I ever think being a lawyer looked appealing?".
The problem is not volume alone. It is timing, framing and respect for attention.
A human would usually pace that work. Flag the critical items first. Hold the rest. Choose a moment when the recipient can actually engage. Systems that optimise for throughput alone miss that entirely.
Speed without restraint does not feel helpful. It feels inconsiderate.
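To make that concrete, here is a minimal sketch of what pacing could look like, assuming a hypothetical priority flag on each document and a known quiet period for the recipient; none of this reflects a real system:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Document:
    name: str
    critical: bool  # flagged by a reviewer or an upstream model (assumed)

def pace_delivery(docs: list[Document], busy_until: date, batch_size: int = 20):
    """Send critical items now; hold the rest until the recipient is free,
    then release them in digestible batches rather than one Friday dump."""
    critical = [d for d in docs if d.critical]
    deferred = [d for d in docs if not d.critical]

    schedule = [(date.today(), critical)]  # critical items go out immediately
    send_day = max(date.today(), busy_until + timedelta(days=1))
    for i in range(0, len(deferred), batch_size):
        schedule.append((send_day, deferred[i:i + batch_size]))
        send_day += timedelta(days=2)  # space batches out, not all at once
    return schedule
```

Notice that nothing here is produced faster. It is delivered when it can actually be absorbed.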
Why simple automation breaks down
Most legal automation is built on triggers. If X happens, do Y.
If data arrives, send a letter. If documents are uploaded, start review. If a deadline approaches, escalate.
This works until human attention becomes the bottleneck, which it almost always is.
An if-this-then-that rule might send a perfectly drafted letter the moment information is received. Technically correct, but practically counterproductive if the recipient is heads-down for the next week. The email gets buried, the follow-up feels awkward, and goodwill erodes slightly with each nudge.
A human case manager delays deliberately, not because they are inefficient, but because they understand that timing affects how work is received.
Proof of Corn forces the same discipline: acting early is not rewarded, and waiting is an explicit decision.
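Here is the contrast in miniature, with invented function names rather than any real API: the first rule fires the moment data arrives; the second treats waiting as a decision with a reason attached.

```python
from datetime import date, timedelta

# Bare trigger: if data arrives, send the letter. Timing is never considered.
def on_data_received_naive(send_letter):
    send_letter()  # fires immediately, regardless of context

# Gated version: acting now must beat waiting, and waiting is logged.
def on_data_received_gated(send_letter, recipient_busy_until: date, log):
    today = date.today()
    if recipient_busy_until and today <= recipient_busy_until:
        resume = recipient_busy_until + timedelta(days=1)
        log(f"Holding letter until {resume}: recipient unavailable.")
        return resume  # an explicit decision to wait, with a reason
    send_letter()
    log("Letter sent: recipient has capacity to engage.")
    return None
```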
World models, not just language models
Language models are good at generating responses. They are far less capable of understanding situations.
A prompt can include context, but it cannot easily encode time passing, memory decay, inbox fatigue, or the social cost of being annoying. As a result, LLM-driven systems default to immediacy. Something happened, so respond.
What Proof of Corn implicitly relies on is a lightweight world model.
Not a grand simulation of reality, but a practical representation of constraints. There is a planting window. There are dependencies. There are people who may or may not respond. Acting too early has a cost. Acting too late has a different cost.
With even a basic world model, the agent can choose. Without one, it can only react.
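As a hedged illustration, a lightweight world model might be nothing more than a few dated constraints and a decision rule. The fields and lead times below are invented for the sketch, not taken from the actual Proof of Corn build:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class WorldModel:
    """A practical representation of constraints, not a simulation of reality."""
    window_open: date   # e.g. the start of the planting window
    window_close: date  # acting after this has a hard cost: no crop
    lead_time_days: int # how long outreach and logistics take to land
    early_cost: str = "noise, spend, and context that decays before it is used"

    def decide(self, today: date) -> str:
        days_to_window = (self.window_open - today).days
        if days_to_window > self.lead_time_days:
            return f"wait ({self.early_cost})"
        if today > self.window_close:
            return "escalate: the window has been missed"
        return "act: the action is now meaningful, not merely possible"

# In January, with planting months away, the computed answer is to wait:
field = WorldModel(date(2025, 4, 15), date(2025, 5, 31), lead_time_days=21)
print(field.decide(date(2025, 1, 10)))
```

With even this much state, "do nothing" stops being an absence of behaviour and becomes a computed answer.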
Legal workflows need exactly this grounding. A sense of when an action becomes meaningful rather than merely possible. An understanding of how much information someone can realistically absorb at once. A memory of when restraint preserves goodwill better than activity.
These are things experienced lawyers manage instinctively, but most systems do not attempt to model them at all.
Goodwill is part of the system state
The other constraint Proof of Corn introduces is social.
Growing corn requires cooperation. Land access, advice, labour, prioritisation. None of these are guaranteed. People can ignore emails. They can delay. They can decide how helpful to be.
This makes goodwill a real dependency.
An agent that floods inboxes, escalates too quickly, or treats every interaction as urgent will burn that goodwill fast, even if the underlying work is sound. One that paces itself, signals intent clearly, and respects attention has a better chance of getting flexibility when things change.
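One way to make goodwill literally part of the system state, sketched with invented numbers: a budget that every interruption spends and that only quiet time replenishes.

```python
class GoodwillBudget:
    """Track cooperation as a depletable resource. The costs and the
    recovery rate below are illustrative, not calibrated."""
    def __init__(self, balance: float = 1.0):
        self.balance = balance  # 1.0 = fully cooperative, 0.0 = ignoring you

    def spend(self, action: str) -> bool:
        costs = {"nudge": 0.05, "escalation": 0.25, "bulk_send": 0.4}
        cost = costs.get(action, 0.1)
        if self.balance - cost < 0.2:  # don't drain the relationship
            return False               # the agent should wait instead
        self.balance -= cost
        return True

    def recover(self, quiet_days: int):
        self.balance = min(1.0, self.balance + 0.02 * quiet_days)
```

The point is not the numbers. It is that an action the budget cannot afford becomes a reason to wait, which is exactly the restraint the experiment rewards.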
Legal work runs on this dynamic more than we like to admit.
Why this changes how legal AI should be judged
Most legal teams are not short of drafted material. They are short of attention, energy, and continuity.
AI systems that optimise purely for speed risk producing work that is technically correct but operationally mistimed. The result is frustration, rework, and disengagement, not because the system was wrong, but because it behaved badly.
The interesting lesson from Proof of Corn is that success depends less on clever language and more on patience, sequencing, and knowing when to slow down.
The most important question is no longer whether AI can act.
It is whether it can wait. Whether it can hold context across weeks rather than seconds. Whether it can choose the moment that increases the chance of progress rather than simply satisfying a trigger.
Proof of Corn is useful because it cannot hide from these constraints. It has to operate slowly, in public, with humans who may say no or simply not reply.
Legal AI will only mature when it starts to treat restraint, pacing, and respect for attention as core features, not inefficiencies to be optimised away.