How a SATs Question Unlocks Better Prompts for Legal AI

On Monday I was an invigilator at my son's school, watching over the SATs grammar paper. Flicking through the questions as the exam started, I saw this one:

"Identify the fronted adverbial in this sentence."

I stared at it like it was quantum physics. English lessons in the 90s never drilled me on that; we got commas, full stops, and maybe the odd apostrophe. No one mentioned fronted anything.

Walking home (after Googling what it meant), I kept thinking about the term I had blanked on. Fronted adverbials are exactly the sort of thing that causes trouble in contract reviews. Not just for people, but for AI too.

We use them all the time without thinking:

  • "After completion,"
  • "In certain circumstances,"
  • "To the extent that"

They seem like harmless scene setting, but they actually carry meaning that affects the whole clause. A lawyer might clock that. Will a language model?

This isn’t about explaining grammar to lawyers. It’s about being deliberate when we build prompts and tools. If we tell a model to find vague clauses, we need to say what vagueness looks like. Not just hope the model gets it by vibe.


What even is a fronted adverbial?

A very good question... after a quick Google between the Grammar and Spelling exams, I can tell you it’s the bit that tells you when, how, where or why something happens. It comes before the subject and verb, often with a comma after it.

After the audit, the Buyer may terminate.

Remove the front bit and the sentence still works, but the phrase "after the audit" changes everything. It introduces a condition, a limit, a dependency. In contracts, that’s where risk often hides.


Contract English loves this one weird trick

Contracts love to put important qualifiers up front. It looks clean and adds context, but it quietly reframes the obligation.

Done well, it softens a hard duty. Done badly, it creates ambiguity.

A favourite example:

As soon as reasonably practicable, the Supplier shall deliver the replacement parts.

It reads like a commitment. In reality, it is a flexible standard tied to context. What is “reasonably practicable”? When does that clock start? You can argue it either way. That is exactly the point.

Some phrases like “as soon as reasonably practicable” are not vague by accident. They are purposefully flexible. In many commercial contexts, lawyers do not want a hard-coded deadline because the reality of performance cannot be predicted up front.

What matters is that the party acts promptly in context. If there is disagreement, a court will decide what that looked like. So the issue is not that the phrase is wrong. It is that when a model or a reviewer hits language like that, they need to know it is a recognised standard, not a gap needing to be filled.

If we want LLMs to be genuinely useful, they need to do more than flag this sort of clause as vague. They should understand why the phrase was likely used, what flexibility it introduces, and whether that makes sense in the clause’s context. Spotting a risk is one thing, actually recognising deliberate ambiguity is what separates good legal tooling from noisy red flags.

Now compare this:

The Supplier shall deliver the replacement parts within five business days of written notice.

Clear. Measurable. No vague opener.

The first version is so common that it can slip through. AI won’t flag it unless you tell it what to look for.


Other fronted risks hiding in plain sight

1. Conditional Openers

Subject to clause 12, the Buyer may terminate.
Unless the Supplier objects in writing, payment shall be made...

The whole clause is gated: miss the condition, and you misread the rule.

2. Concessive Openers

Notwithstanding anything to the contrary, this Agreement may be terminated early.

This one overrides everything before it. If you don’t trace what it’s cancelling out, protections vanish.

3. Purpose or Reason

In order to comply with applicable law, the Customer shall...

Looks helpful, but it can limit the duty to a single reason, not a broader obligation.


Where LLMs come in

When we build tools around legal language, we’re not just telling the model to read carefully. We’re giving it the patterns a trained reviewer knows to watch for.

"Find vague clauses" is a vague prompt. The model has to guess what you mean, and that guess costs time and tokens, and usually produces a fuzzy result.

A better approach is structure:

  • Highlight conditional openers like "subject to..."
  • Pull out temporal triggers like "upon termination..."
  • Flag softening phrases like "to the extent reasonably practicable"

That way, we’re not asking the model to make judgment calls. We’re telling it what form to look for, and why it matters.
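That pattern-first approach can even be sketched in code. Here is a minimal Python sketch; the category names and phrase lists are my own starting assumptions, not an established taxonomy, and you would extend them per contract type:

```python
import re

# Hypothetical starter lists of fronted openers worth flagging.
OPENER_PATTERNS = {
    "conditional": [r"^subject to\b", r"^unless\b", r"^provided that\b"],
    "temporal": [r"^upon\b", r"^after\b", r"^as soon as\b"],
    "softening": [r"^to the extent\b", r"^as soon as reasonably practicable\b"],
}

def classify_opener(sentence: str) -> list[str]:
    """Return the categories of fronted opener a sentence starts with."""
    hits = []
    for category, patterns in OPENER_PATTERNS.items():
        if any(re.search(p, sentence, re.IGNORECASE) for p in patterns):
            hits.append(category)
    return hits
```

For example, `classify_opener("Subject to clause 12, the Buyer may terminate.")` returns `["conditional"]`, while a sentence with no fronted opener returns an empty list. You could feed the matched category into the prompt as context, rather than asking the model to invent the classification itself.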

Let's make that prompt less vague

Find all sentences starting with a fronted adverbial. Highlight the opener, summarise what it modifies, and flag any use of undefined terms (like 'reasonable', 'appropriate', or 'as soon as possible') or open conditions that shift timing or scope without clear limits.

That line does more than a keyword search. It tells the model how to think, which then gives you better results, faster.
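If you are building this into a tool rather than typing it by hand, it helps to compose the instruction from an explicit term list so it stays consistent across runs. A rough sketch, where the term list and wording are just my starting point:

```python
# Hypothetical list of undefined terms worth flagging; extend per contract type.
UNDEFINED_TERMS = ["reasonable", "appropriate", "as soon as possible"]

def build_review_prompt(terms: list[str] = UNDEFINED_TERMS) -> str:
    """Compose the fronted-adverbial review instruction with an explicit term list."""
    quoted = ", ".join(f"'{t}'" for t in terms)
    return (
        "Find all sentences starting with a fronted adverbial. "
        "Highlight the opener, summarise what it modifies, and flag any use of "
        f"undefined terms (like {quoted}) or open conditions that shift timing "
        "or scope without clear limits."
    )
```

Keeping the terms in a list means the prompt can grow with each new vague phrase you encounter, without rewriting the instruction itself.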


Quick pattern to find them

If you want a rough check, regex is a cheap and easy way to start:

^[A-Z][^,]{0,80},

It flags sentences that start with a phrase and a comma. Not perfect, but useful. Then you ask:

  • What’s being introduced?
  • Is it defined?
  • Could it affect timing, obligation, or scope?
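Here is that rough check run end to end in Python, using the pattern above with `re.MULTILINE` so `^` anchors at the start of each line. The sample clauses are the ones from earlier in this piece:

```python
import re

# Flags a sentence that opens with a capitalised phrase followed by a comma.
FRONTED = re.compile(r"^[A-Z][^,]{0,80},", re.MULTILINE)

clauses = """\
After the audit, the Buyer may terminate.
The Supplier shall deliver the replacement parts within five business days of written notice.
Notwithstanding anything to the contrary, this Agreement may be terminated early."""

for match in FRONTED.finditer(clauses):
    print(match.group(0))
# Prints:
# After the audit,
# Notwithstanding anything to the contrary,
```

Note that it correctly skips the clean five-business-days clause: no fronted opener, no comma, no match. The 80-character cap just stops the pattern from swallowing whole sentences that happen to contain a comma much later on.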

Why this all matters

That SATs question threw me, but it stayed with me all day, because the pattern behind it is everywhere in contracts. Once you name it, you start seeing how it shifts meaning, and if we’re building tools for contracts, that’s gold. It helps us move from instinct to instruction.

Grammar isn’t fluff. It’s a guide to structure, and structure is what AI needs if we want it to behave like the junior we trust, not the chatbot we have to babysit.

Here are the patterns and thoughts I keep having when designing for this kind of risk:

  • Language models won’t properly catch vagueness unless you define it first
  • Fronted adverbials shift meaning in ways that affect legal outcomes
  • Phrases like "subject to", "in order to", and "as soon as" all carry risk
  • Structured prompts reduce model drift and improve cost efficiency
  • Vague prompts increase token usage and usually need clarification
  • Regex isn’t perfect, but it helps surface patterns worth reviewing
  • You’re not teaching the model to think like a lawyer, you’re giving it instructions that mirror how good lawyers read
  • Spotting the grammar behind vague clauses helps you write better prompts, and build better tools

Next year I’ll be ready for that SATs paper, maybe. More importantly, I’ll be ready the next time we need to build a prompt that cuts through the vagueness and gets to the point.