The Rise of Vibe Coding: Why Legal Teams Are Quietly Building Their Own AI Tools (And What Firms Should Do About It)

When someone wants to try out AI at work, the usual routine kicks in. Ask IT, fill out a form, wait for procurement, let risk raise a few red flags, then watch everything grind to a halt. Somewhere in that loop, a PowerPoint from 2022 gets passed around with a big red warning about hallucinations. After that, the idea quietly dies.
That pattern is starting to break.
AI tools are now so accessible that a lawyer with a bit of curiosity and a browser can build a working app without writing any code from scratch. Tools like Claude will generate full Python scripts on request. Streamlit turns them into apps. Replit or AWS handles the rest. You no longer need a dev team to make something usable.
This isn’t classic shadow IT. It’s stealth AI. Tools that look like personal projects but perform legal work and sit completely outside governance.
From Prompt to Product in a Few Hours
Take a lawyer buried under contract summaries. They don’t want to wait months for a formal solution. So they open Claude and type:
“Can you write a Python script that takes a .docx, splits it into chunks, sends each chunk to the OpenAI API for summarisation, and combines the results into one summary?”
Claude writes the script. The lawyer opens GitHub Codespaces or Replit — entirely in the browser, using their personal account. They paste it in, hit run. It works.
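For a sense of what comes back, here's roughly the shape of that script. This is a minimal sketch, not Claude's verbatim output: it assumes the openai and python-docx packages are installed, an OPENAI_API_KEY is set in the environment, and the model name and chunk size are placeholders to tune.

```python
import sys

from docx import Document   # pip install python-docx
from openai import OpenAI   # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODEL = "gpt-4o-mini"   # placeholder; any chat model works
CHUNK_CHARS = 8_000     # arbitrary; tune to the model's context window


def chunk_text(text: str, size: int = CHUNK_CHARS) -> list[str]:
    """Split the document text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def summarise(text: str) -> str:
    """Ask the model for a short summary of one piece of text."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": f"Summarise this contract extract:\n\n{text}"}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    doc = Document(sys.argv[1])
    full_text = "\n".join(p.text for p in doc.paragraphs)
    # Summarise each chunk, then summarise the summaries into one result.
    parts = [summarise(chunk) for chunk in chunk_text(full_text)]
    print(summarise("\n\n".join(parts)))
```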
Next step:
“Wrap this in a Streamlit app so I can upload the document and view the summary in a web browser.”
Claude does that too.
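The wrapper is only a dozen lines. A sketch, assuming the first script was saved as summariser.py in the same folder:

```python
import streamlit as st
from docx import Document

from summariser import chunk_text, summarise  # the script above, saved locally

st.title("Contract Summariser")

uploaded = st.file_uploader("Upload a contract", type="docx")
if uploaded is not None:
    doc = Document(uploaded)  # python-docx reads file-like objects directly
    text = "\n".join(p.text for p in doc.paragraphs)
    with st.spinner("Summarising..."):
        parts = [summarise(chunk) for chunk in chunk_text(text)]
        st.subheader("Summary")
        st.write(summarise("\n\n".join(parts)))
```

One `streamlit run app.py` later, there's a web page with an upload box.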
Now they’ve got a working AI summariser. It’s hosted externally — maybe on Hugging Face, maybe a personal AWS free-tier box. The app runs quietly, fully outside the firm’s infrastructure. No installs, no security scans, no approval flows. It never touches the work laptop.
Unless the firm is logging outbound API traffic in detail, there’s no way to know it exists.
This isn’t about being sneaky. It’s about getting the job done.
Tools Legal Teams Could Realistically Build
These aren’t wild hypotheticals. Every one of these is realistic today, with a few prompts and basic tweaks.
1. Clause checkers
Upload a contract, extract clauses by type, and flag anything missing or unusual. No complex NLP required: a few prompts in Claude and a couple of hours get you a working prototype.
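A first pass doesn't even need the model. A minimal sketch, assuming .docx input and a purely illustrative clause checklist:

```python
import re
import sys

from docx import Document

# Hypothetical checklist; every firm's version of this will differ.
EXPECTED_CLAUSES = {
    "Governing Law": r"governing law",
    "Limitation of Liability": r"limitation of liability",
    "Termination": r"terminat",
    "Confidentiality": r"confidential",
}

if __name__ == "__main__":
    text = "\n".join(p.text for p in Document(sys.argv[1]).paragraphs).lower()
    for name, pattern in EXPECTED_CLAUSES.items():
        status = "present" if re.search(pattern, text) else "MISSING"
        print(f"{name}: {status}")
```

A more ambitious version would send each extracted clause to the model and ask whether the wording looks unusual.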
2. GDPR policy comparators
Take a firm's internal data policy and compare it against ICO guidance or the UK GDPR. Highlight discrepancies or missing sections using fuzzy matching. Claude can scaffold most of the logic for you.
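The fuzzy-matching core can come straight from Python's standard library. A sketch, assuming the policy and the guidance have already been split into lists of sections, with the 60% threshold an arbitrary starting point:

```python
from difflib import SequenceMatcher


def best_match(section: str, reference_sections: list[str]) -> float:
    """Similarity of this section to its closest match in the guidance."""
    return max(SequenceMatcher(None, section, ref).ratio()
               for ref in reference_sections)


def flag_gaps(policy_sections: list[str],
              reference_sections: list[str],
              threshold: float = 0.6) -> None:
    """Print policy sections with no close counterpart in the guidance."""
    for section in policy_sections:
        score = best_match(section, reference_sections)
        if score < threshold:
            print(f"Possible gap ({score:.0%} match): {section[:60]}...")
```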
3. Redline simplifiers
Compare two versions of a document and have the model describe the key differences in plain English. Useful when track changes are messy or legal nuance needs a closer look.
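One plausible shape, assuming plain-text versions of both documents and the same OpenAI setup as earlier:

```python
from difflib import unified_diff

from openai import OpenAI

client = OpenAI()


def explain_changes(old: str, new: str) -> str:
    """Diff two contract versions and ask the model to explain the changes."""
    diff = "\n".join(unified_diff(old.splitlines(), new.splitlines(),
                                  fromfile="v1", tofile="v2", lineterm=""))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user",
                   "content": "Describe the key changes in this contract "
                              f"diff in plain English:\n\n{diff}"}],
    )
    return response.choices[0].message.content
```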
4. Email summarisation batch tools
Drop in a folder of emails, and get a timeline or summary of key communications. Feasible using OpenAI or Claude APIs and a lightweight UI.
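A sketch, assuming the emails sit in one folder as plain .txt exports; a big mailbox would need the chunking trick from the first script:

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()


def summarise_emails(folder: str) -> str:
    """Turn a folder of plain-text email exports into a single timeline."""
    files = sorted(Path(folder).glob("*.txt"))  # assumes one email per file
    corpus = "\n\n---\n\n".join(f.read_text(errors="ignore") for f in files)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": "Produce a dated timeline of the key events "
                              f"in these emails:\n\n{corpus}"}],
    )
    return response.choices[0].message.content
```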
None of these require deep technical skills. You don’t need to write a line of code if you're willing to describe clearly what you want and iterate a bit. With the right model, the whole process feels closer to briefing a junior than building software.
Why This Matters
Legal teams are not trying to bypass rules. They are trying to stay productive in systems that aren’t keeping up. If internal tools are slow to arrive or don’t do what’s needed, people will find workarounds.
This creates a governance gap. Work happens in silos, with no visibility or shared standards. Tools handle sensitive data but might lack basic safeguards. Ideas that could benefit the wider firm stay hidden.
Blocking ChatGPT or adding AI sites to the firewall list doesn’t fix it. Most of these tools route API calls through other platforms, or run on private machines. There’s nothing to block unless you're inspecting encrypted traffic and logging every connection, which few firms do.
What Firms Can Do
The answer isn’t lockdown. It’s a shift in mindset. If people are already building these tools, bring them into the fold. Create a space where that energy is visible and supported.
Here’s how:
1. Build a space for experimentation
Create somewhere inside the firm where lawyers can test AI safely. Set up an internal Streamlit server, offer controlled API access, or provide a sandbox with logging built in. Give teams room to try things without going rogue.
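The "logging built in" part can be as simple as a thin wrapper that experiments call instead of hitting the API directly. A sketch, with the log path and user handling purely illustrative:

```python
import json
import time

from openai import OpenAI

# In a real sandbox the client could point at an internal proxy via base_url.
client = OpenAI()

LOG_PATH = "usage_log.jsonl"  # hypothetical; a shared, access-controlled path in practice


def logged_completion(user: str, prompt: str, model: str = "gpt-4o-mini") -> str:
    """Call the model, then append an audit record of who asked what."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps({"ts": time.time(), "user": user,
                              "model": model, "prompt": prompt}) + "\n")
    return response.choices[0].message.content
```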
2. Create a lightweight approval path (with clear expectations)
Not everything needs a full vendor process, but there should be a baseline. Before building or adopting a tool, people should understand what any legal AI tool needs to have:
- Auditable use
- Logging of prompts or actions
- Clear model ownership
- No sensitive data stored externally unless approved
- Control over input and output retention
This doesn't mean firms should approve the first AI product someone finds online. It means they need to define what “acceptable” looks like, even for experiments.
Once that’s done, write a short internal playbook. Make it easier to get to “yes” without bypassing controls entirely.
3. Back low-code, prompt-based tools
Lawyers don’t need full dev environments. They need flexible workflows, so give them access to tools like Claude, GPT, and Streamlit in a way that’s trackable and secure. Encourage reuse of internal components so they’re not starting from scratch every time.
4. Visibility without surveillance
Add light telemetry to in-house tools. Let teams register what they’re building and track usage trends. This isn’t about monitoring individuals. It’s about making sure firm IP isn’t locked inside someone’s private AI app.
5. Support internal champions
Find the people already doing this. They’re probably not shouting about it. Involve them in your internal AI strategy. Let them shape how the firm adopts these tools instead of sidelining them.
Legal AI doesn’t always arrive in big launches. Sometimes it shows up as a quiet Streamlit app running on a personal AWS instance, solving a real problem that no formal tool ever got to.
The question isn’t whether people are building these tools. They already are. The question is whether your firm knows about it, supports it, and learns from it, or whether you'll find out months too late that your best AI tool was never on the roadmap.
It's happening either way. Better to be part of it, I reckon.