Legal

Detecting structural patterns across agreements

Use AI to identify structural patterns and common clauses across agreements to speed up contract analysis and comparison.

Why the human is still essential here

The lawyer determines what is acceptable for the specific deal context and risk profile; AI can surface what’s typical but can’t judge what’s safe.

How people use this

M&A diligence clause extraction

AI extracts key provisions (e.g., change of control, assignment, termination) across hundreds of contracts to populate a diligence matrix.

Kira Systems
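Commercial tools like Kira Systems use trained NLP models for extraction; as a purely illustrative sketch (not any vendor's method), the core idea of populating a diligence matrix can be shown with simple keyword patterns. All names and patterns below are hypothetical.

```python
import re

# Hypothetical clause categories mapped to keyword patterns. Real diligence
# tools rely on trained models, not keyword matching; this is illustration only.
CLAUSE_PATTERNS = {
    "change_of_control": re.compile(r"change\s+of\s+control", re.IGNORECASE),
    "assignment": re.compile(r"\bassign(?:ment|ed|s)?\b", re.IGNORECASE),
    "termination": re.compile(r"\bterminat(?:e|ion|ed)\b", re.IGNORECASE),
}

def build_diligence_matrix(contracts):
    """Map each contract name to the clause categories it appears to contain."""
    matrix = {}
    for name, text in contracts.items():
        matrix[name] = {
            clause: bool(pattern.search(text))
            for clause, pattern in CLAUSE_PATTERNS.items()
        }
    return matrix

# Toy inputs standing in for hundreds of real contracts.
contracts = {
    "msa_acme.txt": "Either party may terminate upon a change of control.",
    "nda_beta.txt": "Neither party may assign this Agreement without consent.",
}
matrix = build_diligence_matrix(contracts)
```

The matrix can then be exported as a table with one row per contract and one column per provision, which is the shape a diligence review team typically works from.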

Playbook-based deviation spotting

AI flags non-standard clause structures and missing terms versus your playbook across a set of vendor MSAs to accelerate review.

Luminance

Portfolio-wide clause pattern analytics

AI analyzes a contract repository to identify clause patterns (e.g., indemnity caps, limitation of liability) and generate reports for legal/risk.

LinkSquares Analyze
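Products such as LinkSquares Analyze apply trained models to a whole repository; the aggregate-reporting idea can be sketched, under the same caveat, as counting how many contracts match each clause pattern. The patterns and file names here are invented for illustration.

```python
import re
from collections import Counter

# Hypothetical patterns for two clause families. Production analytics use
# trained classifiers; keyword counts are only a stand-in for the concept.
PATTERNS = {
    "limitation_of_liability": re.compile(r"limitation of liability", re.IGNORECASE),
    "indemnity_cap": re.compile(r"indemnif\w+ .{0,40}?capped at", re.IGNORECASE),
}

def clause_report(repository):
    """Count how many contracts in the repository contain each clause pattern."""
    counts = Counter()
    for text in repository.values():
        for clause, pattern in PATTERNS.items():
            if pattern.search(text):
                counts[clause] += 1
    return dict(counts)

# Toy repository standing in for a full contract store.
repo = {
    "a.txt": "LIMITATION OF LIABILITY. Liability shall not exceed fees paid.",
    "b.txt": "Supplier shall indemnify Buyer, with liability capped at $1M.",
}
report = clause_report(repo)
```

A report like this gives legal and risk teams a portfolio-level view (e.g., how many agreements cap indemnity) before anyone reads an individual contract.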

Community stories (1)

LinkedIn

As a “tech-savvy” lawyer, this is how I use AI — and where I refuse to rely on it.

It’s important for us, as lawyers, to understand that AI is not intelligence; it is prediction.


It does not understand law. It recognizes patterns in how law has been written, argued, and interpreted before. This distinction matters more than most people realize.


In my practice, AI has become an instrument of acceleration — but never a substitute for judgment.


- I use AI to interrogate large volumes of information quickly.

- To identify structural patterns across agreements.

- To compare regulatory approaches across jurisdictions.

- To test the internal consistency of legal reasoning.


It compresses hours of mechanical effort into minutes. But as lawyers, it’s important for us to understand that law is not a mechanical profession; it is a profession of consequence.


AI can tell you what is typical. It cannot tell you what is safe.


It can identify what has been done before. It cannot evaluate what should never be done.


It does not bear liability.

It does not exercise fiduciary responsibility.

It does not understand risk in the way a human advisor must.


The greatest risk of AI in law is not that it will be wrong. It is that it will sound right.


Which is why the real shift is not technological. It is cognitive.

Vasundhara Shanker, Founder & Managing Partner at Verum Legal
Feb 27, 2026