Educational content; not legal advice. CLM software pricing negotiated case-by-case. ABA and jurisdiction-specific ethics rules apply. Verify with qualified counsel. See full disclosure.

Category Definition — Reference

What Is Agentic Contract Review? A 2026 Taxonomy for Legal, Procurement, and Ops Teams

Last verified April 2026

The term "agentic contract review" is being used loosely. Vendors apply it to any feature that automates more than one step; marketing teams reach for it whenever a product update involves a large language model. Meanwhile, legal teams trying to make a vendor decision are left sorting through a taxonomy that nobody has yet written clearly. This page writes it clearly.

"AI contract review" is the broad umbrella: any use of artificial intelligence over contracts, from a basic OCR tool that extracts text from a scanned PDF to a multi-step agent that can receive a contract, analyse it against a playbook, generate redlines, route the flagged clauses to legal, and send a counter-proposal to the counterparty's system, all without human sign-off on each decision. "Agentic contract review" is the narrower, higher-capability end of that spectrum: genuinely autonomous workflow steps, tool-use, multi-document reasoning, and audit trails on agent decisions. Most tools on the market in April 2026 are somewhere in the middle.

Section I: The Three Tiers of AI Contract Review

Tier 1: OCR and keyword extraction (pre-LLM, legacy)

The first generation of AI contract review tools, from roughly 2014 to 2020, used optical character recognition to convert scanned contracts into searchable text, then applied rule-based pattern matching and supervised machine learning to extract specific data points: dates, dollar amounts, party names, governing law clauses. Kira Systems, founded in 2011 (now part of Litera), was the category pioneer. The accuracy of Tier 1 tools depends heavily on training data and is highly sensitive to non-standard clause language.

Tier 1 tools are still in production use, often embedded inside document management systems. They are not what most vendors mean when they advertise "AI" in 2026. If a vendor cannot clearly explain whether their AI is LLM-based or rule-based, ask directly.

Tier 2: LLM-assisted review (current dominant paradigm)

From 2022 onward, the introduction of large language models, specifically GPT-4, Claude, and the models fine-tuned on top of them, transformed what was possible with contract review AI. LLMs can read a contract in context, understand that a clause is non-standard relative to market practice without being explicitly trained on a rule for that clause type, generate redline suggestions in the negotiating style the firm has calibrated, and explain their reasoning in plain English.

The current dominant paradigm, covering Evisort, LinkSquares Analyze, Ironclad Jurist, SpotDraft, and most of the mid-market CLM tools, is Tier 2: LLM-assisted review where a human reviews and approves AI outputs at each material step. The AI is genuinely useful; it compresses 60-minute contract reviews to 15 minutes in well-calibrated deployments. But a human remains in the decision loop. The AI flags; the human acts.

Most 2026 deployments are Tier 2. Tier 2 is production-proven, governance-friendly, and appropriate for almost all regulated industries in the current regulatory environment.

Tier 3: Agentic review (emerging, demo-ready, production-rare)

Tier 3 is the category this site is named for, and the category that vendors most aggressively overclaim in their marketing. A genuinely agentic contract review workflow has all of the following properties: multi-step autonomous action (the agent can take a sequence of actions without human approval at each step), tool-use (the agent can call external APIs, read linked documents, query a clause library, and write outputs to other systems), self-correction (the agent can recognise when an analysis is uncertain and flag it, or attempt a different approach), and audit trails (every agent decision is logged with evidence of the reasoning, so a human can review and overturn any step).
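The four properties above can be sketched as code. This is a minimal, hypothetical illustration (all names invented, not any vendor's API): each agent step is logged to an audit trail, and low-confidence analyses self-correct by deferring to a human instead of acting autonomously.

```python
from dataclasses import dataclass

@dataclass
class AuditEntry:
    """One logged agent decision: action, reasoning, and confidence,
    reviewable (and reversible) by a human after the fact."""
    step: str
    action: str
    reasoning: str
    confidence: float
    overturned: bool = False

def run_agent_step(step, analyse, trail, threshold=0.8):
    """Multi-step autonomy with self-correction: if the analysis is
    uncertain, the agent flags the step rather than acting on it."""
    action, reasoning, confidence = analyse(step)
    if confidence < threshold:
        action = "flag_for_human_review"  # self-correction: admit uncertainty
    trail.append(AuditEntry(step, action, reasoning, confidence))
    return action
```

A confident analysis proceeds autonomously; an uncertain one is flagged, and both land in the same audit trail.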

Luminance OS, launched in 2025, is the most credible production-ready Tier 3 product in the market as of April 2026. Harvey's agent tier makes similarly strong claims with BigLaw-level security posture. Ironclad's autopilot features represent a hybrid: genuine automation of certain routine steps within an otherwise human-supervised workflow. Robin AI's agent mode is compelling for contract-review-specific workflows at mid-market scale.

Honest note: most enterprise deployments in April 2026 are calibrating Tier 3 capabilities in sandboxed pilots, not running fully autonomous agent workflows on live contracts. The technology is ready for production in narrow, well-defined use cases (high-volume NDAs, standard MSA intake). It is not yet recommended for complex, bespoke negotiation workflows without human oversight.

Section II: CLM vs Standalone Contract Review AI

Contract lifecycle management (CLM) software covers the full arc of a contract: drafting, collaboration, negotiation, signing, storage, obligation tracking, and renewal. When a vendor describes their product as a CLM, they mean it handles the entire workflow, not just the review and analysis phase.

Standalone contract review AI focuses on the analysis task: read this contract, extract the key data, flag deviations from playbook, suggest redlines. It is a tool that sits in the review step of a workflow; it does not own the workflow itself.

The 2026 reality is that the distinction is blurring. Ironclad (a CLM) has added strong review-AI features. Harvey (a standalone AI platform) is adding workflow elements. Evisort (which started as standalone review AI) has built a significant CLM workflow layer on top. Juro launched as a modern CLM and has added an agent layer. The categories are converging. But the architectural heritage matters: a CLM vendor will invest in workflow depth and enterprise integration; a standalone review AI vendor will invest in model quality and task-specific accuracy.

Section III: The Vocabulary

Understanding the terms vendors use is the first step to evaluating their claims:

Redlining

The process of proposing changes to a contract document, typically tracked with strikethroughs and insertions. AI redlining means the system generates suggested edits automatically, in the negotiating style calibrated by the legal team, without a human drafting each change.

Clause extraction

Identifying and isolating specific clause types from a contract document. A mature clause extraction system can pull every limitation-of-liability clause from 10,000 contracts and present them as comparable outputs. Kira pioneered this; every major CLM now includes it.

Metadata extraction

Pulling structured data from unstructured contract text: party names, effective dates, contract value, governing law, notice periods. The foundation of any contract database or CLM repository.

Risk flagging

Identifying clauses or provisions that deviate from acceptable standards and categorising them by severity. A good risk-flagging system distinguishes between 'market-standard but unfavourable' and 'disqualifying' and surfaces that distinction to the reviewer.

Obligation tracking

Post-signature monitoring of commitments created by the contract: payment deadlines, SLA targets, audit rights, renewal notice windows, security incident notification periods. The high-ROI use case that most vendors undersell. See our dedicated page on obligation tracking.

Deviation detection

Comparing a clause to the organisation's playbook or market standard and flagging where the inbound contract differs. The more precise the playbook, the more precise the detection.

Playbook enforcement

Using AI to automatically apply negotiating positions from a predefined playbook: if the counterparty's liability cap is below 12 months of fees, auto-redline to 12 months; if their indemnification is mutual, accept; if their IP assignment is broad, flag to senior counsel.
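The three example positions above reduce to plain conditional logic once the playbook is made explicit. A minimal sketch, with hypothetical field names and thresholds (a real playbook would be far richer and vendor-specific):

```python
def enforce_playbook(clause_type, clause):
    """Apply the illustrative negotiating positions described above."""
    if clause_type == "liability_cap":
        # Cap below 12 months of fees: auto-redline to the playbook floor.
        if clause["cap_months"] < 12:
            return ("auto_redline", {"cap_months": 12})
        return ("accept", clause)
    if clause_type == "indemnification":
        # Mutual indemnification is acceptable as-is; one-sided gets flagged.
        return ("accept", clause) if clause["mutual"] else ("flag", clause)
    if clause_type == "ip_assignment":
        # Broad IP assignment always escalates to senior counsel.
        if clause["broad"]:
            return ("escalate_to_senior_counsel", clause)
        return ("accept", clause)
    return ("flag", clause)  # unknown clause types go to a human by default
```

The point of the sketch: playbook enforcement is only as good as the precision of the positions encoded in it, which is why playbook calibration dominates deployment effort.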

MCPs and tool-use in legal agents

The Model Context Protocol (MCP) and function calling allow AI agents to use external tools: querying a clause library, writing a redline to a document, sending a notification via Slack, or updating a CRM record. This is what makes Tier 3 agentic behaviour possible.
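Under the hood, tool-use is a registry-and-dispatch pattern: the model emits a named call with arguments, and the host executes it. A toy sketch in the style of LLM function calling (tool names and signatures are invented for illustration, not MCP's actual wire format or any vendor's API):

```python
# Registry of tools the agent is allowed to invoke by name.
TOOLS = {}

def tool(fn):
    """Decorator: register a function as an agent-callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def query_clause_library(clause_type):
    # Stand-in for a real clause-library lookup.
    return {"clause_type": clause_type, "standard_text": "..."}

@tool
def notify_slack(channel, message):
    # Stand-in for a real Slack API call.
    return f"posted to {channel}: {message}"

def dispatch(call):
    """Execute a model-emitted call of the form {'name': ..., 'args': {...}}."""
    return TOOLS[call["name"]](**call["args"])
```

The governance question for Tier 3 evaluation is which tools appear in that registry, and whether write-capable tools (redlining documents, sending counter-proposals) require a human gate.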

Section IV: What "Agentic" Is Being Abused to Mean

The word "agentic" has joined "AI-powered", "intelligent", and "smart" in the vocabulary of vendor marketing that signals nothing. In 2026, almost every contract review vendor applies it to any feature that automates more than one step. A tool that auto-populates a metadata field after a contract is uploaded is being labelled "agentic." It is not.

Genuine agentic behaviour in a contract review tool looks like this: a new MSA arrives in an intake queue at 9pm. The agent reads it. It queries the clause library for the company's playbook on MSA limitation-of-liability language. It identifies four deviations. For the three deviations within the playbook's standard tolerances, it auto-redlines and inserts the playbook language. For one deviation (the counterparty is requesting uncapped liability), it escalates to the senior commercial counsel with a three-sentence explanation and a suggested response. At 9am, the senior counsel reviews one escalation rather than reading the full MSA. All four agent decisions, including the three auto-redlines, are logged with the clause text, the playbook reference, and the confidence level.
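The triage step in that scenario (split deviations into auto-redlines and escalations, logging every decision) can be sketched in a few lines. Field names and the confidence threshold are hypothetical:

```python
def triage_deviations(deviations, auto_threshold=0.9):
    """Partition detected deviations into auto-redlines and human escalations,
    keeping a full decision log for the morning review."""
    auto, escalated, log = [], [], []
    for d in deviations:
        autonomous = d["severity"] != "disqualifying" and d["confidence"] >= auto_threshold
        decision = "auto_redline" if autonomous else "escalate"
        (auto if autonomous else escalated).append(d)
        log.append({"clause": d["clause"], "decision": decision,
                    "playbook_ref": d["playbook_ref"],
                    "confidence": d["confidence"]})
    return auto, escalated, log
```

Run against the four deviations in the example above, three are handled overnight and one (the uncapped-liability request) waits for the senior counsel, with all four in the log.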

The tools that most credibly deliver this in April 2026 are Luminance OS and Harvey's agent tier. Both have genuine multi-step autonomy, documented audit trails, and production deployments at enterprise scale. Ironclad's autopilot is a credible hybrid. Everything else calling itself "agentic" in the market should be tested against the above definition before belief.

Section V: What to Read Next

Now that you have the taxonomy, your next steps depend on where you are in the evaluation process:

  • Full capability matrix — 13 platforms, 22 capabilities. The reference page.
  • Pricing models — If you want to understand the cost spectrum before evaluating platforms.
  • FAQ — If you are worried about accuracy, privilege, or bar ethics compliance before proceeding.
  • Harvey deep dive — If the "genuinely agentic" question matters to your use case.
  • For GC Office — If you are the decision-maker building the vendor memo.

Educational content; not legal advice. Last verified April 2026. Verify definitions and vendor capabilities directly before procurement.