REDLINE v.04.2026
§ Disclaimer. Educational content about AI tooling for legal teams, not legal advice. Consult a qualified attorney for matter-specific guidance. See full disclosure.
§ METHODOLOGY · VERIFIED MAY 2026

Methodology: How agenticcontractreview.com Sources, Verifies, and Refreshes Vendor Coverage

Per-vendor source URLs, named-vendor verification discipline, pricing-band construction, capability-matrix sourcing, ABA Opinion 512 overlay, refresh cadence, and corrections process. Independent. Last verified May 2026.

Sources per vendor

Each of the 13 vendors covered on the site has an attribution profile. Primary sources are the vendor's own marketing surface and self-published collateral. Secondary sources are reputable legal-tech reporting and analyst coverage that independently verifies, contextualises, or contradicts vendor claims. Where primary and secondary disagree, the secondary is preferred and the disagreement is shown.

Ironclad (CLM incumbent)
  Primary: ironcladapp.com (product), Dynamic Repository and Jurist product pages, customer case studies (Mastercard, L'Oreal, Dropbox, Pinterest)
  Secondary: Artificial Lawyer Ironclad coverage, The American Lawyer, Bloomberg Law CLM coverage, public investor materials where available
  Cadence: Monthly (cluster head)

LinkSquares (CLM incumbent)
  Primary: linksquares.com (Analyze, Sign, Insight product pages), Smart Values documentation, customer case studies
  Secondary: Artificial Lawyer LinkSquares coverage, ACC / In-House Connect community discussions
  Cadence: Quarterly

Evisort (CLM incumbent)
  Primary: evisort.com (product), Microsoft 365 / Copilot integration pages, vendor blog, customer case studies
  Secondary: Artificial Lawyer, Microsoft AppSource listing data, Workday and Microsoft partnership announcements
  Cadence: Monthly (cluster head)

SpotDraft (CLM incumbent)
  Primary: spotdraft.com (product), modern-CLM positioning pages, customer case studies
  Secondary: Artificial Lawyer, India-focused legal-tech coverage, mid-market CLM analyst notes where public
  Cadence: Quarterly

Lexion (DocuSign) (CLM incumbent, acquired)
  Primary: docusign.com Agreement Cloud pages, post-acquisition product positioning, DocuSign press releases
  Secondary: DocuSign earnings calls and investor materials, Artificial Lawyer post-acquisition coverage
  Cadence: Quarterly

Kira (Litera) (CLM incumbent, acquired)
  Primary: litera.com Kira product pages, Litera suite positioning, law-firm-focused product marketing
  Secondary: Artificial Lawyer Litera coverage, law-firm tech adoption notes, AmLaw Tech press
  Cadence: Quarterly

DocuSign Intelligent Insights (CLM incumbent)
  Primary: docusign.com Insights and Agreement Cloud pages, AI-feature product marketing
  Secondary: DocuSign earnings calls and investor materials, Artificial Lawyer DocuSign coverage
  Cadence: Quarterly

Harvey AI (GenAI-native)
  Primary: harvey.ai (product pages, customer case studies), Allen and Overy / PwC public deployment announcements
  Secondary: The American Lawyer Harvey coverage, Bloomberg Law, Artificial Lawyer, OpenAI investment reporting, valuation reporting (Forbes, The Information, FT where public)
  Cadence: Monthly (cluster head)

Robin AI (GenAI-native)
  Primary: robinai.com (Reports, Reviews, Draft, Agent product pages), customer case studies, UK and EU data-residency documentation
  Secondary: Artificial Lawyer, UK legal-tech press, Robin AI funding-round coverage
  Cadence: Quarterly

Luminance (GenAI-native)
  Primary: luminance.com (Luminance OS agentic-tier pages, Discovery, Diligence product marketing), customer case studies
  Secondary: Artificial Lawyer Luminance coverage, UK legal-tech press, agentic-launch coverage 2025
  Cadence: Quarterly

Della (GenAI-native)
  Primary: della.legal (product), contract-review specialisation pages
  Secondary: Artificial Lawyer Della coverage, European legal-tech press
  Cadence: Quarterly

Pactum (GenAI-native)
  Primary: pactum.com (autonomous negotiation product pages), customer case studies (Walmart, Maersk)
  Secondary: Artificial Lawyer Pactum coverage, supply-chain and procurement trade press
  Cadence: Quarterly

Juro (and Juro Agent) (Self-serve / CLM hybrid)
  Primary: juro.com (product, pricing page where public, Juro Agent feature pages), customer case studies
  Secondary: Artificial Lawyer Juro coverage, UK legal-tech press, self-serve CLM analyst notes
  Cadence: Quarterly

Named-vendor verification discipline

In April 2026 the site adopted a strict discipline after a defamation and litigation-risk review: concrete prices are never paired with named vendors. Pricing surfaces only as qualitative bands, with the source of band attribution cited inline. This rule applies retroactively across every vendor page on the site.

The rationale is twofold. First, concrete prices for negotiated enterprise software are inherently approximate: every published number is one quote on one deal, with seat counts, contract term, modules, and vendor quarter timing all moving the result. Publishing a single concrete number against a named vendor implies a fixed list price that does not exist. Second, even directionally correct concrete numbers risk being read as an authoritative published price by a buyer who then bases a procurement strategy on it. Qualitative bands signal "directional only" in a way that concrete numbers do not.

In practice the discipline plays out as follows. A pricing claim such as "Harvey costs $500,000 per year per seat" is not allowed even where that number is reported in legal-tech press. A reformulation such as "Harvey sits in the premium-enterprise pricing band, with mid-six-figure-per-seat-per-year deals reported as typical" is allowed because the band attribution carries the directional signal without implying a list-price claim against the named vendor.

The rule applies symmetrically to capability claims. A claim such as "Evisort's extraction accuracy is 94 percent" would require a source citation; the page instead reformulates to "Evisort reports above-90-percent extraction accuracy on standard clause types; verify against your own contract corpus before deployment." The verbatim vendor claim is preserved as a vendor-reported figure; an unverifiable specific number is never pinned to a named vendor without provenance.
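The discipline above is mechanical enough to lint for. A minimal sketch, assuming a hypothetical check (the vendor list, function name, and regex are illustrative, not the site's actual tooling):

```typescript
// Illustrative vendor list; the site covers 13 vendors.
const NAMED_VENDORS = ["Harvey", "Ironclad", "Evisort", "LinkSquares"];

// Flags a sentence that pairs a named vendor with a concrete dollar
// figure, which the April 2026 discipline forbids. Band language
// ("premium-enterprise pricing band") carries no dollar sign and passes.
function violatesPricingDiscipline(sentence: string): boolean {
  const hasConcretePrice = /\$\s?\d[\d,]*(\.\d+)?/.test(sentence);
  const hasNamedVendor = NAMED_VENDORS.some((v) => sentence.includes(v));
  return hasConcretePrice && hasNamedVendor;
}
```

Run over draft copy, this would catch "Harvey costs $500,000 per year per seat" while letting the band reformulation through.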

In scope

  • AI-powered contract review software with material in-market presence in 2026 across North America, the UK, and the EU.
  • CLM platforms that have shipped AI-layer functionality (extraction, clause analysis, redlining, obligation tracking).
  • GenAI-native contract review tools that have shipped production functionality, not just demos.
  • Use-case-specific deep-dives where AI materially affects workflow quality (NDA, MSA / DPA, clause library, obligation tracking).
  • Audience-specific buyer guides for in-house GC and procurement teams.
  • ABA Formal Opinion 512 compliance overlay, EU AI Act overlay, SOC 2 and ISO 27001 posture, US state-bar AI guidance.

Out of scope

  • Pre-LLM rule-based contract analysis tools without a meaningful AI layer.
  • Document signing platforms without contract intelligence (most e-signature vendors).
  • Pure legal research tools without contract review functionality (Lexis, Westlaw without their AI overlays).
  • Law-firm bespoke services (BigLaw managed services using AI internally; not productised software for buyers).
  • Litigation and discovery AI; the site covers transactional contract review only.
  • International law-firm-tier platforms not commercially available to in-house or procurement buyers.

Pricing-band methodology

The site uses four pricing bands across the AI contract review market. Each band has a clear definition, a set of reference vendors, and an explicit set of inclusion criteria.

Self-serve tier. Vendor publishes a starter price on the website. Buyer can transact without a sales call. Annual contract value is typically low four-figure to low five-figure. Reference vendors: Juro starter, SpotDraft starter on their public pricing pages.

Mid-market tier. Quoted only, sales-led, but the typical reported deal size sits comfortably below enterprise floors. Annual contract value is typically mid-five-figure to low six-figure for teams of 5 to 50 seats. Reference vendors: Evisort, LinkSquares mid-tier deployments, SpotDraft enterprise, Robin AI.

Enterprise tier. Quoted only, sales-led, with a meaningful annual floor. Annual contract value is typically low six-figure to seven-figure for full-platform deployments at large enterprise scale. Reference vendors: Ironclad, LinkSquares enterprise, Evisort enterprise.

Premium enterprise tier. Quoted only, with per-seat pricing structurally different from enterprise-tier per-team pricing. Reported deal sizes are at the top end of the market and access is structurally limited to organisations where a single seat displaces multi-hour-per-day attorney billing. Reference vendor: Harvey.

A band move (a vendor crossing from mid-market to enterprise, for example) requires triangulation across at least two independent sources reporting the new deal-size profile. Vendor self-positioning is recorded but does not trigger a band move on its own.
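The band-move rule reduces to a small predicate. A sketch of the triangulation logic, assuming hypothetical names (`PricingBand`, `BandEvidence`, `shouldMoveBand` are illustrative, not the site's code):

```typescript
type PricingBand = "self-serve" | "mid-market" | "enterprise" | "premium-enterprise";

interface BandEvidence {
  source: string;          // e.g. a legal-tech press citation
  independent: boolean;    // false for vendor self-positioning
  reportedBand: PricingBand;
}

// A band move requires at least two independent sources reporting the
// same new deal-size profile. Vendor self-positioning is recorded in the
// evidence list but never counts toward the threshold.
function shouldMoveBand(
  current: PricingBand,
  proposed: PricingBand,
  evidence: BandEvidence[],
): boolean {
  if (proposed === current) return false;
  const independentConfirmations = evidence.filter(
    (e) => e.independent && e.reportedBand === proposed,
  ).length;
  return independentConfirmations >= 2;
}
```

One independent report, or any number of vendor self-claims, leaves the vendor in its current band.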

Capability matrix construction

The 22-capability matrix on the platforms-compared page covers each of the 13 vendors across foundational categories (founded year, ownership, price tier, deal-size band) and operational categories (AI redlining, clause extraction, clause library management, risk flagging, playbook enforcement, obligation tracking, agentic capability, integrations, security posture).

Each cell is rated yes, partial, no, or not applicable, with the following criteria. Yes means the vendor has shipped production functionality at parity with the best in the category, evidenced by both vendor documentation and at least one independent customer case study or analyst report. Partial means the functionality exists but is limited in scope, accuracy, or integration depth compared to best-in-class, or is in beta or limited availability. No means the functionality is not shipped or is not on the vendor's roadmap as of the verification date. Not applicable means the category does not apply to the vendor's positioning (for example, agentic capability for a Tier 1 OCR-only tool).

Where vendor documentation is ambiguous or contradicted by independent reporting, the cell is rated to the more conservative position and a footnote explains the disagreement. Cells are not flipped based on vendor objection alone; new evidence is required.
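The conservative-disagreement rule can be stated precisely. A minimal sketch, assuming illustrative names (`CellRating`, `resolveCell` are not the site's implementation; "not applicable" is handled upstream of this resolver):

```typescript
type CellRating = "yes" | "partial" | "no" | "n/a";
type ShippedRating = Exclude<CellRating, "n/a">;

// Strength of the claim, strongest first; lower is more conservative.
const strength: Record<ShippedRating, number> = { yes: 3, partial: 2, no: 1 };

// When vendor documentation and independent reporting disagree, the cell
// takes the weaker rating and a footnote records the disagreement.
function resolveCell(
  vendorClaim: ShippedRating,
  independentReport: ShippedRating,
): { rating: ShippedRating; footnote: boolean } {
  if (vendorClaim === independentReport) {
    return { rating: vendorClaim, footnote: false };
  }
  const conservative =
    strength[vendorClaim] <= strength[independentReport] ? vendorClaim : independentReport;
  return { rating: conservative, footnote: true };
}
```

A vendor objection alone changes neither input, so the cell cannot flip without new evidence.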

ABA Formal Opinion 512 compliance overlay

ABA Formal Opinion 512 (July 2024) is the canonical US framework for lawyer use of AI tools. It establishes that lawyers using generative AI remain responsible for competence, confidentiality, candour, supervision, fees, and communication with the client. Every vendor and use-case page on the site is written with this framework as the implicit backdrop, and the FAQ covers the privilege, hallucination, and state-bar status questions in detail.

The practical implication for the site is that no vendor page recommends AI use that supersedes attorney supervision. The taxonomy page is explicit about the distinction between Tier 2 (LLM-assisted, human reviews AI output) and Tier 3 (genuinely autonomous multi-step) deployments. Tier 3 deployments raise additional supervision and audit-trail considerations that the site flags on every vendor that claims Tier 3 capability. International overlays (EU AI Act high-risk classification, UK SRA AI guidance, Law Society of England and Wales) are noted where the vendor's commercial footprint makes them material.

No content on the site constitutes legal advice. Every vendor and use-case page carries an educational-content-not-legal-advice framing in its footer disclaimer. Buyers are directed to consult qualified counsel for matter-specific guidance and qualified procurement or finance advisors for budget decisions.

Refresh cadence

Full-suite re-verification runs quarterly. Every vendor page, every capability cell, every pricing band, and every cross-reference is reviewed against current vendor marketing, recent legal-tech press, and any new analyst coverage or earnings disclosures from the prior quarter.

Cluster-head pages (the vendor profiles and use-case pages with the strongest position in search, currently the Harvey, Ironclad, and Evisort vendor pages and the platforms-compared matrix) are checked monthly for fast-moving signals: pricing-band moves, ownership changes, product-tier renames, agent-tier launches, security-certification updates, and material customer-case-study additions.

The verification date is held in a single constant (LAST_VERIFIED_DATE) in src/lib/schema.ts. Footer text, masthead band, schema dateModified, and every visible "Last verified" label all read from that one source. This is a deliberate design choice so cosmetic-refresh leaks (rolling a date forward without doing the underlying verification work) are structurally prevented.
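The single-constant pattern looks roughly like this. A sketch under stated assumptions: `LAST_VERIFIED_DATE` is named in the text above, but the helper names and the schema shape here are illustrative, not the contents of src/lib/schema.ts:

```typescript
// The one source of truth for every "Last verified" surface.
const LAST_VERIFIED_DATE = "2026-05-01";

// Footer and masthead labels derive from the constant, never hard-code it.
function footerLabel(): string {
  return `Last verified ${LAST_VERIFIED_DATE}`;
}

// Article schema reads the same constant for dateModified, so the
// visible label and the structured-data date cannot drift apart.
function articleSchema(url: string) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    url,
    dateModified: LAST_VERIFIED_DATE,
  };
}
```

Rolling the date forward then requires editing the constant itself, which is the edit the refresh process gates on.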

Corrections process

  1. Email editor@agenticcontractreview.com with the page URL, the specific claim or cell that needs correction, and the source that supersedes the cited reference.
  2. The editor acknowledges the email within five business days. Substantive corrections (pricing-band moves, ownership changes, capability-cell flips, verdict adjustments) are queued for the next refresh pass.
  3. The corrected page rolls dateModified forward in the Article schema and adds a short corrigendum note at the bottom describing what changed and when.
  4. Cosmetic corrections (typos, broken links, formatting) land silently without a corrigendum note.
  5. Disputed source claims (where the editor's source and the corrector's source disagree) are surfaced as a footnote on the affected cell, citing both, until additional evidence resolves the dispute.

Limitations

Vendor pricing in this category changes faster than the refresh cadence. A pricing-band claim on the site is correct to the verification date; readers buying within the next quarter should expect that vendor quotes have moved at the margin and should verify directly with the vendor sales contact before signing. The bands themselves are stable; the band attribution for each specific vendor is the more volatile signal.

Private vendor financials limit the depth of context the site can offer. Several vendors (Harvey, Robin AI, Luminance) are private companies with no public revenue, churn, or net-retention data. Valuation reporting from legal-tech press is referenced but cannot be independently verified against audited financials. The site treats valuation context as directional only.

UK and US legal market differences mean that some pricing-band attributions are different in different geographies, even for the same vendor. Sterling-denominated pricing is common for UK and EU deployments of UK-headquartered vendors (Robin AI, Luminance) and is noted where material. Buyers in non-US, non-UK, non-EU markets should expect material variation from the site's bands.

AI accuracy claims are inherently task-specific. A vendor that scores above 90 percent on standard NDA extraction may score materially lower on bespoke complex contracts. The site flags this on every page that quotes a vendor accuracy claim, but the underlying point is that buyers should validate AI accuracy against their own contract corpus, not against vendor-published numbers, before production deployment.

ABA Opinion 512 state-by-state implementation is incomplete. Several US state bars have issued AI guidance compatible with Opinion 512; others have not yet. Buyers in jurisdictions where the local state bar has not yet issued guidance should consult counsel before relying on the ABA framework alone.