In 2024, 'AI-powered due diligence' became a feature claim attached to nearly every product in the deal technology space. VDR vendors added document summarization. Advisory firms built GPT-wrapper tools for contract review. Point solutions emerged for every subproblem in the diligence workflow. Two years later, some of these tools are genuinely useful. Others are demos that don't survive contact with real deal complexity.
This is a practitioner's assessment of what AI is actually delivering in M&A due diligence in 2026, based on the use cases where the technology has proven reliable and the ones where it still produces too many false positives, or misses too much, to trust at deal speed.
Where AI genuinely works in M&A diligence
Document classification and initial triage
The first genuinely reliable AI use case in due diligence is document classification. Given a data room with 600 mixed documents, a well-trained classifier can sort them into financial, legal, operational, and HR categories with 90%+ accuracy in under a minute. This is table-stakes functionality now — most enterprise VDRs offer it. The value is real but the competitive moat is gone.
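As a sketch of what triage looks like mechanically, here is a deliberately simplified version: route each document into one of the four broad categories before human review. The category names and keyword lists are illustrative assumptions; a production VDR classifier would use a trained model or embeddings, not keyword counts.

```python
# Minimal data-room triage sketch. Keyword scoring is a stand-in for the
# trained classifiers that real VDR products use; categories mirror the
# financial / legal / operational / HR split described in the text.

CATEGORY_KEYWORDS = {
    "financial": {"balance sheet", "revenue", "ebitda", "ledger", "invoice"},
    "legal": {"agreement", "indemnification", "governing law", "termination"},
    "operational": {"inventory", "logistics", "capacity", "supplier"},
    "hr": {"employee", "compensation", "benefits", "headcount"},
}

def triage(text: str) -> str:
    """Assign a document to the category with the most keyword hits."""
    lowered = text.lower()
    scores = {
        category: sum(kw in lowered for kw in keywords)
        for category, keywords in CATEGORY_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    # No hits at all: flag for manual sorting rather than guess.
    return best if scores[best] > 0 else "unclassified"

print(triage("Master services agreement, governing law of Delaware"))
# -> legal
```

The "unclassified" fallback matters more than the scoring: a triage layer that guesses on zero evidence erodes trust faster than one that admits it doesn't know.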
Contract clause extraction
Extracting specific clause types from contracts — change of control provisions, termination for convenience, limitation of liability caps, IP assignment language — is a use case where LLMs have delivered genuine productivity improvements. Legal teams that previously spent 15 minutes per contract on preliminary review can now get a structured clause summary in seconds and spend their time on the clauses that are actually non-standard.
The reliability threshold here matters. AI-assisted clause extraction with attorney review is a workflow improvement. AI clause extraction without review in a deal where contract terms affect pricing is a risk. The firms using this technology effectively are using it to screen, not to conclude.
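The screen-not-conclude workflow can be sketched in a few lines. The regex patterns below are simplified stand-ins for what is, in practice, an LLM extraction layer; the point is the shape of the pipeline, where every hit is routed to attorney review rather than treated as a finding.

```python
import re

# Illustrative screen for clause types that commonly affect deal terms.
# Patterns are simplified assumptions, not production-grade extraction;
# the workflow shape (machine screens, attorney concludes) is the point.

CLAUSE_PATTERNS = {
    "change_of_control": re.compile(r"change\s+(of|in)\s+control", re.I),
    "termination_for_convenience": re.compile(
        r"terminat\w*\s+.{0,40}for\s+convenience", re.I),
    "liability_cap": re.compile(r"limitation\s+of\s+liability", re.I),
}

def screen_contract(text: str) -> list[str]:
    """Return clause types detected; every hit goes to attorney review."""
    return [name for name, pat in CLAUSE_PATTERNS.items() if pat.search(text)]

sample = (
    "Either party may terminate this Agreement for convenience on 30 days "
    "notice. Limitation of Liability: aggregate liability shall not exceed..."
)
print(screen_contract(sample))
# -> ['termination_for_convenience', 'liability_cap']
```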
Semantic matching in reconciliation
The L4 semantic matching layer in document reconciliation is a genuine AI application that solves a real problem: two records that refer to the same transaction but use different terminology. 'Microsoft Azure Services' in the ledger versus 'Azure cloud hosting infrastructure' in an invoice. 'Deloitte Advisory' versus 'DTT Consulting Group.' These are matches that exact-match and fuzzy-match algorithms miss and that semantic ML catches reliably.
The key design principle: AI should engage only for the cases that deterministic methods can't resolve. A system that leads with exact matching and only escalates to semantic ML for the ambiguous 10–15% of cases produces far fewer false positives than a system that applies ML to everything.
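The escalation logic can be sketched concretely. In the version below, exact matching on normalized names resolves trivial variants, a string-similarity check (here, stdlib difflib; real systems use stronger fuzzy matchers) catches near-identical strings, and only the remainder is escalated to the semantic tier. The 0.85 threshold and tier labels are illustrative assumptions.

```python
from difflib import SequenceMatcher

# Tiered reconciliation sketch: deterministic layers resolve what they can,
# and only the residue escalates to the expensive, noisier semantic tier.

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, collapse whitespace."""
    return " ".join(name.lower().replace(",", " ").replace(".", " ").split())

def match_tier(ledger_name: str, invoice_name: str) -> str:
    a, b = normalize(ledger_name), normalize(invoice_name)
    if a == b:
        return "exact"                    # deterministic, no ML involved
    if SequenceMatcher(None, a, b).ratio() >= 0.85:
        return "fuzzy"                    # near-identical strings
    return "escalate_to_semantic"         # the ambiguous 10-15% residue

print(match_tier("Microsoft Azure Services", "Microsoft Azure Services, Inc."))
# -> fuzzy
print(match_tier("Deloitte Advisory", "DTT Consulting Group"))
# -> escalate_to_semantic: only this pair reaches the semantic ML layer
```

Note that the 'Deloitte Advisory' / 'DTT Consulting Group' pair from the text correctly falls through both deterministic tiers; that residue is exactly where the semantic model earns its keep.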
Where AI is still overhyped
Financial fraud detection
Despite vendor claims, AI-powered fraud detection in M&A diligence is not a solved problem. The base rate of actual fraud in M&A data rooms is low enough that even a 95% accurate classifier produces more false positives than true positives. Real fraud cases in due diligence are typically caught by pattern recognition from experienced analysts — the round-number clustering, the year-end reversals, the counterparty name anomalies described elsewhere — not by machine learning on financial time series.
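The base-rate arithmetic is worth making explicit. The figures below are illustrative assumptions, not measured rates: suppose 1% of 10,000 ledger entries are actually fraudulent, and the classifier is "95% accurate" in both directions (sensitivity and specificity).

```python
# Back-of-envelope base-rate arithmetic behind the fraud-detection claim.
# All inputs are assumed, illustrative figures.

entries = 10_000
prevalence = 0.01        # assumed fraud base rate: 100 bad entries
sensitivity = 0.95       # fraction of real fraud the classifier flags
specificity = 0.95       # fraction of clean entries it correctly passes

true_positives = entries * prevalence * sensitivity               # 95
false_positives = entries * (1 - prevalence) * (1 - specificity)  # 495

precision = true_positives / (true_positives + false_positives)
print(f"{true_positives:.0f} real hits vs {false_positives:.0f} false alarms")
print(f"precision: {precision:.0%}")
# -> precision: 16%
```

Roughly five false alarms for every real hit, each one consuming analyst time. That is why experienced-analyst pattern recognition, applied to a handful of well-chosen signals, still outperforms a blanket classifier at this base rate.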
End-to-end automated legal review
Several products have tried to automate the full legal review workflow — not just clause extraction, but risk rating, negotiation position recommendation, and issue flagging for deal counsel. The products demo well. In production, on a live deal where every flagged issue triggers attorney review time that costs $600+/hour, these systems produce too many false positives to deploy without an additional human screening layer. This use case will likely mature — but it isn't production-ready for high-stakes M&A in 2026.
AI-generated QoE summaries
Some firms have experimented with using LLMs to synthesize QoE findings into draft reports. The output is coherent prose that captures the right structure. It's also not reliable enough to use without full analyst review — which largely eliminates the time savings. The bottleneck in QoE report writing is not drafting prose; it's the analytical judgment behind the adjustments. Automating the prose doesn't address the bottleneck.
The right framing for AI in due diligence
The most useful framing for AI in M&A diligence is not 'which workflow can AI replace?' but 'which parts of the workflow are currently constrained by human capacity limits?' Document classification is fast with AI and slow without it. Clause extraction across 200 contracts is fast with AI and slow without it. Full-ledger reconciliation is fast with the right system and genuinely impossible manually at scale.
AI tools that expand analyst capacity — letting one analyst do the work that previously required three — are delivering real value. AI tools that claim to replace analyst judgment on material findings are where the hype is still running ahead of the reliability.
The best AI-augmented diligence processes in 2026 use technology to eliminate the mechanical work and preserve the analytical judgment. That's not a revolutionary statement. But it's what actually works.