Understanding Technology-Assisted Review: A Practitioner's Guide
Machine learning has quietly rewritten the rules of document review. Here's what every legal professional needs to know before their next large-scale matter.
TAR tools, docs, and glossary support
Use calculators, supporting docs, and glossary terms to understand prevalence, elusion, recall, culling, and related TAR concepts without course-style modules.
The full library organizes resources into curated categories: guides, case law, vendor documentation, research, and practitioner references.
- Essential guides and documentation for understanding TAR fundamentals (2 resources)
- Key legal decisions and judicial guidance on TAR procedures (2 resources)
- Overview of standard TAR workflows and approaches (2 resources)
- Essential metrics for measuring TAR effectiveness (1 resource)
- Essential terminology for TAR practitioners (1 resource)
- Curated learning path for getting started with TAR (1 resource)
Start with the prevalence sample results calculator, then move into the full tools library when you need deeper analysis.
Master these core TAR concepts to communicate more clearly with stakeholders.
Prevalence (richness): the share of documents in a population that are actually responsive.
Why it matters: Low richness changes sample sizes, expectations, and review cost.
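Prevalence is usually estimated from a simple random sample rather than measured directly. A minimal sketch, using hypothetical sample numbers:

```python
# Hypothetical numbers: estimate prevalence from a simple random sample.
sample_size = 400          # documents drawn at random from the population
responsive_in_sample = 32  # documents coded responsive by reviewers

prevalence = responsive_in_sample / sample_size
print(f"Estimated prevalence: {prevalence:.1%}")  # Estimated prevalence: 8.0%
```

The point estimate alone is only half the story; the sample size also controls how wide the confidence interval around it will be.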
Elusion: a sample-based estimate of what responsive material may remain in the unreviewed set.
Why it matters: It informs stopping decisions but never proves a perfect review.
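An elusion estimate comes from a random sample of the unreviewed (discard) set; the rate found there is projected back onto the whole discard pile. A minimal sketch with hypothetical numbers:

```python
# Hypothetical elusion sample: a random draw from the unreviewed/discard set.
discard_pile_size = 50_000
elusion_sample_size = 500
responsive_found = 4  # responsive docs discovered in the elusion sample

elusion_rate = responsive_found / elusion_sample_size
projected_missed = elusion_rate * discard_pile_size
print(f"Elusion rate: {elusion_rate:.2%}")        # Elusion rate: 0.80%
print(f"Projected missed: {projected_missed:.0f}")  # Projected missed: 400
```

The projection is an estimate, not a count: sampling error applies to it just as it does to prevalence.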
Control set: a fixed reference sample used to compare workflow behavior over time.
Why it matters: Helpful in some workflows, but a weakly designed one can mislead.
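Because a control set must stay fixed across training rounds, one common practice is to draw it once with a fixed random seed so the draw is reproducible. A minimal sketch with a hypothetical document population:

```python
import random

# Hypothetical population of document IDs standing in for the full collection.
doc_ids = list(range(10_000))

# A fixed seed makes the control-set draw reproducible; the set is coded once
# and then held out so later workflow runs can be compared against it.
rng = random.Random(42)
control_set = rng.sample(doc_ids, 300)
print(len(control_set))
```

The seed and sizes here are illustrative; real control-set sizing depends on prevalence and the margin of error the team needs.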
Recall: the share of truly responsive documents that were found by the workflow.
Why it matters: Often the headline metric in defensibility discussions.
Precision: the share of documents marked responsive that were actually responsive.
Why it matters: It affects cost and reviewer burden more than completeness.
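Recall and precision both fall directly out of a small confusion table. A minimal sketch with hypothetical validation counts:

```python
# Hypothetical confusion counts from a validation sample.
true_positives = 180   # responsive and found by the workflow
false_negatives = 20   # responsive but missed
false_positives = 60   # marked responsive but actually not

recall = true_positives / (true_positives + false_negatives)     # completeness
precision = true_positives / (true_positives + false_positives)  # purity of output set
print(f"Recall: {recall:.0%}, Precision: {precision:.0%}")  # Recall: 90%, Precision: 75%
```

The two metrics trade off: lowering the responsiveness threshold typically raises recall while dragging precision down, which is why both are worth reporting.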
Sampling error: the uncertainty introduced when a conclusion is based on a sample instead of the full population.
Why it matters: It is the reason confidence intervals exist.
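The link between sampling error and confidence intervals can be made concrete with the normal-approximation (Wald) interval for a proportion. A minimal sketch with hypothetical sample numbers:

```python
import math

# Hypothetical sample: 32 responsive documents in a 400-document random sample.
n, hits = 400, 32
p_hat = hits / n

# Normal-approximation (Wald) 95% confidence interval; z = 1.96.
z = 1.96
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
low, high = max(0.0, p_hat - margin), min(1.0, p_hat + margin)
print(f"Point estimate {p_hat:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
# Point estimate 8.0%, 95% CI [5.3%, 10.7%]
```

The Wald interval is the simplest choice; at very low prevalence, interval methods such as Wilson or Clopper-Pearson behave better, which is why many TAR calculators use them instead.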