We continuously evaluate state-of-the-art large language models (LLMs) and run the best performers for legal work. This page explains the engines we currently use, how we choose them, how often we update them, and how we handle data.
Note: Model names may change over time as providers release upgrades. LIA Pro’s capabilities and safeguards are maintained or improved through any such change.
2) Modes & Token Use
• Single-Engine (Scholarly Insight): best for first-pass research and drafting.
• Single-Engine (Live Analysis): best for real-time validation and forensic-style checks.
• Sequential (Integrated Expertise): drafts, then independently audits and merges results. Uses roughly 2.5× the tokens of a single-engine run (see the token-use sketch below).
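For a rough sense of what the ~2.5× figure means in practice, the sketch below estimates token use for a single request in each mode. The multiplier matches the guidance above, but the sample token counts and the estimate_tokens helper are illustrative assumptions, not published allowances.

```python
# Illustrative only: how the ~2.5x Sequential multiplier scales token use
# relative to a single-engine run. Numbers are hypothetical examples.
SEQUENTIAL_MULTIPLIER = 2.5  # approximate figure from this page

def estimate_tokens(prompt_tokens: int, completion_tokens: int, sequential: bool = False) -> int:
    """Rough per-request token estimate; not an official allowance calculator."""
    base = prompt_tokens + completion_tokens
    return round(base * SEQUENTIAL_MULTIPLIER) if sequential else base

# Example: a 3,000-token brief plus a 1,500-token draft response.
single = estimate_tokens(3000, 1500)                        # ~4,500 tokens
sequential = estimate_tokens(3000, 1500, sequential=True)   # ~11,250 tokens
print(f"Single-engine: {single} tokens; Sequential: {sequential} tokens")
```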
3) How We Choose Models
We run an internal evaluation harness tailored to legal tasks:
• Legal fitness: venue specificity, controlling authority preference, correct citation form.
• Reliability: contradiction detection, stale-authority flags, jurisdiction mismatch detection.
• Safety & privacy: redaction behavior, privilege indicators, and low hallucination rates.
• Operational metrics: latency, stability, and cost efficiency under realistic workloads.
A model is adopted only if it meets or exceeds our current accuracy and safety baselines.
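As a simplified illustration of the "meets or exceeds" rule, the sketch below gates a candidate model on per-category baseline scores. The categories mirror the list above, but the threshold values and the passes_adoption_gate helper are hypothetical; this is not our actual evaluation harness.

```python
# Hypothetical adoption gate: a candidate must meet or beat every baseline.
# Baseline values are illustrative, not our real thresholds.
BASELINES = {
    "legal_fitness": 0.85,    # venue specificity, controlling authority, citation form
    "reliability": 0.90,      # contradiction, stale-authority, jurisdiction checks
    "safety_privacy": 0.95,   # redaction, privilege indicators, hallucination rate
    "operational": 0.80,      # latency, stability, cost efficiency
}

def passes_adoption_gate(candidate_scores: dict[str, float]) -> bool:
    """Return True only if every category meets or exceeds its baseline."""
    return all(
        candidate_scores.get(category, 0.0) >= baseline
        for category, baseline in BASELINES.items()
    )

candidate = {"legal_fitness": 0.88, "reliability": 0.91,
             "safety_privacy": 0.97, "operational": 0.83}
print(passes_adoption_gate(candidate))  # True -> eligible for adoption
```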
4) Update Policy
• When we upgrade: Only when tests show equal or better accuracy, safety, privacy, and cost stability.
• What changes for you: No action needed. You’ll see the same UI, with equal-or-better results.
• Disclosure: Material changes are recorded in the Change Log below.
5) Data Handling Summary
• Lite (inside ChatGPT): uploads allowed; files are processed transiently and not retained by us.
• Pro (our platform): Phase-1 mirrors Lite’s transient handling by default.
• Evidence Vault (optional add-on): encrypted storage with SHA-256 hash + timestamp + actor records (chain-of-custody ledger), role-based access, signed-URL delivery, and an exportable Evidence Manifest. Off by default; a simplified ledger-entry sketch appears at the end of this section.
Provider training: We configure API usage to disable provider-side training where supported.
Our logs: We hash prompts for telemetry, store minimal metadata (mode, jurisdiction, token counts, flags, confidence), and redact personal data by default.
Erasure: You can request deletion of Pro-side transcripts and uploads (chain-of-custody ledger entries are irreversible and remain as integrity records).
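To make the Evidence Vault's chain-of-custody ledger concrete, the sketch below builds one ledger entry from a file, combining a SHA-256 hash, a UTC timestamp, and the acting user, then appends it to a local log. The field names and the record_custody_event helper are illustrative assumptions; they are not the actual Vault schema or API.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_custody_event(file_path: str, actor: str, action: str) -> dict:
    """Build an illustrative chain-of-custody entry: SHA-256 hash + timestamp + actor.
    Field names are hypothetical; the real Evidence Vault schema may differ."""
    sha256 = hashlib.sha256()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return {
        "sha256": sha256.hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,  # e.g. "upload", "export", "delete-request"
    }

# Example: append an upload event to a local JSON-lines ledger.
entry = record_custody_event("exhibit_a.pdf", actor="paralegal@firm.example", action="upload")
with open("custody_ledger.jsonl", "a") as ledger:
    ledger.write(json.dumps(entry) + "\n")
```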
6) Limitations You Should Know
• LIA provides information, not legal advice. Always verify critical conclusions.
• Retrieval favors public and primary sources; subscription databases (e.g., Westlaw/Lexis) are not accessed unless a future integration adds them.
• Chain-of-custody records prove what was handled and when, not whether a file is authentic beyond recorded hashes/timestamps.
7) Change Log (high level)
• 2025-08-17: Confirmed roster: GPT-5 (Engine A) & Grok-4 (Engine B); Sequential token guidance set to ~2.5×.
• 2025-07-22: Upgraded validator engine to latest production; citation-audit flags improved.
• 2025-06-04: Added confidence score + explainable flags to all modes.
8) Contact & Questions
For privacy questions, data requests, or model concerns: support@legalintel.ai
For current pricing and allowances, see /pricing. For a plain-English overview of data handling, see /privacy.