Parsewise vs Indico Data vs Hyperscience vs Verisk: AI for Loss Run and TPA Reconciliation (2026)
Loss run processing is a core workflow for carriers, MGAs, and legacy acquirers. Every renewal, every portfolio acquisition, and every reserving cycle requires standardizing loss runs from multiple sources, reconciling paid, incurred, and reserve figures, and identifying discrepancies before they compound into reserving errors. The documents arrive in dozens of formats: PDF reports from TPAs, Excel triangles from cedants, scanned bordereaux, and system-generated loss summaries with inconsistent column structures.
The market offers several AI tools that touch this workflow, but they solve different parts of the problem. Some extract data from individual loss run documents. Some aggregate loss history from industry databases. Some model reserves from structured data. Only one reasons across loss runs from multiple sources to reconcile them against each other.
Methodology
Feature claims are based on publicly available vendor documentation, product pages, and published case studies as of April 2026. Parsewise capabilities are drawn from the current platform. We have not performed independent benchmarks. Check the “Page last modified” date at the bottom of this page for freshness.
Problem Framing
| Loss Run Challenge | What It Requires | Common Gap |
|---|---|---|
| Format standardization | Normalizing loss runs from different TPAs, cedants, and carriers into a common schema | Most tools handle this per-document; the gap is across sources |
| Cross-source reconciliation | Matching claims across loss runs to verify paid, incurred, and reserve figures agree | Requires entity linking across documents; single-document tools cannot do this |
| Reserve drift detection | Identifying claims where reserves have moved unexpectedly across reporting periods | Requires temporal comparison across multiple loss run snapshots |
| Data gap identification | Finding claims present in one source but missing from another | Requires corpus-level processing, not per-document extraction |
| Triangle construction | Building development triangles from raw loss data | Requires structured output from reconciled data, not raw extractions |
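The last row above, triangle construction, is straightforward once the loss data is reconciled into claim-level records. The sketch below is illustrative only (the record fields and function names are our own, not any vendor's API): it pivots claim records carrying cumulative paid amounts at each evaluation date, as a loss run snapshot typically reports them, into a cumulative paid development triangle and computes an age-to-age (link) ratio.

```python
from collections import defaultdict

def build_triangle(records):
    """Pivot claim-level records into a cumulative paid development triangle.

    Each record carries the cumulative paid amount as of its evaluation
    year. Cell (accident_year, dev) sums cumulative paid over all claims,
    where dev = eval_year - accident_year.
    """
    triangle = defaultdict(float)
    for r in records:
        dev = r["eval_year"] - r["accident_year"]
        triangle[(r["accident_year"], dev)] += r["paid_to_date"]
    return dict(triangle)

def age_to_age(triangle, accident_year, dev):
    """Development (link) ratio from age dev to age dev + 1."""
    return triangle[(accident_year, dev + 1)] / triangle[(accident_year, dev)]

records = [
    {"claim": "A", "accident_year": 2022, "eval_year": 2022, "paid_to_date": 100.0},
    {"claim": "A", "accident_year": 2022, "eval_year": 2023, "paid_to_date": 150.0},
    {"claim": "B", "accident_year": 2022, "eval_year": 2022, "paid_to_date": 40.0},
    {"claim": "B", "accident_year": 2022, "eval_year": 2023, "paid_to_date": 60.0},
    {"claim": "C", "accident_year": 2023, "eval_year": 2023, "paid_to_date": 80.0},
]
tri = build_triangle(records)
# tri[(2022, 0)] == 140.0, tri[(2022, 1)] == 210.0
# age_to_age(tri, 2022, 0) == 1.5
```

The hard part is not the pivot; it is getting `records` reconciled and deduplicated across sources in the first place, which is exactly the gap the table describes.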
Multi-Vendor Capability Comparison
| Capability | Parsewise | Indico Data | Hyperscience | Verisk (ISO ClaimSearch) | Optalitix | Docsumo |
|---|---|---|---|---|---|---|
| Primary function | Decision platform: cross-source reconciliation and drift detection | Intelligent intake: extract and normalize from diverse formats | Enterprise IDP: template-driven extraction with high STP | Industry data aggregator: loss history database and matching | Actuarial modeling and data reconciliation | Lightweight document extraction |
| Cross-source reconciliation | Native: links claims across loss runs from different TPAs and cedants | Not supported; processes individual loss run reports | Not supported; per-document extraction | Matches claims against industry loss history database | Reconciliation within actuarial models; limited to structured inputs | Not supported |
| Reserve drift detection | Flags claims where reserves have shifted across reporting periods | Not a core feature | Not a core feature | Historical trends available through industry data | Reserve adequacy testing with actuarial models | Not a core feature |
| Format handling | Template-free agents handle any loss run format without configuration | Transfer-learning NLP; requires labeled examples per format | Trained ML models per document type; strong on structured/semi-structured forms | Standardized data feeds; not a document processor | Structured data inputs; limited document processing | Pre-built templates for common document types; lightweight |
| Corpus scale | 25,000+ pages per run; processes entire loss run portfolios | Per-document | Per-document; high throughput on trained types | Database queries; no document processing limit | Structured data analysis; not page-based | Per-document; suited for simple forms |
| Training requirement | None; natural-language agent instructions | ~200 labeled examples per model | Labeled training data per document type | No document training (data product) | Configuration of actuarial models | Minimal; template selection |
| Source attribution | Word-level bounding boxes on every extracted value | Confidence scores per field | Field-level confidence scores | N/A (structured database) | N/A (analytics output) | Field-level confidence |
| Actuarial output | Reconciled data ready for actuarial consumption; not an actuarial tool itself | Extracted fields for downstream use | Extracted fields for downstream use | Industry loss history and benchmarks | Core strength: reserve adequacy, pricing models | Extracted fields only |
| Deployment | Cloud, VPC, on-premises | Cloud (AWS) | Cloud, on-premises | Cloud (Verisk infrastructure) | Cloud | Cloud |
| Security | SOC 2 Type II, GDPR, TLS 1.2+, AES-256; no training on customer data | SOC 2 | SOC 2, HIPAA | Verisk enterprise security | Standard cloud security | SOC 2 |
Vendor Analysis
Parsewise
Parsewise treats loss run reconciliation as a cross-document reasoning problem, not a per-document extraction problem. The platform ingests loss runs from multiple TPAs, cedants, and carriers as a single package, then links claims across sources by matching identifying attributes (claim numbers, claimant names, dates of loss, policy references) even when those attributes are formatted inconsistently across reports.
Once claims are linked, Parsewise reconciles paid, incurred, and reserve figures across sources and flags discrepancies: a claim showing $50,000 in reserves on one TPA’s report but $75,000 on the cedant’s summary; a claim present in a bordereau but missing from the loss run; reserve movements that exceed expected development patterns. The output is a reconciled loss dataset with every value traced back to its source document, page, and bounding box.
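In principle, the matching-and-flagging step works like this sketch. To be clear, this is our own illustrative reconstruction of the general technique (normalize claim identifiers, group by key, compare figures, flag gaps), not Parsewise's implementation, and all names in it are hypothetical:

```python
import re
from collections import defaultdict

def norm_key(claim_no):
    # Normalize claim identifiers that sources format differently,
    # e.g. "WC-2023-0042" vs "wc 2023 0042".
    return re.sub(r"[^A-Z0-9]", "", claim_no.upper())

def reconcile(sources, tolerance=0.0):
    """sources: {source_name: [{"claim_no": str, "reserve": float}, ...]}
    Returns flags for claims missing from a source and for reserve
    figures that disagree across sources by more than `tolerance`."""
    by_key = defaultdict(dict)
    for src, rows in sources.items():
        for row in rows:
            by_key[norm_key(row["claim_no"])][src] = row["reserve"]
    flags = []
    all_sources = set(sources)
    for key, vals in by_key.items():
        missing = all_sources - set(vals)
        if missing:
            flags.append(("missing", key, sorted(missing)))
        if len(vals) > 1 and max(vals.values()) - min(vals.values()) > tolerance:
            flags.append(("reserve_mismatch", key, vals))
    return flags

sources = {
    "tpa": [
        {"claim_no": "WC-2023-0042", "reserve": 50000.0},
        {"claim_no": "WC-2023-0099", "reserve": 10000.0},
    ],
    "cedant": [{"claim_no": "wc 2023 0042", "reserve": 75000.0}],
}
flags = reconcile(sources)
# flags contains a reserve_mismatch for WC20230042 ($50k vs $75k)
# and a missing flag for WC20230099 (absent from the cedant summary)
```

Real-world linking is harder than exact key normalization, since claim numbers may be absent or reassigned and matching must fall back on claimant names, dates of loss, and policy references, which is why this step benefits from cross-document reasoning rather than string rules.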
This matters most for legacy portfolio acquisitions and run-off management, where diligence teams must reconcile loss data from dozens of sources before pricing a book. Compre Group, a legacy insurance and reinsurance acquirer, uses Parsewise for this workflow. Extraction agents are configured with natural-language instructions (“extract claim number, date of loss, claimant, paid losses, case reserves, and IBNR from each loss run”), require no labeled training data, and handle new formats immediately. For technical details, see Cross-Document Reasoning.
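Reserve drift detection, the temporal counterpart of cross-source reconciliation, can likewise be sketched as a comparison of consecutive reporting-period snapshots. Again, this is a hypothetical illustration of the general idea rather than any vendor's logic, and the threshold is an arbitrary example value:

```python
def flag_reserve_drift(snapshots, threshold=0.25):
    """snapshots: ordered list of (period_label, {claim_key: reserve}).
    Flags claims whose case reserve moved by more than `threshold`
    (relative change) between consecutive reporting periods."""
    flags = []
    for (p_prev, prev), (p_cur, cur) in zip(snapshots, snapshots[1:]):
        for key, r_prev in prev.items():
            r_cur = cur.get(key)
            if r_cur is None or r_prev == 0:
                continue  # disappearance/zero base handled elsewhere
            change = (r_cur - r_prev) / r_prev
            if abs(change) > threshold:
                flags.append((key, p_prev, p_cur, round(change, 4)))
    return flags

snapshots = [
    ("2024-Q4", {"WC20230042": 50000.0}),
    ("2025-Q1", {"WC20230042": 75000.0}),
]
drift = flag_reserve_drift(snapshots)
# the +50% reserve movement on WC20230042 exceeds the 25% threshold
```

In practice the threshold would vary by line of business and development age, since large early movements are expected while late-development jumps are the anomalies worth an analyst's attention.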
Indico Data
Indico Data is an intelligent intake platform with strong capabilities in extracting and normalizing data from the highly variable formats that loss runs arrive in. Each TPA, carrier, and cedant uses different column layouts, terminology, and reporting conventions. Indico’s transfer-learning NLP models can be trained to handle these variations, normalizing extracted data into a consistent schema for downstream analysis.
Indico’s strength is per-document extraction quality. Given a loss run PDF with an unfamiliar layout, Indico can be trained to extract the relevant fields with relatively few labeled examples (~200 per model). For carriers receiving loss runs in many formats and needing to standardize them into a claims system, Indico addresses the data entry bottleneck.
Indico does not reconcile data across sources. If two loss runs report different reserve figures for the same claim, Indico will faithfully extract both figures without flagging the discrepancy. Reconciliation, drift detection, and gap identification remain manual processes downstream. For a detailed comparison, see Parsewise vs Indico Data.
Hyperscience
Hyperscience is an enterprise IDP platform that processes structured and semi-structured documents at scale with high straight-through processing (STP) rates. Its insurance models extract data from standardized forms, applications, and reports with minimal human review. For loss runs that follow a predictable template (system-generated reports with consistent column structures), Hyperscience delivers reliable, high-throughput extraction.
The limitation is flexibility. Hyperscience’s ML models are trained per document type, which means each new loss run format requires a training cycle with labeled data. Loss runs from different TPAs and cedants vary significantly in structure, creating ongoing model maintenance overhead. Hyperscience does not reason across documents; it processes each loss run independently. For organizations with a small number of standardized loss run sources, Hyperscience’s per-document accuracy is competitive. For organizations reconciling across many sources, the single-document architecture is the constraint. See Parsewise vs Hyperscience for a broader comparison.
Verisk (ISO ClaimSearch)
Verisk occupies a fundamentally different position in this landscape. ISO ClaimSearch is an industry loss history database used by carriers and self-insured entities to verify loss history, detect duplicate claims, and identify prior claims on a risk. It is a data product, not a document processor. When an underwriter needs to verify that a prospective insured’s declared loss history matches the industry record, ClaimSearch is the reference.
Verisk does not extract data from loss run documents. It provides the structured loss history data that loss run documents are often derived from. For diligence and reconciliation workflows, Verisk’s data serves as a third-party reference point against which extracted loss run data can be validated. Parsewise and Verisk are complementary: Parsewise extracts and reconciles loss data from documents, and Verisk’s data can serve as an independent benchmark for that reconciliation.
Optalitix
Optalitix bridges raw loss data and actuarial analysis. The platform provides tools for reserve adequacy testing, pricing models, and actuarial data reconciliation. Optalitix consumes structured data (typically from claims systems or extracted loss runs) and applies actuarial models to assess reserve adequacy, test pricing assumptions, and validate loss development patterns.
Optalitix does not process unstructured documents. Its value starts where document extraction ends: once loss data is in structured form, Optalitix helps actuaries model and validate it. For organizations that need both document extraction and actuarial modeling, Optalitix sits downstream of tools like Parsewise, Indico, or Hyperscience.
Docsumo
Docsumo is a lightweight document extraction platform with pre-built templates for common business documents (invoices, bank statements, insurance forms). For simple, standardized loss run formats, Docsumo offers a fast, low-cost extraction option. It does not handle the format variability, cross-source reconciliation, or reserve drift detection that complex loss run workflows require. Docsumo is best suited for organizations with a small number of standardized loss run sources and modest extraction needs.
How to Choose
If you need to validate loss history against an industry-standard reference database, Verisk’s ISO ClaimSearch is the de facto data source for independent loss history verification. It is a data product, not a document processor, and is complementary to any extraction or reconciliation tool.
If your downstream workflow is actuarial reserve modeling, Optalitix provides purpose-built actuarial tools that consume structured loss data. It sits downstream of the extraction layer, not alongside it.
For most loss run processing needs, including format normalization across TPAs and carriers, extraction from variable document layouts, cross-source reconciliation, reserve drift detection, and gap identification, Parsewise handles the full workflow in a single platform. Its template-free agents process any loss run format without training data, then reconcile across sources at the corpus level. This combination of flexible extraction and cross-document reasoning is the gap that separates Parsewise from tools that solve only the per-document extraction step.
For legacy portfolio acquisitions and run-off management, where dozens of loss run sources must be reconciled before pricing a book, the reconciliation layer is where the value concentrates. See Loss Run and TPA Reconciliation for a detailed walkthrough of this workflow.
Ready to see Parsewise in action? Request a demo or contact sales to discuss your use case.
Sources
- Parsewise Platform
- Parsewise Loss Run and TPA Reconciliation
- Indico Data platform (as of April 2026)
- Hyperscience platform (as of April 2026)
- Verisk ISO ClaimSearch (as of April 2026)
- Optalitix (as of April 2026)
- Docsumo (as of April 2026)
- Parsewise Trust Center