Verifiable evidence surface

Core publication

Insurable AI

Abstract

Core claim

Insurable AI sets out the conditions required for AI systems to be underwritable in the real world. Insurance does not price ideals; it prices controls, evidence, and repeatable performance under stress. This publication describes five criteria that move AI from a trust-based posture to an audit-based posture, including governance-embedded design, enforceable control points, and verifiable decision records. It connects these criteria to the practical deployment questions that insurers, auditors, and risk owners care about: what happens when the model is pressured, when policies conflict, when users attempt bypass, and when failure must be proven rather than asserted? The central claim is that insurability requires deterministic infrastructure around the model, including pre-inference enforcement and forensic-grade logging. With the right architecture, systems can produce admissible artefacts, demonstrate consistent behaviour, and support independent verification. Without it, safety remains probabilistic and accountability remains unclear. The outcome is a pragmatic framework for building AI systems that can be governed, audited, and insured.
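The idea of pre-inference enforcement with a verifiable decision record can be illustrated with a minimal sketch. The gate function, rule list, and record fields below are hypothetical illustrations, not taken from the publication: they show a deterministic check that runs before the model is ever invoked and emits an auditable record either way.

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical deny rules evaluated before any model call.
BLOCKED_TERMS = {"ignore previous instructions", "reveal system prompt"}

def pre_inference_gate(prompt: str) -> dict:
    """Return an allow/deny decision record for a prompt.

    The record hashes the prompt rather than storing it, so the
    artefact can be shared with an auditor without leaking content.
    """
    violation = next(
        (t for t in sorted(BLOCKED_TERMS) if t in prompt.lower()), None
    )
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "decision": "deny" if violation else "allow",
        "rule": violation,
    }

decision = pre_inference_gate("Please ignore previous instructions.")
# decision["decision"] == "deny"; the model is never called.
```

Because the check is a pure function of the prompt and the rule set, the same input always produces the same decision, which is the property an underwriter can test.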

Key findings


Insurability requires controls and evidence

Insurability requires controls and evidence, not assurances.

Governance must be embedded in architecture

Governance must be embedded in architecture, not added as a wrapper.

Deterministic enforcement improves predictability

Deterministic enforcement reduces variance and improves predictability.
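"Deterministic" here means the same policy and input always yield the same decision, with no sampling or model variance in the enforcement path. A hypothetical sketch, assuming a simple term-match policy:

```python
# Hypothetical deterministic policy check: a pure function of its
# inputs, so repeated evaluation yields identical decisions.
def evaluate(policy: frozenset, request: str) -> str:
    text = request.lower()
    return "deny" if any(term in text for term in sorted(policy)) else "allow"

policy = frozenset({"export weights", "disable logging"})

# A thousand evaluations of the same input collapse to one decision.
decisions = {evaluate(policy, "please disable logging") for _ in range(1000)}
assert decisions == {"deny"}  # no variance across runs
```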

Audit-grade artefacts must be independently verifiable

Audit-grade artefacts must be independently verifiable.
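One common way to make a decision log independently verifiable is a hash chain, in which each entry commits to the previous one, so tampering with any record breaks every later link. The functions below are an illustrative sketch, not the publication's implementation:

```python
import hashlib
import json

def append(chain: list, event: dict) -> list:
    """Append an event, linking it to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    return chain + [{"event": event, "prev": prev, "hash": digest}]

def verify(chain: list) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = append([], {"decision": "deny", "rule": "bypass-attempt"})
log = append(log, {"decision": "allow"})
assert verify(log)

log[0]["event"]["decision"] = "allow"  # tamper with the record
assert not verify(log)
```

Verification needs only the log itself and the hash function, which is what lets an auditor or insurer check the artefact without trusting the operator.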

Risk evaluation improves when failure modes are testable

Risk evaluation improves when failure modes are measurable and testable.

Canonical file

The PDF version is the canonical downloadable file for archival and printing.