Structural Design Labs

We build verifiable governance tools for high-stakes AI.

We focus on deterministic pre-inference gating.

Our tools produce cryptographically signed, offline-verifiable audit artefacts.

Open-source core (MIT) with public proofs on GitHub.

What We Do

Research & Development

We develop deterministic governance frameworks that move AI safety from training time to the pre-inference stage, enabling real-time policy enforcement backed by verifiable evidence.
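
To make "deterministic" concrete, here is a minimal sketch of a pre-inference policy check; the Rule type and evaluate function are hypothetical illustrations, not SIR's actual API. The point is that the verdict is a pure function of the rules and the input, so the same prompt always produces the same decision:

```python
# Hypothetical sketch of deterministic pre-inference policy evaluation.
# Rule and evaluate are illustrative names, not SIR's actual API.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    name: str
    blocked_terms: tuple[str, ...]


def evaluate(rules: list[Rule], prompt: str) -> tuple[bool, list[str]]:
    # Pure function of (rules, prompt): the same input always yields the
    # same verdict, so the decision is reproducible during an audit.
    violations = [
        rule.name
        for rule in rules
        if any(term in prompt.lower() for term in rule.blocked_terms)
    ]
    return (not violations, violations)


rules = [Rule("no-medical-advice", ("diagnose", "prescribe"))]
print(evaluate(rules, "Please prescribe me something."))
# -> (False, ['no-medical-advice'])
```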

Open-Source Tools

Our flagship tool, SIR (Signal Integrity Resolver), is open-source under the MIT licence. We believe verifiable governance must be transparent and auditable by anyone.

Industry Collaboration

We work with organisations deploying high-stakes AI systems to implement governance that meets regulatory requirements and enables insurance underwriting.

Why This Matters

The Governance Gap

Traditional AI safety focuses on training-time alignment, but models can still produce harmful outputs in production. Post-deployment monitoring catches problems too late. Governance needs to happen at the pre-inference stage, before the model runs.

SIR sits between the user and the model, enforcing explicit rules before inference happens. Every interaction produces a cryptographically signed audit trail that can be verified offline.
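
A hedged sketch of how such a gate might emit a signed record follows; HMAC-SHA256 from the Python standard library stands in for whatever signature scheme the real tool uses, and the gate function and field names are illustrative, not SIR's interface:

```python
# Hypothetical sketch of a pre-inference gate emitting a signed audit
# record. HMAC-SHA256 from the standard library stands in for whatever
# signature scheme the real tool uses; field names are illustrative.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # stand-in for a properly managed signing key


def gate(prompt: str, allowed: bool, violations: list[str]) -> dict:
    record = {
        "timestamp": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "decision": "allow" if allowed else "block",
        "violations": violations,
    }
    # Sign a canonical (sorted-keys) serialisation of the record body.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


record = gate("Please prescribe me something.", False, ["no-medical-advice"])
print(record["decision"], record["signature"][:16], "...")
```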

Making AI Insurable

Insurance companies need deterministic controls and auditable evidence to underwrite AI systems. Probabilistic safety measures aren't enough. Our approach pairs deterministic enforcement with verifiable evidence and signed audit artefacts, giving underwriters what they need to insure high-stakes AI deployments.
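
Offline verification of a record like the one in the sketch above could look like the following; it recomputes the signature from the record body using only the record and a key, with no network or model access. Because HMAC is symmetric, this simplified sketch shares the signing key with the verifier; a real deployment would presumably use asymmetric signatures so auditors never hold the signing key.

```python
# Hypothetical sketch of offline verification for the record above:
# recompute the signature from the record body and compare in constant
# time. Needs only the record and the key, not the model or any network.
import hashlib
import hmac
import json


def verify(record: dict, key: bytes) -> bool:
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["signature"], expected)
```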