sequent labs

Verification-Powered Superintelligence

Formal verification, next-generation architectures, deterministic interventions: building next-generation AI models to solve any well-defined problem.

the problem & insight
the problem

LLMs hallucinate. Retraining costs billions.

Today’s AI produces plausible reasoning that is frequently wrong. In domains where errors are catastrophic, such as drug discovery, financial modelling, and safety-critical systems, “usually right” is not good enough. Laborious manual verification is required to check the AI’s outputs, hindering progress on exploratory tasks. And every time you want the model to learn something new, you need non-deterministic retraining: billions of dollars, months of compute.

our insight

Formal languages make AI reasoning deterministically checkable and continuously learnable

When AI reasons in Lean, a formal language whose kernel acts as a mathematical compiler, every step is checked. Errors are not hidden; they are pinpointed to the exact failing step. Paired with a novel model architecture that represents knowledge as discrete, addressable concepts, this lets us use deterministic interventions to fix errors surgically and add new knowledge continuously, with no retraining required.

three technical breakthroughs
01

Deterministic Verification

Every reasoning step is expressed in Lean 4 and checked by its kernel — a mathematical compiler that either accepts or rejects. No probability. No ambiguity. Provably correct, with a machine-checkable certificate.
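As a minimal illustration of what kernel checking means (the theorem and the error below are illustrative examples, not output from our pipeline): a correct proof term type-checks and is accepted; an incorrect one is rejected at the exact line that fails.

```lean
-- A step the kernel accepts: the proof term type-checks against the goal.
theorem add_comm_demo (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A step the kernel rejects, with the failure pinpointed to this line:
-- theorem bad (n : Nat) : n + 1 = n := rfl
--   error: `rfl` proves `n + 1 = n + 1`, not `n + 1 = n`
```

The accepted theorem is itself the machine-checkable certificate: anyone can re-run the kernel and get the same accept/reject verdict.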

02

Hierarchical Architecture

Unlike token-level transformers, our architecture operates on compressed proof-state representations — semantic concepts, not character fragments. This enables strategic reasoning that standard models must learn implicitly, in a fraction of the parameters.

03

Adaptive Continual Learning

When the model fails, verification pinpoints the exact broken step. We don't just edit weights — we add new concept parameters that generalize the fix. The architecture grows: each correction expands what the model can represent. Minutes and pennies, not months and millions.
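To make the idea concrete, here is a deliberately simplified toy in Python (the class, names, and firing mechanism are hypothetical illustrations, not our architecture): a correction appends a new key/value concept that fires only on inputs similar to the failing one, leaving all existing parameters untouched.

```python
import math

class ConceptMemory:
    """Toy key-value memory: corrections append concepts; old weights never change."""
    def __init__(self, threshold=0.9):
        self.concepts = []          # list of (key, value) pairs
        self.threshold = threshold  # cosine similarity needed for a concept to fire

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb + 1e-9)

    def add_concept(self, key, value):
        # The "Edit" step: grow the model; existing concepts are untouched.
        self.concepts.append((key, value))

    def lookup(self, x):
        # Sum the contributions of every concept that fires near input x.
        out = [0.0] * len(x)
        for key, value in self.concepts:
            if self._cosine(key, x) > self.threshold:
                out = [o + v for o, v in zip(out, value)]
        return out

mem = ConceptMemory()
mem.add_concept(key=[1.0, 0.0, 0.0], value=[0.0, 1.0, 0.0])
print(mem.lookup([1.0, 0.0, 0.0]))  # the fix fires: [0.0, 1.0, 0.0]
print(mem.lookup([0.0, 1.0, 0.0]))  # unrelated input: [0.0, 0.0, 0.0]
```

The design point the toy captures: a fix is an append, not an overwrite, so behaviour that was already verified cannot be silently clobbered by the correction.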

the verified learning loop
1. Prove: the model generates reasoning as Lean code and proof states
2. Verify: the Lean kernel checks every step; no hallucination survives
3. Locate: verification localises and isolates the errors
4. Edit: add or edit parameters; the model grows, no retraining
5. Re-verify: a full verification pass confirms correctness is preserved

No retraining. The architecture grows by adding new parameters that generalize each fix. Continuously, for pennies.
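A sketch of the Verify and Locate steps in Lean 4 (the example is illustrative): a multi-step proof is checked step by step, so a failure is reported at one specific step rather than for the proof as a whole.

```lean
example (n : Nat) : (n + 1) + 1 = n + 2 := by
  calc (n + 1) + 1 = n + (1 + 1) := by rw [Nat.add_assoc]  -- step 1: verified
    _ = n + 2 := rfl                                       -- step 2: verified

-- If step 2 were instead `_ = n + 3 := rfl`, Lean would reject only that
-- line; step 1 stays verified, so the Edit step can target one broken step.
```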

why this is different
|                 | Current AI                      | Our Approach                                    |
|-----------------|---------------------------------|-------------------------------------------------|
| Correctness     | Probabilistic. "Seems right."   | Compiler-verified. Provably right.              |
| Error detection | Post-hoc, manual review         | Automatic, exact step identified                |
| Learning        | Full retraining ($M+ per cycle) | Add parameters (minutes, pennies)               |
| Knowledge       | Frozen at training cutoff       | Grows continuously; each edit adds new concepts |
| Data            | Human-curated, limited          | Self-generation + verification, unlimited       |
| Reasoning       | Token-level pattern matching    | Concept-level proof planning                    |
| Audit trail     | None. Black box.                | Complete. Every step is a checkable proof.      |
core pillars

Formal verification
- specialised language
- limited data
- gaps in fields

Model interventions
- need to identify errors
- limited by transformers
- adverse impacts

Novel architectures
- slow iteration
- sparse rewards
- hallucinations

knowledge exchange / extension

Internal knowledge: model parameters (fixed)
External knowledge: formal theorems, Lean libraries
Human knowledge: theorems and proofs
why now & who we are

Three developments converge today: mature formal math libraries (Lean’s Mathlib covers 100K+ theorems), abundant synthetic proof data (10B+ tokens), and our team’s unique position at the intersection of model interventions, novel architectures, and formal verification.

We are not building another chatbot. We are building verified intelligence infrastructure: the reasoning layer that future AI systems will need when “probably correct” is not acceptable, from safety-critical applications to genuine automated knowledge discovery.

founders