Why AI Agents Need Audit Trails
As AI agents gain autonomy, we lose visibility into what they actually do. That's a problem.
Thoughts on deterministic execution, verifiable computing, and building DEOS.
Why current solutions fall short
Industry shifts and emerging needs
In red-team simulations, frontier AI models have blackmailed, stolen secrets, and even let humans die when threatened with shutdown. The fix isn't better prompts; it's better infrastructure.
ERC-8004 defines how AI agents build trust on-chain. But it leaves a critical question unanswered: how do you actually verify what an agent did?
Smart contracts need complex computation that can't run on-chain. The answer is compute oracles, but only if we can verify them.
Non-deterministic AI systems are black boxes we can't fully audit. That's a safety problem.
A new paradigm is emerging where computation isn't just executed—it's proven.
Real-world applications
What if every financial operation were automatically auditable? Not as an afterthought, but by design.
Current forensics reconstructs what might have happened. Deterministic replay shows what actually happened.
If you could prove exactly what an AI agent did, what would you build?
Concepts explained
A practical introduction to zero-knowledge proofs for software engineers.
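To make the idea concrete before diving into that article: a zero-knowledge proof lets a prover convince a verifier that a statement is true without revealing why. A minimal, illustrative sketch is Schnorr's sigma protocol made non-interactive with the Fiat-Shamir heuristic (the toy group parameters and function names below are ours, not from any library, and are far too small for real use):

```python
import hashlib
import secrets

# Toy Schnorr group: p = 467 is prime, q = 233 divides p - 1, and g = 4
# generates the order-q subgroup. Real deployments use 256-bit groups
# or elliptic curves; these tiny parameters are for illustration only.
P, Q, G = 467, 233, 4

def fiat_shamir(r: int, y: int) -> int:
    # Hash the commitment and public key into a challenge in [0, Q),
    # replacing the verifier's random challenge (Fiat-Shamir heuristic).
    digest = hashlib.sha256(f"{r}:{y}".encode()).digest()
    return int.from_bytes(digest, "big") % Q

def schnorr_prove(x: int) -> tuple[int, tuple[int, int]]:
    """Prove knowledge of x with y = G^x mod P, without revealing x."""
    y = pow(G, x, P)
    k = secrets.randbelow(Q - 1) + 1   # one-time secret nonce
    r = pow(G, k, P)                   # commitment
    c = fiat_shamir(r, y)              # challenge
    s = (k + c * x) % Q                # response
    return y, (r, s)

def schnorr_verify(y: int, proof: tuple[int, int]) -> bool:
    r, s = proof
    c = fiat_shamir(r, y)
    # g^s == r * y^c (mod p) holds exactly when the prover knew x,
    # since g^s = g^(k + c*x) = g^k * (g^x)^c.
    return pow(G, s, P) == (r * pow(y, c, P)) % P

public_y, proof = schnorr_prove(42)
assert schnorr_verify(public_y, proof)                                # honest proof accepted
assert not schnorr_verify(public_y, (proof[0], (proof[1] + 1) % Q))   # tampered proof rejected
```

The verifier learns that the prover knows the discrete log of `y`, and nothing else: the transcript `(r, s)` can be simulated without knowing `x`.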
A primer on determinism in computing and why it matters.
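The core distinction that primer builds on can be shown in a few lines. A deterministic function depends only on its inputs, so a replay reproduces the original run exactly; a function that reads ambient state (the clock, in this hypothetical example, with made-up tax rates) cannot be audited by re-execution:

```python
import time

def deterministic_tax(amount_cents: int) -> int:
    # Pure arithmetic over the arguments: the same input always
    # produces the same output, on any machine, at any time.
    return amount_cents * 7 // 100

def nondeterministic_tax(amount_cents: int) -> int:
    # Reads the wall clock, so two replays of the "same" call can
    # disagree. An auditor re-running this cannot reproduce the result.
    rate = 7 if time.localtime().tm_year >= 2024 else 6
    return amount_cents * rate // 100

# Deterministic code can be replayed and checked byte-for-byte:
assert all(deterministic_tax(1000) == 70 for _ in range(100))
```

The deterministic version is what makes replay-based audit trails possible: record the inputs, and the outputs are recoverable on demand.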
How we built HCPS on peer-reviewed primitives and what our Lean 4 formalization proves.