Why AI Agents Need Audit Trails
As AI agents gain autonomy, we lose visibility into what they actually do. That's a problem.
Thoughts on deterministic execution, verifiable computing, and building DEOS.
Why current solutions fall short
Industry shifts and emerging needs
Operators stake ETH. Disputes pay out in ETH. There is no token because there is nothing a token would improve.
Regulators are coming for sequencer ordering. Most L2s have nothing to show. PFO changes that.
In safety evaluations, frontier AI models have resorted to blackmail, data theft, and even simulated lethal choices when threatened with shutdown. The fix isn't better prompts. It's better infrastructure.
ERC-8004 defines how AI agents build trust on-chain. But it leaves a critical question unanswered: how do you actually verify what an agent did?
Smart contracts need computation too heavy to run on-chain. Compute oracles can supply it, but only if we can verify them.
Non-deterministic AI systems are black boxes we can't fully audit. That's a safety problem.
A new paradigm is emerging where computation isn't just executed. It's proven.
Real-world applications
What if every financial operation were automatically auditable? Not as an afterthought, but by design.
Current forensics reconstructs what might have happened. Deterministic replay shows what actually happened.
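The replay idea can be shown in a few lines. This is a minimal sketch, not DEOS itself: it assumes a deterministic transition function (here a hypothetical `agent_step`) and a recorded input log, and checks that replaying the log reproduces a byte-identical state hash.

```python
# Sketch of deterministic replay: record every external input an agent
# consumes, then re-run the same pure logic over the recorded log. If the
# logic is deterministic, the replayed hash matches the original exactly,
# proving what actually happened rather than reconstructing what might have.
import hashlib
import json

def agent_step(state: dict, event: dict) -> dict:
    # Deterministic transition: same state + same event -> same result.
    return {"count": state["count"] + event["delta"]}

def run(events: list[dict]) -> str:
    state = {"count": 0}
    log = []  # the recorded inputs; in practice this audit trail is persisted
    for ev in events:
        log.append(ev)
        state = agent_step(state, ev)
    # Canonical JSON (sorted keys) keeps the digest stable across runs.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

events = [{"delta": 2}, {"delta": 3}]
original = run(events)
replayed = run(events)        # replay from the recorded log
assert original == replayed   # identical outcome, not a best-guess reconstruction
```

Any hidden nondeterminism (wall-clock reads, unseeded randomness, map ordering) breaks the equality above, which is exactly why determinism has to be engineered in, not assumed.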
If you could prove exactly what an AI agent did, what would you build?
Concepts explained
Every shared sequencer has a blind spot between transaction arrival and mempool admission. KWA closes it.
A practical introduction to zero-knowledge proofs for software engineers.
A primer on determinism in computing and why it matters.
How we built HCPS on peer-reviewed primitives and what our Lean 4 formalization proves.