AI has made content free. Reviews are fabricated, recommendations hallucinated, ratings gamed at machine speed. What remains hard to fake: behavioral commitment — time, money, sustained action. That signal is what we're building.
The problem
PageRank solved search in 1998 with hyperlinks — costly acts that were hard to fake at scale. In 2026, links are easy. AI generates content, reviews, and citations at zero marginal cost. The information layer has collapsed into self-referential noise.
But there is a category of signal that remains structurally hard to fake: commitment. A repeat purchase. A decade of profitable operation. A customer who returns after a price increase. These acts require real cost — time, money, skin in the game. No AI can manufacture them at scale without bearing that cost.
The thesis
PageRank worked because hyperlinks were costly acts — a website owner putting their reputation behind another page was a meaningful signal. That signal was hard to fake in 1998. In 2026, it is easy.
But there is a category of human action that remains structurally hard to fake: commitment. A person who visits the same restaurant twelve times in thirty days. A company with twelve years of profitable operation. A customer who has purchased from the same supplier across three different economic cycles.
These are behavioral signals rooted in real cost — time, money, attention, reputation on the line. No language model can manufacture them at scale without bearing the actual cost.
“When content becomes free, commitment becomes scarce. The commitment layer is what remains hard to fake.”
Commit captures, aggregates, and surfaces these signals — so AI recommendations, search results, and trust scores are grounded in reality, not manufactured consensus.
Think of it as the trust layer that should have been built alongside the information layer — but wasn't, because we didn't need it until now.
Three curves converged in early 2026. AI search is wrong about local businesses a third of the time — the trust problem is acute. Zero-knowledge proofs hit production (zkTLS: 3M verifications, zero fraud). And proof of personhood reached scale (World ID: 18M verified humans; eIDAS 2.0 mandates wallets for 450M Europeans by year-end).
Each component existed in isolation. The integration — behavioral proofs from verified humans, consumed by AI systems — is what nobody has built.
What we're building
AI agents and recommendation systems query a simple API: how many real humans committed to this, and how deeply? Instead of scraped reviews and gamed ratings, they get behavioral signals rooted in real cost — time, money, sustained engagement.
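As a sketch of what such a query might look like, the endpoint, parameter names, and response fields below are illustrative assumptions, not a published API:

```python
import json
from urllib.parse import urlencode

# Hypothetical endpoint and field names -- illustrative only, not a real API.
BASE_URL = "https://api.example.com/v1/commitments"

def build_query(entity_id: str, min_depth: float = 0.0) -> str:
    """Build the URL for a commitment-signal lookup on one entity."""
    return f"{BASE_URL}/{entity_id}?" + urlencode({"min_depth": min_depth})

def parse_response(body: str) -> dict:
    """Extract the two headline signals: how many verified humans
    committed, and how deep those commitments run in aggregate."""
    data = json.loads(body)
    return {
        "verified_humans": data["verified_humans"],
        "depth_score": data["depth_score"],  # e.g. time- and money-weighted
    }

sample = '{"verified_humans": 412, "depth_score": 0.83}'
print(parse_response(sample))
```

The point of the shape: a consumer gets counts of verified humans and a depth measure, not a star rating that can be purchased.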
When ChatGPT, Perplexity, or Claude recommends a business, the extension surfaces what's real: years of operation, financial health, food safety scores — verified from public records and behavioral data. It catches bankrupt restaurants still being recommended. Useful from day one.
A privacy-preserving protocol for contributing behavioral commitments anonymously. Zero-knowledge proofs let anyone prove they committed — without revealing who they are. The foundation for trust infrastructure that can't be gamed.
Reputation can be manufactured. Reviews can be bought. But repeat purchases, staked capital, and sustained behavioral patterns require real cost.
Any system that substitutes opinion for commitment will be gamed. We're building the alternative.
How it works
There is no "bad data" — only fake data from fake people. BankID (4.6M Norwegians), World ID (18M+ globally), and eIDAS 2.0 (450M Europeans by year-end) provide the sybil-resistant identity layer. Every signal in the graph comes from a verified, unique human.
zkTLS lets anyone prove behavioral claims about themselves — "I've visited this restaurant five times in six months" — without revealing identity, transaction details, or any other data. 3M+ verifications, zero fraud. Privacy is not a tradeoff.
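zkTLS itself proves statements about real TLS sessions. As a minimal illustration of the underlying commit-without-revealing idea, here is a toy salted hash commitment (this is a plain commitment scheme, not zkTLS and not zero-knowledge in the cryptographic sense):

```python
import hashlib
import secrets

def commit(claim: str) -> tuple[str, str]:
    """Commit to a behavioral claim without revealing it.
    Returns (commitment, salt): publish the commitment, keep the salt."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + claim).encode()).hexdigest()
    return digest, salt

def verify(commitment: str, salt: str, claim: str) -> bool:
    """If the prover later reveals claim + salt, anyone can check
    that the published commitment was bound to exactly this claim."""
    return hashlib.sha256((salt + claim).encode()).hexdigest() == commitment

c, s = commit("visited restaurant X five times in six months")
assert verify(c, s, "visited restaurant X five times in six months")
assert not verify(c, s, "visited restaurant X once")
```

A real zkTLS proof goes further: it shows the claim is backed by an authentic server transcript, without revealing the transcript at all.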
Earn by contributing verified behavioral data. Pay to query the network's collective intelligence. Stake on recommendations — if others commit to the same thing, you earn; if they don't, you lose. Resolution is behavioral data, not opinion. The game cannot be rigged.
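A sketch of how such a stake could resolve; the payout rule, threshold, and reward rate below are illustrative assumptions, not the protocol's actual mechanism:

```python
def resolve_stake(stake: float, others_committed: int,
                  threshold: int, reward_rate: float = 0.2) -> float:
    """Resolve a recommendation stake against subsequent behavior.
    If at least `threshold` other verified humans commit to the same
    thing, the staker earns `reward_rate` on the stake; otherwise the
    stake is forfeited. The resolution input is observed commitments,
    not votes or opinions."""
    if others_committed >= threshold:
        return stake * (1 + reward_rate)
    return 0.0

print(resolve_stake(100.0, others_committed=12, threshold=10))  # earns
print(resolve_stake(100.0, others_committed=3, threshold=10))   # forfeits
```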
Why Norway first
Norway has something no other country has: freely accessible, structured commitment data already in the public domain. Brønnøysundregistrene publishes full financial statements for every Norwegian company. Mattilsynet publishes food safety inspection results for every restaurant. PSD2 mandates open banking APIs across 3,500+ European banks.
The gap is not data access — it's assembly. The foundation layer is live today. The behavioral layer is what we're building.
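The Brønnøysund register exposes an open REST API (data.brreg.no, no key required). The sketch below pulls a few commitment-relevant facts from an entity record; field names follow the published Enhetsregisteret schema as I understand it, so check the current spec before relying on them:

```python
import json
from urllib.request import urlopen

# Open Entity Register API from Brønnøysundregistrene; no API key required.
BRREG = "https://data.brreg.no/enhetsregisteret/api/enheter"

def registry_facts(entity: dict) -> dict:
    """Extract the commitment-relevant facts from one register entry:
    name, registration date (years survived), bankruptcy flag."""
    return {
        "name": entity.get("navn"),
        "registered": entity.get("registreringsdatoEnhetsregisteret"),
        "bankrupt": entity.get("konkurs", False),
    }

def fetch_entity(orgnr: str) -> dict:
    """Fetch one company by organisation number (network call)."""
    with urlopen(f"{BRREG}/{orgnr}") as resp:
        return registry_facts(json.load(resp))
```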
For developers
Add to Claude Desktop, Cursor, or any MCP-compatible tool. No npm install, no API key.
Then ask your AI to audit your dependencies for supply chain risk — or check the behavioral commitment score of any GitHub repo, npm package, or PyPI project.
```json
{
  "mcpServers": {
    "proof-of-commitment": {
      "type": "streamable-http",
      "url": "https://poc-backend.amdal-dev.workers.dev/mcp"
    }
  }
}
```

Listed in the official MCP registry. Source on GitHub.
Foundation layer
Registry data — years of operation, financial health, regulatory status — is the verifiable foundation. It tells you a business has skin in the game: capital committed, years survived, filings maintained.
The full picture requires behavioral data: repeat customers, return rates, sustained engagement. That layer is what we're building. The extension shows where it starts. The protocol defines where it ends.