Commitment Graph

Google indexed the web.
We're indexing reality.

Reviews are fake. Ratings are gamed. Any content-based signal can be manufactured at scale. Behavioral commitment cannot be faked cheaply.


01

The information layer is broken.

AI has made content infinitely cheap. A language model can generate ten thousand five-star reviews in the time it takes a human to write one. It can fabricate analyst opinions, astroturf forums, and manufacture social consensus on demand.

Any signal that can be expressed in words can be faked. The question is: what cannot?


PageRank worked because a hyperlink was a costly act: a website owner staking their reputation on another page. That signal was hard to fake in 1998. In 2026, it is easy.

But there is a category of human action that remains structurally hard to fake: commitment. A person who visits the same restaurant twelve times in thirty days. A company with twelve years of profitable operation. A customer who has purchased from the same supplier across three different economic cycles.

These are behavioral signals rooted in real cost — time, money, attention, reputation on the line. No language model can manufacture them at scale without bearing the actual cost.

"When content becomes free, commitment becomes scarce. The commitment layer is what remains hard to fake."

Commit captures, aggregates, and surfaces these signals — so AI recommendations, search results, and trust scores are grounded in reality, not manufactured consensus.

Think of it as the trust layer that should have been built alongside the information layer — but wasn't, because we didn't need it until now.


I — Behavioral data
Revealed preferences over stated opinions.

What people actually do — repeat visits, sustained purchases, long-term engagement — is structurally harder to fake than any review, rating, or recommendation. Skin in the game is the only unfakeable signal.

II — Privacy-preserving
Commitment without surveillance.

Zero-knowledge proofs and anonymous credential systems mean the signal is verifiable without exposing who produced it. A person can prove they have visited a restaurant twenty times without revealing their identity or location history.
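A full zero-knowledge construction is beyond a sketch, but the underlying primitive is a cryptographic commitment: bind to a value now, prove it later, reveal nothing in between. The snippet below is a minimal salted-hash commitment in Python, purely illustrative — it shows the hiding/binding split, not an actual ZK proof, and every name in it is hypothetical.

```python
import hashlib
import secrets

def commit(value: int, salt: bytes) -> str:
    """Bind to a value without revealing it: the digest leaks nothing
    about `value` (hiding) and cannot be opened to a different value (binding)."""
    return hashlib.sha256(salt + value.to_bytes(8, "big")).hexdigest()

def verify(commitment: str, value: int, salt: bytes) -> bool:
    """Check that an opened (value, salt) pair matches the commitment."""
    return commit(value, salt) == commitment

# The prover commits to a visit count of 20 without disclosing it...
salt = secrets.token_bytes(16)
c = commit(20, salt)

# ...and can later open the commitment; the verifier learns only that
# the committed count was 20, never who visited or when.
assert verify(c, 20, salt)
assert not verify(c, 19, salt)
```

A real deployment would replace the reveal step with a zero-knowledge proof (e.g. proving "count ≥ 20" without opening the commitment at all), but the commit/verify shape is the same.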

III — AI-native
Trust signals as queryable infrastructure.

AI agents and recommendation systems get a simple API: "how many real humans committed to this, and how deeply?" Instead of scraped reviews and gamed ratings, they get behavioral ground truth.
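To make the shape of that API concrete, here is a hypothetical sketch in Python — the field names, entity IDs, and scoring weights are all illustrative assumptions, not the actual Commit interface.

```python
from dataclasses import dataclass

@dataclass
class CommitmentSignal:
    """Hypothetical response shape for a commitment-graph query."""
    entity_id: str
    committed_humans: int  # distinct verified humans with repeat behavior
    median_depth: int      # median repeat interactions per human
    span_days: int         # how long the commitment pattern has persisted

def trust_score(sig: CommitmentSignal) -> float:
    """Toy scoring: breadth (how many humans) times depth (how often)
    times duration (how long), so shallow one-off spikes score low."""
    return sig.committed_humans * sig.median_depth * (sig.span_days / 365)

# An agent asks: how many real humans committed to this, and how deeply?
sig = CommitmentSignal(
    entity_id="restaurant:123",
    committed_humans=480,
    median_depth=6,
    span_days=730,
)
print(trust_score(sig))  # → 5760.0
```

The point is not the formula — any downstream system can weight the fields differently — but that the answer is behavioral ground truth rather than scraped text.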


"Skin in the game is the only unfakeable signal."

Reputation can be manufactured. Reviews can be bought. But repeat purchases, staked capital, and sustained behavioral patterns require real cost.

Any system that substitutes opinion for commitment will be gamed. We're building the alternative.

Read the full essay →


Stay in the loop

Early access, research updates, and the occasional strong opinion.