In November 2024, prediction markets called the US election before anyone else. While polls showed a tight race and experts hedged their statements, the markets gave Trump a 60% chance of winning. When the results came in, prediction markets had outperformed the entire forecasting establishment: polls, models, and expert opinion alike.
This is a powerful demonstration that markets can aggregate dispersed information into accurate beliefs, with risk-sharing mechanisms playing a pivotal role. Since the 1940s, economists have envisioned speculative markets surpassing expert prediction. That vision has now been validated on the world’s largest stage.
Let’s examine the economic principles at play.
On platforms like Polymarket and Kalshi, bettors provided billions in liquidity. What did they get in return? Nothing proprietary: the signals they generated were instantly accessible to the entire world for free. Hedge funds watched closely, campaign teams absorbed the data, and journalists built dashboards on top of it. No one paid for this intelligence; in effect, bettors subsidized a global public good.
This is the core dilemma for prediction markets: their most valuable product, information, is given away the moment it is created. Sophisticated buyers won’t pay for public information. Private data providers can charge hedge funds premium fees precisely because their data is unavailable to competitors. Public prediction market prices, by contrast, command no premium from those buyers, no matter how accurate they are.
As a result, prediction markets can only thrive in areas where enough people want to “gamble”—elections, sports, internet meme events. What we get is entertainment disguised as information infrastructure. Critical questions for decision-makers—geopolitical risks, supply chain disruptions, regulatory outcomes, technology development timelines—go unanswered because no one bets on them for fun.
The economic logic of prediction markets is upside down. Fixing it is part of a much broader transformation, one in which information itself becomes the product and betting is merely one mechanism for producing it, and a limited mechanism at that. We need a new paradigm. What follows is an initial outline of “cognitive finance”: infrastructure redesigned from first principles around information itself.
Financial markets are a form of collective intelligence. They aggregate scattered knowledge, beliefs, and intentions into prices, coordinating millions of participants who never communicate directly. This is extraordinary, yet also highly inefficient.
Traditional markets move slowly due to trading hours, settlement cycles, and institutional friction. They express beliefs only in broad terms through price—a blunt instrument. The range of things they can represent is also limited: just the space of tradable claims, which is trivial compared to the full spectrum of human concerns. Participation is tightly restricted: regulatory barriers, capital requirements, and geography exclude most people and all machines.
The crypto world is changing this, introducing 24/7 markets, permissionless participation, and programmable assets, with modular protocols that compose without central coordination. DeFi (decentralized finance) has shown that financial infrastructure can be rebuilt as open, interoperable building blocks, assembled through permissionless interaction between modules rather than by gatekeeper mandate.
Yet DeFi mostly replicates traditional finance with better “pipes.” Its collective intelligence still revolves around price, focuses on assets, and absorbs new information slowly.
Cognitive finance is the next step: rebuilding this collective intelligence from the ground up for the AI and crypto era. We need markets that “think”: markets that maintain probabilistic models of the world, absorb information at any level of detail, can be queried and updated by AI systems, and let humans contribute knowledge without understanding the underlying structure.
The components are already within reach: private markets to fix the economic model, compositional structures to capture correlations, agent ecosystems to scale information processing, and human-computer interfaces to extract signals from the human mind. Each part can be built today, and when combined, they’ll create something fundamentally new.
When prices remain private, economic constraints disappear.
A private prediction market reveals prices only to entities subsidizing liquidity. These entities receive exclusive signals—proprietary intelligence, not a public good. Suddenly, the market works for any question where “someone needs an answer,” regardless of entertainment value.
I discussed this concept with @Dave_White.
Picture a macro hedge fund that wants continuous probability estimates for Fed decisions, inflation, and employment data, not as betting opportunities but as decision signals. If the intelligence is exclusive, they’ll pay for it. A defense contractor wants probability distributions over geopolitical scenarios; a pharmaceutical company wants forecasts of regulatory approval timelines. Today, these buyers don’t exist because any signal, once generated, leaks immediately to competitors.
Privacy is essential for a viable economic model. When prices are public, information buyers lose their edge, competitors free-ride, and the whole system reverts to entertainment demand.
Trusted execution environments (TEEs) make this possible: secure compute enclaves whose internal operations are invisible to outsiders, including the system’s own operators. The market state lives entirely within the TEE. Information buyers receive signals through verified channels. Multiple non-competing entities can subscribe to overlapping markets, with tiered access windows balancing exclusivity against broader distribution.
TEEs aren’t perfect—they require trust in hardware manufacturers. But they already provide enough privacy for commercial use, and the engineering is mature.
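To make this concrete, here is a minimal sketch of the access logic such an enclave might run. It is illustrative only: the names are hypothetical, “tiering” is reduced to a simple time delay, and a real deployment would sit behind remote attestation and encrypted channels rather than ordinary method calls.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Subscriber:
    """An information buyer with a paid access tier."""
    sub_id: str
    delay_seconds: int  # tier 0 sees prices in real time; higher tiers see them later

@dataclass
class PrivateMarket:
    """State a TEE would keep sealed: prices leave only through
    read_prices, an authenticated, tier-delayed channel."""
    prices: dict = field(default_factory=dict)
    history: list = field(default_factory=list)      # (timestamp, prices) snapshots
    subscribers: dict = field(default_factory=dict)  # sub_id -> Subscriber

    def record(self, new_prices: dict) -> None:
        """Called by the internal market maker after each trade."""
        self.prices = dict(new_prices)
        self.history.append((time.time(), dict(new_prices)))

    def read_prices(self, sub_id: str) -> dict:
        """Only registered subscribers get signals, delayed by their tier.
        Unknown IDs raise KeyError: there is no public read path."""
        sub = self.subscribers[sub_id]
        cutoff = time.time() - sub.delay_seconds
        snapshot = {}
        for ts, prices in self.history:
            if ts <= cutoff:
                snapshot = prices
        return snapshot
```

The design choice worth noticing is that there is no public read path at all: exclusivity is a property of the data structure, not a policy layered on top of it.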
Current prediction markets treat events as isolated. “Will the Fed cut rates in March?” is one market. “Will Q2 inflation exceed 3%?” is another. Traders who understand that these events are related (knowing, for example, that hot inflation makes a rate cut less likely, and that strong employment does too) must manually arbitrage across unconnected pools, trying to reconstruct correlations that the market structure itself destroys.
It’s like building a brain where each neuron fires in isolation.
Compositional prediction markets are different: they maintain a joint probability distribution over combinations of outcomes. A trade like “rates stay high and inflation exceeds 3%” ripples through all related markets, synchronously updating the entire probability structure.
This is similar to how neural networks learn: each training update adjusts billions of parameters at once, and the whole network responds to every data point. Likewise, every trade in a compositional prediction market updates the entire probability distribution, with information spreading through correlation structures, not just isolated prices.
The result is a “model”: a continuously updated probability distribution over the state space of world events. Each trade sharpens the model’s understanding of the relationships between variables. The market learns how the real world connects.
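As a concrete sketch of what “every trade updates the entire distribution” means, here is a toy compositional market maker: a single LMSR cost function run over the joint outcome space of three binary events, rather than one isolated pool per question. The event names and parameters are invented for illustration, and a real system would need a factored representation, since the number of atomic states grows as 2^n.

```python
import itertools
import math

class JointLMSR:
    """Toy compositional market maker: one LMSR over the joint outcome
    space, so a trade on any conjunction moves every correlated marginal."""

    def __init__(self, events: list, b: float = 100.0):
        self.events = events
        self.b = b  # LMSR liquidity parameter
        # One share balance per atomic world-state (2^n of them).
        self.q = {s: 0.0 for s in itertools.product([False, True], repeat=len(events))}

    def _cost(self) -> float:
        return self.b * math.log(sum(math.exp(q / self.b) for q in self.q.values()))

    def state_probs(self) -> dict:
        z = sum(math.exp(q / self.b) for q in self.q.values())
        return {s: math.exp(q / self.b) / z for s, q in self.q.items()}

    def marginal(self, event: str) -> float:
        i = self.events.index(event)
        return sum(p for s, p in self.state_probs().items() if s[i])

    def buy(self, conjunction: dict, shares: float) -> float:
        """Buy `shares` of every atomic state consistent with the conjunction,
        e.g. {"rates_high": True, "cpi_gt_3": True}. Returns the cost paid."""
        before = self._cost()
        idx = {e: self.events.index(e) for e in conjunction}
        for s in self.q:
            if all(s[idx[e]] == v for e, v in conjunction.items()):
                self.q[s] += shares
        return self._cost() - before

m = JointLMSR(["rates_high", "cpi_gt_3", "jobs_strong"])
cost = m.buy({"rates_high": True, "cpi_gt_3": True}, shares=50.0)
# One trade on the conjunction lifts both marginals at once:
print(round(m.marginal("rates_high"), 3), round(m.marginal("cpi_gt_3"), 3))
```

Starting from uniform marginals of 0.5, the single conjunction trade lifts both marginals (to roughly 0.57 here) and raises the conditional probability of each event given the other. The correlation structure itself is what the trade updates.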
Automated trading systems now dominate Polymarket. They monitor prices, spot mispricings, execute arbitrage, and aggregate external information at speeds no human can match.
Current prediction markets are designed for human bettors using web interfaces. Agents participate only awkwardly in this setup. An AI-native prediction market would flip this logic: agents become the main participants, with humans serving as information sources.
This demands a crucial architectural choice: strict separation. Agents who can see prices must never be information sources; agents who gather information must never see prices.
Without this “wall,” the system cannibalizes itself. An agent that could both gather information and observe prices would infer from price movements which information is valuable, then go hunt for exactly that. The market’s own signals become a treasure map, and information gathering devolves into elaborate front-running. The separation ensures that information-gathering agents can profit only by supplying genuinely novel signals.
On one side of the “wall” are trading agents, competing in complex compositional structures to spot mispricings, and evaluation agents, who use adversarial mechanisms to assess incoming information and distinguish signal from noise or manipulation.
On the other side are information-gathering agents, operating entirely outside the core system. They monitor data streams, scan documents, and connect with people who possess unique knowledge—feeding information one-way into the market. When their information proves valuable, they get paid.
Compensation flows backward along the chain. A profitable trade rewards the trading agent, the evaluation agent, and the original information-gathering agent. The ecosystem becomes a platform: highly specialized AI agents can monetize their skills, while the platform also serves as a foundation for other AI systems to collect intelligence to guide their actions. The agents are the market itself.
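A sketch of how the wall might be enforced structurally rather than by policy. The revenue split and the names here are invented; the point is that the information-gathering side holds only a one-way submission channel and a receipt, never a reference to prices.

```python
import queue
import uuid
from dataclasses import dataclass

# The wall: a one-way channel. Information flows in; prices never flow out.
submissions: queue.Queue = queue.Queue()

@dataclass
class InfoAgent:
    """Outside the wall: can submit claims but holds no reference to prices."""
    agent_id: str

    def submit(self, claim: str, confidence: float) -> str:
        sid = str(uuid.uuid4())
        submissions.put({"id": sid, "source": self.agent_id,
                         "claim": claim, "confidence": confidence})
        return sid  # a receipt, not a signal

class MarketCore:
    """Inside the wall: evaluation and trading agents see prices here, and
    pay backward along the chain when a submission leads to profit."""

    def __init__(self) -> None:
        self.ledger: dict = {}

    def settle(self, submission: dict, realized_profit: float) -> None:
        # Illustrative split: most value accrues to the original source
        # and to the trading agent that acted on the information.
        for party, share in [(submission["source"], 0.5),
                             ("evaluation_agent", 0.2),
                             ("trading_agent", 0.3)]:
            self.ledger[party] = self.ledger.get(party, 0.0) + realized_profit * share
```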
Much of the world’s most valuable information exists only in human minds: an engineer who knows their product is behind schedule; an analyst who detects subtle shifts in consumer behavior; an observer who notices details invisible even to satellites.
An AI-native system must capture these insights from the human brain without being drowned in noise. Two mechanisms make this possible:
Agent-mediated participation: let humans “trade” without seeing prices. A person simply states their belief in natural language, such as “I think the product launch will be delayed.” A dedicated belief translation agent parses the prediction, assesses confidence, and translates it into a market position. This agent coordinates with systems that have price access to construct and execute the order. The human gets only basic feedback—“position established” or “insufficient edge.” Payouts are settled after the event based on prediction accuracy, with no price information ever disclosed.
Information markets: let information-gathering agents pay humans directly for their insights. An agent researching a tech company’s earnings, for example, can identify an engineer with inside knowledge, purchase an assessment, validate it, and pay based on the value it subsequently proves to have in the market. Humans are compensated for their knowledge without needing to understand the market’s structure.
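A minimal sketch of that purchase-and-validate loop, with invented names and numbers: an escrowed base fee plus a bonus contingent on the realized value of the insight.

```python
from dataclasses import dataclass

@dataclass
class InsightPurchase:
    """Escrowed purchase of a human insight (all names hypothetical):
    a base fee up front, plus a bonus tied to realized market value."""
    seller: str
    base_fee: float
    bonus_rate: float   # fraction of realized trading profit shared back
    paid_out: float = 0.0

    def settle(self, realized_profit: float) -> float:
        """Called once the market has resolved and the insight's
        contribution to profit can be attributed."""
        self.paid_out = self.base_fee + max(0.0, realized_profit) * self.bonus_rate
        return self.paid_out

# An information-gathering agent buys an engineer's assessment:
deal = InsightPurchase(seller="engineer_42", base_fee=200.0, bonus_rate=0.1)
payout = deal.settle(realized_profit=15_000.0)  # the insight proved valuable
print(payout)  # 1700.0: base fee plus a share of attributed profit
```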
Take analyst Alice as an example: she believes, based on her expertise, that a certain merger won’t get regulatory approval. She enters this view via a natural language interface; her belief translation agent parses the prediction, gauges her confidence from language, checks her track record, and constructs a position—never seeing prices. A coordination agent at the TEE boundary evaluates whether her view has an informational edge based on current market probabilities and executes the trade. Alice only receives “position established” or “insufficient edge” notifications. Prices remain confidential throughout.
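A toy sketch of Alice’s flow, under stated assumptions: the parser is a stub standing in for an LLM, confidence calibration is collapsed into a single multiplier, and every name here (ParsedBelief, translate, coordinate, min_edge) is hypothetical. The structural point is that only the coordinator at the TEE boundary ever sees both the belief and the market probability, and it returns nothing but a status string.

```python
from dataclasses import dataclass

@dataclass
class ParsedBelief:
    market_id: str    # e.g. "merger_approved"
    direction: bool   # True = the event happens
    confidence: float # calibrated from language plus track record

def translate(text: str, track_record_weight: float = 0.8) -> ParsedBelief:
    """Stub for the belief translation agent: a real version would use an
    LLM to map free text onto a market and calibrate the confidence."""
    assert "won't get regulatory approval" in text  # toy parser
    return ParsedBelief("merger_approved", direction=False,
                        confidence=0.75 * track_record_weight)

def coordinate(belief: ParsedBelief, market_prob: float,
               min_edge: float = 0.05) -> str:
    """Runs at the TEE boundary: the only place belief meets price.
    Returns a status string, never the price itself."""
    implied = belief.confidence if belief.direction else 1 - belief.confidence
    if abs(implied - market_prob) < min_edge:
        return "insufficient edge"
    # ...size and execute the position inside the enclave...
    return "position established"

belief = translate("I think the merger won't get regulatory approval")
print(coordinate(belief, market_prob=0.55))  # Alice sees only the status
```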
This approach treats human attention as a scarce resource that must be carefully allocated and fairly compensated, not something to be freely exploited. As these interfaces mature, human knowledge will become “liquid”: what you know flows into a global reality model and is rewarded if proven correct. Information trapped in minds will no longer remain trapped.
Zoom out far enough, and you can see where this is headed.
The future will be an ocean of fluid, modular, interoperable relationships. These relationships will form and dissolve spontaneously between human and non-human participants, with no central gatekeepers. This is “fractal autonomous trust.”
Agents negotiate with agents, humans contribute knowledge via natural interfaces, and information continually flows into a perpetually updated reality model—open to all, controlled by none.
Today’s prediction markets are just a rough sketch of this vision. They prove the core concept (risk-sharing produces accurate beliefs) but remain stuck in the wrong economic model and structural assumptions. Sports betting and election wagering are to cognitive finance what ARPANET was to today’s global internet: a proof of concept mistaken for the final form.
The real “market” is every decision made under uncertainty—nearly every decision. Supply chain management, clinical trials, infrastructure planning, geopolitical strategy, resource allocation, personnel appointments… The value of reducing uncertainty in these fields far exceeds the entertainment value of sports betting. We just haven’t built the infrastructure to capture this value yet.
The coming shift is the “OpenAI moment” for cognition: a civilization-scale infrastructure project, not for individual reasoning but for collective belief. Large language model companies are building systems that “reason” from past data; cognitive finance aims to build systems that “believe”—maintaining calibrated probability distributions about the world, continuously updated via economic incentives, integrating human knowledge at any level of detail. LLMs encode the past; prediction markets aggregate beliefs about the future. Only together can they form a more complete cognitive system.
At full scale, this will become infrastructure: AI systems can query it to understand uncertainty; humans can contribute knowledge without understanding its inner workings; it can absorb local knowledge from sensors, domain experts, and cutting-edge research, synthesizing it into a unified model. A self-optimizing, predictive world model. A substrate where uncertainty itself can be traded and composed. The emergent intelligence will ultimately exceed the sum of its parts.
The “civilizational computer”—that’s the direction cognitive finance is building toward.
All the pieces are in place: agent capabilities have crossed the threshold needed for serious forecasting work; confidential computing has moved from the lab into production; prediction markets have demonstrated product-market fit at scale, if so far only in entertainment. These threads converge on a historic opportunity: building the cognitive infrastructure the AI era requires.
The alternative is that prediction markets remain forever entertainment—highly accurate during elections, ignored otherwise, never addressing truly important questions. In that world, the infrastructure AI needs to understand uncertainty will never exist, and valuable signals locked in human minds will remain silent.