■ Stealth mode — Selective conversations welcome ■
Stranded power.
Open-weight models.
The infrastructure for
what comes next.
The world generates vast quantities of electricity that never find a market. Overbuilt hydroelectric systems. Curtailed solar and wind. Stranded gas at remote production sites. Industrial generators running idle. Electrons produced, unsold, wasted — because the cost of transmitting them to demand exceeds their value at the destination.
Simultaneously, the global economy is building an infrastructure of autonomous AI agents that will generate inference demand with no historical precedent. Agents that plan, act, and call other agents. Agent hierarchies decomposing tasks across thousands of model queries. A single workflow triggering hundreds of inference requests, none of them requiring human-speed response times — all of them requiring cheap, reliable compute at scale.
Cathedral is the infrastructure that connects these two realities. We deploy modular, containerised AI inference data centres at the source of stranded power — no transmission infrastructure, no grid dependency, no multi-year construction cycle. We sell the resulting compute capacity to frontier AI laboratories, enterprise AI platforms, and the expanding agentic ecosystem at a structural price advantage that no grid-connected competitor can replicate.
The model is open. The compute is scarce. We own the compute — at near-zero fuel cost.
This is not a niche play. The AI inference market is projected to grow from $106 billion in 2025 to $255 billion by 2030. Cathedral is being built to serve the infrastructure layer of that growth — from the bottom up, at structural cost, at global scale.
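The headline figures above imply a compound annual growth rate of roughly 19%. A quick sanity check, using only the two projected market sizes quoted in this section:

```python
# Back-of-envelope CAGR implied by the projected market sizes
# quoted above ($106B in 2025 -> $255B in 2030).
market_2025 = 106e9  # USD
market_2030 = 255e9  # USD
years = 2030 - 2025

cagr = (market_2030 / market_2025) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 19% per year
```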
The first generation of AI was a human typing a prompt. That era is closing. The next generation is fully autonomous agent hierarchies — orchestrators that plan and delegate, sub-agents that execute and call specialist models, model layers that process and return. A single user intent may propagate through fifty or five hundred inference requests before resolution.
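The fan-out described above is easy to make concrete. In this hypothetical sketch (branching factor and depth are illustrative assumptions, not measured figures), an orchestrator delegates each task to a handful of sub-agents, each of which issues its own model queries, and a single intent multiplies into the hundreds of inference requests:

```python
# Illustrative sketch of agentic fan-out: one user intent propagating
# through a hierarchy of agents, each level issuing its own model calls.
# The branching factor and depth are hypothetical, chosen for illustration.

def inference_calls(branching: int, depth: int) -> int:
    """Total model queries for one intent: this agent's own call,
    plus every call made by the sub-agents it delegates to."""
    if depth == 0:
        return 1  # leaf agent: a single model query
    # one planning call, then each sub-agent recurses one level down
    return 1 + branching * inference_calls(branching, depth - 1)

# A modest hierarchy already lands in the "hundreds" range:
print(inference_calls(branching=5, depth=3))  # 156 inference requests
```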
Frontier labs are explicitly designing for this future. Enterprise software platforms will embed agents into every workflow by default. The token volume generated by agent-to-agent interaction will dwarf anything generated by human-initiated queries — and, unlike consumer traffic, it will be structurally insensitive to latency.
An orchestrator agent composing a research report does not need a 30ms response. It needs sustained throughput at minimum cost per token — exactly what Cathedral's stranded-power infrastructure provides, from locations that would never qualify as prime real estate for consumer-facing AI.
This is Cathedral's primary market. Not the human at the keyboard. The agent at the API.
Cathedral is being built as a new category of infrastructure company — an inference neocloud. Not a hyperscaler. Not a colocation facility. Not a GPU rental marketplace. A dedicated, open-weight AI inference platform built on stranded power, designed from first principles for the economics of agentic token distribution.
Cathedral will deploy the leading open-weight frontier models — DeepSeek V4, Llama 4, GLM-5, Qwen 3.5, MiniMax M2.7 — via OpenAI-compatible APIs. No proprietary model dependencies. No per-token royalties to model owners. The open-weight revolution has made frontier-class intelligence freely deployable. Cathedral will be the infrastructure that runs it at structural cost.
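"OpenAI-compatible" means clients speak the standard chat-completions wire format, and only the base URL changes. A minimal sketch of the request body such an endpoint accepts — the endpoint URL and model identifier below are illustrative placeholders, not a published Cathedral API:

```python
import json

# Sketch of an OpenAI-compatible chat-completions request body.
# The base URL and model name are hypothetical placeholders; any client
# built for the OpenAI API can target such an endpoint simply by
# pointing it at a different base URL.
BASE_URL = "https://inference.example.com/v1"  # placeholder endpoint

payload = {
    "model": "deepseek-v4",  # an open-weight model, named by its served id
    "messages": [
        {"role": "system", "content": "You are a research sub-agent."},
        {"role": "user", "content": "Summarise the attached findings."},
    ],
    "max_tokens": 512,
    "temperature": 0.2,
}

# The request is a plain JSON POST to {BASE_URL}/chat/completions.
print(json.dumps(payload, indent=2))
```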
Frontier AI laboratories — OpenAI, Anthropic, Google DeepMind — will need distributed, overflow inference capacity as they scale globally. Cathedral is being designed as the structural partner for that demand: architecturally neutral, priced by the token, deployable rapidly into geographies and markets that traditional DC infrastructure cannot reach economically.
Agentic inference is not a latency-sensitive workload. Agent-to-agent calls, background reasoning, multi-step planning, batch synthesis — none of these require the sub-100ms response times that consumer applications demand. Cathedral's sites may be remote. That is structurally irrelevant. We price on tokens, not milliseconds. The market we serve is built exactly for that.
Cathedral will deploy containerised compute infrastructure at the source of stranded power in weeks — not months, not years. No data centre construction programme. No civil engineering at scale. A site agreement, pad preparation, and container placement. The system arrives fully integrated: power conditioning, networking, compute. Redeployable when the asset moves.
Cathedral does not have a preferred fuel. It has a preferred economics: electricity with no buyer at the point of generation. Any stranded source qualifies. The compute infrastructure is identical. The margin structure is the same.
Run-of-river and reservoir hydro in remote regions frequently generates surplus beyond what local demand can absorb and transmission economics cannot justify. Cathedral will co-locate at the generation source — converting curtailed output into inference revenue for the operator with zero transmission infrastructure.
Renewable projects in nascent-grid regions routinely curtail output — generating electrons at cost with no buyer. Cathedral's compute infrastructure absorbs variable generation as a flexible, dispatchable load. We operate at variable power levels, capturing output that would otherwise be clipped or spilled.
Associated gas at remote production sites — where pipeline infrastructure does not exist and will not be built — is conditioned and used to generate power on-pad. Near-zero fuel cost. Significant carbon abatement relative to open flaring. The operator gains revenue from a waste stream; Cathedral gains the cheapest electrons on the market.
Mines, processing plants, refineries, and LNG facilities operate backup and standby generation that sits idle for significant portions of operating time. Cathedral deploys alongside existing generation assets — converting idle capacity into inference revenue with no new power plant required.
Markets with high renewable penetration increasingly experience negative or near-zero electricity pricing during off-peak hours. Cathedral modules will function as flexible industrial load — purchasing power at structural lows and monetising it as inference compute across the demand cycle.
The Cathedral infrastructure layer is power-source agnostic. If the price per kWh at the point of use is structurally low, we will build there.
Token sales to frontier AI labs, enterprise AI platforms, and agentic AI operators via OpenAI-compatible APIs. Open-weight models pre-deployed. Priced per million tokens — structurally below any grid-connected alternative. Volume contracts available for anchor customers requiring sustained throughput.
Where Cathedral's power source generates verifiable GHG abatement, the installation produces high-integrity Verified Carbon Standard credits. Continuous MRV instrumentation. Tokenised 1:1 on-chain against Verra registry serial numbers. Institutional-grade, double-counting-impossible.
During inference demand troughs, stranded power assets can be directed to alternative digital compute workloads. The architecture is modular and workload-switchable — ensuring no generated watt-hour is wasted and revenue continues regardless of the inference demand cycle.
Surplus generation capacity is sold to adjacent facilities or exported to regional grids where connection is economical. The microgrid architecture enables seamless real-time switching between inference, alternative compute workloads, and power export — maximising the revenue yield of every kWh generated.
The open-weight revolution has permanently restructured AI economics. DeepSeek V4 — a one-trillion-parameter frontier model — is freely deployable. Llama 4, GLM-5, Qwen 3.5, MiniMax M2.7: open, available, running at frontier capability. The model is no longer the scarce resource.
The compute to run it is. Cathedral will own that compute — at the lowest structural cost in the market — and will sell access to it by the token. No model royalties. No vendor lock-in. No proprietary dependency. Pure infrastructure economics.
Frontier labs will also need Cathedral as a distribution partner. When OpenAI scales into a new geography, when Anthropic needs overflow capacity, when a sovereign AI programme requires domestic inference infrastructure — Cathedral will be the structural alternative to building their own remote facility.
Where Cathedral's power source generates verifiable GHG abatement, each installation will produce a continuous stream of high-integrity carbon credits under Verra's Verified Carbon Standard.
Continuous instrumentation will achieve ±3–5% measurement accuracy — the threshold mandated by US EPA regulations and increasingly required by corporate buyers. Each site establishes a baseline emissions scenario and documents verified abatement against it.
Upon issuance, credits will be tokenised 1:1 on-chain. Each token carries a Verra registry serial number. Retirement is instant. Double-counting is structurally impossible. Corporate buyers retire on-chain with immediate registry reflection.
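The double-counting guarantee described above reduces to a simple invariant: one token per registry serial number, and a serial can be retired at most once. A minimal sketch of that bookkeeping — class, field, and serial names are illustrative, not the production ledger:

```python
# Minimal sketch of 1:1 tokenisation against registry serial numbers.
# Names are illustrative; the invariant is what matters: each serial
# mints exactly one token, and retirement is one-way.

class CreditLedger:
    def __init__(self):
        self._tokens = {}  # serial number -> retired flag

    def mint(self, serial: str) -> None:
        if serial in self._tokens:
            raise ValueError(f"serial {serial} already tokenised")  # no double-minting
        self._tokens[serial] = False  # minted, not yet retired

    def retire(self, serial: str) -> None:
        if self._tokens.get(serial) is not False:
            raise ValueError(f"serial {serial} not retirable")  # unknown or already retired
        self._tokens[serial] = True

    def is_retired(self, serial: str) -> bool:
        return self._tokens.get(serial, False)

ledger = CreditLedger()
ledger.mint("VCS-0001-2026-000123")   # hypothetical serial for illustration
ledger.retire("VCS-0001-2026-000123")
print(ledger.is_retired("VCS-0001-2026-000123"))  # True
```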
Cathedral is being structured to be among the first platforms using Verra's forthcoming dedicated flare gas methodology — targeting approval in 2026.