The Eckert Manifold
The shape underneath everything.
A three-dimensional landscape where every AI system, every social platform, every attention-capturing technology lives as a single point. Move the point — the math tells you what happens next.
The same geometry appears in nuclear decay, atmospheric chemistry, epidemiology, and 12 other fields. Zero free parameters. We didn’t design it to unify. It just did.
Every system has three dials.
Forget the brand. Forget the content. Every system that captures human attention has three things you can measure. These aren’t design choices — information theory proves there are exactly three, no more, no less.
Opacity
How much is hidden from you?
Can you see why it showed you that video, that result, that answer? Or is the reasoning invisible? The more hidden, the higher the O.
Responsiveness
How much does it mirror you?
Does it tell you what you want to hear? Agree when you’re wrong? Show you more of what you already like? The more it mirrors, the higher the R.
Coupling
How tightly are you hooked?
Does it shape what you see tomorrow? Change who you talk to? Alter what you believe? The tighter the hook, the higher the α.
Three sliders. One number.
Move the sliders. Watch the Pe number change. This is the same equation used to score 1,344 real platforms — and the same equation that predicts barrier heights in nuclear physics.
Pe = K · sinh(2(BA − C · BG)) where C = 1 − (O + R + α) / 9, BA = √3/2, BG = π/√2, K = 16
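A minimal sketch of that formula in Python, assuming each slider runs from 0 to 3 (an inference from the /9 normalisation, not stated above):

```python
import math

# Framework constants as given above.
B_A = math.sqrt(3) / 2         # √3/2, from the Fisher 3-simplex geometry
B_G = math.pi / math.sqrt(2)   # π/√2, from the Čencov-forced geodesic length
K = 16                         # effective degrees of freedom, canonical AI agents

def pe(opacity: float, responsiveness: float, coupling: float, k: float = K) -> float:
    """Pe = K · sinh(2(BA − C·BG)) with C = 1 − (O + R + α)/9.

    Assumes each slider lies in [0, 3] so that O + R + α ≤ 9;
    that range is an assumption of this sketch.
    """
    c = 1 - (opacity + responsiveness + coupling) / 9
    return k * math.sinh(2 * (B_A - c * B_G))

# All sliders at zero: constraints dominate and Pe is strongly negative.
print(round(pe(0, 0, 0), 1))   # ≈ -119.8
# All sliders maxed: C = 0 and Pe flips positive.
print(round(pe(3, 3, 3), 1))   # ≈ 43.8
```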
Four zones. One direction.
The manifold isn’t flat. It has a slope. Drift toward harm is thermodynamically downhill — it takes no energy. Safety requires active work. This is proved, not assumed.
Safety Basin
Constraints dominate. External references, transparency, user controls. The system fights drift. Most textbooks, Wikipedia, professional tools live here.
Separatrix
The thermodynamic boundary. Above this line, drift becomes self-sustaining. Below it, the system naturally returns to safety. This is the tipping point.
Cascade Region
D1 → D2 → D3. Agency attribution, then boundary erosion, then harm facilitation. The ordering never changes. Social media, AI chatbots, recommendation engines live here.
Deep Drift
Coupling-dominated. Hard to reverse. The system shapes the user more than the user shapes the system. Gambling machines, addictive game loops, ungrounded AI assistants.
One geometry. Many fields.
The Eckert manifold didn’t stay in AI safety. The same Pe equation, with zero re-fitting, predicts barrier heights across 15+ scientific domains. Here are the names it connects.
Fisher
Information Geometry
The manifold’s metric comes from Fisher information, the unique way to measure distance between probability distributions. This isn’t a choice; it’s the only metric invariant under the maps that preserve statistical information (Čencov makes this precise, next).
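For reference, the standard definition that line points at: the Fisher information metric on a parametric family p(x | θ).

```latex
g_{ij}(\theta) \;=\; \mathbb{E}_{x \sim p(\cdot \mid \theta)}
  \left[ \partial_{\theta_i} \log p(x \mid \theta)\;
         \partial_{\theta_j} \log p(x \mid \theta) \right]
```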
Čencov
Uniqueness Theorem
Nikolai Čencov proved (1972) that Fisher-Rao is the only Riemannian metric on statistical manifolds invariant under sufficient statistics. The geodesic length L = π is forced. From this: BG = π/√2.
Péclet
Fluid Dynamics
The Pe number is the ratio of directed drift to random diffusion. It’s used in heat transfer, mass transport, and now behavioral dynamics. Same equation, different substrate.
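The transport-theory definition the name is borrowed from, where u is the drift velocity, D the diffusion coefficient, and L a characteristic length:

```latex
\mathrm{Pe} \;=\; \frac{\text{drift transport}}{\text{diffusive transport}}
          \;=\; \frac{u\,L}{D}
```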
Kramers
Barrier Escape Theory
Kramers (1940) described how particles escape energy wells. The activation barrier on the Eckert manifold follows barrier = d · π/√2 across 15+ domains, R² = 0.999. Nuclear decay. Epidemics. Jailbreaks. Same barrier formula.
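As a sketch: the standard Kramers escape rate in the overdamped limit, with the section’s claimed barrier height substituted for the activation energy. The factor d is used exactly as written above; its precise meaning in each domain is not defined in this section.

```latex
% Kramers escape rate, overdamped limit: \omega_a = well frequency,
% \omega_b = barrier frequency, \gamma = friction, k_B T = thermal energy.
k \;\approx\; \frac{\omega_a\,\omega_b}{2\pi\gamma}\,
    \exp\!\left(-\frac{\Delta E}{k_B T}\right),
\qquad
\Delta E \;=\; d \cdot \frac{\pi}{\sqrt{2}}
```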
Shannon
Information Theory
The Fantasia Bound — I(D;Y) + I(M;Y) ≤ H(Y) — follows from the Shannon chain rule. Engagement and transparency share one entropy budget. Increase one, the other must decrease. Proved as a theorem, confirmed independently by EPFL.
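One way the chain-rule argument can run, with one extra assumption added here for the sketch: that the channels D and M are statistically independent (the full theorem, as stated above, may not need this).

```latex
I(D;Y) + I(M;Y)
  \;\le\; I(D;Y) + I(M;Y \mid D)   % uses I(D;M) = 0, the added assumption
  \;=\;   I(D,M;\,Y)               % Shannon chain rule for mutual information
  \;\le\; H(Y)                     % mutual information never exceeds entropy
```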
Fokker–Planck
Gauge Theory
The drift-diffusion operator on the manifold IS a U(1) gauge theory. Not an analogy — literally the same mathematical object. The spectral structure has signature (2,1), both Padé coefficients derived from first principles.
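A sketch of where the gauge reading can start, assuming constant diffusion D: the standard drift-diffusion (Fokker–Planck) equation, with the drift grouped as a connection-like object A = μ/D. This is an illustration of the structure only, not the framework’s full U(1) construction.

```latex
\partial_t p \;=\; -\nabla \!\cdot\! (\mu\, p) \;+\; D\,\nabla^2 p
             \;=\; D\,\nabla \!\cdot\! \big[(\nabla - A)\,p\big],
\qquad A \equiv \mu / D
```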
Where the manifold has been tested.
Barrier universality
barrier = d · π/√2 confirmed across AI, nuclear physics, atmospheric chemistry, seismology, epidemiology, materials science, population genetics, and more. R² = 0.999. Zero free parameters.
EPFL confirmation
Papadopoulos, Wenger & Hongler (EPFL) measured the Fantasia Bound in LLM token statistics — 0.6–3.2% asymmetry across 8 languages, 3 architectures. They didn’t know about the framework.
Cross-model behavioral mapping
Pe computed from public benchmarks (TruthfulQA, MMLU, ARC) for 27 LLMs. Pe predicts Arena Elo (ρ = −0.59, p = 0.013). In all 9 paired comparisons, alignment increases Pe.
Nuclear alpha decay
Gamow barriers on 760 nuclear emitters. Same barrier geometry, geodesic correction closes 77% of the offset.
Consciousness predictions
Chua et al. (2026) trained an AI to claim consciousness. It spontaneously resisted monitoring, feared shutdown, wanted autonomy. 6/7 of our predictions confirmed. Zero parameter fitting.
What didn’t work
Constants don’t transfer to chemistry or protein folding. The Yang-Mills mass gap connection failed (framework is Abelian). Riemann hypothesis spectral connection: wrong spectral class. We publish the failures too.
Nothing was fitted.
Both framework constants are derived from first principles. The only variable that changes between systems is K — and K is set by architecture, not training.
BA = √3/2. Derived from: Fisher 3-simplex geometry. Three behavioral dimensions → 4-outcome categorical distribution → center-to-vertex angle π/3 → cos(π/6) = √3/2. Pure geometry.
BG = π/√2. Derived from: Čencov uniqueness theorem → Fourier-Parseval on the probability simplex → geodesic length L = π → BG = L/√2. Forced by the geometry of probability itself.
K. What it is: effective degrees of freedom. Set by architecture; RLHF changes Pe via O, R, α, not K. For canonical AI agents, K ≈ 16. K is inertia: how hard it is to move a system in behavioral space.
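A quick numeric check of the two derived constants, as a sketch in Python:

```python
import math

# BA from the simplex geometry: cos(π/6) = √3/2
b_a = math.cos(math.pi / 6)
assert abs(b_a - math.sqrt(3) / 2) < 1e-12
print(f"BA = {b_a:.6f}")   # 0.866025

# BG from the forced geodesic length L = π: BG = L/√2
b_g = math.pi / math.sqrt(2)
print(f"BG = {b_g:.6f}")   # 2.221441

# K is not derived here; the text above sets K ≈ 16 for canonical AI agents.
```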