One field to render them all
Why a single scalar ψ(x,t) can replace the triangle mesh, the physics engine, the lightmap, and the voxel grid of a classical game engine — and what makes it simultaneously simpler and more correct.
The classical engine, drawn honestly
A modern game engine carries five independent representations of the world:
| Subsystem | Data structure | Queried by |
|---|---|---|
| Renderer | Triangle mesh + BVH + textures + lightmaps | Every pixel shader |
| Physics | Rigid bodies + collision shapes + broadphase | Every simulation step |
| Lighting | Irradiance probes + reflection captures | Every shaded surface |
| Destruction | Voxel grid + chunk tree | Every edit |
| AI navigation | NavMesh + flow fields | Every NPC step |
These are five approximations of the same thing: what exists, where, and how it behaves. Each was designed by a specialist team to be good at one kind of query. None of them agree with each other exactly. Enormous engineering effort goes into keeping them in sync as the world changes.
When you dig a hole in Minecraft, the chunk voxels update, the mesher rebuilds the triangle mesh, the lighting recomputes, the physics rebuilds its collision shape, and the server replicates the edit. Five updates for one logical event. The best engines hide this behind good tooling; the worst show it to you as a visible stutter.
Density Field Dynamics
DFD is the physics theory behind this engine. It says: space is flat ℝ³. A single scalar field ψ(x, t) permeates it. Gravity, optics, and the passage of time all emerge from that one field.
Two postulates:
P1 (light): Electromagnetic waves propagate along null geodesics of the optical metric
ds² = −c²dt²/n²(x, t) + dx², n(x, t) = e^ψ(x, t)
which means light curves through a medium of spatially-varying refractive index n = e^ψ.
P2 (matter): Test bodies move under the acceleration
a = (c²/2) ∇ψ
from the potential Φ = −c²ψ/2. This reproduces Newton's law in the weak field.
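Once ψ is sampled on a grid, P2 is a few lines of code. A minimal sketch, assuming a hypothetical dense-grid layout with spacing h (the function name and indexing convention are illustrative, not the engine's API):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def acceleration(psi, idx, h):
    """P2: a = (c^2/2) * grad(psi), via central differences.

    psi : 3-D array of field samples with grid spacing h
    idx : integer grid index (i, j, k), assumed at least one cell from the edge
    """
    i, j, k = idx
    grad = np.array([
        (psi[i + 1, j, k] - psi[i - 1, j, k]) / (2 * h),
        (psi[i, j + 1, k] - psi[i, j - 1, k]) / (2 * h),
        (psi[i, j, k + 1] - psi[i, j, k - 1]) / (2 * h),
    ])
    return 0.5 * C**2 * grad
```

Note the sign convention checks out: with Φ = −c²ψ/2, the usual a = −∇Φ gives exactly a = (c²/2)∇ψ.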
The field itself is governed by a nonlinear elliptic equation sourced by mass density ρ:
∇·[ μ(|∇ψ|/a★) ∇ψ ] = −(8πG/c²) (ρ − ρ̄)
where μ(x) = x/(1+x) is a crossover function whose form is derived from the topology of an internal manifold (S³ Chern–Simons quantization in the paper's microsector), not fit to data. The theory has zero free parameters once H₀ is measured. Every observation that has ever been made of gravity — from solar-system precision tests to binary pulsars to LIGO gravitational waves to Event Horizon Telescope black-hole shadows — is consistent with what DFD predicts in those regimes.
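The crossover function itself is one line, and its two limits are plain algebra: for x ≫ 1, μ → 1 and the equation reduces to the linear Poisson regime; for x ≪ 1, μ → x, the low-acceleration regime. A sketch (the function name is illustrative):

```python
def mu(x):
    """Crossover function mu(x) = x / (1 + x) from the field equation.

    x >> 1: mu -> 1  (linear Poisson regime)
    x << 1: mu -> x  (low-acceleration regime)
    """
    return x / (1.0 + x)
```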
What this means for an engine
Every physical query a game engine ever asks is a query about ψ:
- Render a pixel? March a ray through ψ via the eikonal equation. Light bends where ψ is strong. No meshes, no BVH.
- Apply gravity? Sample ∇ψ at the body's position. a = (c²/2)∇ψ. No collision solver, no constraint LCP.
- Is this point in solid matter? Sample ρ at that position. That's the hit test. No collision mesh.
- How fast do clocks tick here? Locally, dt_proper = dt/n = e^(−ψ)·dt. Gravitational redshift is free.
- How does light bend around a mass? The same eikonal integrator that renders the frame bends it correctly, by the same physics that produces the gravity you fall in. They can't disagree.
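The eikonal march in the first bullet can be sketched directly. With n = e^ψ we have ln n = ψ, and the standard ray equation d(n t̂)/ds = ∇n reduces, for a unit tangent t̂, to bending by the component of ∇ψ perpendicular to the ray. A dimension-agnostic toy sketch, assuming an analytic ∇ψ callback stands in for a field sample (names are illustrative):

```python
import numpy as np

def march_ray(pos, direction, grad_psi, ds, steps):
    """Eikonal ray march through n = e^psi.

    From d(n * dir)/ds = grad(n) with |dir| = 1:
        d(dir)/ds = (I - dir dir^T) grad(psi)
    i.e. only the perpendicular part of grad(psi) bends the ray.
    grad_psi : callable point -> grad(psi) (an assumption of this sketch)
    """
    d = direction / np.linalg.norm(direction)
    for _ in range(steps):
        g = grad_psi(pos)
        d = d + ds * (g - d * np.dot(d, g))  # perpendicular component only
        d /= np.linalg.norm(d)               # re-normalize the tangent
        pos = pos + ds * d
    return pos, d
```

Where ∇ψ vanishes the ray is a straight line; near a mass it bends toward the gradient, which is the lensing the renderer gets for free.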
The payoff is that all five data structures above collapse into two:
| Field | Meaning | Written by | Read by |
|---|---|---|---|
| ρ | The stuff. Where matter exists. | World gen, player edits | Solver (source), raymarch (hit test) |
| ψ | The field the stuff creates. | Jacobi solver, from ρ | Raymarch (lensing), agent physics (gravity) |
ρ is the only thing you ever write. ψ is always derived. No synchronization problem, because there is no second thing to stay in sync with the first.
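The write/derive discipline fits in a dozen lines. A sketch with a hypothetical FieldWorld class; `solve` is a placeholder for whatever field solver you plug in, and the invalidation-on-edit pattern is the whole synchronization story:

```python
class FieldWorld:
    """Sketch: rho is the only writable state; psi is always derived from it.

    solve : callable rho -> psi (hypothetical signature)
    """
    def __init__(self, rho, solve):
        self._rho = rho
        self._solve = solve
        self._psi = None          # derived, computed on demand

    def edit(self, idx, density):
        self._rho[idx] = density  # the one and only write path
        self._psi = None          # invalidate: psi must be re-derived

    @property
    def psi(self):
        if self._psi is None:
            self._psi = self._solve(self._rho)
        return self._psi
```

There is nothing to keep in sync because there is no second authoritative copy: every reader goes through the one derived ψ.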
The consistency you get for free
In a classical engine it is genuinely possible for a light to shine through a wall the physics treats as solid. The rendering lightmap was baked from one version of the scene; the physics collision mesh was exported from another; they drift. This is a famous class of bug in every engine.
In DFD, the renderer reads the same ψ that the physics reads. If light bends around an object, matter falls toward that object, because both behaviors are the same gradient of the same field. They cannot disagree. The consistency is enforced by the math, not by a careful build pipeline.
Theorems from the paper make this precise:
- Existence and uniqueness (Theorems III.1 and III.2): given ρ and boundary conditions, ψ exists and is unique. There's never ambiguity about "which field we're rendering."
- Causality (Theorem III.6): all characteristic speeds are ≤ c. An edit to ρ propagates into ψ at the speed of light. In the engine this manifests as a visible ripple — correct physics and a free aesthetic effect.
- Energy positivity (Theorem III.7): the energy functional is convex. This is why the Jacobi solver always converges. Classical rigid-body constraint solvers are non-convex LCPs; they can explode. DFD can't.
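In the linear regime (μ ≈ 1) the equation reduces to ∇²ψ = −(8πG/c²)(ρ − ρ̄), and a Jacobi sweep is just "replace ψ with the average of its six neighbours, minus h²/6 times the source." A toy sketch with periodic boundaries via np.roll; not the engine's actual solver:

```python
import numpy as np

G = 6.674e-11       # gravitational constant, SI
C = 299_792_458.0   # speed of light, m/s

def jacobi_solve(rho, h, iters=500):
    """Jacobi relaxation for the linear regime (mu ~ 1):
        laplacian(psi) = -(8 pi G / c^2) * (rho - rho_bar)
    Toy periodic boundaries; subtracting rho_bar keeps the periodic
    problem solvable, matching the (rho - rho_bar) source in the equation.
    """
    src = -(8.0 * np.pi * G / C**2) * (rho - rho.mean())
    psi = np.zeros_like(rho)
    for _ in range(iters):
        avg = sum(np.roll(psi, s, axis)
                  for axis in range(3) for s in (1, -1)) / 6.0
        psi = avg - (h * h / 6.0) * src
    return psi
```

Each sweep is a fixed-point step on a convex energy, which is the practical content of Theorem III.7: the iteration can only descend, so it cannot explode the way a non-convex constraint solver can.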
The scaling property
Because ψ falls off as 1/r outside a mass distribution (its analytic far-field solution), a game using ψ as its world representation has a genuinely unusual property: rendering and physics cost do not scale with world size.
Classical engines have to stream chunks, swap LODs, rebuild BVHs, reproject lightmaps. All of that exists because the representation is local-only — you can't query the world outside the chunks you've loaded. DFD has an analytic exterior. Beyond the locally-simulated bubble, ψ is given by
ψ_far(x) = Σᵢ 2 G Mᵢ / (c² |x − rᵢ|)
which is a one-line sum over the big masses in your world. The same formula that works for a 40-meter planet works for a 6,000-kilometer planet, works for a solar system, works for an open MMO world. The simulation bubble follows the player at constant computational cost; everything outside the bubble uses the analytic extension. There is no streaming problem because there is nothing to stream — just a formula and a few source positions.
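The analytic exterior really is a one-liner. A sketch, assuming a hypothetical (M, r) pair format for the far sources:

```python
import numpy as np

G = 6.674e-11       # gravitational constant, SI
C = 299_792_458.0   # speed of light, m/s

def psi_far(x, sources):
    """Analytic far field: psi(x) = sum_i 2 G M_i / (c^2 |x - r_i|).

    sources : iterable of (M_i, r_i) pairs, the big masses outside the bubble
    """
    x = np.asarray(x, dtype=float)
    return sum(2.0 * G * M / (C**2 * np.linalg.norm(x - np.asarray(r, dtype=float)))
               for M, r in sources)
```

At the bubble boundary this supplies the Dirichlet values the local solver needs, so the interior and the analytic exterior join into one continuous field.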
The smallest possible world
I'll close with the thing that makes this project feel right to me. Every game engine is, at some level, trying to simulate a universe. The question every engine architect answers is *what is the smallest representation I can get away with?* Triangle meshes are small — but you pay with collision-mesh drift, lightmap drift, BVH rebuild cost. Voxels are small — but you pay with seams, aliasing, and N³ memory. SDFs are smaller — but you pay with clipmap refill cost and no analytic far-field.
The actual smallest representation, the one that has been running the universe for 13.8 billion years, is one field. You can't go below one field without losing information. And when you're simulating on top of that field, everything the field already does becomes a feature of your engine for free. Gravity, time, light, inertia, matter interaction — all emergent, all consistent, all instantaneously available everywhere.
The next post is about what happens when you actually build the solver and check if the physics works. Spoiler: four tests, all pass.