Lily Labs

How we work

Stupid questions that turned out not to be

Every breakthrough started with a question that felt daft.

The ISST pipeline doesn't begin with clever answers. It begins with naive questions that get stress-tested. We publish them because they show how the work actually happens — and because the questions are often more interesting than the answers.

Two layers below: the founding questions that became the theory before any pipeline existed, and the questions the pipeline keeps asking as the work continues. Some founding questions split into multiple readings; when that happens we write down both, calculate both, and see which one survives. That's the whole point of stupid questions.

Layer 1 · the questions that started it

The three questions that became the theory

These came before any of the formal machinery. Naive prompts that, if you took them seriously enough to write down the equations they implied, didn't collapse on contact. The framework grew out of that not-collapsing.

  1. Founding question 1

    What if gravity is information?

    Sounded stupid because
    Gravity is a force that couples to mass-energy. Information is an abstract bookkeeping concept — accountants don't bend spacetime. Saying “information is gravity” sounded like a category error.
    What it became

    The action itself. The matter Lagrangian carries a multiplicative factor (1 + f), and f is literally the Kullback–Leibler divergence between a system's actual phase-space distribution and the maximally thermalised version of itself. Information content sources gravitational coupling — not as analogy, as the equation.

    When f = 0 the theory reduces to General Relativity exactly. When f ≠ 0 it's a one-parameter deformation of GR with a definite source for the parameter. Eight axioms grow from this single equation. Eighteen problems get tested against it.
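    Spelled out, the structure described here can be sketched as a schematic action. The normalisation of f, the coupling α, and the exact reference distribution (this page says "maximally thermalised" here and F_flat later) are the papers' to pin down, so read this as a guide to the shape, not the published equation:

    ```latex
    S = \int d^4x\,\sqrt{-g}\,\left[\frac{R}{16\pi G} + (1 + f)\,\mathcal{L}_m\right],
    \qquad
    f = \alpha\, D_{\mathrm{KL}}\!\left[\,F \,\big\|\, F_{\mathrm{ref}}\,\right]
    ```

    Setting f = 0 removes the deformation term by term and leaves the Einstein–Hilbert action, which is the "reduces to GR exactly" claim above.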

  2. Founding question 2

    What if the universe stores information at the lowest state needed for its usage?

    Sounded stupid because
    Physics talks about entropy increasing, not information being stored economically. The universe is profligate, not parsimonious. “Minimum information for usage” sounded like a teleological claim — the universe doesn't budget.
    What it became

    f_primordial. A baseline information content built up over four Standard Model freezeouts (electroweak, QCD, pion+muon decoupling, e⁺e⁻ annihilation), frozen there before BBN, never erased by ordinary processes. Universal floor.

    f_p ≈ 5.664 from the degree-of-freedom count — not tuned, derived. Apply (1 + f_p) to the baryonic density and you get the gravitational excess attributed to dark matter. The comparison closes to 0.18σ. No free parameters. The “minimum information for usage” intuition turned into a derivable number that matches observation.
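    The closure claim is easy to arithmetic-check, though not to error-check. The Ω values below are assumed Planck-like round numbers, not inputs taken from the ISST papers, and the 0.18σ figure depends on an error budget this sketch doesn't have:

    ```python
    # Toy closure check: does (1 + f_p) * Omega_b land near Omega_b + Omega_cdm?
    f_p = 5.664          # the derived degree-of-freedom count quoted above
    omega_b = 0.049      # baryonic density fraction (assumed Planck-like value)
    omega_cdm = 0.266    # cold dark matter density fraction (assumed Planck-like value)

    predicted = (1 + f_p) * omega_b    # gravitating fraction under ISST
    observed = omega_b + omega_cdm     # fraction usually attributed to baryons + DM

    print(f"predicted = {predicted:.3f}, observed = {observed:.3f}")
    print(f"relative difference = {abs(predicted - observed) / observed:.1%}")
    ```

    The two numbers land within a few percent of each other on these round inputs; whether that is 0.18σ or 2σ depends entirely on the uncertainties, which is why the check above is a sanity test, not a reproduction of the result.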

  3. Founding question 3

    What if processing rate tracks change rate? No changes, no processing. Slow changes, slow processing. Fast changes, fast processing.

    Sounded stupid because
    Time is a coordinate, not a metabolic rate. Voids and walls expand at the same cosmological rate by assumption — the FLRW background is uniform by construction. Saying voids and walls run at different clock rates because they're doing different amounts of computation sounds like a category error twice over: physics doesn't compute, and clocks don't care what's nearby.
    What it became

    The question forks into two readings. We put both in, calculate both, see which one survives. That's the whole point of stupid questions.

    Reading A · processing rate ∝ change rate

    Voids and walls accumulate proper time at different rates because their gravitational potentials differ. This is Wiltshire's timescape cosmology (2007, 2013) and it slots straight onto the bare ISST background. The bare expansion does not accelerate — what we measure as cosmic acceleration is a clock-rate contrast between voids (where photons spend most of their journey) and walls (where the bookkeeping happens).

    Result: dark energy and Λ disappear from the action by construction. The cosmological-constant problem disappears with them. Reading A passed.

    Reading B · time itself emerges from processing demand

    The stronger claim. Not just that gravitational potential modulates clock rate, but that proper time literally emerges from the rate at which information needs to be reconciled — no change, no clock at all. A region of completely static phase space wouldn't accrue time, only relations between events would.

    Status: untested. Reading B predicts a residual lapse contrast that scales with structural-complexity differential (Δf_s) on top of the gravitational-potential contrast Wiltshire already accounts for. No observation has tested for that residual directly. It hasn't been explored or tested yet — saying so here is the work.

Layer 2 · the questions the pipeline keeps asking

What the work itself asks, day to day

Once the framework existed, the pipeline started asking its own naive questions — about its targets, its applicability, even its own validity. Some became headline results. Some became open research threads. Some came back as a clean no.

  1. Entry 1

    Landed result

    What if we're comparing to the wrong target?

    Sounded stupid because
    the observed dark matter fraction is 0.27. Everyone compares to 0.27.
    What the pipeline found
    the comparison target should be the baryonic fraction under ISST's modified gravity, not the total matter fraction. The KL divergence between the actual thermal distribution and a reference Maxwell–Boltzmann gives f_p — a single number derived from first principles.
    What happened
    f_p matched the observed gravitational excess to 0.18σ. Eleven F-tasks deep. No free parameters.
  2. Entry 2

    Open thread

    Could our theory help with fusion?

    Sounded stupid because
    ISST is cosmology. Fusion is engineering.
    What the pipeline found
    Stellarator magnetic-confinement optimisation has an open gap between plasma-physics codes and magnetic-field optimisers. The KL divergence framework that drives ISST could bridge it.
    What happened
    Open research thread. Publication gap confirmed.
  3. Entry 3

    Fresh result

    Are they using the wrong figures for galaxy formation?

    Sounded stupid because
    the entire field uses the same standard-model timeline.
    What the pipeline found
    Under ISST's expansion history, galaxies at z = 13 had 633 million years to form, not 330. The “impossible” JWST galaxies aren't impossible — they had twice as long.
    What happened
    No existing publication applies timescape cosmology to the JWST tension. Fresh result.
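    The 330-million-year half of that comparison can be checked independently with a short flat-ΛCDM integration. H₀ and Ω_m below are assumed Planck-like values, and the 633 Myr figure is ISST's own expansion history, which this sketch does not derive:

    ```python
    import math

    # Age of a flat LCDM universe at redshift z, integrating dt = da / (a * H(a)).
    # H0 and Omega_m are assumed Planck-like values, not ISST's parameters.
    H0_KM_S_MPC = 67.4
    OMEGA_M = 0.315
    OMEGA_L = 1.0 - OMEGA_M
    MPC_KM = 3.0857e19    # kilometres per megaparsec
    MYR_S = 3.156e13      # seconds per megayear

    def age_at_z(z: float, steps: int = 100_000) -> float:
        """Age in Myr at redshift z for flat LCDM, by the midpoint rule."""
        a_end = 1.0 / (1.0 + z)
        da = a_end / steps
        integral = 0.0
        for i in range(steps):
            a = (i + 0.5) * da
            E = math.sqrt(OMEGA_M / a**3 + OMEGA_L)   # H(a) / H0
            integral += da / (a * E)
        hubble_time_myr = (MPC_KM / H0_KM_S_MPC) / MYR_S   # 1/H0 in Myr
        return integral * hubble_time_myr

    print(f"standard age at z = 13: {age_at_z(13.0):.0f} Myr")  # ~330 Myr
    ```

    The standard timeline really does give roughly 330 Myr at z = 13, so the entry's claim reduces to a single question about the expansion history, not about the arithmetic.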
  4. Entry 4

    Clean no

    Could neutrino mass just be the medium pushing it around?

    Sounded stupid because
    oscillation experiments measure real mass differences.
    What the pipeline found
    KamLAND and solar neutrino experiments give consistent mass-squared differences across vastly different environments. If the mass were purely environmental, they'd disagree. They don't.
    What happened
    Clean no. Parked. Asking the question and getting a clean no is still worth reporting — it means we trust the process enough to publish the failures too.
  5. Entry 5

    Untested

    What if information added to gravity is what makes things leave the quantum world?

    Sounded stupid because
    Standard QM handles measurement with decoherence and the Born rule. Gravity is far too weak to matter at quantum scales. Asking “does information cause collapse?” sounds like consciousness-causes-collapse with a costume change.
    What the pipeline found

    The closest neighbours in the literature all propose gravity-driven collapse — Penrose 1996 (mass superposition self-collapses on τ ∼ ℏ/E_G), Diósi 1989 (spontaneous localisation driven by gravitational noise), and Hossenfelder 2025 (gravity-as-selector via a residual between the quantum state and the field it would source if classical).

    Every one of them keys collapse to mass-energy or geometry. ISST opens a different door: f = α·D_KL[F‖F_flat] keys gravitational coupling to the information content of the phase-space distribution. So the question becomes well-posed inside ISST in a way it isn't inside Penrose–Diósi–Hossenfelder: as a quantum system entangles with its environment, structural information accumulates, and at some f-threshold the modified gravitational coupling could drive a Penrose-style reduction.

    Distinguishing experiment in principle: matter-wave interferometry on two nanospheres of identical mass and radius but different internal order — crystalline vs amorphous. Penrose-Diósi predict identical visibility decay. ISST predicts a difference fixed by the ΔD_KL between the two samples.

    What happened

    Untested. Two pieces of work would have to happen before this is a real prediction rather than a coherent intuition:

    1. A back-of-envelope on whether α at lab masses (~10⁶ amu) gives any measurable f-modification of G at all, or whether the effect is swamped by ordinary decoherence. ISST's α is currently calibrated on galactic and cosmological data — it might be far too small to matter in a tabletop interferometer.
    2. Extending f from classical phase-space distributions to density matrices. F is defined for classical statistical ensembles; for a coherent superposition you need the density-matrix generalisation before the question is even type-correct.

    Until those are done, this sits on the page as a stupid question we have yet to explore — distinct from existing literature, coherent with the framework, with a known step-one failure mode that could kill it before any experiment is built.
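The discriminant that proposed experiment keys on — a ΔD_KL between two otherwise-identical samples — is cheap to illustrate on toy discrete distributions. Everything below is invented for illustration and carries no physics:

```python
import math

def kl_divergence(p: list[float], q: list[float]) -> float:
    """D_KL(p || q) = sum_i p_i * ln(p_i / q_i), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

# Invented stand-ins: a sharply peaked ("crystalline") occupation distribution
# and a spread-out ("amorphous") one, each compared to a flat reference F_flat.
flat = [0.25, 0.25, 0.25, 0.25]
crystalline = [0.85, 0.05, 0.05, 0.05]
amorphous = [0.40, 0.30, 0.20, 0.10]

d_cryst = kl_divergence(crystalline, flat)
d_amorp = kl_divergence(amorphous, flat)
print(f"D_KL(crystalline || flat) = {d_cryst:.3f} nats")
print(f"D_KL(amorphous  || flat) = {d_amorp:.3f} nats")
print(f"Delta D_KL = {d_cryst - d_amorp:.3f} nats")
```

The more internally ordered sample sits further from the flat reference, so ΔD_KL is nonzero even at identical mass and radius — which is exactly the handle the crystalline-vs-amorphous interferometry comparison would pull on, if step one above survives.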

Why we publish the failures

A pipeline that only ever produces “yes” isn't a pipeline — it's a confirmation engine. Several entries above did not survive contact with the evidence; one founding question split in half and only the easier reading passed; one is sitting on the page as a question we have yet to explore. We list the failures, the open halves, and the untested intuitions anyway, because the discipline that makes the “yes” results trustworthy is the same discipline that produces the “no” results — and the “not yet” results — publicly. If you can find a question on this page we should have killed faster, or one we're missing, that helps us.