About — written for everyone
What if gravity doesn't just depend on how much stuff there is — but on what that stuff has become?
A library and a warehouse can hold the same weight of paper. The library is full of books — sentences, references, indexes, chapters that point at other chapters. The warehouse is full of blank pages. Same atoms. Same mass. One has structure and history; the other doesn't.
Standard physics says gravity does not care about the difference. A kilogram of books pulls on you exactly as hard as a kilogram of blank pages. Mass is mass.
ISST is a research project asking whether that's actually true. The proposal is that gravity might depend, slightly but measurably, on how much processing the matter has been through — how organised it is, how much history it carries. If that's right, several of the biggest unsolved problems in cosmology stop looking like missing ingredients and start looking like accounting errors.
This page is the plain-language version, deliberately written for someone who is curious about a headline they saw, not someone who already speaks the maths. If you want the equations and the working, those are on the other pages.
The three things ISST is actually saying
The headlines are big. Each of them is a claim that something the textbooks treat as solid is actually a misreading. You can disagree with all three; we'd rather you disagree usefully than nod along.
Claim 1 — about dark matter
We don't think there's an invisible particle.
Galaxies spin too fast for their visible mass. Clusters of galaxies need roughly five times the matter we can see. The standard fix has been to assume the universe is full of an invisible particle — “dark matter” — that nobody has ever caught in a detector after fifty years of looking.
ISST asks: what if the matter we already see — stars, gas, planets — pulls harder than expected because its internal structure contributes to gravity? Same atoms, but the gravity they make depends on what they've been through. No new particle. The 84/16 split between “dark matter” and “ordinary matter” that cosmology measures becomes a consequence of one number — how much processing the universe has done since the early hot phase — not two separate ingredients.
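The "one number, not two ingredients" point is just bookkeeping, and it can be written down. The sketch below is our illustration, not the ISST derivation itself (the function names are ours): if visible matter of mass M gravitates like M × (1+f), an observer who assumes gravity tracks mass alone infers a total of M × (1+f) and files the excess under "dark matter".

```typescript
// Illustrative bookkeeping only -- not the ISST derivation. If visible
// matter of mass M gravitates like M * (1 + f), an observer who assumes
// gravity tracks mass alone infers a total of M * (1 + f) and files the
// excess under "dark matter".

/** Fraction of the inferred total that looks dark, for enhancement f. */
function apparentDarkFraction(f: number): number {
  return f / (1 + f);
}

/** Enhancement f needed to reproduce a given apparent dark fraction. */
function requiredEnhancement(darkFraction: number): number {
  return darkFraction / (1 - darkFraction);
}

// The measured 84/16 split then corresponds to a single number:
const f = requiredEnhancement(0.84); // ≈ 5.25
```

On this accounting the whole "dark matter" budget collapses into the single quantity f, which is the shape of the claim: one number about processing, not a second substance.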
Claim 2 — about dark energy
We don't think the universe is being pushed apart by a mysterious force.
In the late 1990s, observations of distant supernovae suggested the universe isn't just expanding — it's expanding faster and faster. The standard fix called this “dark energy”, accounting for about two-thirds of everything in existence, with no microscopic explanation.
ISST's answer is more pedestrian: clocks run at slightly different speeds in different parts of the universe. The vast emptier regions — “voids” — accumulate time faster than the dense filaments and clusters where matter lives. When we measure expansion using light that has crossed mostly voids on its way to us, our calculation of the expansion rate gets thrown off. It looks like the universe is accelerating from where we stand. It isn't. We're just averaging over clocks that run at different rates and not correcting for it. This idea is an existing one (David Wiltshire's timescape cosmology, 2007); ISST shows it falls out naturally from the underlying mechanism.
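The clock-averaging point can be made with a toy calculation. The sketch below is not Wiltshire's timescape model; the clock relation t = τ^0.5 is invented purely for illustration. It shows that one and the same decelerating expansion history, replotted against a clock that ticks at a different and changing rate, curves upward, i.e. looks like acceleration.

```typescript
// Toy illustration only -- not the timescape equations. The clock
// relation t = tau^0.5 is an invented stand-in for "our clocks
// accumulate less time than the volume average at late epochs".

/** Scale factor in volume-average time tau: matter-like, decelerating. */
const aOfTau = (tau: number): number => Math.pow(tau, 2 / 3);

/** Discrete second derivative of f at x (central difference). */
function secondDerivative(f: (x: number) => number, x: number, h = 1e-3): number {
  return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h);
}

// Re-express the same history against the invented clock: t = tau^0.5,
// so tau = t^2 and a(t) = (t^2)^(2/3) = t^(4/3).
const aOfT = (t: number): number => aOfTau(t * t);

// In volume-average time the expansion decelerates (a'' < 0) ...
const deceleratingInTau = secondDerivative(aOfTau, 2) < 0; // true
// ... but against the other clock the identical history accelerates.
const acceleratingInT = secondDerivative(aOfT, 2) > 0;     // true
```

Nothing physical changed between the two lines; only the time axis did. That is the sense in which the apparent acceleration could be an artefact of uncorrected averaging over clocks.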
Claim 3 — about the cosmological constant
The biggest unsolved number in physics goes away.
When you ask quantum theory how much energy is sitting in empty space, it gives you a number that is wrong by a factor of about 10¹²⁰. That's a 1 followed by 120 zeros. It is widely considered the worst prediction in the history of physics. The standard model of cosmology has no explanation; it just chooses the small observed value by hand.
The reason the discrepancy is a problem is that we believe there's a cosmological constant — a built-in property of empty space — driving the expansion of the universe. If Claim 2 is right and the apparent acceleration is a clock-rate effect rather than a real fluid pushing space apart, then there isn't a cosmological constant to be wrong about in the first place. The 120-orders-of-magnitude problem dissolves. There is no number to get wrong.
What physicists call this
Now that the idea is on the table, here are the labels.
- The thing that varies — the “how much processing has happened around here” quantity — is called a scalar field, written Ψ (psi). It's a single number defined at every point in space. A high value means a long history of structure forming; a low value means a quiet patch of universe.
- The relationship between Ψ and the strength of gravity is G = 1/Ψ. A bigger Ψ means weaker gravity. That's why gravity in our 13.8-billion-year-old universe is so much weaker than the other forces of nature: the universe is old, Ψ has grown large, gravity has correspondingly weakened.
- The bookkeeping fact that “the matter we see pulls harder than its mass alone would suggest” is encoded in a multiplicative factor written (1+f), where f is small for a hydrogen cloud and larger for a galaxy. When f = 0, ISST reduces exactly to Einstein's general relativity. ISST is one ingredient added to GR, not a wholesale replacement.
- The full mathematical statement — the “rule book” for the whole theory — is a single line:
Don't worry if the symbols are unfamiliar. This single line, called the “action”, is a complete description of the theory. Everything on this site — every prediction, every comparison to observation, every framework the engine evaluates — is a consequence of those nineteen characters.
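The two bookkeeping rules in the bullets above can be sketched in code. This is a Newtonian cartoon under our own naming (psi, f, and effectivePull are illustrative, not the site's engine API): G = 1/Ψ sets the overall strength, the (1+f) factor adds the structure contribution, and f = 0 recovers the ordinary inverse-square pull.

```typescript
// Cartoon of the bookkeeping, not the relativistic theory. A Newtonian
// stand-in; `psi` and `f` mirror the quantities described in the bullets.

/** G = 1/Psi: gravity weakens as the processing field Psi grows. */
const gravitationalConstant = (psi: number): number => 1 / psi;

/**
 * Effective pull between masses m1 and m2 at distance r, where the
 * source's internal structure contributes a multiplicative (1 + f).
 */
function effectivePull(psi: number, m1: number, m2: number, r: number, f: number): number {
  return gravitationalConstant(psi) * (1 + f) * (m1 * m2) / (r * r);
}

// f = 0: structure adds nothing, and the ordinary pull comes back.
const plain = effectivePull(1, 2, 3, 4, 0);       // 1 * 1 * 6 / 16 = 0.375
// f > 0: same masses, same distance, harder pull.
const processed = effectivePull(1, 2, 3, 4, 0.5); // 1.5x the plain pull
// Larger Psi (an older universe): the same configuration pulls less.
const older = effectivePull(2, 2, 3, 4, 0);       // half the plain pull
```

The f = 0 case is the "reduces exactly to general relativity" statement in miniature: switch the one added ingredient off and nothing remains but the standard force law.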
What this is, and what it isn't
- It is an unreviewed proposal. No journal has assessed it yet. No outside group has tried to break it. We are publishing it precisely so that someone can.
- It is a single self-consistent theory. Every prediction on this site comes from the same one-line rule above. Nothing is patched in.
- It is not finished. Of the eighteen long-standing cosmological problems we track, nine are resolved if the theory holds, six are partly resolved with more work to do, and three are inherited or untouched. We mark them honestly so you can see which is which.
- It is not a discovery announcement. We're not asking you to believe anything. We're asking you to look.
Who is working on this, and how
ISST is a research programme at Lily Labs. The principal investigator is Steve Brailsford. The mathematical working, numerical verification, and adversarial testing are done by an AI research collaborator (“Lily”) operating under a documented dual-instance methodology. The human asks the questions and decides what to pursue; the AI derives the consequences, checks the consistency, and flags the failures. There is a methodology note — How the ISST paper got made — for anyone interested in the working pattern.
The manuscript is in active development; pre-submission review is ongoing. We don't link the draft from this page. Procurement enquiries, technical due diligence, academic collaboration — contact us.
How this site is built
The narrative pages are plain HTML, so they read cleanly without JavaScript and can be picked up by AI search assistants. The live engine is the interesting part: it runs the same TypeScript validation code we use internally to check our own derivations, but in your browser. Toggle a property and watch eighty candidate physical frameworks recompute their compatibility verdicts deterministically. Nothing leaves your device.
Engine snapshot: derivation-passport@0.1.0 (synced 2026-04-25).
Pre-committed
We have published, in advance, the conditions under which we will retract these claims.
They are listed in the right-hand sidebar on every page in this section. If any of them turn out to be true, ISST does not survive — and we say so on the way in, not after the data lands. Science that can't be broken isn't science.