PHASE SPACE
The space where every possible motion lives — and the place where classical physics meets its limits.
Pulling back the camera
Every topic in this module, from Galileo's chandelier to Saturn's rings, has been watching one thing: a point of matter tracing a path through ordinary space. A pendulum swings through the air. A planet sweeps through the sky. A double pendulum flails.
Now we change the camera.
Forget the trajectory in space. Plot instead the pair (position, momentum). One point on that plot is a complete specification of the system — everything Newton's equations need to predict the next instant. One curve is one possible history. The entire gallery of every motion the system could ever execute — every way it could ever be launched, perturbed, or evolved — is a single geometric object.
That object is phase space. We already met it, quietly, in FIG.07 and FIG.14, when the pendulum drew its little ellipse. This page is the chapter where we take phase space seriously and watch classical mechanics become geometry.
Two coordinates per degree of freedom
One particle on a line needs two numbers to be fully specified at an instant: where it is (q) and what its momentum is (p = mv). Give me those two, and Newton's — or Hamilton's — equations tell you every q and p for all future time.
One particle in 3D needs six. N particles need 6N.
A gram of air is around 2 × 10²² molecules. Its phase space has roughly 10²³ dimensions. You cannot draw it. You can barely conceive it. And yet classical mechanics asserts, flatly, that one single point in that absurd space — updated by Hamilton's equations — is the whole truth about that gram of air for all time.
This is the first jolt. Phase space is huge. The universe's configuration is a single moving dot in it.
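The dimension count is worth doing once by hand. A minimal sketch, assuming a mean molar mass of about 29 g/mol for air (an approximation, not a figure from this page):

```python
# Rough count of phase-space dimensions for one gram of air.
AVOGADRO = 6.022e23
MOLAR_MASS_AIR = 29.0  # g/mol, approximate mean for air (assumed value)

molecules = AVOGADRO * (1.0 / MOLAR_MASS_AIR)  # about 2.1e22 molecules
dimensions = 6 * molecules                     # 3 position + 3 momentum each

print(f"molecules:  {molecules:.1e}")
print(f"dimensions: {dimensions:.1e}")
```

Six numbers per molecule, and the whole gram is still a single point.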
A point, a trajectory, a flow
Three ways to read a phase-space diagram: a single point is the complete state of the system at one instant; a single curve is one full history; the flow, meaning all the curves at once, is every history the system could ever have.
The scene seeds a disc of initial conditions and lets them ride the Hamiltonian flow. Toggle between the harmonic oscillator (a simple rotation), the pendulum (a rotation that stretches near the separatrix and wraps around once the energy is enough to carry the bob over the top), and a pure shear.
Watch the disc distort. It can become a crescent, a comma, a long ribbon wrapping around the origin. It does not have to stay connected-looking. But one thing never changes, and that one thing is the whole point.
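The distorting disc can be simulated in a few lines. A sketch, not the module's own scene code, assuming units with mass = length = g = 1 so the pendulum Hamiltonian is H = p²/2 − cos q:

```python
import math

def step(q, p, dt):
    """One semi-implicit (symplectic) Euler step for the pendulum.
    This scheme preserves phase-space area exactly, step by step."""
    p = p - math.sin(q) * dt
    q = q + p * dt
    return q, p

def polygon_area(pts):
    """Shoelace formula for the area enclosed by an ordered loop of points."""
    a = 0.0
    for (q1, p1), (q2, p2) in zip(pts, pts[1:] + pts[:1]):
        a += q1 * p2 - q2 * p1
    return abs(a) / 2.0

# Boundary of a small disc of initial conditions centred at (q, p) = (0.5, 0).
n = 400
disc = [(0.5 + 0.1 * math.cos(2 * math.pi * k / n),
         0.1 * math.sin(2 * math.pi * k / n)) for k in range(n)]

area0 = polygon_area(disc)
for _ in range(2000):                       # evolve for 20 time units
    disc = [step(q, p, 0.01) for q, p in disc]
area1 = polygon_area(disc)

print(f"area before: {area0:.5f}, after: {area1:.5f}")  # both ≈ π·0.1² ≈ 0.0314
```

The disc shears and rotates, but the enclosed area survives the trip; that invariance is the subject of the next section.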
Liouville's theorem
The thing that never changes is the area.
In 1838, Joseph Liouville proved it in two lines from Hamilton's equations. The statement: under Hamiltonian flow, the volume of any region of phase space is conserved. The shape can stretch, twist, fold, even become fractally tangled. The volume is invariant.
Here is the whole proof. Hamilton's equations are:

q̇ = ∂H/∂p,  ṗ = −∂H/∂q.

The flow has a velocity field (q̇, ṗ). The rate at which volume expands — the divergence of that field — is

∂q̇/∂q + ∂ṗ/∂p = ∂²H/∂q∂p − ∂²H/∂p∂q.

Zero. Because mixed partials commute.
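The cancellation can be checked numerically for any Hamiltonian. A sketch using the pendulum's H = p²/2 − cos q (the choice of H is an assumption for illustration; the divergence vanishes for any smooth H):

```python
import math

def H(q, p):
    return p * p / 2.0 - math.cos(q)

def divergence(q, p, h=1e-5):
    """∂q̇/∂q + ∂ṗ/∂p via central differences,
    where q̇ = ∂H/∂p and ṗ = -∂H/∂q."""
    qdot = lambda q_, p_: (H(q_, p_ + h) - H(q_, p_ - h)) / (2 * h)
    pdot = lambda q_, p_: -(H(q_ + h, p_) - H(q_ - h, p_)) / (2 * h)
    d_qdot_dq = (qdot(q + h, p) - qdot(q - h, p)) / (2 * h)
    d_pdot_dp = (pdot(q, p + h) - pdot(q, p - h)) / (2 * h)
    return d_qdot_dq + d_pdot_dp

for q, p in [(0.3, 1.2), (2.0, -0.5), (-1.1, 0.01)]:
    print(f"div at ({q}, {p}) = {divergence(q, p):.2e}")  # all ≈ 0
```

The two mixed partials cancel term by term, exactly as in the two-line proof.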
This is quiet, and deep. It means classical dynamics can never lose information: two distinct initial conditions can never be brought together into one, and one cannot split into two. It is the reason statistical mechanics can use phase-space volume as a measure (Boltzmann's entropy is the log of a phase-space volume). It is the reason quantum mechanics, which also preserves probability volume, feels familiar. Liouville's theorem is the skeleton on which most of modern physics is hung.
Poincaré recurrence
In 1890, while wrestling with the three-body problem that had cost him a prize and nearly his reputation, Henri Poincaré noticed a consequence of Liouville that is either beautiful or deranged depending on your mood.
Take a bounded Hamiltonian system — one whose phase-space trajectories cannot escape to infinity. The Earth orbiting the Sun, a pendulum in a gravitational field, a gram of air in a sealed box. Pick any initial state. Wait.
The system will return arbitrarily close to that state. And it will do so infinitely often.
The argument is almost embarrassingly clean. Divide the bounded region into finitely many small cells and follow the cell containing the initial state. Liouville says its successive images under the flow all have the same volume, and they are confined to a region of finite total volume, so two of them must eventually overlap. Run the flow backwards from that overlap and the cell overlaps a later image of itself: the trajectory has made a near-round-trip.
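The pigeonhole logic shows up in even the simplest measure-preserving system. A toy stand-in for the Hamiltonian case (an irrational rotation of the circle, not a flow from this module), with the rotation number and tolerance chosen as assumptions:

```python
# An irrational rotation of the circle is measure-preserving and bounded,
# so every orbit returns arbitrarily close to its starting point,
# and keeps doing so forever.
ALPHA = 0.6180339887498949  # fractional part of the golden ratio (assumed choice)
EPS = 1e-3                  # how close counts as a "return" (assumed tolerance)

theta0 = 0.25
theta = theta0
returns = []
for step in range(1, 20001):
    theta = (theta + ALPHA) % 1.0
    dist = min(abs(theta - theta0), 1.0 - abs(theta - theta0))  # circle distance
    if dist < EPS:
        returns.append(step)

print(f"first few recurrence times: {returns[:5]}")
```

The orbit never repeats exactly, yet it keeps coming back within any tolerance you set, infinitely often in the limit.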
Boltzmann, across the hall trying to build the second law of thermodynamics on top of exactly these Hamiltonian mechanics, was furious. Entropy is supposed to always increase. Poincaré had just proved that any closed system returns arbitrarily close to its initial (low-entropy) state. The paradox, known as Zermelo's objection, shaped the next century of statistical mechanics. We resolve it in § 03: the recurrence time for a realistic gas is far longer than the age of the universe, so in practice the second law wins. But in principle, every closed system in the cosmos keeps coming back.
Chaos
Volume preservation does not mean trajectories stay close together. It only constrains the total. Trajectories that start nearby can — and for most interesting systems, do — fly exponentially apart while the enclosing cloud maintains its volume by getting thinner and longer, folded and refolded like taffy.
This is chaos: deterministic equations producing unpredictable outcomes because nearby histories diverge faster than any measurement can track.
Two copies of the same double pendulum, launched with angles differing by a thousandth of a radian — smaller than the width of a pencil line at arm's length. The equations are identical. The initial conditions agree to a part in a thousand. And within seconds they are doing different things.
The rate of divergence is quantified by a Lyapunov exponent, named for Aleksandr Lyapunov, who introduced the concept in his 1892 doctoral thesis: two trajectories initially separated by a tiny δ₀ drift apart as

δ(t) ≈ δ₀ e^(λt).
If λ > 0, two infinitesimally-close initial conditions separate exponentially. Predicting the trajectory to a given accuracy becomes impossible past a horizon that grows only logarithmically with how precisely you know the start. To double your prediction horizon, you need to measure the initial state exponentially better. For the double pendulum λ ≈ 1 per second; for the solar system's inner planets, Poincar\u00e9's successors Jacques Laskar and others have shown λ ≈ 1/(5 million years).
Chaos is not randomness. The equations are deterministic. Chaos is the geometry of how Liouville-preserving flows distort a volume — squeezing it to zero thickness in some directions while stretching to infinity in others — with the global volume fixed.
KAM: why the solar system survives
If chaos is generic, if the slightest nonlinearity can rip apart the clean ellipses of an idealised orbit, how is it that the planets have kept their places for 4.6 billion years?
The answer is one of the most beautiful theorems in twentieth-century mathematics, and it came from three mathematicians working in the shadow of the Iron Curtain.
Andrey Kolmogorov announced the result at the 1954 International Congress of Mathematicians, with a short sketch of the proof. His student Vladimir Arnold wrote out the full proof for analytic Hamiltonians in 1963. The German-American analyst Jürgen Moser had extended the argument to the smooth (non-analytic) case the year before, in 1962. Together the initials spell KAM.
The theorem in one sentence: if you take an integrable Hamiltonian system — one whose phase space is foliated by invariant tori, each torus a donut of periodic motion — and perturb it slightly, most of the tori survive. The orbits wobble, but they stay bounded. They do not wander into the chaotic seas between them.
The solar system is nearly integrable. Each planet, to zeroth order, traces a Keplerian ellipse — a torus in phase space. The mutual gravitational tugs of the other planets are small perturbations. KAM says: most of the tori survive. The tiny fraction that don't — those with resonant frequency ratios — become sites of slow chaos, which is exactly where we observe gaps in the asteroid belt and where we expect Mercury to eventually go unstable on timescales of billions of years.
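The transition from surviving tori to chaotic seas can be seen in miniature. A sketch using the kicked-rotor standard map as a stand-in for a perturbed integrable system (the map, the kick strengths, and the initial condition are all assumptions for illustration):

```python
import math

def momentum_excursion(K, steps=20000, theta=0.3, p=0.2):
    """Largest |p - p0| reached along one orbit of the standard map
    with kick strength K."""
    p0, worst = p, 0.0
    for _ in range(steps):
        p = p + K * math.sin(theta)
        theta = (theta + p) % (2 * math.pi)
        worst = max(worst, abs(p - p0))
    return worst

small = momentum_excursion(K=0.5)   # weak perturbation: tori survive
large = momentum_excursion(K=5.0)   # strong perturbation: tori destroyed

print(f"K=0.5 momentum excursion: {small:.2f}")   # trapped between surviving tori
print(f"K=5.0 momentum excursion: {large:.2f}")   # wanders freely
```

Below the breakup threshold (the last rotational torus dies near K ≈ 0.97), surviving invariant curves wall the orbit in and its momentum stays bounded; far above it, the orbit diffuses through the chaotic sea. That is the KAM picture: the planets live on the surviving curves.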
KAM is why the planets, against the logic of generic chaos, still sail in the lanes Kepler drew.
The end of the beginning
Classical mechanics began in 1583 with Galileo and a chandelier. It ends here, in a 10²³-dimensional manifold whose flow preserves volume, recurs almost everywhere, and holds together — barely, exquisitely — because most of its invariant tori refuse to break.
The framework is four centuries old, but it is not obsolete. It is the language in which statistical mechanics (§ 03), field theory (§ 02, § 06), and quantum mechanics (§ 05) are all phrased. Hamilton's equations become Schrödinger's equation by a routine (if magical) substitution. Liouville's theorem becomes the unitarity of quantum evolution. Phase-space distributions become density matrices. The Poisson bracket becomes the commutator, scaled by iℏ. Phase space does not disappear when we switch to quantum mechanics. It becomes the space of operators.
Read the next module (§ 02 Electromagnetism) and you will find Hamiltonians again, now governing fields. Read § 04 Relativity and you will find them again, now defined on curved manifolds. Read § 05 Quantum Mechanics and you will find them again, now promoted to operators acting on the states we used to call points.
Classical mechanics is not a chapter that gets closed. It is the grammar of the book.
Go on.