More Than the Sum of the Parts: Complexity in Physics and Beyond
Helmut Satz
The laws of physics are simple, but nature is complex.
Per Bak
How Nature Works, 1996
Contents
1. Introduction
2. The Flow of Time
3. Global Connections
4. The Nature of Forces
5. The Formation of Structure
6. The Energy of Space
7. Critical Behavior
8. Self-Organized Criticality
9. Fractal Dimensions
10. Bifurcation and Chaos
11. Brownian Motion
12. Turbulence and Convection
13. Intermittency
14. Words and Numbers
15. Quantum Complexity
16. Conclusion
Bibliography
Person Index
Subject Index
Preface
Much of our natural science is based on the supposition that the whole is
the sum of its parts. This assumption has in fact worked amazingly well
and has provided us with a body of scientific knowledge on which much
of our modern world is based. In recent years, however, it has become
more and more evident that there is an immense number of phenomena
for which this assumption does not hold. For many years, we got along
by saying “let us assume the state to be in equilibrium” or “let us assume
motion without friction”, and more. Today we see that all the neglected
phenomena form a new field of study, complexity, which is as great or
greater than that considered so far in our conventional natural science.
Moreover, there turns out to be a considerable degree of universal-
ity for complex phenomena: complexity is observed in a vast variety of
phenomena in nature. In conventional physics, many concepts are appli-
cable only to issues that arise there, and to not much more. On the other
hand, the behavior observed for the onset of criticality, leading to corre-
lations between even very distant constituents—this behavior arises not
only in the study of magnetism in condensed matter physics, but as well
in the cosmology of the early universe and in the formation of flocks of
birds. Related patterns are found in economic developments, stock mar-
ket fluctuations, population growth, the spread of diseases. And by now,
critical behavior has also become a part of conventional physics.
Not surprisingly, this has led to the appearance of complexity theory
as a new field of research, with new fundamental concepts, such as emer-
gence, self-organization, bifurcation, and more. This book is not meant
as an introduction to complexity theory—for that, we refer to a number
of excellent works listed in the bibliography. Our aim here is to illus-
trate a variety of different phenomena in nature for which conventional
science cannot give a satisfactory account, and to show that they gener-
ally arise as a result of collective many-body interactions. In some cases,
these could be codified into aspects of an emergent complexity theory,
in others not. Another possible subtitle for the book would thus be “un-
conventional physics”. My aim is more to describe, to point out, rather
than to strive for a general theory. The many-faceted picture presented
by the different phenomena will hopefully serve as a challenge to future
scientists, and to them I leave the formulation of a possible science of
complexity.
The presentation here will be restricted to phenomena encountered in
nature, to questions addressed to natural science. We will not consider
issues in sociology, politics or economics; these are clearly well outside
the range of competence of a theoretical physicist, even if and when they
lead to fluctuation patterns not unlike those in physics. Nevertheless,
with complexity being such a novel subject, mathematicians, physicists,
and others, can well provide essential contributions to its understanding
and its future development. Crucial steps—critical behavior (Ken Wilson),
the approach to chaos (Mitchell Feigenbaum) and self-organization
(Per Bak)—came from physicists. Different views of the subject come
from different orientations, and mine is certainly that of a physicist. In
any case, the field thus is far from closed, and so I dare to give this pre-
sentation, although the subject of the book overlaps only partially with
my own field of work, the thermodynamics of strong interactions. In re-
cent years, however, complexity has been found to play an increasing
role also here, and besides this, I have been in touch with the topic and
its initiators for many years.
The development of physics is quite naturally associated with the
names of those who brought about the crucial changes—Galileo,
Newton, Maxwell, Boltzmann, Planck, Einstein and more. The new
paradigms associated with the physics of complex systems have not yet
made their pioneering inventors well-known to the general public. In
line with showing how the study of complex systems is changing our
thinking, it therefore seems natural to point out who started it all. My
private, physics-oriented list starts with the three names already men-
tioned: Kenneth Wilson, Mitchell Feigenbaum and Per Bak; it continues
with many more, of course. These three have pointed out the road in
physics which we now follow, and although they are all no longer with
us, I have known all of them personally. So in a way, this is my memorial
for them.
The book is meant for a general audience, interested in the new per-
spectives opening up now in the study of systems consisting of many
similar or identical constituents. It is not a treatise, but rather an attempt
to convey the discovery that a great variety of such systems lead to novel
behavior due to collective interactions of the parts, that the whole is more
than the sum of the parts.
References for additional reading, general developments and further
information are provided in the bibliography at the end of the book. In
particular, I include here presentations of the status of the rapidly grow-
ing theory of complexity. Besides this list of general as well as specific
references, I cite in addition at the end of several chapters some books or
articles of particular relevance for the specific topic of that chapter. One
topic which is quite closely related to the present work is the structure of
animal swarms; this is indeed quite similar to that of many-body systems
in physics. I have not included it here, since it is covered in detail in my
recent book “The Rules of the Flock”, which was published last year by
Oxford University Press.
I had the pleasure of discussing different aspects of the topic with var-
ious colleagues, and so my thanks for stimulating remarks go to Andrzej
Bialas (Krakow), Philippe Blanchard (Bielefeld), Paolo Castorina (Cata-
nia) and Frithjof Karsch (Bielefeld). And I much appreciate notes by
Shaun Bullett (London) and Bob Doyle (Harvard). Without the interference
of the corona pandemic, I could perhaps have had the pleasure of discussing
these matters with them in person. I hope that the future will bring such possibilities
back.
I dedicate this book to the memory of my wife Karin.
Bielefeld, November 2021
Helmut Satz
1. Introduction
Du siehst den Wald vor lauter Bäumen nicht.
(You don’t see the forest because of all the trees.)
German proverb
Divide and Conquer
From cave drawings to space telescopes, mankind has always, in one way
or another, tried to figure out the world in which we live. We want to
understand what things are made of, how they function, what different
forms they can take, and what the forces are that they experience. For
the past two millennia and more, the physical sciences, with much help
from mathematics, have developed an extremely successful approach
to addressing and answering these questions, best summarized in the
old Roman advice “divide and conquer.” Instead of looking at the com-
plex overall picture, we single out a small part of the whole and try to
understand its workings. If we succeed, we then put many such parts
together to arrive at an understanding of the larger scene. This philos-
ophy started in ancient Greece, when the idea of atoms was introduced
as the ultimate building blocks of matter, and the multi-faceted world
was attributed to the different ways these constituents were put together.
The approach of reduction to ultimate parts has, over the past centuries,
led to a well-defined atomic structure, first with atoms made of nuclei
and electrons, though the nuclei were then found to consist of nucleons
(protons and neutrons), and these in turn of quarks. The interaction be-
tween the different constituents is mediated by electromagnetic as well
as strong and weak nuclear forces, and in the past few decades, it has
become possible to combine all the components and the corresponding
forces into one unified description, the so-called standard model of ele-
mentary particle physics—the closest to a “world formula” that we have
ever had. The only force which has so far resisted a final unification with
those of the standard model is gravity. In spite of many attempts by some
of the most prominent physicists, such a “theory of everything” is still
missing. Very recently, in fact, it has been suggested that the reason for
this might be that gravity is of a fundamentally different nature from the
other forces; we shall come back to this truly new view of things later.
Philosophers in ancient Greece provided not only the start of
reductionism—they also warned that there are limits to this way of
thinking. This is perhaps best summarized by Aristotle, when he noted
that the whole is more than just the sum of its parts. In the process
of reduction to ultimate constituents some of the features of the whole
will necessarily be lost, and it is not clear if the specific way in which
we subsequently recombine things will lead back to what we started
with. Reduction and recombination are the yin and yang of the world,
complementary but opposing, and understanding one does not imply
understanding the other.
The success of reductionism has for many years overshadowed the
other side of the coin: if we have the building blocks, how can they be
put together, and what can be built from them? The knowledge of the
structure of the atom still left most of the behavior of bulk matter un-
explained, just as the anatomy of a bird tells us little about the behavior
of flocks of birds. In many forms of matter, constituents separated far
from each other are completely uncorrelated, and idealized systems of
this kind can indeed be taken apart and recombined, in order to account
for much of the observed behavior. For them, and as we shall see, only
for them, the whole is simply the sum of the parts.
The Onset of Complexity
At the transition points from one state of matter to another, for evapora-
tion, melting, freezing and more, this reductionism ends; the system now
refuses to be divided into independent subsystems—distant constituents
are now connected, everything becomes correlated. Physicists, irritated
by this complication, called it critical behavior. Today we realize that
there are more and more phenomena that make sense only when sys-
tems of many constituents undergo such devious doings. A single atom
cannot freeze or evaporate. Such phenomena signal the beginning of
complexity; the whole is now definitely more than the sum of its parts.
It has thus become more and more evident that an understanding
of the nature of elementary particles and the forces between them, the
ultimate reductionist world formula, the “theory of everything,” is not
sufficient for a full understanding of the behavior of systems of very
many such particles. The opposite approach, the combination of con-
stituents to form complex systems, turns out to have its own distinct laws;
moreover, these often depend very little on the nature of the constituents
and their interactions, and they thus are generally quite universal. The
magnetization of iron, the condensation of a gas, the formation of a
galaxy, or even that of a flock of birds—these all lead to very similar
structural patterns. In this sense, the truly new physics in the past 50
years is the universal science of collective behavior, dealing for the first
time with systems that are no longer simply the sum of all the smaller
subsystems. The formulation of a theory of critical behavior through
renormalization, the scale invariant behavior of systems of all sizes,
brought the Nobel Prize in Physics in 1982 to the American theorist,
Kenneth Wilson. Some years before, in 1977, the Russian–Belgian the-
orist, Ilya Prigogine, had already received this Prize for showing that
dissipative behavior could lead to new collective structures. The world
is full of physical phenomena that only manifest themselves in many-
body systems. Such phenomena are the topic of this book, and I want to
address them in a way accessible to the general interested reader, with
a minimum use of mathematics. As you will see, there are times when I
have found it impossible to avoid mathematics completely; in the words
of a former president of our university, it is the language that God uses
when he wants to speak to humans. And as with most languages, even
if you don’t understand every word, you can often still follow the ar-
gument. The book is not meant as a scientific treatise, but rather as a
narrative, telling the reader how the fascinating concepts of complexity
emerged in our understanding of the physical world.
We should, of course, begin by defining what is meant by complex-
ity. Unfortunately, this is harder than it seems, and there are various, not
always compatible, definitions. Our starting point is obviously a system
of many interacting simple constituents. If the behavior of that system
is uniform, if any two subsystems even far apart show the same form of
behavior, then we consider the overall system as simple, as opposed to
complex. Examples of simple systems of this kind are dilute gases, liq-
uids at rest, and crystals. However, they remain simple only if left alone,
in equilibrium; introducing temperature gradients, fluid flow, friction,
stress and more turns them into complex systems, and it is for precisely
this reason that in traditional classical physics such effects were usually
assumed to be negligible. Simple systems are generally found to follow
deterministic laws in a way that allows unique predictions for the change
of state under a slight change of control parameters. Increasing the vol-
ume of an ideal gas leads to a predictable change of its pressure. In simple
systems, small causes lead to small effects. In complex systems, the in-
dividual constituents are generally also subject to deterministic laws,
but collective effects result in unexpected new macroscopic behavior.
A temperature change of one degree can turn water into ice.
We shall see in more detail that systems of many identical parti-
cles can be combined to form different structures, and the transition
from one such structure to another leads to critical behavior. More-
over, the collective behavior of many particles can produce effective
“emergent” forces, which cannot be reduced to a pairwise interaction be-
tween individual constituents. These two aspects in a way define the topic
of this book: critical behavior and emergent structures in many-body
systems.
In the past few decades, it has been found that complex systems also
show regularities, and even obey general laws. The transfer of heat in
liquids shows striking flow effects, from convection patterns to chaos.
As already mentioned, the development of mathematical formulations
for such phenomena has produced a remarkable universality, with di-
verse applications from turbulence in fluids to the evolution of animal
populations. It is for this reason that the structure of complex systems is
quite often independent of the nature and interaction of their constituents.
And there is a limit to the regularities of complex systems: if many-body
systems show completely irregular and unpredictable behavior, if even
an infinitesimal change of parameters leads to completely unexpected
large-scale effects, then we speak of chaos. Simple as that seems, chaos
shares with complexity the lack of an unambiguous definition. How do
you measure the absence of order?
Since complexity is a very new field of research, it deals with diverse
phenomena which at first sight (and often also at second) appear rather
uncorrelated. For this reason, it seems unavoidable to present in this
book an assortment of topics in a rather unstructured way—the logical
pattern putting everything “in order” is not yet in place, and remains a
challenge to future science. For the time being, we are confronted with
a variety of concepts—critical behavior, self-organized criticality, emer-
gence, intermittency, scale-free behavior, chaos, turbulence and many
more; these concepts are obviously related, but much of the time it is not
clear how.
I should note here that there already exists a new field of research,
called complexity theory. Its aim is the study of mathematical models
that describe complex systems, in some cases successfully, in others not
yet. We will touch on these efforts from time to time, but the essential
aim here is to introduce a variety of patterns of complex behavior found
in nature, whether or not we have a theoretical framework to account
for them. We want to look at phenomena that are not amenable to a
description by the traditional methods of standard natural science, and
we want to see if we can somehow begin to understand their patterns,
even if we don’t yet have a theory. And to restrict the issue, we shall in-
deed deal with phenomena in nature—leaving aside questions arising in
politics, economics, the stock market and more, questions that involve
complexity and that are addressed by complexity theory. We want to look
at complexity in nature.
Hints of Novel Behavior
To introduce the field, we begin with an issue that has been with us for
many years. In physical science, we have progressed from the atoms of
antiquity to the periodic table of elements, to atomic structure, to the
formation of nuclei out of protons and neutrons, and on to the quark
substructure of these nucleons. In all the two thousand years over which
this reductionist understanding has evolved, we still have not arrived at
a satisfactory answer to the simple question of why time in our world, in
history as well as in cosmology, has a well-defined direction and never
runs backwards. Evidently this is in many ways a much more fundamen-
tal issue than the grand unification of quarks and leptons in elementary
particle theory—but it is an issue which requires new ways of thinking
as well as new empirical input. We shall see that time as we know it arises
as a collective effect in many-body systems.
Complex systems, as we had noted, very often cannot be separated into
uncorrelated subsystems: even distant constituents are somehow still
connected. Next we therefore address the simplest possible case of the
formation of connectivity: percolation. We shall show that with increas-
ing density, even randomly distributed objects undergo critical behavior,
from isolated entities to global connectivity. Besides the conventional
critical behavior, magnetization, condensation and more, we have in the
past few years discovered another, geometric form of criticality. In Asia,
this has been present for millennia in the form of the game of Go, but its
more general structure appeared as percolation only about a hundred
years ago. In its poetic version, it deals with water lilies on a pond. How
many randomly placed lilies do we need to have in order to allow an ant
to cross the pond without getting its feet wet? Here as well, nature does
make a jump, and as the density of lilies increases, suddenly the crossing
becomes possible.
We shall then look in some detail at the nature of forces in the physical
world. Our thinking here is formed by the gravitational force causing
the apple to fall from the tree or holding the moon in orbit around the
earth. In a similar way, the electric force binds electrons to the nucleus to
form atoms, and the nuclear force binds protons and neutrons to obtain
nuclei. A force seems to be something like an invisible spring stretched
between two (or more) objects, pulling them towards each other, or—
in the case of two like electric charges—repelling each other. In recent
years, a different kind of force has come into consideration. If a hole is
punched into a tire, the air rushes out, as if pushed by some invisible
force. But there is no specific force on each molecule of outgoing air: only
the entire system suffers a drive towards a thermodynamically favored
state of being. In physics terms, the system undergoes a change, from
the compressed and ordered state in the tire to the disordered state of
molecules freely streaming into the surrounding air. Hence the fictitious
agent is denoted as the emergent force, arising only in many-body systems
as a collective phenomenon.
Next we turn to the critical behavior already alluded to several times.
It implies that we know what normal behavior is. In physics, normal is
that a small cause leads to a small effect, that a small change of the con-
ditions leads to a small change of the measurable properties. If we lower
the temperature a little, the density or the structure of the system will
change a little. This is true almost everywhere; but around 100 degrees
centigrade, water evaporates, turns into vapor, and around zero de-
grees, water freezes, becoming ice. This kind of behavior has been called
punctuated equilibrium: for a large range of parameter values, nothing
happens, and then suddenly there is a complete change. In the past cen-
tury, it has become ever clearer that this effect called for a new kind of
physics. The proven wisdom that “natura non facit saltus”—nature does
not make jumps—here is simply wrong: nature does jump. For the math-
ematicians, it meant that smooth, analytical behavior suddenly became
singular. For many years, for both physicists and mathematicians, critical
meant undesirable.
In the classical forms of critical behavior, one always considered sys-
tems whose parameters, temperature, density and the like, were slowly
varied by some outside operator. The Danish physicist, Per Bak, pointed
out that in fact most systems in nature don’t need this outside help—
they evolve on their own towards a critical point. Outside of religion, no
operator triggers an earthquake or a tsunami. As a result, self-organized
criticality has become an important new field of research.
In mathematics, complexity has led to the reconsideration of what we
mean by dimension. For millennia, one, two and three dimensions were
our world, and relativity added time as a fourth. The study of structures
retaining broken patterns at all scales—the famous coastline of Britain or
Norway—led the French mathematician Benoit Mandelbrot to introduce
fractality: there exist structures whose dimensions are not integral, and
may lie between one and two, for example.
Another concept we had to give up was the idea that deterministic
equations lead to unambiguous, unique results. The so-called logistic
map, a recursion relation common to the evolution of populations,
was shown to lead to bifurcation: for certain parameters, the result
oscillated between two values, and these in turn continued to split fur-
ther. Mitchell Feigenbaum showed that this pattern eventually results in
unpredictability, in chaos.
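The logistic map is simple enough to try out directly. The following minimal sketch (in Python; the parameter values are illustrative choices, not taken from the text) shows the settled long-time behavior: a single fixed point, then a two-cycle, then chaos.

# Logistic map x -> r*x*(1-x): iterate, discard the transient, and
# print the long-time behavior for three illustrative values of r.
def logistic_orbit(r, x0=0.5, skip=500, keep=6):
    x = x0
    for _ in range(skip):          # let the transient die out
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(keep):          # record the settled behavior
        x = r * x * (1.0 - x)
        orbit.append(round(x, 4))
    return orbit

for r in (2.8, 3.2, 4.0):          # fixed point, two-cycle, chaos
    print(r, logistic_orbit(r))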
As mentioned, the atomic structure of matter had already been pro-
posed on philosophical grounds in ancient Greece. Two thousand years
later, it was called into question: the idea of atoms could help as a calcu-
lational tool to understand certain regularities (such as the periodic table
of elements), but did atoms really exist, with mass and size? In this sense,
atoms were the quarks of the 19th century. Even some eminent physicists
expressed doubts… The ultimate confirmation came from something
known since antiquity and today called Brownian motion. In perfectly
smooth media, water or air, visible macroscopic particles, pollen or dust,
dance around in an erratic fashion, reflecting the random motion of the
invisible constituents of these media, atoms or molecules.
The scale-independence found in many natural phenomena, leading
for example to the Gutenberg–Richter distribution observed
for earthquakes, continues surprisingly—and so far really without any
clear explanation—in unexpected domains. The frequency of words used
in human text of whatever language was also found to follow such a
power-law in usage ranking, and if we partition the positive integers into
their prime number components, the frequency of these components
also obeys such a law. What is the origin of such universality?
The advent of quantum physics brought yet another form of collec-
tive behavior. At extremely low temperatures, in many-body systems,
electrons as the quantum components lead to the formation of a fifth
state of matter, in addition to solids, liquids, gases and plasmas. The new
state, a kind of sleeping liquid (in physics terminology the Bose–Einstein
condensate), gives rise to such striking phenomena as superconductivity
(the vanishing of electrical resistance) and superfluidity (the vanishing
of viscosity for fluids). Again, these are phenomena that only make sense
for collective systems of many constituents—a single constituent cannot
show such properties.
In conclusion, there are two distinct approaches to achieving an un-
derstanding of the world around us. In the first and oldest, we identify
its ultimate building blocks, their structure, and the form of their in-
teractions. In the second, we study the general patterns and regularities
that govern the collective behavior of systems of very many such con-
stituents. This second approach is the subject of this book, and we find
that many of these “rules of collective behavior” are highly universal.
In very similar forms, they hold for quarks and electrons, magnets and
stars, insects and birds and much more. They describe the formation
of galaxies, the appearance of turbulence and the evolution of animal
populations. Complexity rules a truly amazing world.
2. The Flow of Time
Fugit irreparabile tempus
(Time flees irretrievably)
Virgil (70–19 bc)
Time Invariance in Physics
Time flows irretrievably from past to future, and we flow with it like a
piece of wood in a stream. Humans have always known that one can-
not reverse its direction. Its course is defined by stellar constellations,
geological structures, fossils, written records, our own lives, our mem-
ories, and more. We know what has happened, but we do not know
what will happen. We can define an order for past, present and future:
there is an obvious historical arrow of time, from the unchangeable to
the unpredictable.
It is therefore quite astonishing that until not so long ago, the founda-
tions of physics were “invariant under time reversal,” in the terminology
of the field. The fundamental axioms, Newton’s laws of mechanics,
Maxwell’s equations for electrodynamics, the equations of Schrödinger
and Dirac for quantum systems, as well as Einstein’s relativity theory:
they all allow time to go backward as well as forward. Given the present
state of things, all these laws predict the future and postdict the past
on equal terms. The time of physics was the ticking of an ideal pendu-
lum: sixty ticks define a minute, which has passed and which will pass.
We are just at one point in an infinite sequence of equal points. So for
centuries, physical science provided no help in assigning a direction to
time. And even Einstein’s combination of space and time in the theory
of relativity was of no help: the fourth coordinate in space-time was still
forward-backward symmetric and thus evidently not the time that
measured the life of our grandparents. In physics, processes were always
reversible, but in the real world, in our lives, most were irreversible.
And as far as the universe was concerned, it was thought to be eter-
nal and unchanging, the stars above us were the firmament; they were
always there in an unchanging space and thus also were of no help in as-
signing a direction to time… When Einstein noticed that his equations
for the geometry of the universe allowed time to move forward as well
as backward, allowing expansion as well as contraction for the world,
he reluctantly added a term in order to destroy this freedom, to prevent
temporal changes of the universe; it didn’t work… and he subsequently
considered this attempt one of his biggest blunders.
The reason was that the cosmological view of time soon changed com-
pletely, when in 1929 the American astronomer Edwin Hubble showed
that distant galaxies are in fact receding from us, that the universe is ex-
panding. This meant, as had in fact been anticipated just before, on the
basis of Einstein’s equations by Georges Lemaitre in Belgium, that it must
have started from a very dense initial state, in the Big Bang, and subse-
quently expanded to form our present world. Lemaitre, a Catholic priest
as well as a physicist, noted that physics had provided a way for creation.
In 1998, it was moreover shown that this expansion is even accelerating.
So today we also have a clear cosmological arrow of time, and it points in
the same direction as the historical one. Our universe once was young
and then grew older, just as we do. Space and time are not equivalent: in
space, displacements are possible in any direction, in time in only one
direction: forward.
Cosmology, human history, as well as most religions, thus all insist
on a well-defined direction of time. Is there a way to also make physics
choose a direction? The philosopher of science, Hans Reichenbach, pro-
grammatically claimed that “there is no other way to solve the problem
of time than the way through physics.” Eventually, this indeed turned out
to be feasible, but it required a new way of thinking, and even then, the
arrow at first seemed to point in the wrong direction, so we have to go
into a little more detail.
Joule’s Experiment
The situation is perhaps understood most readily by looking at an ex-
periment which the British physicist James Prescott Joule carried out
around 1850. Joule was the son of a brewer and later on continued the
brewery himself, so his interest in things like pressure and temperature
was not totally academic. For his experiment, he took a thermally insulated
container, divided into two compartments; one contained a gas, the
other nothing—it was as close to a vacuum as possible. When he now re-
moved the dividing wall between the two compartments, the gas rapidly
streamed into the empty part and quickly distributed itself uniformly
over the entire, now larger container (see Fig. 2.1).
Fig. 2.1 Joule’s experiment.
However long one might wait, the gas as a whole never returned even
briefly to its former compartment. The development, the expansion,
was irreversible, it could not be undone. The arrow of time could not
be turned around. On the other hand, if we were to perform such an
experiment with just a single atom in the initial compartment, this atom
would, after the removal of the dividing wall, be as likely to be in the
old as in the new part of the container. And two atoms would, with a
probability of ¼, be back together in the old section, and equally likely both
in the new.
The breaking of the time symmetry, of time reversal invariance, must
therefore somehow result from the fact that the gas consists of very
many particles. With an increasing number of players, the simultaneous
turn-about becomes ever more unlikely. We therefore expect that a well-
defined direction of time appears only for systems consisting of many
independent constituents, each moving its own way. It is perhaps the
first indication in physics that the behavior of a many-body system is in
principle conceptually different from that of its individual constituents.
The whole is indeed more than the sum of its parts, as Aristotle had
noted more than two thousand years before. For a single atom, time does
not matter, does not flow irretrievably; the flow of time is a collective
phenomenon.
This means that God after all does play dice. To illustrate, let’s con-
sider a simpler situation, a case with nine boxes (see Fig. 2.2). Given nine
balls, we then have one possible configuration, with one ball in each box.
If we now double the size of the case to 18 boxes, with again nine balls,
we get 48,620 different possible configurations. Only one of these is the
starting configuration, with all nine balls in the initial section. If all configu-
rations are equally likely, the chance for a full return to the starting state
is 1:48,620.
Fig. 2.2 Nine balls in nine boxes and one
of the 48,620 possibilities of nine balls in
18 boxes.
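The number 48,620 is simply the count of ways to choose which nine of the 18 boxes are occupied. A few lines of Python (an illustrative check, not part of the original argument) confirm it:

from math import comb

# Ways to place nine balls into 18 boxes, at most one per box: C(18, 9).
print(comb(18, 9))      # 48620
print(1 / comb(18, 9))  # chance of the all-in-the-old-section state,
                        # if every configuration is equally likely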
A liter of a gas contains not nine, but some 10²³ molecules; removing
the divider thus results in a vast increase in the number of possibilities.
The chances for a return are effectively zero: the likelihood that every
particle retraces its proper path is vanishingly small. It appears that the
directed flow of time is a collective property of many particles.
Entropy
The great Austrian physicist Ludwig Boltzmann created the basis for the
physics of many constituents, statistical mechanics, and he turned the ir-
reversible flow of the gas in Joule’s experiment into an axiom of the field.
He first explained that heat has a mechanical origin. It is a measure of the
motion of the constituents of the system, so that heat indeed is a form of
energy: that is the first law of thermodynamics. He then asked how many
distinct configurations, microstates, the particles can form for a fixed to-
tal energy and volume. This gave him a new fundamental quantity of the
field, the entropy S, defined as the logarithmic measure of the number W
of possible microstates, compatible with the given macroscopic informa-
tion, such as energy and volume. A given microstate is specified by the
3N positions and the 3N velocities of all its N constituents at a given mo-
ment, and for fixed energy and volume, there is evidently an immense
number of such states. The position and velocity variables define what
is denoted as the 6N-dimensional phase space, and a given microstate
corresponds to a point in this phase space.
Fig. 2.3 The tomb of Ludwig Boltzmann
in Vienna, showing the definition of
entropy.
(Photo courtesy of Oesterreichische
Zentralbibliothek fuer Physik, Vienna, Austria)
As shown on the tomb of Boltzmann in Vienna (Fig. 2.3), the entropy
is defined as S = k log W. The proportionality constant k, the Boltzmann
constant, together with the speed of light c, the gravitational constant G
and the Planck constant h, is one of the four basic constants of nature. It
is the only one of the four which deals with many-body systems and has
nothing to do with the interaction of individual particles. It is perhaps
justified to say that (pre-relativity) classical physics rests on the shoulders
of three giants: Isaac Newton for mechanics, James Clerk Maxwell for
electrodynamics and Ludwig Boltzmann for statistical mechanics.
In the above example with the balls, W = 1 and hence S = 0 for the
nine in the initial case, while the doubled case gives S = 4.7 k. For a
constant number of balls, the entropy thus increases with volume size.
And so Boltzmann proposed what is today called the second law of ther-
modynamics: in the evolution of an isolated system the entropy never
decreases; it either increases, or it remains constant if it is already at its
maximum. If we give the medium a chance to increase its entropy by
expanding into a larger volume, it always does so. Hence in the Joule
experiment we have just that: the expansion of a gas at fixed total en-
ergy, at constant temperature, results in an entropy increasing with the
overall volume. Although we have here illustrated the second law of ther-
modynamics for the case of a rather mundane example, it is perhaps the
most profound statement in all of physics. The famous English physicist
and astronomer Sir Arthur Eddington warned his colleagues: If your fa-
vorite theory is contradicted by observation—well, these experimentalists
do bungle things sometimes. But if your theory is found to be against the
second law of thermodynamics, I can give you no hope; there is nothing for
it but to collapse in deepest humiliation.
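For the balls-in-boxes example, these entropy values can be checked directly; the quoted S = 4.7 k indicates that the logarithm is here taken to base 10. A short sketch (Python):

from math import log10

# Entropy in units of the Boltzmann constant, S/k = log W; the value
# 4.7 quoted in the text corresponds to a base-10 logarithm.
print(log10(1))      # 0.0  -> S = 0 for the unique nine-box state
print(log10(48620))  # 4.69 -> S = 4.7 k for nine balls in 18 boxes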
We saw above that given a larger volume, the very orderly state of
nine balls in nine adjacent boxes leads to a very low entropy and hence
becomes very unlikely. This is in fact a very general feature: the more
ordered a system is, the lower in general is its entropy. So the evolution
in time of a system left to itself tends to decrease its order, increase its
disorder or randomness. We could call this the entropic direction of time.
According to the cosmologists after Lemaitre and Hubble, our uni-
verse has been expanding ever since the Big Bang; as mentioned, this
defines the cosmological arrow of time. According to statistical mechan-
ics, this must also imply an increasing entropy. The very early universe
must then have had a very much lower total entropy than our present
one; quite a number of clever scientists have arrived at this conclusion.
There is a catch, however, as most of them noted. The early universe was,
as far as we know today, a very hot gas of elementary particles, and such
a system is, according to conventional statistical mechanics, already in a
state of maximum entropy. Our present world, with galaxies, stars, plan-
ets and cosmologists, is certainly not at maximum entropy. If we were to
turn everything around us into a gas of free protons and neutrons, the
entropy would considerably increase. The entropy was maximal at the
beginning, but it is not maximal now. Something seems not to match:
the evolution of the universe appears to violate the entropic arrow of
time, as given by statistical mechanics.
Why Is the Entropy of the Universe not Maximal?
Today, we believe we have the solution to this puzzle. The basic idea is due
to the American physicist and cosmologist, David Layzer, of Harvard
University. At the start of the Joule experiment, the gas is in equilib-
rium in the initial volume; its entropy there is maximal for the given
volume. After the removal of the divider, the gas rushes out, expands
and equilibrates, so that some time later it is again in equilibrium, in
the larger volume and with greater entropy.

Fig. 2.4 The evolution of entropy in the Joule experiment.

In the intermediate stage,
the entropy rises from the initial (small volume) value toward the larger
(large volume) value (see Fig. 2.4). In this intermediate stage, during the
relaxation time, the system is not in equilibrium and the value of its ac-
tual entropy still lies below the value possible for the larger volume. The
actual entropy is not yet maximal, because the gas particles are preferen-
tially flying in the direction of the new volume, rather than in a direction
orthogonal to this. In this relaxation stage, we thus have a certain or-
der, which is eventually destroyed by collisions between the molecules,
restoring isotropy and equilibrium.
In such a picture, and in fact more generally, order is the difference be-
tween the actual and the maximally possible entropy. It disappears when
the entropy does become maximal—we then have complete disorder.
So order becomes a lack of disorder, not the other way around, as we
might think. But for the given system, disorder, maximum entropy, is
unique, while there are many different kinds of order, from snowflakes
to crystals.
The relaxation time is determined on the one hand by the strength
of the interaction between the molecules: equilibration is the process
in which the constituents eliminate differences through collisions. On
the other hand, it depends on the density of the medium: if it is very
dilute, there are only a few collisions. Since expansion causes the density
to decrease, the relaxation time increases. An expanding system is thus
subject to a competition between the relaxation time and the rate of ex-
pansion. If the system expands rapidly enough, relative to the relaxation
time, the actual entropy—although it is continuously increasing—falls
more and more below the maximum possible value. We thus have a situa-
tion in which the entropy of the system continuously increases, in accord
with the second law of thermodynamics, but at the same time, the dif-
ference between actual and maximum entropy is also growing: order is
also continuously increasing, see Fig. 2.5. This scenario is the explana-
tion that David Layzer (Fig. 2.6) suggested around 1975 for the seemingly
opposite directions of the time evolution in cosmology and in statistical
mechanics.
Statistical mechanics requires an increasing entropy, cosmology an
increasing order. We see here that both can happen at the same time,
starting from a state of maximum entropy at the time of the Big Bang.
Fig. 2.5 Layzer’s scenario for the evolution of the Universe.

Fig. 2.6 David Layzer (1925–2019).
(Photo courtesy of Jean Layzer)
The subsequent expansion of space was too rapid to allow the initial
hot plasma of elementary particles to remain in equilibrium. Thus the
temporal evolution into a larger volume led to an actual entropy which
on one hand increased, and on the other fell more and more behind
the maximum possible, which grew even faster. In this way, both or-
der and disorder increased simultaneously, contrary to our common
experience…
We should note that this picture does not tell us anything about the
kind of order that can arise in the course of time in this evolution. That
depends on the nature of the interactions between the constituents of
the medium in question; we shall return to this issue in a subsequent
chapter.
Before going on, we should note that irreversibility, the existence of
an intrinsic arrow of time in specific physical phenomena, has been
discussed extensively also on a more microscopic level. We have here
addressed the issue in terms of the fundamental laws of nature. Such
a discussion does not include non-equilibrium phenomena such as fric-
tion: the sliding motion of an object stopped by friction can evidently not
be reversed. Similarly, self-organizing events such as the death of living
beings or the decay of a radiative state are also irreversible. In quantum
theory, the interaction of radiation and matter also tends to act as path
erasure. For a discussion of these and similar phenomena, we refer to the
quoted literature.
Further reading
The mentioned scenario for the direction of time is discussed in more
detail in:
• D. Layzer, Cosmogenesis: The Growth of Order in the Universe, Oxford
University Press, 1990
• D. Layzer, The Arrow of Time, Scientific American, 1975
• R. O. Doyle, The Origin of Irreversibility, The Information Philosopher
2014
The role of time in physics, including microscopic mechanisms for
irreversibility, is discussed more generally in:
• P. Frampton, Did Time Begin? Will Time End?, World Scientific, Singa-
pore 2009
• H. Reichenbach, The Direction of Time, U. California, Berkeley 1971
and Dover 1999.
• H. D. Zeh, The Physical Basis of the Direction of Time, Springer, Berlin
1989.
3. Global Connections
And God said, Let the waters under the heaven be gathered
together unto one place, and let the dry land appear, and it was so.
The Bible, Genesis 1.9
Throwing Coins
The simplest possible model of a many-body system is probably obtained
by randomly throwing identical coins onto a table of area A, allowing to-
tal or partial overlap. In the course of time this will produce on the table
not only isolated single coins, but also clusters of several, partially over-
lapping coins, and the size of these clusters will grow with the number
of deposited coins. At first, the rise is linear: the average cluster size G is
proportional to the number N of thrown coins, G = x N, with an increase
rate specified by the proportionality constant x. To eliminate the specific
table size, we divide both sides by A and thus obtain
g = x n, (3.1)
with n denoting the average density of coins and g the average cluster size
relative to that of the table. Fig. 3.1a shows such a distribution. Increasing
the coin density makes it more and more likely that successive coins in-
crease the size of existing clusters, leading to a growing increase of the
average cluster size g, see Fig. 3.1b.
As we continue the game, at a certain specific density nc the addition
of a single further coin will cause the largest cluster to establish a global
connection, a link between opposite sides of the table.

Fig. 3.1 (a) early coin distribution; (b) average relative cluster size g vs. coin density n.

Up to this point,
we had islands of coins in a table sea; from now on, we have a multitude
of lakes in a land of coins; the lakes can still contain islands. A small stim-
ulus, one coin, thus leads to a fundamental change of structure, turning
a world of islands in the sea into a world of lakes in a land region; see
Fig. 3.2. For densities below nc, a fish can swim from one side to the
other, but a mouse cannot traverse the table without getting wet. Above
the density nc the situation is reversed. The value n = nc thus constitutes
a critical point of the density. Above and below this value, small changes
have small effects, but at n = nc one world turns into another.
Fig. 3.2 (a) onset of percolation by the red coin;
(b) configuration above the percolation point.
Percolation Theory
In mathematics, the onset of uninterrupted connection between the op-
posite sides of the table is called percolation, since from this point on a
passage becomes possible (percolare = to pass through). Percolation theory
forms a field of mathematics, and it allows a (numerical) determination
of the critical density. One thus obtains
nc = 1.13/(πr²),    (3.2)

where r is the radius of the coin, so that πr² is its area. At the critical
point, the total additive area of all thrown coins thus exceeds that of the
table by some 10%; that is the consequence of the overlap of numerous
coins. As a result, at n = nc about 32% of the table remains empty.
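The 32% quoted above follows from the fact that for randomly placed disks, the uncovered fraction of the table is exp(−nπr²). A short Monte Carlo sketch (Python; the coin radius and the number of sample points are illustrative choices, and edge effects are ignored) reproduces the number:

import random
from math import exp, pi

r = 0.02                                  # coin radius, on a unit table
n_coins = round(1.13 / (pi * r * r))      # coin number at the critical density
coins = [(random.random(), random.random()) for _ in range(n_coins)]

def covered(x, y):
    # A point is covered if it lies within distance r of some coin center.
    return any((x - cx) ** 2 + (y - cy) ** 2 <= r * r for cx, cy in coins)

samples = 10000
empty = sum(not covered(random.random(), random.random())
            for _ in range(samples)) / samples
print(empty, exp(-1.13))                  # both close to 0.32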
If we now continue throwing coins, eventually the entire table will be
covered; the behavior of g as a function of the density n thus has the form
shown in Fig. 3.3a. Of particular interest here is the rate of increase of the
cluster size. Initially, for a small number of coins, g grows linearly, but
then it begins to grow faster and faster. At the percolation point, a single
coin suffices to create a cluster of the dimension of the whole table. There,
at n = nc, the rate of growth x = (Δg∕Δn) reaches its maximum value,
which becomes infinite in the limit of an infinitely large system (N→∞,
A→∞, n = N/A fixed), see Fig. 3.3b.
Fig. 3.3 (a) relative cluster size g vs. coin density n;
(b) growth rate x of the cluster size vs. coin density n.
For mathematicians, critical behavior means that some observables di-
verge, i.e. become infinite. Percolation thus becomes a critical process;
close to the critical density, the rate of growth of the cluster size leads to
x = Δg/Δn ∼ 1/|nc − n|^α → ∞,    (3.3)

where |x| denotes the absolute value of x; the rate x diverges when we approach
nc from above as well as from below. We thus find a divergence of the
form 1/0, which can be further specified with the help of the so-called
critical exponent α. In the linear region, for small densities n, the cause,
an increase in density, and the effect, the growth of the relative cluster
size g, are comparable. In the critical region, a tiny microscopic cause,
one more coin, produces great macroscopic effects, the appearance of the
global land bridge of coins. In this sense, percolation is indeed a form of
complexity.
At this point, it seems natural to ask how critical behavior can arise
for a system of apparently non-interacting objects, the randomly thrown
coins. Some reflection will show that indeed the size of the coins plays
the role of the interaction: the size of a cluster is crucially determined by
the size of the individual coins. This is seen most clearly in equation (3.2),
in which the critical density is a universal number in terms of
the coin size.
Site Percolation
As mentioned, percolation schemes apply to numerous physical situa-
tions. One quite striking example is given by the spreading of forest fires,
described in detail in the cited book by Stauffer and Aharony. We briefly
sketch the idea here; it is based on site percolation, in contrast to the con-
tinuum case we have considered so far, and more like the game of Go. A
connected cluster is now defined as consisting of adjacent occupied sites
(nearest neighbors only, not diagonal), and percolation occurs if a clus-
ter connects the opposite sides of the lattice. For simplicity, we consider
a plane square lattice, each site of which may or may not be occupied.
We denote the probability that a square is occupied by p, so that (1 − p)
applies to empty sites. Actual forests have p < 1. Two typical configura-
tions are illustrated in Fig. 3.4; in Fig. 3.4b, the trees without any nearest
neighbors are marked by black circles.
Fig. 3.4 Dilute (a) and percolating (b) site configurations.
We start from Fig. 3.4a, with the green circles denoting trees; the rest
is empty. Now assume that lightning strikes a tree and sets it on fire;
we mark it red, see Fig. 3.5. To study the result, we consider successive
time steps (“sweeps”) for the lattice, starting with tree x being ignited in
time step 1. If that tree has one or more nearest neighbors, such as the
one marked x, these will also catch fire, while tree x now is burnt out and
marked black (time step 2). In time step 3, the process continues; since
the two neighbors of x do not have any nearest neighbors, they burn out
and that is the end: after time step 3, only three trees are destroyed, the
majority remains intact.
Fig. 3.5 The time step evolution of the fire in a dilute forest.
On the other hand, given configuration 3.4b, after sufficiently many
time steps, all but four trees are destroyed (those marked by black circles
in Fig. 3.4b), basically the forest is finished.
It is thus evident that the degree of survival of the forest depends
on whether or not there is site percolation. In the limit of a large
two-dimensional lattice, the critical percolation density is (numerically)
found to be pc = 0.59 ± 0.01. As a result, we can predict that for
p < pc, the forest fire will end after a finite time, with the bulk of the
forest remaining unharmed. On the other hand, for p > pc, the fire will
continue until essentially the entire forest (except for some isolated trees)
is destroyed.
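A minimal simulation of this fire-spreading rule (Python; the lattice size and the occupation probabilities are illustrative choices) shows the two regimes on either side of pc. Below pc the fire typically dies out after consuming a small cluster; above pc it destroys most of the forest.

import random

def burned_fraction(p, size=100):
    # Occupy each site with probability p, ignite one random tree, and
    # spread the fire to nearest neighbors, sweep by sweep, until it stops.
    forest = [[random.random() < p for _ in range(size)] for _ in range(size)]
    trees = [(i, j) for i in range(size) for j in range(size) if forest[i][j]]
    if not trees:
        return 0.0
    front = [random.choice(trees)]       # the tree struck by lightning
    burnt = set(front)
    while front:
        new_front = []
        for i, j in front:
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if (0 <= ni < size and 0 <= nj < size
                        and forest[ni][nj] and (ni, nj) not in burnt):
                    burnt.add((ni, nj))
                    new_front.append((ni, nj))
        front = new_front
    return len(burnt) / len(trees)

for p in (0.45, 0.59, 0.70):             # below, near, above the critical density
    print(p, round(burned_fraction(p), 3))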
For completeness we note that besides the two forms considered here,
continuous and site percolation, a third form is defined through bond
percolation. The coins (or equivalent) which were randomly placed in
the continuous case and on the lattice sites for site percolation are now
positioned on the links or bonds of the lattice. This is the only one of
the three which can be solved analytically, and the percolation density is
found to be pc = 0.5.
Further Reading
Percolation was first introduced in
S. R. Broadbent and J. M. Hammersley, Percolation Processes, Mathemat.
Proc. Cambridge Philosophical Society 53 (1957) 629.
The standard introduction and text is perhaps
D. Stauffer and A. Aharony, Introduction to Percolation Theory, Taylor & Francis, London
1985.
Later and more formal presentations are:
B. Bollobas and O. Riordan, Percolation, Cambridge University Press
2021.
G. Grimmett, Percolation, Springer, Heidelberg 1999.
H. Kesten, What is Percolation?, Notices American Math. Society 53
(2006) 57.
4. The Nature of Forces
Ich bin ein Teil von jener Kraft, die stets das Böse will, und stets
das Gute schafft.
(I am a part of that force which always wants evil but always
creates good.)
Johann Wolfgang von Goethe, Faust I
Falling Stones
Among the forces we encounter in nature, gravity is certainly the most
prominent. It defines the weight of our bodies, and it limits how high
we can jump. We do work against it when lifting a stone, and gravity
makes the stone fall back down when we let it go. The realization that
the same force that causes an apple to fall from a tree also holds the earth
in orbit around the sun formed the beginning of modern physics. New-
ton’s law, stating that two bodies of masses m and M are attracted by the
force of gravity inversely proportional to the square of their separation
distance r,

Fg = G mM/r²    (4.1)
provided us with the prototype picture of forces in nature; the propor-
tionality constant G, the universal gravitational constant, is the same
wherever masses attract each other, in outer space as well as on earth
or in the solar system. The equation (4.1) allows us to calculate the tra-
jectory of cannon balls, the path of the moon around the Earth or that of
the Earth around the sun, and we use it today to determine the power of
rocket engines to reach outer space. The masses in equation (4.1)
represent the resistance of the bodies against being moved, and there is no small-
est or largest value for such a mass. If in some distant galaxy there is a
star encircled by planets (and there are presumably billions of them), the
motion of these planets is governed by equation (4.1). It is truly universal.
Let us recall at this point that the effect of a force is described by New-
ton’s second law, stating that F = ma, where a is the acceleration experienced
by the mass m. Combining this with the force of gravity (4.1), we obtain

a = GM/r²    (4.2)

This is Galileo Galilei’s celebrated result: the motion of a falling object on Earth
depends only on the mass M of the Earth and its radius r, giving
g = 9.8 m/s² for the acceleration of gravity. The mass of the object itself
does not enter, everything falls the same way (ignoring air resistance,
of course).
The next interaction to appear in physics was the electric force. If
one suspended two lightweight balls separated by a distance r and then
touched each of them with an amber rod rubbed with a wool cloth, they
repelled each other. Using an amber rod on one and a glass rod on the
other led to attraction. The strength of attraction or repulsion depended
on the amount of rubbing on the rod, and it was found to vary as the
inverse square of the separation. The result of this observation was
Coulomb's law, the electric counterpart of Newton's law:

F_q = k_e q_1 q_2 / r²,   (4.3)
where q_i measured the amount of electricity transferred by the touch
of the rod; it could be positive or negative (amber or glass), and the
resulting force is repulsive for like signs and attractive for unlike ones. It sub-
sequently turned out, however, that there was one big difference between
the two laws: the amount of electricity was given in terms of basic elec-
tric charges. Unlike mass, electricity had a smallest unit, +e or −e, and the
values q_i above were multiples of these basic charges. So the electric in-
teraction was given as a fundamental force between elementary electric
charges,
F_e = k_e e_1 e_2 / r²   (4.4)
The Coulomb constant k_e here plays the role of the gravitational con-
stant G in equation (4.1) and is also universal. Equation (4.4) led physics
back to its start in ancient Greece: all matter is made of some small-
est possible units (“atoms”), coupled by universal forces. So it seems
only natural that there is a smallest electric charge. Gravity, on the other
hand, is also universal, but there is no smallest unit. And in contrast to
falling objects, the motion of a massive electric charge does depend on its
mass.
Subsequent studies led to two further forms of force. The nuclear
or strong interaction held nucleons (protons and neutrons) together to
form nuclei. Its range is much shorter than that of the two inverse-square
forces above, but again there are smallest possible “charges”—the basic
interaction partners are nucleons. And when two protons get close enough
to each other, their nuclear attraction is by many orders of magnitude
stronger than the electric repulsion. In the present theory of strong
interactions, quantum chromodynamics, the nucleons in turn become
composites of quarks, making these the basic entities. The functional
form of the force changes, but the structure of elementary charges in-
teracting by fundamental forces remains. It also does so for the fourth
and last form of interaction, the weak interaction governing radioac-
tive decays. It attributes a basic lepton charge to electrons, muons and
neutrinos.
So if we want to attribute the structure of matter to elementary con-
stituents interacting through fundamental forces, we find that three of
the four interactions so far observed in nature, strong, electromagnetic
and weak, follow this pattern. Gravity, in contrast, does not provide
us with a basic charge. It is also considerably weaker than the others:
the electric repulsion between two protons is by a factor of about 10³⁶ stronger
than their attraction by gravity. And at still closer distances, the nuclear
attraction is yet many orders of magnitude larger.
This means that on the one hand, our present understanding of grav-
ity does not provide us with a fundamental gravitational “charge,” and
on the other hand, we have no way to test gravity alone on the level
of single elementary particles. We expect that two protons, or a proton
and an antiproton, are attracted to each other by gravity according to
Newton’s law, equation (4.1). But we have no way to test this, since at
short distances their nuclear and at larger distances their electromagnetic
interaction are overwhelmingly larger.
As already mentioned, we have today the “standard model,” unifying
strong and weak nuclear forces with those of electromagnetism, albeit
with many constants. But gravity simply does not seem to fit into such
a scheme. Attempts to resolve this remaining enigma are continuing
throughout the world, pursued by numerous brilliant scientists, though so
far without success.
Collective Gravity
There is another aspect in which gravity differs basically from all other
forces. A system of many positive (or negative) electric charges is
unstable—repulsion will drive it apart. Nuclear forces are very short-
ranged, so that at very close distances, they can overcome this repulsion
and allow the formation of nuclei, but only up to a certain size. Eventually
the repulsion wins, and so the size of nuclei is intrinsically limited.
In a system of positive and negative charges in equal numbers, shielding
creates finite-size regions of electric neutrality; seen from afar, such
a system is electrically neutral, so that the range of the electromagnetic
force is limited for stable systems.
In contrast, gravity is additively attractive: a large system of many
masses looks from the outside like one big mass; there is no repulsion.
This is particularly relevant for galaxies—cosmic systems consisting of
millions or even billions of stars. The motion of stars in such a galaxy is
governed by Kepler’s rule: they orbit as if they were moving around one
big mass, made up of all the stars further in. In other words, gravity, and
only gravity, does not allow a distinction between a single massive source
or a collection of many individual masses. It is this feature which has led
to the discovery of a striking new form of matter, observed through a
detailed study of the motion of stars in galaxies. It is a form of matter
which until today has remained completely enigmatic—we have no idea
what its origins are.
Dark Matter
Initially, Kepler’s rule specified the orbital velocity v of planets around
the sun: it arises by equating the centrifugal force of the orbiting planet
to the gravitational attraction of the sun, giving
v² = GM / d,   (4.5)
where M denotes the mass of the sun and d the distance between planet
and sun. In Fig. 4.1 it is seen that the solar system (with the sun so much
heavier than all of the planets) is in good agreement with this rule: the
closer a planet is to the sun, the faster is its orbital velocity and hence
the shorter is its “year.” The figure includes Pluto, which because of its
small size had been removed from the list of planets—today it is listed as
a minor planet.
Fig. 4.1 Kepler’s Rule for the solar system: orbital velocity (km/s) versus
distance from the sun (billion km), from Mercury out to Neptune and Pluto,
for M_S = 2×10³⁰ kg.
In the case of the motion of stars in galaxies, instead of planets in the
solar system, the effective mass seen by a given star is that of all other
stars in a sphere around the center of the galaxy and a radius determined
by the distance of the star from the galactic center. The constant mass of
the sun is thus replaced by an increasing total galactic mass as we move
47. 32 more than the sum of the parts
further from the center. The increase comes to an end as we reach the
edge of the galaxy, and from there on, for outlying stars, we expect the 1/√d
decrease of the velocity seen in the solar system—the crucial central mass now remains
constant; it is the mass of all the stars in the galaxy. For the distribution
of the stellar velocities we thus expect the form shown in Fig. 4.2, labeled
Kepler’s rule. Initially, the velocity increases as more and more stars come
into play, and it then decreases as we pass the edge of the galaxy.
This, however, was not at all what one found. Instead, the orbital ve-
locities of outlying stars remained essentially constant, even at very large
distances from the center. What force held them in their respective or-
bits at such velocities? One could estimate the effective mass of all the
stars in the galaxy through the amount of emitted light, and this mass
was not nearly sufficient to account for the motion of the outlying stars.
Back in 1933 the Swiss astrophysicist Fritz Zwicky therefore concluded
that the actual mass of any galaxy must be some five to ten times higher
than its visible mass. His idea was confirmed, largely through the work
of the American astronomers Vera Rubin, Kent Ford and collaborators.
Each galaxy must be embedded in a huge cloud of invisible mass, con-
sisting of unknown dark matter. And the amount of dark matter in this
cloud must increase linearly with the distance from the center, to give
rise to the observed pattern shown in Fig. 4.2.
Fig. 4.2 Orbital velocities of stars in a galaxy: the velocity of inner stars
rises with distance from the center, but instead of the falloff predicted by
Kepler’s rule, the outlying stars orbit at an essentially constant velocity.
So we encounter yet another enigma of gravity: not only is there no
“smallest charge,” but the universe contains up to ten times more matter
of a completely unknown kind, subject only to gravitation. We can’t see
it and we have no idea from what kind of constituents it arises. Over the
years, the search for dark matter particles has therefore become one of
the main topics of high energy physics—so far, without success. At the
European Center for Nuclear Research CERN in Geneva, Switzerland,
this search is continuing in the hope of eventually identifying some new
type of elementary particle, subject only to gravity.
Emergent Forces
In view of these special features of gravity, it is perhaps not so surprising
that over the past twenty years another point of view has appeared:
perhaps the force of gravity is in fact different in nature from the others.
Perhaps gravity is by nature a collective force: not a
fundamental interaction between two individual elementary particles,
but one arising only so as to maximize the entropy of many-body
systems. We want to follow this track in a subsequent chapter; it was
pioneered some twenty years ago by the Dutch theorist Erik Verlinde.
Let us turn back to Fig. 2.3, where the opening of the divider led the
gas to rush out into the previously empty section—rushing out as if driven
by some mystical force. Since the only result of the rush was to increase
the entropy of the medium (the temperature and hence the energy of the
molecules remained constant), we could speak of an entropic force. This
force only appears for many-body systems—an individual atom does not
rush out, it simply wanders around. Hence the outward drive of the gas
is also often referred to as an emergent force—it is not present on an
individual level, but emerges due to the collective effect of many.
We want to look at this in a little more mathematical detail. The first
law of thermodynamics, introducing heat as a form of energy, states that
the total energy E of a system decreases by the work dW it does and
increases by the amount of heat dQ put into it: dE = −dW + dQ. The
work is in turn fixed by the pressure P exerted to change the volume,
dW = PdV. Since the input of heat also increases the number of possible
microstates, it leads to a growing entropy, with dQ = TdS. The resulting
form of the first law, dE = –PdV + TdS, becomes for a one-dimensional
expansion along the direction x
dE = −PdV + TdS = −Fdx + TdS, (4.6)
since the pressure is the force per area A, so that P = F/A, and dV = Adx.
If the overall change of energy is zero, the constituents of the system
behave as if an entropic force
F = T (dS/dx)   (4.7)
is pushing them into the new empty section of the container. We should
emphasize the “as if”: there is really nothing pushing on any given
molecule. This is a force of a different nature than the ones considered in
the standard model. It exists only for many-body systems, characterized
by an average kinetic energy per particle (specified by the temperature),
a volume, and a total number of microstates (counted by the entropy). The
force arises from the change of entropy, given a change of volume at
constant temperature.
In conclusion: we can divide the forces found in nature into two cate-
gories: fundamental forces, acting between pairs of discrete elementary
charges, and emergent forces, arising only through the collective effort
of many constituents in order to increase their entropy.
What Is the Charge of Gravity?
In contrast to the numerous attempts to unify gravity with the other three
fundamental forces, the Dutch physicist Erik Verlinde has pioneered the
formulation of gravity as an emergent entropic force. He suggested that
it does not exist intrinsically, like nuclear or electromagnetic forces, but
that it emerges through the combined effort of many interacting con-
stituents, like pressure or temperature. Matter has a microstructure: it
consists of smallest fundamental units interacting through fundamental
forces. We had seen that the interaction of many such constituents can
lead to an additional effective force due to the inherent drive of the sys-
tem to maximize its entropy. To show that gravity is such a force, one
has to derive Newton’s law of gravitation (or more generally, Einstein’s
general theory of relativity) from the basic laws of thermodynamics. Not
surprisingly, this will require some preliminaries.
Already Max Planck had attempted to devise some smallest-
dimensional unit for gravity, to make up for the absence of such a unit