Dictionary of the History of Ideas: Studies of Selected Pivotal Ideas
COSMOLOGY SINCE 1850
I. INTRODUCTION
The last forty years of the nineteenth century
were
among the most remarkable in the history of science,
for this was
a period of amazing scientific achievements
and contradictions; on the one
hand classical physics
and astronomy were enjoying some of their
greatest
successes during this period, but at the same time
observational and experimental data, which were ulti-
mately to overthrow the classical laws of physics,
were
slowly being collected. Until the year 1860 physics and
astronomy
were dominated by Newton's concepts of
space and time and by his laws of
mechanics and
gravitation; these seemed sufficient to explain observations ranging all the way from the motion of the planets
to the
behavior of the tides on the earth. The great
eighteenth- and
nineteenth-century mathematicians
such as Euler, Laplace, Lagrange,
Hamilton, and
Gauss had cast the Newtonian laws into beautiful and
magnificent mathematical forms which had their
greatest applications to
celestial mechanics. Astrono-
mers happily
used these techniques to show how excel-
lent
was the agreement between observation and the-
ory. The two domains of physics that still lay outside
the Newtonian
laws—electromagnetism and optics—
were also soon to
be incorporated into a satisfying
theoretical structure. In the year 1865,
James Clerk
Maxwell published his famous papers on his electro-
magnetic theory of light, which
gave a precise and
beautiful mathematical formulation of Faraday's ex-
perimental discoveries, unified
electricity, magnetism,
and optics, and opened up the whole field of electro-
magnetic technology.
Thus, at the end of the first decade of the last forty
years of the
nineteenth century, everything seemed to
fall neatly into place in the
world of science. To the
scientists of that period, the universe appeared
to be
a well ordered arrangement of celestial bodies moving
about in
an infinite expanse of absolute space, and with
all the events in the
universe occurring in a unique
and absolute sequence in time. There was no
question
at that time as to the correctness of this Newtonian
universe
based on the concepts of absolute space and
time; only the observational
and experimental details
were lacking to make the picture complete,
and
everyone was confident that, with improved technol-
ogy, these details would be obtained in time.
This absolute concept of the universe and of the laws
of nature was very
satisfying to the late nine-
teenth-century man, who saw in the orderly and abso-
lute scheme of things the demonstration of the
Divine
Omnipotence which he worshipped and which gave
him the reason
for his existence; moreover, the infini-
tude
of space and time required by the Newtonian
universe was also required by
the concept of an in-
finitely powerful
deity, as described by Alexander
Pope:
Who sees with equal eye, as God of all,
A hero perish or a sparrow fall;
Atoms or systems into ruin hurl'd,
And now a bubble burst, and now a world.
(An Essay on Man I. 87-90)
II. DISCREPANCIES IN THE
NEWTONIAN UNIVERSE
But even while this neat, orderly scheme of the
universe was being eagerly
incorporated into Victorian
philosophical and social concepts, its very
basis was being undermined by observational and experimental data, and by logical analysis, in four different realms
of physics and astronomy: in the realm of optics, the
experiments of Michelson and Morley on the speed of
light were to destroy the Newtonian concepts of abso-
lute space and time and to replace them by the Ein-
steinian space-time concept (the special theory of rela-
tivity); in the realm of radiation, the discoveries of the
properties of the radiation emitted by hot bodies were
to upset the Maxwell wave-theory of light and to
introduce the quantum theory (the photon) with its
wave-particle dualism; in the realm of observational
astronomy, the discrepancy between the deductions
from Newtonian gravitational theory and the observed
motion of Mercury (the advance of its perihelion)
indicated the need for a new gravitational theory
which Einstein produced in 1915 (the general theory
of relativity); finally, in the realm of cosmology, var-
ious theoretical analyses showed that the nine-
teenth-century models of the universe, constructed
with Newtonian gravitational theory and space-time
concepts, were in serious contradiction with stellar
observations.
Although the investigation of each of these depar-
tures from classical physics is of extreme importance
and each
one has an important bearing on the most
recent cosmological theories, we
limit ourselves here
to the cosmological realm and, where necessary in
our
discussion, use the results of modern physics without
concern
about how they were obtained. However,
before we discuss the difficulties
inherent in Newtonian
cosmology, we must consider one other important
nineteenth-century discovery which, at the time,
seemed to have no bearing
on the structure of the
universe but which ultimately played a most
important
role in the development of cosmology. This was the
discovery
of the non-Euclidean geometries by Gauss,
Bolyai, Lobachevsky, Riemann, and
Klein. At the time
that these non-Euclidean geometries were
discovered,
and for many years following, scientists in general
considered them to be no more than mathematical
curiosities, with no
relevance to the structure of the
universe or to the nature of actual
space. Most mathe-
maticians and
scientists simply took it for granted
that the geometry of physical space
is Euclidean and
that the laws of physics must conform to Euclidean
geometry.
This attitude, however, was not universal and Gauss
himself, the spiritual
father of non-Euclidean geometry,
proposed a possible (but in practice,
unrealizable) test
of the flatness of space by measuring the interior
angles
of a large spatial triangle constructed in the neigh-
borhood of the earth. Also, the
mathematician W. K.
Clifford, in The Common Sense of the
Exact Sciences
(1870; reprint, New York, 1946), speculated that the
geometry of actual space might not be Euclidean. He
proposed
the following ideas: (1) that small portions
of space are, in fact, of a
nature analogous to little
hills on a surface which is, on the average,
flat—
namely, that the ordinary laws of geometry are not
valid in them; (2) that this property of being curved
or distorted is
continually being passed on from one
portion of space to another after the
manner of a wave;
(3) that this variation of the curvature of space is
what
really happens in that phenomenon which we call
motion of matter,
whether ponderable or ethereal; (4)
that in the physical world nothing else
takes place but
this variation, subject (possibly) to the laws of con-
tinuity.
Clifford summarized his opinion as follows:
The hypothesis that space is not homaloidal and, again, that
its
geometrical character may change with time may or may
not be destined
to play a great part in the physics of the
future; yet we cannot refuse
to consider them as possible
explanations of physical phenomena because
they may be
opposed to the popular dogmatic belief in the
universality
of certain geometrical axioms—belief which has
arisen from
centuries of indiscriminating worship of the genius of
Euclid.
These were, indeed, prophetic words, for, as we shall
see, in the hands of
Einstein the non-Euclidean geome-
tries
became the very foundation of modern cosmo-
logical theory. But let us first examine the flaws and
difficulties inherent in the Newtonian cosmology of the
nineteenth century.
III. CONTRADICTIONS IN THE
NEWTONIAN COSMOLOGY
We first consider what is now called the Olbers
paradox, a remarkable
conclusion about the appear-
ance of the
night sky deduced by Heinrich Olbers in
1826. Olbers was greatly puzzled by
the fact that the
night sky (when no moon is present) appears as dark
as it does instead of as bright as the sun, which, he
reasoned, is how it
should appear if the basic New-
tonian
concepts of space and time were correct. In
deducing this paradox, Olbers
assumed the universe to
be infinite in extent, with the average density and
the
average luminosity of the stars to be the same every-
where and at all times. He assumed, further, that
space
is Euclidean and that there are no large systematic
movements of
the stars. With these assumptions we
can see, as Olbers did, that each
point of the night
sky should appear as bright as each point of the
surface
of the sun (or any other average star). The reason for
this is
that if the stars were distributed as assumed,
a line directed from our eye
to any point in space
would ultimately hit a star so that the whole sky
should
appear to be covered with stars.
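The line-of-sight argument can be put in rough numbers. The sketch below is not Olbers' own calculation; the stellar density and stellar radius it assumes (roughly those of the sun's neighborhood) are stand-in values chosen only to show the reasoning.

```python
import math

# Sketch of Olbers' line-of-sight argument; the density and radius below are
# assumed, illustrative values, not figures from the article.
PARSEC_CM = 3.086e18          # centimeters in one parsec
LIGHT_YEAR_CM = 9.46e17       # centimeters in one light year

n_stars = 0.1 / PARSEC_CM**3  # assumed: about one star per ten cubic parsecs
r_star = 7.0e10               # assumed stellar radius (the sun's, in cm)

# A sight line travels on average 1/(n * pi * r^2) before striking a star.
mean_free_path_cm = 1.0 / (n_stars * math.pi * r_star**2)
print(f"a sight line meets a star after ~{mean_free_path_cm / LIGHT_YEAR_CM:.0e} light years")
# Beyond that distance every direction would terminate on a stellar surface,
# so in an unchanging, infinite universe the whole night sky should blaze
# like the face of the sun.
```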
Until quite recently this apparent paradox was taken
as a very strong
argument against an infinite Newtonian
universe (or at least against
Olbers' assumptions) but
E. R. Harrison (1965) has shown that Olbers' conclu-
sions are contrary to the principle of
conservation of
energy. To understand this, we first note that a star
(like the sun) can radiate energy at its present rate for
only a finite
time because only a finite amount of
nuclear fuel is available for this
release of energy. Now
if we assume that stars (or galaxies) are
distributed
everywhere the way we observe them to be in our part
of
the universe, it would take about 10²³ years before
the radiation from
these stars would fill the universe
to give the effect deduced by Olbers.
But all stars
would have used up their nuclear fuel long before this
time and their luminosities would have changed drasti-
cally. Thus Olbers' assumption that the luminosities
of
the stars do not change during their lifetimes is not
tenable.
Harrison has shown that the radiation emitted
by stars in a period of about
10¹⁰ years (which, on the
basis of modern theories we may take as a
reasonable
estimate of the age of the universe) should give just
about
the kind of night sky we observe.
Although Harrison's analysis of the Olbers paradox
removes this flaw in a
static infinite Newtonian uni-
verse, another
difficulty, first pointed out by Seeliger
in 1895 and also by C. G.
Neumann, still remains. In
a static Newtonian universe (one which is not
expand-
ing), with stars (or galaxies)
extending uniformly out
to infinity, the gravitational force at each point
must
be infinitely large, which is contrary to what we actu-
ally observe. This difficulty with a Newtonian
universe
can be expressed somewhat differently by considering
the
behavior of the elements of matter in it. These
elements could not remain
fixed but would move to-
wards each other so
that the universe could not be
static. In fact, a Newtonian universe can
remain static
only if the density of matter in it is everywhere zero.
To overcome this difficulty Neumann (1895) and
Seeliger (1895) altered
Newton's law of gravity by the
addition of a repulsive term which is very
small for
small distances but becomes very large at large dis-
tances from the observer. In this way a
static, but
modified, Newtonian universe can be constructed.
We may also exclude a Newtonian universe of in-
finite extent in space but containing only a finite
amount of
matter. The principal difficulty with such
a universe is that, in time,
matter would become in-
finitely dispersed or
it would all coalesce into a single
globule—contrary to
observation.
IV. COSMOLOGY AND THE THEORY
OF RELATIVITY
When it became apparent at the end of the nine-
teenth century that pure Newtonian theory (that is,
without the addition of a repulsive term to Newton's
law of
force) could not lead to a static model of the
universe, most scientists
lost interest in the cosmologi-
cal problem
and very little work was done in this field
until the whole subject was
dramatically reopened by
Einstein in 1917, when he published his famous
paper
on relativistic cosmology. New life was suddenly given
to
cosmology by the appearance of this paper, since
it now appeared that the
flaws in Newtonian cosmol-
ogy would be
eliminated with the introduction of the
Einsteinian space-time concept. As
we shall presently
see, this is indeed true, but difficulties still arise
because
a number of different model universes can be obtained
from
general relativity theory, and we are then left
with the problem of
deciding which of these is the
correct model. This is a somewhat
unsatisfactory situa-
tion since one of the
purposes of a theory is to restrict
the theoretical models that can be
deduced from it to
just those that we actually observe in nature; but
in
spite of this drawback, we must turn to the general
theory of
relativity for an understanding of cosmology,
since it is the best theory
of space and time that we
now have and Newtonian theory has certainly
been
disproved. However, before we can discuss relativistic
cosmology
meaningfully, we must understand the basic
concepts of the theory of
relativity itself.
This theory was developed in two stages: the first
(1905) is called the
special or restricted theory of
relativity and the second (1915) is called
the general
theory. The basic feature of the special theory is that
all observers moving with uniform speed in straight
lines relative to the
distant background stars (such
observers are said to be moving in inertial
frames of
reference) are equivalent in the eyes of nature, in the
sense that the laws of nature are the same for all of
them. Put
differently, the special theory states that an
observer in an inertial
frame cannot determine his state
of motion by any kind of experiment (or
observation)
performed entirely in his frame of reference (that is,
without referring to the background stars). Before the
time of Einstein,
this formulation of the special theory
was accepted by physicists only
insofar as it applied
to the laws of Newtonian mechanics. They
believed
that an observer in an inertial frame could not detect
his
uniform motion by means of any mechanical exper-
iment, but they assumed that the principle did not
apply to
optical phenomena and that an inertial ob-
server
could detect his motion through the ether (whose
existence had been postulated to account for the prop-
agation of light) by observing the way light moves
(that
is, by measuring the speed of light) in various directions
in
his frame of reference. Physicists believed this to
be so because the
Newtonian concepts of absolute
space and absolute time lead precisely to
this very
conclusion.
One can deduce from these concepts that the speed
of light is not the same
in all directions, as measured
by a moving observer—the measured
value of the
speed of light should be a maximum for a beam of
light
moving against the motion of the observer and
a minimum for a beam moving
in the same direction
as the observer. This deduction, however, is
contrary
to the experimental evidence. In 1887 Michelson and
Morley
demonstrated experimentally that the speed of
light is the same in all
directions for all inertial ob-
servers. Thus
the constancy of the speed of light for
all such observers must be accepted
as a law of nature.
This means, as emphasized by Einstein, that the
special
theory of relativity must apply to optical phenomena
just as
it does to mechanical phenomena, so that an
observer in an inertial frame
cannot deduce his state
of motion from optical phenomena. Since this is
con-
trary to the deductions from the
Newtonian concepts
of absolute space and absolute time, Einstein
rejected
these absolute Newtonian notions and replaced them
by
relative time and relative space.
To illustrate the essential difference between the two
concepts (absolute
and relative) we may consider two
events separated in space by a certain
distance d and
in time by the interval t as measured by some particular
observer in an inertial
frame. Now, according to the
absolute concepts of Newton, all other inertial ob-
servers recording these two events would find the same
distance d between them and the same interval t. This
is what Einstein denies, for, as we have noted, this
contradicts the observed fact that the speed of light
is the same in all
directions for all observers. This
means that the distance d and the time interval t are
different for observers moving with different speeds,
so that space and
time separately vary as we pass from
one inertial frame to another. The
special theory of
relativity replaces the separate absolute Newtonian
concepts of space and time with a single absolute
space-time concept for
any two events, which is con-
structed as
follows by any observer: Let this observer
measure the distance between
these two events and
square this number to obtain d². Next, let him measure the time interval between the two events and square this to obtain t². He now constructs the numerical quantity d² - c²t², where c is the speed of light.
This
quantity, which is called the square of the
space-time
interval between the two events, is the same for all
observers moving in different inertial frames of refer-
ence.
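A small numerical sketch may make the invariance concrete. The two events and the relative speed below are arbitrary examples; the transformation used is the standard Lorentz transformation of special relativity.

```python
import math

c = 299_792_458.0  # speed of light in meters per second

def boost(x, t, v):
    """Standard Lorentz transformation of an event (x, t) into a frame moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / c**2)

# Two events: 300 meters and 2 microseconds apart in one inertial frame (arbitrary values).
(x1, t1), (x2, t2) = (0.0, 0.0), (300.0, 2e-6)

# The same pair of events as recorded in a frame moving at 60% of the speed of light.
v = 0.6 * c
(x1p, t1p), (x2p, t2p) = boost(x1, t1, v), boost(x2, t2, v)

interval_sq = lambda dx, dt: dx**2 - (c * dt)**2
print(interval_sq(x2 - x1, t2 - t1))      # d^2 - c^2 t^2 in the first frame
print(interval_sq(x2p - x1p, t2p - t1p))  # the same number in the moving frame
# The distance and the time interval each change from frame to frame,
# but the combination d^2 - c^2 t^2 does not.
```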
We see from this that the absolute three-dimensional
Newtonian spatial
universe, with its events unfolding
in a unique (absolute) temporal
sequence, is replaced
by a four-dimensional space-time universe in which
the
spatial separation and the time interval between any
two events
vary from observer to observer, but in
which all observers measure the same space-time in-
terval. We may state this somewhat differently by
saying that the universe of the special theory of rela-
tivity is a four-dimensional space-time universe gov-
erned by Euclidean
geometry. The last part of this
statement is important since it is
equivalent to saying
that the square of the space-time interval in a
universe
governed by special relativity is exactly d² - c²t². In such
a
universe, free bodies (bodies that are not pulled or
pushed by ropes, or
rods, or by some other force) move
in straight lines in space-time.
We must now see how this theory, which is restricted
to observers in
inertial frames of reference, is to be
extended when we introduce
gravitational fields and
observers undergoing any arbitrary kind of
motion
(rotation, linear acceleration, etc.). That the theory as
it
stands (that is, the special theory of relativity) is not
equipped to treat
observers in accelerated frames of
reference or to deal with gravitational
fields can be
seen easily enough if we keep in mind that the special
theory is based on the premiss that all inertial observers
are equal in the
eyes of nature and that there is no
observation, mechanical or optical,
that an inertial
observer can make to indicate how he is moving.
Now it appears at first sight that such a statement
cannot be made about
observers in accelerated frames
of reference since the acceleration causes
objects to
depart from straight line motion. If one is in a train
which is moving at constant speed in a straight line,
objects in the train
behave just as they would if the
train were standing still; thus one can as
easily pour
coffee into a cup when the train is moving with con-
stant speed as when it is at rest. But any
departure
from constant motion (that is, any kind of acceleration)
can
at once be detected, because such things as pouring
liquids from one vessel
into another become extremely
difficult. We should therefore be able to
detect that
we are in an accelerated frame by observing just such
phenomena. It thus appears that inertial frames of
reference and
accelerated frames are not equivalent.
This, then, at first blush, would
seem to eliminate the
possibility of generalizing the theory of relativity.
But
we shall presently see just how Einstein overcame this
difficulty.
That the law of gravity, as stated by Newton, is not
in conformity with the
special theory of relativity, is
evident from the fact that, according to
this theory,
clocks, measuring rods, and masses change when
viewed
from different inertial frames of reference. But,
according to Newton's law
of gravity, the gravitational
force between two bodies is expressed in
terms of the
masses of the bodies and the distance between them
at a
definite instant of time. Hence this force can have
no absolute
meaning—in fact, there is no meaningful
way for an inertial
observer to calculate this force, since observers in different inertial frames assign different values to the masses of the two bodies and the distance between
them. This breakdown of the Newtonian law of gravity,
and the impossibility of incorporating accelerated
frames of reference in the framework of special rela-
tivity, convinced Einstein that a generalization of the
theory of relativity was not only necessary, but possi-
ble. For if it were not possible to generalize the theory,
a whole range of observers and of physical phenomena
related to gravity would not be expressible in terms
of a space-time formulation.
To see how Einstein set about generalizing his the-
ory, we may first note that two apparently unrelated
classes of
phenomena—those arising from accelerations
and those arising
from gravitational fields—are ex-
cluded from the special theory. Einstein therefore
proceeded on the
assumption that these two groups
of phenomena must be treated together and
that a
generalization of the theory of relativity must stem
from some
basic relationship between gravitational
fields and accelerated frames of
reference. This basic
relationship is contained in Einstein's famous
principle
of equivalence, a principle which permits one to state
that
all frames of reference (in a small enough region
of space) are equivalent
and that in such a region there
is no way for an observer to tell whether
he is in an
inertial frame of reference, in an accelerated frame,
or
in a gravitational field. Another way of putting this
is that the principle
of equivalence permits one to use
any kind of coordinate system (frame of
reference) to
express the laws of physics. This means, further, that
no law of physics can contain any reference to any
special coordinate
system, for if a law did contain such
a reference, this in itself could be
used by an observer
to determine the nature of his frame of reference.
Thus
all laws must have the same form in all coordinate
systems.
To understand how the principle of equivalence
leads to the general theory,
we must first see just what
the basis of this principle is and what it
states. The
principle itself stems from Galileo's observation that
all
bodies allowed to fall freely (that is, in a vacuum
with nothing impeding
them) fall with the same speed.
This can be stated somewhat differently if
we consider
the mass of a body (the amount of matter the body
contains). This quantity appears in two places in the
laws of Newtonian
physics. On the one hand, it is the
quantity that determines the inertia of
a body (that
is, the resistance a body offers to a force that tries to
move it). For this reason, the quantity is referred to
as the inertial mass of the body. But the concept of
mass
also appears in Newton's formula that expresses
the gravitational force
that one body exerts on another;
this mass is then referred to as the gravitational mass
of the body. The fact that all
bodies fall with the same
speed in a gravitational field means that the inertial
mass and
the gravitational mass of a body must be
equal.
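A minimal sketch of this point, using the ordinary Newtonian formulas and standard values for the earth's mass and radius (the test masses are arbitrary):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the earth, kg
R_EARTH = 6.371e6    # radius of the earth, m

def free_fall_acceleration(inertial_mass, gravitational_mass):
    """Newton: force = G*M*m_grav/r^2; acceleration = force / m_inertial."""
    force = G * M_EARTH * gravitational_mass / R_EARTH**2
    return force / inertial_mass

# So long as the two masses are equal, they cancel and every body falls alike.
for mass in (0.001, 1.0, 1000.0):   # a pebble, a brick, a boulder (kg)
    print(mass, "kg ->", round(free_fall_acceleration(mass, mass), 3), "m/s^2")
# All three print about 9.82 m/s^2: Galileo's observation, and the numerical
# "coincidence" that Einstein took as the clue to the principle of equivalence.
```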
This remarkable fact had been considered as no more
than a numerical
coincidence before Einstein devel-
oped his
general theory of relativity. Einstein started
out on the assumption that
the equality of the inertial
and gravitational masses of a body is not a
coincidence
but, instead, must have a deep significance. To see what
this significance is, consider the way bodies behave in
an accelerated
frame of reference somewhere in empty
space (far away from any masses) and
the way they
behave in a gravitational field (for example, on the
surface of the earth). Owing to their inertial masses,
all the bodies in
the accelerated system behave as
though they were being pulled opposite to
the direc-
tion of the acceleration and they
all respond in exactly
the same way (that is, they all
“fall” with the same
speed). To Einstein, this meant
that there is no way
to differentiate between an accelerated frame of refer-
ence and a frame that is at rest (or
moving with con-
stant speed) in a
gravitational field. This is called the
principle of equivalence. Another
way of stating it is
to say that the apparent force that a body
experiences
when it is in an accelerated frame of reference is
identical with the force this body would experience
in an appropriate
gravitational field; thus inertial and
gravitational forces are
indistinguishable.
Since the principle of equivalence makes it impossi-
ble to assign any special quality or physical significance
to
inertial frames of reference, the special theory
(which is based on the
assumption that inertial frames
are special in the
sense that only in such frames do
the laws of physics have their correct
and simplest
form) must be discarded for a more general theory
which
puts all frames of reference and all coordinate
systems on the same
footing. In such a theory, the laws
of physics must have the same form in
all coordinate
systems. With this in mind, we can now see how
Einstein
constructed his general theory of relativity.
We begin by noting that the
special theory replaces
the concepts of absolute distance d and absolute time
t between events by a single absolute space-time inter-
val whose square is d² - c²t². Consider now a
freely
moving particle as viewed by an observer in an inertial
frame
of reference in a region of space where no
gravitational fields are
present. If this particle moves
a distance d in a
time t, the quantity d² - c²t² must be
the same for all
observers in inertial frames. This simply
means that the natural space-time
path of a free parti-
cle for inertial frames
of reference is a straight line
and that the space-time geometry of the
special theory
of relativity is Euclidean. We may take this
formulation
then as the law of motion (and hence a law of nature)
of a
free particle.
Now if we are to carry out our program of extending
the principle of
relativity to cover observers in gravi-
tational fields and in accelerated frames of reference,
we must say
that this same law of motion (straight line
motion) applies to a body
moving freely in a gravita-
tional field
or in an accelerated frame of reference.
But we know that the space-time
path of a free particle
in a gravitational field (or in a rotating system)
appears
to be anything but straight. How, then, are we to
reconcile
this apparent contradiction? We must re-
define
the concept of a straight line! We are ordinarily
accustomed to think of a
straight line in the Euclidean
sense of straightness, because the geometry
of our
world is very nearly Euclidean and we have been
brought up on
Euclidean geometry. In a sense, we
suffer from the same kind of geometrical
bias concern-
ing space-time as does the man
who thinks the earth
is flat because he cannot detect its sphericity in
his
small patch of ground.
To overcome this parochial attitude, we note that
we can replace the
“straightness” concept by the con-
cept of the shortest distance between two points. We
can now state the law of motion of a free particle as
follows:
A free particle moving between two space-time
points always moves in such a
way that its space-time
path between these two points is shorter than any
other
space-time path that can be drawn between the two
points.
This statement of the law of motion makes no refer-
ence to the way the space-time path of the particle
looks, but
refers only to an absolute property of the
path which has the same meaning
for all observers.
If no gravitational fields or accelerated observers
are
present, the shortest space-time path is d² - c²t² and the
geometry is
Euclidean. But if gravitational fields are
present, the shortest space-time
path of the particle
(that is, its geodesic) is not given by d² - c²t²,
but by
a different combination of d and t because the space-
time geometry is non-Euclidean. The essence of Ein-
stein's general theory is, then, that a gravitational
field
distorts space-time (it introduces a curvature into
space-time)
and the behavior of a free particle (that
is, the departure from Euclidean
straight-line motion)
is not due to a “gravitational
force” acting on the
particle, but rather to the natural
inclination of the
free particle to move along a geodesic. In a sense,
this
is similar to what happens when a ball is allowed to
roll freely
on a perfectly smooth piece of ground. The
ball appears to us to move in a
“straight line,” but
we know that this cannot be so
because it is following
the contour of the earth, which is spherical.
Actually
the ball is moving along the shortest path on the
smooth
surface, which is the arc of a great circle.
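The analogy can be put in numbers. The short sketch below compares the great-circle (surface) distance between two points on a sphere of the earth's radius with the straight chord through the interior; the two cities and their coordinates are rough illustrative values.

```python
import math

R_EARTH_KM = 6371.0

def to_xyz(lat, lon):
    """Cartesian coordinates of a surface point, from latitude/longitude in degrees."""
    p, l = math.radians(lat), math.radians(lon)
    return (R_EARTH_KM * math.cos(p) * math.cos(l),
            R_EARTH_KM * math.cos(p) * math.sin(l),
            R_EARTH_KM * math.sin(p))

def great_circle_km(a, b):
    """Geodesic distance along the surface: the shortest path open to a surface-bound traveller."""
    ax, ay, az = to_xyz(*a)
    bx, by, bz = to_xyz(*b)
    cos_angle = (ax * bx + ay * by + az * bz) / R_EARTH_KM**2
    return R_EARTH_KM * math.acos(max(-1.0, min(1.0, cos_angle)))

def chord_km(a, b):
    """The Euclidean straight line through the earth, shown only for contrast."""
    return math.dist(to_xyz(*a), to_xyz(*b))

new_york, chicago = (40.7, -74.0), (41.9, -87.6)   # approximate coordinates
print("surface geodesic:", round(great_circle_km(new_york, chicago)), "km")
print("straight chord  :", round(chord_km(new_york, chicago)), "km")
# For points this close the two figures nearly coincide, which is exactly why
# the curvature escapes notice on a small patch of ground; the rolling ball is
# nevertheless following the geodesic, the "straightest" path available within
# the surface, just as a free particle follows a geodesic in curved space-time.
```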
From this discussion we see that in the general
theory of relativity, the space-time path of a freely
moving
particle is not d2-c2t2, but some variation of
this, which depends on the
kind of gravitational fields
that are present, and on the acceleration of
our coordi-
nate system. We can therefore go
from the special
theory to the general theory of relativity by
replacing
the space-time interval (d² - c²t²) by the quantity gd² - qc²t², where g and q are
quantities that vary from point
to point. The value of the quantities g and q at any
point for a
given observer will depend on the intensity
of the gravitational field at
that point and on the
acceleration of the frame of reference of the
observer.
Just as the special theory of relativity is based on the
statement that the quantity d² - c²t² is the same for all
observers in
inertial frames of reference, the general
theory of relativity is based on
the statement that the
quantity gd² - qc²t² must be the same for all
observers,
regardless of their frames of reference.
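The bookkeeping of this replacement can be shown schematically. The sketch below is not a solution of Einstein's equations; the departures of g and q from unity are invented numbers, meant only to show how the generalized interval reduces to the special-relativity form when no field is present.

```python
c = 3.0e8  # speed of light, m/s (rounded)

def interval_squared(d, t, g=1.0, q=1.0):
    """Square of the space-time interval g*d^2 - q*c^2*t^2 with potentials g and q."""
    return g * d**2 - q * (c * t)**2

d, t = 1.0e3, 1.0e-5   # an arbitrary separation: 1 km and 10 microseconds

# Far from all matter, g = q = 1 and we recover the special-relativity interval:
print("no field   :", interval_squared(d, t))

# In a weak gravitational field the potentials differ slightly from 1 (made-up values):
print("weak field :", interval_squared(d, t, g=1.000002, q=0.999998))
# The point-to-point variation of g and q is what the general theory reads as
# the curvature of space-time, in place of a Newtonian "force" of gravity.
```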
Now the use of the latter expression as the absolute
space-time interval
instead of the former means that
we pass from Euclidean to non-Euclidean
geometry
in going from the special to the general theory, and
the
quantities g and q (they are also
referred to as the
Einstein gravitational potentials) determine by how
much the geometry at any point of space-time departs
from Euclidean
geometry—in other words, these
quantities determine the
curvature of space time at
each point. If, then, we know how to find g and q,
we can determine the
nature of the geometry in any
region of space-time and hence the path of a
free
particle in that region. The curvature of space-time
thus becomes
equivalent to the intensity of the gravi-
tational field, so that the gravitational problem is re-
duced to a problem in non-Euclidean geometry.
The
next step, then, in this development was to set down
the law that
determines the quantities g and q,
and
this Einstein did in his famous field equations—a set
of ten partial differential equations that show just how
the quantities g and q (there are actually ten
such
quantities, but in the gravitational field arising from
a body
like the sun, only two of these ten quantities
are different from zero)
depend on the distribution of
matter. These gravitational field equations
are the basis
of all modern cosmological theories which we shall
now
discuss.
V. THE EINSTEIN STATIC UNIVERSE
The first great step in the development of modern
cosmology was taken by
Einstein in his famous 1917
paper, in which he set out to derive the
physical
properties of the universe by applying his field equa-
tions to the kind of distribution of
matter that one
might reasonably expect to find in the universe as a
whole. Here Einstein had to introduce some simpli-
fying assumptions, since we have detailed knowledge of only a small region of space (within a few thousand light years
of our own solar system) and we find that the matter
here is concentrated in lumps (the stars) with some dust
and gas between the lumps. Einstein therefore intro-
duced the cosmological principle, which states that,
except for local irregularities, the universe has the same
aspect (the same density of matter) as seen from any
point. This means that what we see in our region of
the universe is pictured as being repeated everywhere,
like a wall-paper or linoleum pattern.
Einstein next replaced the lumpiness of the distribu-
tion of matter (as indicated in the existence of
stars
and galaxies) by a smooth, uniform distribution which
we may
obtain by picturing all the matter in the stars
as smeared out to fill
space with a fog of uniform
density (actually a proton gas with a few
protons per
cubic foot of space). Einstein made one other assump-
tion—that the universe is
static; that is, that the density
of matter does not change with time and
that there
are no large scale motions in the universe. At the time
that Einstein did this work, this assumption appeared
to be eminently
justified because the recession of the
distant galaxies had not yet been
discovered and the
stars in our own neighborhood of space were known
to be moving with fairly small random velocities. With
these assumptions,
Einstein still had to make one im-
portant
extrapolation—he had to extend his field equa-
tions to make them applicable to the entire universe
and not just to a small region of empty space around
the sun.
It is useful here (as a guide in our discussion) to
write down Einstein's
field equations in the form in
which Einstein first used them in his study
of cos-
mology:

Rᵢⱼ - (1/2)Rgᵢⱼ = (8πG/c⁴)Tᵢⱼ.    (1)

This equation really represents ten distinct equations,
since the quantities Rᵢⱼ, gᵢⱼ, and Tᵢⱼ are components
of three
different tensors, and there are just ten such
distinct components in each
of these tensors. The tensor
components Rᵢⱼ, which are constructed in a well-defined way from the potentials gᵢⱼ (which are also called the components of the metric tensor), determine the nature of the space-time geometry. The quantity R gives the curvature of space-time at any specific point, and the tensor Tᵢⱼ is the matter-energy-
momentum-pressure tensor. G is the universal gravita-
tional constant and c is the speed of light. This set
of ten equations
thus tells us how the matter and energy
that are present determine the
metric tensor gᵢⱼ at each
point of space-time and
therefore the geometry at each
such point. To determine the potentials gᵢⱼ, and hence the geometry of space-time, one must thus solve the ten field equations for the known or assumed distribution of matter and energy as given by the tensor Tᵢⱼ.
In the case of planetary motion, one simply places
Tᵢⱼ = 0; this leads to Einstein's law of gravity for empty space,

Rᵢⱼ = 0,

which reduces to Newton's law for weak gravitational
fields. But for the cosmological problem, Einstein
placed Tᵢⱼ equal to a constant value (the average den-
sity of matter at each point) and then sought to solve
the field equations (1) under these conditions. In other
words, he attempted to obtain the potentials gᵢⱼ from
equations (1) under the assumption that there is a
constant (but very small) density of matter throughout
the universe. His idea was that this small density would
introduce a constant curvature of space-time at each
point so that the universe would be curved as a whole.
This initial attempt to obtain a static model of the
universe was unsuccessful, however, because the equa-
tions (1) lead to a unique set of potentials gᵢⱼ only if
one knows the values of these quantities at infinity. Now
the natural procedure in this kind of analysis is to
assume that all the values of gᵢⱼ are zero at infinity,
but this cannot be done if one keeps the equations (1)
and also retains the assumption that the density in the
universe is everywhere the same. In fact, the values
of gᵢⱼ become infinite at infinity under these conditions,
so that the equation (1) can give no static model of
the universe.
This very disturbing development forced Einstein to
alter his field
equations (which he did very reluctantly)
by introducing an additional term
on the left-hand side.
Fortunately, the field equations (1) are such that
this
can be done, for it is clear that the character of these
equations is not changed when one adds to the left
hand side a second order
tensor which obeys the same
conservation principle (it must represent a
quantity
that can neither be destroyed nor created) as the other
two
terms together. Now it can be shown (as Einstein
knew) that the only
physical term that has this impor-
tant
property is λgᵢⱼ, where
λ is a universal constant.
Hence Einstein enlarged his field
equations by the
addition of just this term and replaced (1) by the fol-
lowing most general set of field equations:
Rᵢⱼ - (1/2)Rgᵢⱼ + λgᵢⱼ = (8πG/c⁴)Tᵢⱼ.    (2)
These are now the basic equations of cosmology.
Before discussing the various cosmological models
that can be deduced from
these equations, we should
say a few more words about the famous constant
λ
which has become known in scientific literature as the cosmical constant. It was clear at the time the constant was introduced that it has an exceedingly
small numerical value as compared to the terms in (2)
that give rise to the ordinary gravitational forces. For
if this were not so, the term λgᵢⱼ would destroy the
agreement between the observed motions of the planets
(that is, the motion of Mercury) and those predicted
by (2). It turns out, in fact, as we shall see, that the
square root of λ (for the static closed universe that
Einstein first obtained) is the reciprocal of the radius
of the universe. Finally, we note that the term λgᵢⱼ
in (2) behaves like a repulsion—in empty space it has
the opposite sign of the gravitational term and hence
opposes gravitational attraction. A curious thing about
it, however, is that the repulsion of an object increases
with its distance from any observer and is the same
for all objects (regardless of mass) at that distance.
With the inclusion of the cosmical term λgᵢⱼ in his
field equations, Einstein was able to derive a static,
finite model of the
universe. In a sense, we can under-
stand
this result in the following way: the small amount
of matter in each unit
volume of space introduces the
same curvature everywhere, so that space
bends uni-
formly, ultimately curving back
upon itself to form a
closed spherical universe. If there were no
cosmical
repulsion term, the gravitational force of all the matter
would cause this bubble with a three dimensional sur-
face to collapse. But the cosmical term prevents this;
in fact,
the cosmical repulsion and the gravitational
contraction just balance each
other to give a static
unchanging universe. An interesting property of
this
universe is that it is completely filled; that is, it is as
tightly filled with matter as it can be without changing.
For if we were to
add a bit of matter to it, the gravita-
tional attraction would outweigh the cosmic repulsion
and the
universe would shrink to a smaller size, which
would be just right for the
new amount of matter (again
completely filled). If we remove a bit of
matter, the
universe would expand to a slightly larger size, but
it
would again be completely filled.
Now it may seem that such a completely filled uni-
verse must be jam-packed with matter like a solid, or
like the
nucleus of an atom, but this is not so. In fact,
the density of matter in
such a universe depends on
its radius (that is, its size) and its total
mass. Einstein
found the radius of such a static universe to be about
30 billion light years, with a total mass of about 2 ×
10⁵⁵ grams. This would lead to a density of about 10⁻²⁹ gm/cm³, or about one proton per
hundred thousand
cubic centimeters of space. We see that this is a
quite
empty universe, even though it is as full as it can be!
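The equivalence of the two figures just quoted is easy to verify; the only number supplied here is the standard mass of the proton, which the article does not state.

```python
PROTON_MASS_G = 1.67e-24   # grams (standard value, not given in the article)

# One proton spread through a hundred thousand cubic centimeters of space:
density = PROTON_MASS_G / 1.0e5
print(f"{density:.1e} g per cubic centimeter")   # ~1.7e-29, i.e. about 10^-29 gm/cm^3
# "As full as it can be," and yet almost inconceivably empty.
```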
Before we see why the static Einstein universe had
to be abandoned, we must
try to explain more precisely
the meaning of spherical space. When we speak
of the
universe as we have up to now, we mean the four-
dimensional space-time universe,
but the curvature we
have been referring to is the curvature of the
actual
three-dimensional physical space of our existence. To
understand this, we may picture the physical space of
the universe as the
surface of a rubber balloon and
all the matter (that is, the galaxies) is
to be distributed
over this surface in the form of little specks. Note
that
the physical three-dimensional space of the universe
is the
surface of the balloon, not the whole balloon
itself. Of course, the
surface of a real balloon is two-
dimensional, so that we have lost one dimension in this
picture, but
that does not affect the picture seriously.
The spatial distances between, or separations among, the galaxies
are now to be measured along the surface of the
bal-
loon (just as the distance between New
York and
Chicago is measured along the surface of the earth).
With this picture, we thus establish an analogy be-
tween the three-dimensional space of our universe and
the
two-dimensional surface of a sphere like the earth.
The analogy can be made
complete by supposing that
the inhabitants of the earth are capable of only
a
two-dimensional perception (along the surface of the
earth) so that
they know nothing about up or down
and hence cannot perceive that the
earth's surface is
curved in a space of higher dimensions (the three
dimensions of actual space). Even though we, as actual
three-dimensional
creatures, can assign a radius of
curvature to the surface of the earth
(the distance of
the surface of the earth from its center) the two-
dimensional inhabitants of the earth
would find such
a concept difficult to perceive or accept.
To carry this over to the three-dimensional space
of the universe, we must
picture the curvature of this
three-dimensional space as occurring in a
space of
higher dimensions. The radius of the universe is thus
a
distance (actually a number) associated with a direc-
tion at right angles to the three-dimensional curved
surface of
the universe, and hence into a fourth dimen-
sion. In this type of universe, every point is similar
to every
other point and no point of this curved surface
can be taken as the center
of space; in fact, there is
no center, just as there is no center on the
surface of
the earth. The center of the universe, if we can speak
of
it at all, is in the fourth dimension.
VI. THE DE SITTER EMPTY
EXPANDING UNIVERSE
When Einstein first obtained his static universe the-
ory, it seemed to be just what was wanted, for it agreed
with the
astronomical observations as they were known
in 1917. The measured
velocities of the stars were
small, and the large scale speed of recession
of the
distant galaxies had not yet been detected. It thus
over, it appeared to Einstein at the time that the
solution of the field equations he had obtained with
the introduction of the cosmical term λgᵢⱼ was a
logical necessity which intimately linked up space and
matter, so that one could not exist without the other.
He was led to this opinion because he thought that
the field equations (2) with a positive value of λ have
no solution for Tᵢⱼ = 0 (that is, in the absence of mat-
ter). But, as de Sitter (1917) later showed, this con-
clusion was wrong. He found a solution for empty
space; that is, for Tᵢⱼ = 0 everywhere. Now such a
universe is an expanding one in the sense that if a test
particle (a particle of negligible mass) is placed at any
point in the universe, it recedes from the observer with
ever increasing speed. In other words, the speed of
recession increases with distance from the observer. In
fact, if the de Sitter universe had test particles distrib-
uted throughout, they would all recede from each
other. The reason for this is found in the cosmical term
λgᵢⱼ in the field equations. If we place Tᵢⱼ = 0 in the field equations (2) they reduce to

Rᵢⱼ = λgᵢⱼ,  or  Rᵢⱼ - λgᵢⱼ = 0,

and since the term Rᵢⱼ represents the ordinary Newtonian gravitational attraction, the term -λgᵢⱼ represents repulsion, owing to the minus sign.
The de Sitter universe aroused interest initially be-
cause it showed that the cosmological field equations
(2) do not
have a unique solution, and that more than
one model of a universe based on
these equations can
be constructed. Beyond this, however, the de
Sitter
model of the universe was not taken seriously, since
it seemed
to contradict the observations in two re-
spects: it is an empty universe, whereas the actual
universe
contains matter; it is an expanding universe,
whereas the observations
seemed to indicate that the
actual universe was static. But then, in the
early 1920's,
the recession of the distant nebulae was discovered by
Hubble, Slipher, Shapley, and others. The work of
these investigators on
the Doppler displacement (to-
wards the red) of
the spectral lines of the extragalactic
nebulae (or galaxies) indicates
that the universe is, in-
deed, expanding.
Moreover, the rate of recession of
the galaxies increases with distance
(the famous Hubble
law, 1929) in line with what one would expect from
the de Sitter universe. These discoveries demonstrated
the inadequacies of
the Einstein universe and brought
the de Sitter model into prominence.
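A minimal sketch of the Hubble law, taking the proportionality v = H × d at face value; H is set to 100 km per second per million parsecs, the value quoted later in the article, and the sample distances are arbitrary.

```python
H = 100.0   # km per second per million parsecs (value quoted later in the article)

for distance_mpc in (1, 10, 100, 1000):          # arbitrary sample distances
    velocity_km_s = H * distance_mpc             # Hubble law: v = H * d
    print(f"galaxy at {distance_mpc:4d} million parsecs recedes at ~{velocity_km_s:6.0f} km/s")
# Doubling the distance doubles the speed of recession -- the pattern found in
# the redshifts of the galaxies, and the behavior a test particle shows in the
# de Sitter model.
```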
Another difficulty associated with the Einstein static
universe is that it
is not a stable model but must un-
dergo either
expansion or contraction if there is the
slightest departure from the
precise balance between
the gravitational attraction and the cosmic repulsion.
Thus, if by some process or other some of the mass
were to be
changed into energy, or if condensations
were to occur, the universe would
have to begin to
expand or collapse. This point, taken together with
de
Sitter's work and the observed recession of the distant
galaxies,
led cosmologists to the idea that the actual
model of the universe might be
an expanding one, that
is, intermediate between the empty de Sitter
model
and the Einstein static model. One must therefore look
for
solutions of the field equations which give models
that are expanding, but
not empty. Such models were
first obtained by the Russian mathematician
Friedmann
in 1922 when he dropped Einstein's assumption that
the
density of matter in the universe must remain
constant. By dropping this
assumption, Friedmann was
able to obtain nonstatic solutions of the field
equations
which are the basis of most cosmological models. This
same
problem was independently investigated later by
Weyl (1923),
Lemaître (1931), Eddington (1933),
Robertson (1935), and Walker
(1936). Since the treat-
ment of this problem
as given by Robertson, and,
independently, by Walker, is the most general
one, we
shall use their analysis as a guide in our discussion of
the
current models of the expanding universe.
VII. THE NONSTATIC MODELS OF A
NONEMPTY UNIVERSE
In the previous section we saw that an expanding
model of the universe can
be obtained without altering
Einstein's original assumptions if we remove
all the
matter from the universe and, at the same time, intro-
duce into the field equations a cosmical
repulsion term.
Friedmann escaped this unrealistic situation by re-
moving Einstein's assumption that there are
no large
scale motions in the universe. He assumed immediately
that
the average distance between bodies in the uni-
verse does not remain constant but changes steadily
with time. This
means that the right hand side of the
field equations (2) does not remain
constant, so that
the density of matter in the universe changes with
time.
Owing to this variation of density it is not necessary
to keep
the cosmical term λgᵢⱼ in
the left hand side
of (2) to obtain nonstatic solutions; in fact,
Friedmann
discarded this term in his work and obtained two
nonstatic
models of the universe—one which represents
a universe that
expands forever, and the other a pul-
sating
universe. In the investigations that followed the
work of Friedmann, the
general field equations (2) with
λgᵢⱼ present, and with the right hand side
changing
with time, were used. This introduces a whole range
of
expanding and pulsating models whose properties
depend on whether
λ is negative, positive, or zero, and
on the value of still
another constant (the curvature
constant) which also enters into the final
solution of the field equations.
To see how these two constants determine the vari-
ous models of the universe, we first consider briefly
the manner in
which Robertson and Walker repre-
sented the
solution of the field equations for a nonstatic
universe. We first recall,
according to what we said
in Section IV, that the square of the space-time
interval
between two events for an unaccelerated observer in
empty
space is d² - c²t², and we have Euclidean space.
The presence of matter alters this
by distorting space
and changing the geometry from Euclidean to non-
Euclidean. Suppose now that the two
events we are
talking about are close together (so that d and t are
small) and that they are both at
about the same distance
r from us. We then find (following Robertson and
Walker) that the space-time interval between these
events for an expanding
universe with matter in it can
always be written as
R²d²/(1 + kr²/4)² - c²t²,
where R is a
quantity that changes with time and k
is the
curvature constant referred to above; it can have
one of the three values:
-1, 0, +1. If k = -1, the
curvature of the universe
is negative (like a saddle
surface) and the geometry is hyperbolic. The
universe
is then open and infinite. If k = 0, the
curvature is
zero and space is flat (Euclidean); the universe is open
and infinite. If k = +1, the curvature is positive
and
the universe is finite and closed. The quantity R is the
scale factor of the universe; it measures the expansion
(or contraction) and is often referred to as the radius
of the universe.
However, it is not in itself a physical
distance that can be observed or
measured directly,
but rather the quantity that shows how the
distances
between objects in the real universe change; if, in a
given
time, R(t) doubles, all distances
and dimensions
in the universe double.
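The role of the curvature constant k and of the scale factor R in the expression just quoted can be sketched numerically; the coordinate values used below are arbitrary.

```python
c = 3.0e8   # speed of light, m/s (rounded)

def rw_interval_squared(d, t, r, R, k):
    """Square of the interval  R^2 d^2 / (1 + k r^2/4)^2 - c^2 t^2  for nearby events."""
    return (R * d / (1.0 + k * r**2 / 4.0))**2 - (c * t)**2

d, t, r = 1.0, 0.0, 0.5        # a purely spatial separation, for simplicity

for k in (-1, 0, +1):          # hyperbolic (open), flat (open), spherical (closed)
    print("k =", k, "->", round(rw_interval_squared(d, t, r, R=1.0, k=k), 4))

# Doubling the scale factor R doubles every distance (shown here for k = 0):
print(rw_interval_squared(d, t, r, R=2.0, k=0) / rw_interval_squared(d, t, r, R=1.0, k=0))  # 4.0
# The squared separation grows fourfold, i.e. the distance itself doubles.
```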
To obtain a model of the universe, one must find
the law that tells us how
R varies with time, and this
is done by using
the field equations (2) in conjunction
with the above expression for the
space-time interval.
When we do this, we obtain the equations that
tell
us exactly how R changes with time, but we find
that
these equations also contain the cosmic constant λ and
the curvature constant k so that many different
models
of the universe are possible, depending on the choice
of these
constants. Before Friedmann and those follow-
ing him did their work, it was thought that λ neces-
sarily had to be positive, but the
equations for R show
that we can obtain models of
the universe for which
λ can be negative, zero, or positive. If
we combine
these three possibilities for λ with the three possible
values (-1, 0, +1) for k, we obtain a large
variety
of model universes, and there is no way for us, at the
present
time, to say with certainty which of these
models give the correct
description of the universe.
Owing to this uncertainty we shall give a brief dis-
cussion of these models as a group and then see which
of these
is most favored by the observational evidence.
We designate a model
universe as either expanding or
oscillating (pulsating) depending on
whether R increases indefinitely or increases to a maximum and then decreases. In the expanding models, two cases
are possible, depending on the choice of λ and k. In
the first case (expanding I), R increases from a zero
value, at a certain initial time, to an infinitely large
value, after an infinite time. In the second case (ex-
panding II), R increases from some finite value, at a
certain initial time, to an infinite value, after an infinite
time. In all the oscillating models, R expands from
zero to a maximum value and then decreases to zero
again. This fluctuation is then repeated over and over
again. In Figure 1 graphs are shown giving the varia-
tion of R with time for the expanding and oscillating
cases.
We summarize the various model universes in
Table I.
TABLE I
| λ | k = -1 | k = 0 | k = +1 |
|---|---|---|---|
| negative | oscillating | oscillating | oscillating |
| zero | expanding I | expanding I | oscillating |
| positive | expanding I | expanding I | oscillating, expanding I, or expanding II |
VIII. MODEL UNIVERSES WITH THE
COSMICAL CONSTANT
EQUAL TO ZERO
We have seen that the Einstein field equations lead
to both expanding and
oscillating models of the uni-
verse, but these
field equations do not permit us to
determine which one of the eleven
models listed in
Table I corresponds to the actual universe. The
reason
for this is that three unknowns, viz., the cosmical con-
stant λ, the sign of the
curvature k, and the scale of
the universe (the
units in which R and the time are
to be expressed)
appear in the final solutions, whereas
direct observations of the galaxies
can give us only the
rate of expansion of the universe (Hubble's law)
and
its average density. Another possible observation is the
deceleration of the expansion of the universe, and some
work has been done
on that possibility which we shall
discuss later. If the deceleration could
be measured
accurately, we could decide among the various models,
but
until we have reliable observational evidence on
this point, we must
proceed by making some assump-
tion about
either λ or k.
For the time being, we proceed as Einstein did after
Friedmann's work and
place λ = 0. Einstein was very
unhappy about the introduction of
λ in the first place
since he considered it to be an ad hoc modification
of the
theory which spoiled “its logical simplicity”;
he
therefore felt that the models with λ = 0 were the
ones to be
favored. From Table I we see that λ = 0
leads to two expanding
models of type I for k = -1
and k = 0, and to a single oscillating model for k
> 0.
To decide between the expanding and oscillating
models, we must have the equation that tells us just
how k depends on the density of the universe and its
rate of
expansion when λ = 0. This relationship, which
is obtained from
the solution of the field equations,
is the following:
k = (R²/c²)((8/3)πGρ - H²),    (3)
where G is the gravitational constant, c is the speed
of light, ρ is the average
density of the universe, and
H is Hubble's constant—that is, the rate of
expansion
of the universe.
The important quantity in equation (3) is that con-
tained in the parenthesis on the right hand side; for
it
determines whether k is negative, zero, or positive,
and hence whether the universe is expanding or oscil-
lating. If we express distance in centimeters, mass
in
grams, and time in seconds, the quantity (8/3)πG equals
5.58 × 10⁻⁷ and the parenthesis in (3) becomes (5.58 × 10⁻⁷ρ - H²). If we knew ρ and
H accurately, we could
see at once from this
expression whether our universe
(with λ = 0) is expanding or
oscillating, but neither
ρ nor H is well
known. Hubble was the first to measure
H by analyzing the recession of the galaxies and
placed
it equal to 550 km per sec per million parsecs; but
we now know
that this is too large. According to A.
Sandage (1961), observations on the
recession of the
galaxies indicate that H is about
100 km per sec per
million parsecs. If we use this value, H² becomes (in cm-gm-sec units) 9 × 10⁻³⁶ and the quantity in the critical parenthesis becomes (5.58 × 10⁻⁷ρ - 9 × 10⁻³⁶) or 5.58 × 10⁻⁷(ρ - 1.61 × 10⁻²⁹).
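The arithmetic of the critical parenthesis can be reproduced directly from the figures just quoted ((8/3)πG = 5.58 × 10⁻⁷ in cm-gm-sec units and H² = 9 × 10⁻³⁶ for H = 100 km per sec per million parsecs):

```python
EIGHT_THIRDS_PI_G = 5.58e-7   # (8/3) * pi * G in cm-gm-sec units, as quoted above
H_SQUARED = 9.0e-36           # H^2 for H = 100 km per sec per million parsecs

critical_density = H_SQUARED / EIGHT_THIRDS_PI_G
print(f"critical density ~ {critical_density:.2e} g/cm^3")   # ~1.61e-29

def sign_of_k(density):
    """Sign of (8/3)*pi*G*rho - H^2, which (for lambda = 0) is the sign of k."""
    value = EIGHT_THIRDS_PI_G * density - H_SQUARED
    return (value > 0) - (value < 0)

print(sign_of_k(7.0e-31))   # the observed density discussed below: k = -1, open and expanding
print(sign_of_k(1.0e-28))   # a density well above critical: k = +1, closed and oscillating
# A universe denser than the critical value oscillates; a rarer one expands forever.
```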
This is a most remarkable result, for it tells us that
the model of the
universe (for a given value of the
recession) is determined by the density
of matter in
the universe. In our particular case (the cosmical con-
stant zero) the density ρ must be
larger than 1.61 × 10⁻²⁹ gms per cc (one
proton per 100,000 cubic cm.
of space) for the universe to be an
oscillating one. If
the density just equals this value, the universe is
ex-
panding and Euclidean (no curvature),
and if the den-
sity is less than this value,
the universe is expanding
but it has negative curvature. It is precisely
here that
we run into difficulty in drawing a definite conclusion
because the density ρ is not accurately known.
In terms of our present data, the density appears
to be about 7 × 10⁻³¹, which would make k = -1 (an open, infinite universe of negative curvature). But there may be great quantities of
undetected matter that can increase ρ considerably.
One must therefore try to get other observational
evidence which can permit us to decide between ex-
panding and oscillating models. This can be done if
one determines (from observational evidence) whether
the Hubble constant H is changing with time, and, if
so, how rapidly. If the value of H, as determined from
the recession of nearby galaxies, is sufficiently smaller
than the value as determined from the recession data
of the distant galaxies, we must conclude that H was
considerably larger when the universe was younger (the
distant galaxies show us a younger universe) than it
is now. This would mean that the rate of expansion
had decreased and that ultimately the universe must
stop expanding and begin to collapse. This means that
the universe is oscillating. This sort of analysis has been
carried out jointly by Humason, Mayall, and Sandage
(1956) and the evidence favors an oscillating universe.
This means either that the value of the density ρ has
been greatly underestimated or that the correct model
of the universe is one in which λ is different from zero.
Of course, it may be that H is even smaller than 100
km per sec per million parsecs, but it cannot be much
smaller than this value, and reducing H by a small
amount does not help.
Before leaving these Friedmann models with λ = 0,
we briefly
consider the principal properties of the
models associated with the three
different values of
k. For k = 0 there is no curvature
and space is infinite.
The age of the universe (as measured from some
initial
moment t = 0 when the expansion began) is
then equal
to 2/3(1/H), and we obtain about 8
× 10⁹ years, which
appears to be too small to account for the
evolution
of the stars and galaxies. For this kind of universe, the
expansion parameter R increases as the 2/3 power of
the time.
For k = -1, space is negatively curved and infinite;
the expansion is continuous and endless, so that the
universe finally
becomes completely empty and Eu-
clidean. At
some initial moment, t = 0, the universe
was in an
infinitely condensed state and then began
to expand. According to this
model, the age of the
universe is 1/H or 1.2
× 10¹⁰ years, which gives ample
time for stellar evolution.
For k = +1, we obtain the oscillating universe
which
began from an infinitely condensed state at
t = 0. This is a positively curved, closed universe,
whose radius R will reach a maximum value and then
decrease down to zero again. A similar expansion will
then begin again and
this will be repeated ad infinitum.
The age of this model of the universe
is smaller than
that of the other two.
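
The ages quoted for these three models follow directly
from H; a brief sketch of the conversion (the constants
and function name are ours, added for illustration):

    CM_PER_MPC = 3.086e24
    SECONDS_PER_YEAR = 3.156e7

    def hubble_time_years(h_km_s_mpc):
        # 1/H expressed in years
        h = h_km_s_mpc * 1.0e5 / CM_PER_MPC
        return 1.0 / h / SECONDS_PER_YEAR

    t_h = hubble_time_years(100.0)   # about 9.8e9 years for H = 100
    print(2.0 / 3.0 * t_h)           # k = 0 model: (2/3)(1/H)
    print(t_h)                       # k = -1 model: roughly 1/H
    # the rounder figures quoted in the text (8e9 and 1.2e10 years)
    # correspond to a slightly smaller value of H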
IX. MODELS OF THE UNIVERSE WITH THE
COSMICAL CONSTANT DIFFERENT FROM ZERO
We saw in the last section that placing λ = 0 se-
verely restricts the number of models, and that these
models represent ages that are somewhat too small for
stellar evolutionary
comfort. For this reason, a group
of investigators, particularly
Lemaître, Eddington,
Robertson, Tolman, and McVittie, in the early
days (all
independently of each other and without knowledge
of
Friedmann's work) and Gamow (1946) later, con-
structed various models with λ different from zero.
There are many more such models than one can obtain
with λ = 0,
and among them are both the expanding
and oscillating types, as we have
already noted. The
most popular of these models during the earlier
period
of this work is the one first proposed by Lemaître in
1927 and strongly supported by Eddington. This is the
expanding II model
listed in Table I, for which both
λ and k
are positive. In this model the universe is
always closed and finite and
began its expansion from
some finite nonzero value of R. But the moment of
the beginning of the expansion was not the
moment
of zero time (that is, the moment of the origin of the
universe) because in this model the universe could have
remained in a
nonexpanding, static state for as long
as one might desire—in
fact, for an infinite time in
the past.
Since this model starts expanding from a static
model, both
Lemaître and Eddington assumed this
initial static model to be the
original Einstein static
model. In this model the value of λ and
the radius R
are chosen (in relationship to the mass
M of the uni-
verse)
in such a way as to give a closed spherical
universe in which the cosmical
repulsion is just bal-
anced by the
gravitational attraction. However, this
Einstein universe is unstable, as
we have already noted,
so that any initial expansion reduces the density
and
causes this model to expand still more, with further
reduction in
density, and so on. The expansion thus
proceeds faster and faster until the
universe is infinitely
expanded and the density is everywhere zero. On
the
other hand, a slight compression could have caused the
Einstein
model to have contracted indefinitely, finally
ending up as an infinitely
condensed point of matter.
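
The balance the text describes can be made concrete
with the standard textbook relation λ = 4πGρ/c² = 1/R²
for the Einstein static model (a relation not written
out in the article); a sketch, assuming an illustrative
density:

    import math

    G = 6.67e-8     # cm^3 g^-1 s^-2
    c = 3.0e10      # cm/sec
    CM_PER_LIGHT_YEAR = 9.46e17

    def einstein_static_radius_cm(rho_g_cc):
        # radius at which the lambda repulsion just balances gravitational attraction
        lam = 4.0 * math.pi * G * rho_g_cc / c ** 2
        return 1.0 / math.sqrt(lam)

    # for a density of about 1e-29 g per cc the balancing radius is about 1e28 cm,
    # i.e., of the order of ten billion light years
    print(einstein_static_radius_cm(1.0e-29) / CM_PER_LIGHT_YEAR)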
If, then, we accept this Lemaître-Eddington picture,
the universe
was in a static Einstein state for an infinite
time in the past and then at
some finite time in the
past, for some unknown reason, began to expand,
at-
taining its present rate of expansion
after a few billion
years. Although Eddington never abandoned this con-
cept and fought for it vigorously to the end
of his life,
Lemaître revised his thinking in 1931 and replaced
this
type II expanding model by a type I expanding model.
From Table I we see that three different models of
the universe can be constructed with λ positive and
k = 1: an oscillating type, an expanding I type, and
an expanding II type. If we reject the last of these
(which corresponds to the original Lemaître-Eddington
model, which we have just discussed) we still have the
oscillating and the expanding I models.
The reason Lemaître replaced the expanding II
model by the
expanding I model is that he had no
reasonable explanation for the start of
the initial ex-
pansion of the actual universe
from an Einstein static
state. Although his own theoretical investigations
and
those of McCrea and McVittie (1931) strongly sug-
gested that any local condensation of the matter in
the Einstein static universe (for example, the formation
of a single galaxy
or star) would cause it to start ex-
panding,
these investigations left unanswered the
question as to why other galaxies
were formed. If
expansion began after the formation of a single
galaxy,
the density of the universe would immediately begin
to
decrease and other condensations into galaxies would
be precluded. This
would mean, of course, that the
cosmological principle defined in Section V
would be
untenable, since the distribution of matter in the
neighborhood of this initial condensation would be
different from that
elsewhere in the universe. More-
over, it is
difficult to see how the heavy elements such
as iron, lead, and uranium
could have originated in
an Einstein static-state universe, since we know
from
nuclear theory that the formation of such elements
from hydrogen
in great abundance requires extremely
high temperatures and pressures. This
means that the
entire universe, or at least parts of it, must have
passed
through a high temperature-high pressure phase. Thus
the very
existence of the stars and heavy elements
argues against the Einstein
static state as the initial
phase of our present universe.
Owing to these difficulties, inherent in the assump-
tion that our present universe evolved from an Einstein
static
universe of finite radius, Lemaître introduced the
assumption that
we live in an expanding universe of
type I, which began its expansion from
a highly con-
densed state. He referred to
this initial condensation
as the primordial atom or nucleus and assumed
that
a vast, radioactive explosion occurred in this atom and
that what
we now see in the recession of the galaxies
all about is the result of this
explosion. In this picture,
the expanding universe is always finite in
size, but
closed like a sphere. The initial condensed state (that
is,
the Lemaître primordial atom) may be pictured as
having been
present for an infinite time in the past
or we may suppose that the
universe began its life in
the Einstein static state and then collapsed
violently
into a primordial atom from which it began to expand.
According to Lemaître, this expansion carried the uni-
verse back to its initial Einstein state,
but it did not
stop there. Its velocity of expansion carried it beyond
this static phase, and after that its expansion proceeded
with ever
increasing speed.
Whether we are discussing an Einstein-Friedmann
expanding model, with
λ = 0; or an oscillating model,
with λ = 0; or a
Lemaître model, with λ > 0 and
k = +1 (expanding II or oscillating), we are dealing
with a group of models that are referred to as the “big
bang” models of the universe, since all of them picture
the
universe as having originated explosively from a
point. The term
“big bang” was first introduced by
Gamow (1948) who,
together with Alpher and Herman
(1950), sought to account for the origin of
the heavy
elements by supposing that they were formed from the
original protons and neutrons in the very early and
very hot stage of the
explosion. According to this
picture of the origin of the universe,
neutrons were
the principal components of the original material
ejected from the primordial atom or point source. Some
of these neutrons
quickly decayed into protons and
electrons, and these protons then captured
other neu-
trons to build up the heavy
elements. This whole
buildup of heavy elements must have occurred
during
the first thirty minutes after the initial explosion, for
the
temperature of this primordial material dropped
very rapidly after that and
everything then remained
frozen.
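
One reason the window for this buildup is so short is
the instability of the free neutron itself. A rough
sketch, using the modern mean lifetime of about 880
seconds (a figure not quoted in the article):

    import math

    NEUTRON_MEAN_LIFE_SEC = 880.0   # modern value, assumed here for illustration

    def surviving_fraction(minutes):
        # fraction of free neutrons that have not yet decayed after the given time
        return math.exp(-minutes * 60.0 / NEUTRON_MEAN_LIFE_SEC)

    print(surviving_fraction(30))   # roughly 0.13: most free neutrons decay within half an hour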
Gamow's theory was very appealing at first since
no other theory of the
elements was available then;
the theory of stellar structure and evolution
had not
yet reached a point of development where it could
be shown
that heavy elements can be and are built
up inside stars, as they evolve
from structures like the
sun into red giants like Antares and Betelgeuse,
with
their internal temperatures rising to billions of degrees.
Gamow's theory of the buildup of the heavy elements
during the first thirty
minutes of the life of the universe
had to be discarded, however, since
there are no stable
nuclei of atomic masses 5 and 8, so that neutron cap-
ture alone could not have bridged the nuclear
gap
between the light and heavy nuclei. Even if some heavy
nuclei were
formed by neutron capture in this early
fireball stage of the universe (and
all nuclei capture
neutrons very readily) a half hour would hardly
have
been long enough for the heavy elements to have been
formed in
their present abundances. Since we now
know that the heavy elements can all
be baked in the
stellar furnaces at various stages of evolution, this
phase
of the Gamow “big bang” theory is not essential
and
one can discard it without invalidating the overall
concept.
If we then accept this Lemaître-Gamow hot “big
bang” picture, the universe must have passed
through a very high temperature phase (about 10¹⁰ to
10¹¹ degrees K) soon after the initial explosion, and
some observable evidence of this may still be around.
That this should be so was first pointed out by Gamow
himself, who argued that there must have been a con-
siderable amount of very hot black body radiation
present in this initial phase of the universe and most
of it must still be around, but in a very much red-shifted
form. He estimated that its temperatures would now
be 6°K. Without knowing about Gamow's suggestion,
Dicke proposed the same idea in 1964 (he called it
the “primordial fireball radiation”) and later, in collab-
oration with Peebles, Roll, and Wilkinson (1965), dem-
onstrated that the initial hot black body radiation (at
a temperature of 10¹⁰ degrees K) must now be black
body radiation (at a temperature of 3.5°K). The general
idea behind this deduction is the following: if the
universe was initially filled with very hot black body
radiation (that is, of very short wavelength), this radia-
tion would remain black body radiation during the
expansion of the universe, but it would become redder
and redder owing to the Doppler shift imparted to it
by the expansion. This is similar to radiation that is
reflected back and forth from the walls of an expanding
container. This 3.5°K black body radiation was de-
tected by Penzias and Wilson in 1965 and has since
been verified by other observers. It is present in the
form of isotropic, unpolarized microwave background
radiation in the wavelength range from 1/10 to 10 cm.
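
The deduction rests on the fact that black body radiation
stays black body while its temperature falls inversely
as the expansion parameter R; a one-line check of the
numbers used here (our illustration, not the authors'
calculation):

    T_initial = 1.0e10   # degrees K, the fireball temperature quoted in the text
    T_now = 3.5          # degrees K, the detected background temperature
    print(T_initial / T_now)   # about 3e9: the required growth in R since the fireball stage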
One other residual feature of the “big bang” should
still be visible, or at least amenable to verification—the
present helium abundance. During the initial fireball
period when the
temperature was considerably larger
than 10¹⁰ degrees K, the thermal
electrons and neu-
trinos that were present
would have resulted in very
nearly equal abundances of neutrons and
protons.
When the temperature of the fireball dropped to 10¹⁰
degrees
K these neutrons and protons would have
combined to form deuterium, which,
in turn, would
have been transformed into He⁴, and no heavier ele-
ments would have been formed. Two questions
then
arise. (1) Is the helium that we now observe all about
us, in our
own galaxy and in others, still this primordial
helium? (2) If so, what can
this tell us about the models
of our universe?
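
The connection between the neutron-proton ratio and
the helium abundance can be sketched in a few lines
(the 1/7 ratio used below is the standard modern
estimate at the time helium forms, not a figure given
in the article):

    def helium_mass_fraction(n_over_p):
        # every surviving neutron pairs with a proton inside He-4: Y = 2(n/p)/(1 + n/p)
        return 2.0 * n_over_p / (1.0 + n_over_p)

    # a ratio of about 1/7 yields the ~25% abundance discussed below
    print(helium_mass_fraction(1.0 / 7.0))   # 0.25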
The evidence relating to the first question is some-
what ambiguous because we know that helium burning
occurs during
the giant stage of a star's evolution, so
that some of the original helium
must certainly have
been transformed into heavy elements in stellar inte-
riors, and thus disappeared. But we may
assume that
the helium that is found in stellar atmospheres is pri-
mordial and the evidence here is that
although there
is an overall helium abundance of about 25%, some
stars have
been observed with very weak helium lines.
In spite of these exceptions, however, the
overall evidence favors
the 25% abundance, which is in agreement with
the
“big bang” hypothesis.
Taking all of the observed data into account (the
3°K black body
radiation and the helium abundance)
the preponderance of the evidence
favors the “big
bang” theory and points to an age of
at least 10¹⁰,
i.e., ten billion years for our universe. The observed
helium abundance (if we accept 25% as the primeval
abundance) also
indicates that the density of matter
in the universe must be at least 4
× 10⁻³¹ grams per
cc. But if the density of matter in the universe
is no
larger than this, we run into difficulty with the obser-
vations on the rate at which the
expansion of the
universe is decelerating. We have already noted that
Humason, Mayall, and Sandage have given a value for
this deceleration which
indicates that the universe must
ultimately stop expanding and begin to
collapse. This
means that the correct model of the universe is an
oscillating one, rather than expanding, but, as we have
seen, this requires
the density of matter to be about
10⁻²⁹ gms/cc, as compared to the observed density of
7 × 10⁻³¹ gms/cc.
In spite of this, the evidence for an oscillating uni-
verse has been greatly strengthened recently by the
analysis of
the distribution of quasars and of quasi-
stellar radio sources in general. Since these objects
(according to
their red shifts) are at enormous distances
from us, they give us the rate
of expansion of the
universe in its earliest stages. By comparing this
with
the present rate of expansion, we obtain a very reliable
value
for the deceleration, which shows the universe
to be oscillating. To
account for the discrepancy be-
tween the
observed and required density of matter for
such a model of the universe,
we must suppose that
there are large quantities of dark matter in inter-
galactic space—in the form
of hydrogen clouds, black
dwarf stars, and streams of neutrinos. But until
we have
direct evidence of this, we cannot be sure about the
validity
of the oscillating model.
X. THE STEADY-STATE THEORY AND
OTHER COSMOLOGIES
We shall conclude our discussion of modern cos-
mologies with brief descriptions of theories that are
related
to, but do not spring directly from, Einstein's
field equations, whether or
not we place λ = 0. Of
these, the most popular, and one which
has been
strongly supported by outstanding cosmologists and
physicists, is the steady state or continuous creation
theory of Bondi and
Gold (1948) and Hoyle (1948).
On the basis of what they call the perfect cosmo-
logical principle, they assert that
not only must the
universe present the same appearance to all
observers,
regardless of where they are, but it must appear the
same
at all times—it must present an unchanging as-
pect on a large scale. The immediate consequence of
this theory is that mass and energy cannot be conserved
in such a universe.
Since the universe is expanding,
new matter must be created spontaneously
and contin-
uously everywhere so as to
prevent the density from
decreasing.
It can be shown from this theory that matter would
have to be created at a
rate equal to three times the
product of the Hubble constant and the
present density
of the universe, in order to keep things as they are.
One nucleon must be created per thousand cubic cen-
timeters per 500 billion years to maintain the status
quo.
Hoyle arrived at the same result by altering
Einstein's field equations.
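
The required creation rate, 3Hρ, is easy to translate
into the "one nucleon per thousand cubic centimeters"
form; a sketch, assuming H = 100 km per sec per million
parsecs and a density of roughly 10⁻²⁹ g per cc (both
assumptions of this illustration):

    CM_PER_MPC = 3.086e24
    SECONDS_PER_YEAR = 3.156e7
    NUCLEON_MASS_G = 1.67e-24

    h = 100.0 * 1.0e5 / CM_PER_MPC     # H in 1/sec
    rho = 1.0e-29                      # assumed density, g per cc
    creation_rate = 3.0 * h * rho      # g per cc per sec required by the theory

    grams_per_sec_in_1000_cc = creation_rate * 1000.0
    years_per_nucleon = NUCLEON_MASS_G / grams_per_sec_in_1000_cc / SECONDS_PER_YEAR
    print(years_per_nucleon)   # about 5e11: one nucleon per 1000 cc per ~500 billion years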
Although the steady-state theory was very popular
because it eliminated
entirely the question of the origin
of the universe, it was rejected by
most cosmologists
because of its continuous creation and the
consequent
denial of the conservation of mass energy. But the
strongest argument against the steady state theory is
the existence of the
3°K radiation, which shows clearly
that our universe has evolved
from a highly condensed
state. In addition, the observed distribution of
quasars,
radio sources, and other distant celestial bodies shows
that
the density of matter in the universe was much
higher a few billion years
ago than it is now. The
observational evidence seems weighted against
the
steady-state theory.
Other general principles have been invoked to derive
cosmological theories.
Perhaps the most ambitious of
these theories is that of Eddington (1946),
who at-
tempted, in his later years, to deduce
the basic con-
stants of nature by combining
quantum theory and
general relativity. Starting from the idea that the
re-
ciprocal of the square root of the
cosmical constant
represents a natural unit of length in the universe,
and
that the number of particles in the universe must de-
termine its curvature, he derived numerical values for
such constants as the ratio of the mass of the proton
to that of the
electron, Planck's constant of action, etc.
But very few physicists have
accepted Eddington's
numerology since his analysis is often obscure,
difficult
to follow, and rather artificial. In any case, the exist-
ence of nuclear forces and new particles
which Ed-
dington was unaware of when he did
his work, and
which therefore are not accounted for in his theory,
destroys the universality which he claimed for it.
During the period that Eddington was developing
his quantum cosmology, three
other cosmological sys-
tems were introduced: the kinematic cosmology of
Milne (1935)
and the cosmologies of Dirac (1937) and
Jordan (1947). Although these
theories are extremely
interesting and beautifully constructed, we can
only
discuss them briefly here. Of all the cosmological theo-
ries discussed in this
essay, Milne's
is the most deductive, for instead of starting with the
laws of nature as we know them locally, and then
constructing a model of
the universe based upon these
laws, he introduces only the cosmological
principle and
attempts to deduce, by pure reasoning, not only a
unique
model of the universe, but also the laws of
nature themselves. To do this,
Milne had to assume
the existence of a class of ideal observers attached
to
each particle of an ideal homogeneous universal sub-
stratum, which is expanding according to Hubble's law.
To carry out his analysis consistently, Milne had to
introduce two different
times; a kinematic time which
applies to the ideal observer and which also
governs
electromagnetic and atomic phenomena, and according
to which
the universe is expanding; and a dynamic time, which
governs gravitational phenomena, so that a good deal
of arbitrariness is inherent in this theory, particularly
at the boundary region where we pass from one kind
of time to another. But
the major
objection to this theory arises from its basic
assumption that an absolute
substratum exists in the
universe, and that a privileged class of observers
is
associated with this substratum.
Although a cosmological principle of one sort or
another is at the basis of
the cosmologies which we
have discussed here, other types of principles
have also
been used. The most notable of these is that proposed
by
Dirac in 1937 (and later in a slightly different form
by Jordan), according
to which certain basic numbers
associated with matter and the universe are
not con-
stant, as had been assumed in all
previous cosmologies,
but vary with time. The numbers Dirac had in
mind
are certain dimensionless quantities which are obtained
by taking
the ratio of atomic quantities to cosmological
quantities of the same kind.
Dirac expressed this prin-
ciple as follows:
“All very large dimensionless numbers
which can be constructed
from the important constants
of cosmology and atomic theory are simple
powers of
the epoch.”
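
Two of the dimensionless numbers Dirac had in mind
(these particular examples are the standard ones; the
article does not spell them out) can be evaluated
directly in cgs units:

    e = 4.80e-10     # electron charge, esu
    G = 6.67e-8      # gravitational constant
    m_e = 9.11e-28   # electron mass, g
    m_p = 1.67e-24   # proton mass, g
    c = 3.0e10       # speed of light, cm/sec
    age_sec = 1.0e10 * 3.156e7   # ten billion years in seconds

    force_ratio = e ** 2 / (G * m_p * m_e)       # electric vs. gravitational attraction
    atomic_time_unit = e ** 2 / (m_e * c ** 3)   # about 1e-23 sec
    age_in_atomic_units = age_sec / atomic_time_unit

    print(force_ratio)          # about 2e39
    print(age_in_atomic_units)  # about 3e40, the same enormous order of magnitude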
One consequence of this principle is that the univer-
sal gravitational constant would have to decrease with
time. But
one can show, as E. Teller did (1948), that
this would lead to a sun that
was much too hot during
the Cambrian period; the temperature of the
earth
would then have been so high that its oceans would
have been
boiling. Owing to this discrepancy, Dirac's
theory has generally been
discarded, although, more
recently, C. Brans and R. H. Dicke have
introduced
a variation of it, starting from a different point of view.
SUMMARY
At this point in our narrative, the reader may well
feel that modern
cosmology is a welter of conflicting
theories, all of which contain some
elements of truth,
but none of which gives a complete picture of the
actual universe. This, however, would be a wrong
conclusion to draw from
the present state of affairs.
It is true that a few years ago this would
have been
a fair assessment, since the observational evidence then
was
far too meager to permit us to choose from among
the various cosmologies
that stem from the basic field
equations. But even then, the common
heritage of all
of these theories (the general theory of relativity) indi-
cated that the basic differences among
them are more
apparent than real.
The situation in the early 1970's was quite different,
for a threshold had
been reached for a cosmological
breakthrough; as we have seen, enough
observational
evidence was available to show us that our universe
originated explosively, about ten to twenty billion years
ago, from a
highly condensed state. Even though we
still could not decide unequivocally
between an ex-
panding and an oscillating
universe on the basis of the
observational evidence, the major problem of
the origin
of the universe had been solved and we had a self-
consistent picture. It accounted not
only for the reces-
sion and distribution of
the distant galaxies but also
for many diverse phenomena, ranging from the
back-
ground radiation all around us in
space (the 3°K iso-
tropic
radiation which we have already discussed) to the
formation of the stars
and the heavy elements.
The most remarkable thing about the state of matter,
whether in the form of
stars or interstellar dust and
gas all around us, is that it points to some
momentous
event that must have occurred some billions of years
ago and
which led to the pronounced differentiation
that we see now. Starting from
the “big bang,” to
which all these observations
point, we can now arrange
the succession of events that led to the present
state
of the universe into a well-ordered, meaningful, and
understandable sequence. After the original explosion,
when the temperature
was still very high, about 30%
of the primordial neutrons and protons were
fused into
He⁴, but the expanding gas cooled off much too rapidly
for elements above He⁴ to be built up in any appreci-
able quantities, and these had to wait for the stellar
ovens
that were to be formed when the rapidly ex-
panding gas of hydrogen and helium was fragmented
into stars by
turbulence and the gravitational forces.
The fragmentation of the original hydrogen-helium
gaseous mixture into
galaxies and stars occurred when
the exploding universe had cooled off to
very nearly
its present value—about two hundred million
years
after the initial explosion. The density of matter and
radiation was then favorable for gravitational contrac-
tion to take over in local regions
and to compress the
gas into huge clouds. This, however, could occur
only
after another process had come into operation—the
natural and unavoidable fragmentation of the expand-
ing gas into local eddies. One can show that a stream
of gas
becomes unstable against such a fragmentation
when the length of the stream
exceeds a certain num-
ber whose value can be
derived from hydrodynamical
theory. In an expanding universe this is bound
to hap-
pen after the expansion has progressed
beyond a given
point. The average size of the turbulent eddies that
are formed during this kind of fragmentation is deter-
mined by the speed and density of the expanding gas.
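
The "certain number" referred to here is what is now
called the Jeans length; a sketch of the criterion in
its standard modern form (the numerical coefficient
varies with convention, and the sample figures below
are arbitrary, not taken from the article):

    import math

    G = 6.67e-8             # cm^3 g^-1 s^-2
    K_BOLTZMANN = 1.38e-16  # erg per degree K
    H_MASS = 1.67e-24       # hydrogen atom mass, g
    SOLAR_MASS = 2.0e33     # g

    def jeans_length_cm(temp_k, rho_g_cc, mu=1.0):
        # a region of gas longer than this collapses under its own gravity
        sound_speed = math.sqrt(K_BOLTZMANN * temp_k / (mu * H_MASS))
        return sound_speed * math.sqrt(math.pi / (G * rho_g_cc))

    def jeans_mass_solar(temp_k, rho_g_cc, mu=1.0):
        # mass of a sphere whose diameter is the Jeans length
        r = jeans_length_cm(temp_k, rho_g_cc, mu) / 2.0
        return (4.0 / 3.0) * math.pi * rho_g_cc * r ** 3 / SOLAR_MASS

    # illustrative conditions for a tenuous pregalactic gas:
    # about 2e9 solar masses, a fragment of galactic scale
    print(jeans_mass_solar(1.0e4, 1.0e-27))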
The details of this fragmentation process were
worked out many years ago by
J. H. Jeans. According
to his calculation, we know that the expanding gas
must
have broken up into fragments having an average size
equal to
that of a typical galaxy. These galaxies in turn
also suffered
fragmentation (on a smaller scale) by the
same process and the oldest stars
were thus formed.
These oldest stars (about 8 billion years old) were
formed at the center of the galaxies; and that is where
we find them now,
although they also constitute the
globular clusters that surround the core
of a galaxy.
Since the very oldest stars were formed almost ex-
clusively from the primordial hydrogen and helium,
at least some
of the heavy elements that we now ob-
serve all
about us in the universe must have been
synthesized in the interiors of
these stars as they
evolved. This, indeed, is the case, for we now
know,
from the theory of stellar interiors, that thermonuclear
processes occur near the center of a star, resulting in
the transmutation
of the light to the heavy elements.
When the oldest stars were first
formed, they con-
tracted very rapidly until
their central temperatures
reached about 10 million degrees, at which point
ther-
monuclear energy was released
with the transformation
of hydrogen to helium; this process kept the stars
in
equilibrium and supplied them with their energy for
the first few
billion years of their lives—in fact, until
about 12% of their
hydrogen had been transformed into
helium.
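
This stage of a star's life can be checked roughly from
the 12% figure; a sketch for a star of the sun's mass
and luminosity (the 0.007 mass-to-energy efficiency of
hydrogen burning is the standard figure, and the solar
values are assumptions of this illustration):

    SOLAR_MASS_G = 2.0e33
    SOLAR_LUMINOSITY_ERG_S = 3.9e33
    C_CM_S = 3.0e10
    SECONDS_PER_YEAR = 3.156e7

    hydrogen_fraction = 0.70   # roughly 70% of the star is hydrogen, the rest mostly helium
    burned_fraction = 0.12     # the 12% figure quoted in the text
    efficiency = 0.007         # fraction of the burned mass converted to energy

    energy_erg = burned_fraction * hydrogen_fraction * efficiency * SOLAR_MASS_G * C_CM_S ** 2
    lifetime_years = energy_erg / SOLAR_LUMINOSITY_ERG_S / SECONDS_PER_YEAR
    print(lifetime_years)   # roughly 9e9 years of steady hydrogen burning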
The core of each star, consisting entirely of helium,
then began to contract
quite rapidly under its own
weight, and the central temperature rose (in a
few
hundred million years) to about 100,000,000 degrees.
At this high
temperature, the helium nuclei in the core
were transformed to
carbon—the first step in the
buildup of the heavy elements. This
led to the forma-
tion of a carbon core which
contracted still further,
resulting in still higher core temperatures. In
fact, the
temperature in the core continued to increase until the
billion degree mark was reached, and the heavy ele-
ments were built up in the core. At
that point a drastic change occurred in the evolution
of the star, for very little of its nuclear fuel was left
to supply the energy required to support its own
weight. The star, which by this time had evolved into
a very large and luminous object, collapsed violently
and became a supernova, ejecting great quantities of
material from its outer regions.
Following the supernova explosion, the hot residual
core (consisting of such
nuclei as iron, calcium, magne-
sium, and free
electrons) continued to contract, finally
becoming a white dwarf of
enormous density. It re-
mains in this stage
when the outward pressure of the
free electrons just balances the
gravitational contrac-
tion. But this is not
so in all cases, and the star must
continue to contract beyond the white
dwarf stage if
it is massive enough—ultimately becoming a very
hot
neutron star, about ten miles in diameter. Although
such stars
have not yet been observed directly, astron-
omers believe that they constitute some of the X-ray
sources now
being observed and are the recently dis-
covered “pulsars.” But even neutron stars are not
the
final stage of stellar evolution, for the theory of relativ-
ity tells us that such stars must
continue to contract
until they disappear from sight.
But what of the material that was ejected from each
star that became a
supernova? This was swirled into
the outer regions of the galaxy, where it
became the
gas and dust that formed the spiral arms that we now
see.
From this gas and dust—consisting not only of the
primordial
hydrogen and helium, but also of such heavy
elements as carbon, oxygen,
sodium, calcium, and
iron—the second generation, and hence
younger stars
such as our sun, were formed. But something else
happened at the same time—planets were also formed.
It can be
shown, as has been done by C. F. von
Weizsäcker, G. P. Kuiper, H.
Urey, H. Alfvén, and
others, that the turbulences that must occur
when a
star like the sun is formed by gravitational contraction,
from
dust and gas, must lead to the formation of planets
at fairly definite
distances from the star. This is in
agreement with the arrangement of the
planets in our
solar system.
We thus see that the cosmological theories that stem
from Einstein's
gravitational field equations agree with
the overall architectural and
dynamical features of the
universe as we now observe them. At the same
time,
these theories show us how the present state of the
universe has
evolved from a highly condensed initial
state, and tell us what to expect
in the future evolution
of the universe. Although many of the details are
still
missing from this forecast, the dominant features are
clearly
indicated, and we have every reason to believe
that we shall soon be able
to answer most of the ques-
tions about the universe that seemed so unanswerable
just a few
years ago, for never before in the history
of science have so many capable
scientists been work-
ing on this exciting
problem.
BIBLIOGRAPHY
R. Alpher and R. Herman, Reviews of Modern Physics, 22 (1950), 153.
H. Bondi and T. Gold, Monthly Notices, Royal Astronomical Society, 108 (1948), 252.
R. H. Dicke, P. J. E. Peebles, P. G. Roll, and D. T. Wilkinson, The Astrophysical Journal, 142 (1965), 414.
P. A. M. Dirac, Proceedings of the Royal Society, A, 165 (1938), 199.
A. S. Eddington, The Expanding Universe (Cambridge, 1933); idem, Fundamental Theory (Cambridge, 1946).
A. Einstein, Sitzungsberichte der Preussischen Akademie der Wissenschaften (1917), 142.
A. Friedmann, Zeitschrift für Physik, 10 (1922), 377.
G. Gamow, Physical Review, 70 (1946), 572; 74 (1948), 505.
E. R. Harrison, Monthly Notices, Royal Astronomical Society, 131 (1965), 1.
F. Hoyle, Monthly Notices, Royal Astronomical Society, 108 (1948), 372.
M. L. Humason, N. U. Mayall, and A. Sandage, The Astronomical Journal, 61 (1956), 97.
J. Jeans, Astronomy and Cosmogony (Cambridge, 1928; reprint 1961).
P. Jordan, Die Herkunft der Sterne (Stuttgart, 1947).
G. Lemaître, Monthly Notices, Royal Astronomical Society, 91 (1931), 490.
W. H. McCrea and G. C. McVittie, Monthly Notices, Royal Astronomical Society, 92 (1931), 7.
A. A. Michelson and E. W. Morley, Philosophical Magazine, 190 (1887), 449.
E. A. Milne, Relativity, Gravitation, and World Structure (Oxford, 1935).
C. G. Neumann, Über das Newtonische Prinzip der Fernwirkung (Leipzig, 1895).
A. A. Penzias and R. W. Wilson, The Astrophysical Journal, 142 (1965), 419.
H. P. Robertson, The Astrophysical Journal, 82 (1935), 284; 83 (1936), 187, 257.
A. Sandage, The Astrophysical Journal, 133 (1961), 335.
H. Seeliger, Astronomische Nachrichten, 137 (1895), 129.
W. de Sitter, Monthly Notices, Royal Astronomical Society, 78 (1917), 3.
R. C. Tolman, Relativity, Thermodynamics, and Cosmology (Oxford, 1932).
A. G. Walker, Proceedings of the London Mathematical Society (2), 42 (1936), 90.
H. Weyl, Physikalische Zeitschrift, 24 (1923), 230.
GENERAL BIBLIOGRAPHY
H. Bondi, Cosmology (Cambridge, 1961); Rival Theories of Cosmology (Oxford, 1960).
P. Couderc, The Expansion of the Universe (London, 1952).
G. Gamow, The Creation of the Universe (New York, 1952).
E. Hubble, Realm of the Nebulae (Oxford, 1961).
G. C. McVittie, Fact and Theory in Cosmology (New York, 1961).
M. K. Munitz, ed., Theories of the Universe (New York, 1957).
D. Sciama, The Unity of the Universe (Garden City, N.Y., 1961).
J. Singh, Great Ideas and Theories of Modern Cosmology (London, 1961).
W. de Sitter, Kosmos (Cambridge, Mass., 1932).
E. Teller, Physical Review, 73 (1948), 801.
LLOYD MOTZ
[See also Cosmic Images; Cosmic Voyages; Cosmology from Antiquity to 1850; Infinity; Relativity; Space; Time.]