CHANCE

So far as we can judge, primitive man does not conceptualize his world of experience in any comprehensive way. To him, some events just happen; some he can control himself; some he can influence by sympathetic magic; for some he can enlist the aid of the unseen world of spirits which surrounds him. He knows of no general laws; and hence he knows of no absence of general laws. If he ever thought about the matter at all he might, perhaps, have considered that many events happen simply because they fall that way; and their falling so (Old French la cheance, from Latin cadere) was in the nature of the world, as we should say today, “just one of those things.”

The emergence of more organized thought and language was slow to change essential ideas about happenings. As man collected his experiences, formed and named his concepts, and began to perceive regularities in the heavens and on earth, he developed the idea of cause and effect, and as time went on, it seemed to him that more and more events are causally linked. But whether every event had a cause was a question which he was late in asking (and for that matter, has not yet answered). Some events were explicable in a straightforward way; but others were equally certainly inexplicable, and many more had to be explained in terms of minor deities invented for the purpose. In polytheistic societies, such as the Egyptian, the Greek, and the Roman, it was held possible to influence events by enlisting the aid of some superhuman being, with sacrifice, donation, or even punishment (as when tribes thrashed their idols); but these beings themselves were not omnipotent and it would seem—though the records are, not surprisingly, silent on such questions—that a great part of the manifestation of the world was regarded as proceeding blindly without direct intervention of God or man, or without being subject in all its aspects to law.

Nevertheless, nature proceeded in a manner which
man perceived more and more to be orderly. We now
encounter one of those peculiar dichotomies of which
history affords so many instances: the emergence of
gambling, on the one hand, and the employment of
fortuitous events for divination, on the other. The
gambler deliberately threw his fortunes at the mercy
of uncontrolled events; the diviner used uncontrolled
events to control his future.

The Germans of Tacitus' time, for example, decided many of their tribal procedures by a random process. The priests would write a number of runes on slips of bark, offer a prayer for guidance, choose one haphazardly, and follow the advice which it gave (or, at any rate, gave according to their interpretation). The Jews made important choices by lot. The Romans had their Sibylline books and their Etruscan custom of haruspication (divination from entrails). To modern eyes such procedures would look very like settling a doubtful issue by tossing up for it, but that was not how it appeared to the ancients. It was their way of interrogating their Deity, of referring the decision to a Better Informed Authority.

At the same time, gambling became widespread. One
of the oldest poems on record, in the Rig-Veda, is a
Gambler's Lament, in which the poet bewails the loss
of all his possessions but, unfortunately for us, says
nothing about the kind of game he was playing. In
very early settlements there occur deposits of huckle
bones (small bones in the foot of sheep or goat) which
were assembled by man, almost certainly for playing
some kind of game. These “astragali” have four clearly
defined surfaces and were probably the antecedents of
the ordinary six-faced cube or die, specimens of which
are datable as far back as 3000 B.C.

The Greeks thought poorly of dice-playing. For them
it was an amusement for children and old men. This,
among other things, may be the reason why no Greek
writers other than Aristotle and Epicurus showed any
interest in chance, and as far as is known, none arrived
at any idea of the statistical regularities embodied in
series of repetitive events. The Romans were inveterate
gamblers, especially in Imperial times; the emperor
Claudius wrote a treatise on dice, which unfortunately
has not survived. The Germans were even worse and
an individual would on occasion gamble himself into
slavery. We know a little about the type of dice-playing
which was indulged in. It was almost certainly the
ancestor of the medieval game of hazard, itself the
ancestor of the American game of craps. (The word
“hazard,” from Arabic al-zahr, “the die,” was probably
brought back to Europe by the Crusaders. It was the
name of a game, not a concept of random occurrence.)

Just how much the ancients knew about calculating chances is doubtful, but it cannot have been very exact knowledge, even though a gambler can hardly fail to have formed some notion of regular occurrence “in the long run.” Early examples exist of loaded dice, which indicates that some persons at least were not content to leave things in the lap of the goddess Fortuna. But anything approaching a calculus of chances was not even adumbrated.

The advent of Christianity, and later of Islam,
brought about a number of important changes, both
in the philosophical concept of chance and in moral
attitudes towards gambling. To the monotheist every
event, however trivial, was under the direction of the
Almighty or one of his agents. In this sense there was
no chance. Everything happened under the divine
purpose. Hence there grew up the belief that events
which we describe as fortuitous or random or subject
to chance are no different from any other happenings,
except that we do not know why they happen. Chance,
then, became a name which man gave to his own
ignorance and not a property of events or things.

This belief has endured until the present day. Saint Augustine, Saint Thomas Aquinas, Spinoza all held it. The physicists of the nineteenth century mostly subscribed to it, though not necessarily for theological reasons. The more Nature was discovered to be subject to law (or, if one prefers it, the more man shaped his concepts into regular patterns to correspond with observation), the more it became evident that “chance” events appeared as such only because something remained to be discovered or because their causality was too complex for exact analysis. In the first half of the twentieth century we find a distinguished French probabilist, Paul Lévy, remarking that chance appeared to him to be a concept invented by man which was unknown to Nature; and Einstein, notwithstanding developments in subatomic physics (see below), never accepted chance as an essential unanalyzable element of the universe.

We return to the effect of Christianity on the concept of chance. Augurs, sibyls, diviners, prognosticators generally, were frowned on by the Church from early times. This was not merely because the new priesthood could tolerate no competition from the old. Under the new religion it was impious to interrogate God by forcing Him, so to speak, to disclose His intentions. Moreover, gaming soon became associated with less socially tolerable activities—drinking, blasphemy, violence—and as such was sternly discouraged. We still possess a sermon of Saint Cyprian of Carthage against gamblers; more than a thousand years later, Saint Bernardino of Siena was inveighing against gambling and its vices to the same tune. None of this, of course, arrested gaming for very long. The number and frequency of the edicts issued against gaming are sufficient evidence of its prevalence, on the one hand, and its persistence, on the other. However, ineradicable as gambling proved to be, the official attitude of the Church was probably strong enough to prevent any serious study of it.

Up to the middle of the fourteenth century the main instruments of gaming were dice. The Western world then invented or acquired playing cards, whose precise origin, numerous legends notwithstanding, is still unknown. Cards began to displace dice, but more slowly than might have been expected, probably on account of their cost. It was not until the beginning of the eighteenth century that dice began to lose their popularity in favor of cards. Roulette wheels and one-armed bandits are, of course, products of modern technology.

It might have been supposed that, after playing with astragali, dice, and cards for several thousand years, man would have arrived relatively early at some concept of the laws of chance. There is no evidence that he did so much before the fourteenth century, and even then, after faint beginnings, it was three hundred years before the subject began to be understood. The earliest European record of any attempt to enumerate the relative frequency of dice-falls occurs in a medieval poem De vetula (dated somewhere between 1220 and 1250), one manuscript of which contains a tabulation of the ways of throwing three dice. It is an isolated contribution and for the next recorded attempt at the calculation of chances we have to notice a treatise on card-play by the gambling scholar, Girolamo Cardano. This remarkable man, part genius and part charlatan, was an inveterate gambler and a very competent mathematician. His book, written perhaps in 1526 but published only posthumously (1663), contains a clear notion of the definition of chances in terms of the relative frequency of events and of the multiplicative law of independent probabilities. A translation into English and a biography of Cardano by Oystein Ore appeared in 1953.
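
The kind of reckoning involved can be made concrete with a short Python sketch (a modern reconstruction, not anything drawn from De vetula or Cardano's book): it enumerates the 216 equally likely ordered falls of three dice and checks the multiplicative law for two independent events.

from itertools import product
from fractions import Fraction

# Enumerate the 216 equally likely ordered falls of three fair dice,
# the kind of tabulation reported in one manuscript of De vetula.
falls = list(product(range(1, 7), repeat=3))
assert len(falls) == 216

# Relative frequency of a total of 10 against a total of 9: 27 ways to 25.
count = lambda total: sum(1 for f in falls if sum(f) == total)
print(count(10), count(9))  # 27 25

# Multiplicative law for independent events: the chance that the first and
# second dice both show six is 1/6 times 1/6.
p_both = Fraction(sum(1 for f in falls if f[0] == 6 and f[1] == 6), 216)
assert p_both == Fraction(1, 6) * Fraction(1, 6)
print(p_both)  # 1/36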

So far as concerns extant literature, Cardano's work is also isolated. Some Italian mathematicians of the sixteenth century considered a few problems in dice-play, and in particular, we have a fragment by Galileo (about 1620), in which he correctly enumerates the falls of three dice. Undoubtedly there must have been much discussion about chances, especially in those countries where men of science mingled freely with men of affairs; but little or nothing was published. The calculus of chances as we know it first became the subject of general mathematical interest in France in the second half of the seventeenth century, in the form of correspondence between Pascal and Pierre de Fermat. The time was ripe for a rapid expansion of the mathematical theory of chance. The first book on the subject, by Christiaan Huygens, was published in 1657. In 1713 there appeared the remarkable study by James (Jakob or Jacques) Bernoulli called Ars conjectandi, in which he derived the so-called binomial distribution and raised the fundamental question of the convergence of proportions in a series of trials to a “true” chance. Once so launched, the mathematical theory advanced rapidly. A little over a hundred years later appeared a major masterwork, Pierre Simon de Laplace's Théorie analytique des probabilités (1812). The subject was by now not only interesting and respectable, but applicable to scientific problems and, before long, to commercial and industrial problems. It has been intensively cultivated ever since.
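
A minimal sketch of the two results just mentioned: the binomial distribution of the number of successes in n independent trials, and the tendency of the observed proportion to settle near the underlying chance as the number of trials grows. The particular probabilities and trial counts are illustrative, not taken from Ars conjectandi.

import random
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials, each of chance p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The binomial distribution for the number of heads in 10 tosses of a fair coin.
print([round(binomial_pmf(k, 10, 0.5), 4) for k in range(11)])

# Convergence of proportions: the observed frequency of success drifts
# toward the underlying chance p as the number of trials grows.
random.seed(1)
p = 1 / 6  # e.g., the chance of throwing a six with a fair die
for n in (100, 10_000, 1_000_000):
    successes = sum(random.random() < p for _ in range(n))
    print(n, successes / n)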

In one respect commerce took advantage of chance events. Some Italian shops of the fifteenth century would have a sack full of small presents standing by the counter and would invite customers to take a lucky dip. This lotteria developed into the present-day system of raising money by selling chances on prizes. The system spread over Europe but lent itself so readily to fraud that it was either forbidden or, in most countries, conducted as a state monopoly.

The subject which was formerly called the Doctrine of Chances, and is now more commonly but less accurately called the Mathematical Theory of Probability, is mostly a deductive science. Given a reference set of events and their probabilities, the object is to work out the probabilities of some contingent event; e.g., given that the chance of throwing any face of a die is 1/6, find the probability that all six faces will appear in a given number of throws greater than six. Interesting as the subject is to the mathematician and useful as it may be to the statistician, it is not of concern in the history of ideas except insofar as its results are required, as we shall see below, in scientific inference.
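
The die example just quoted is a typical deductive exercise; the following Python sketch works it out by inclusion-exclusion, the particular numbers of throws being chosen arbitrarily.

from fractions import Fraction
from math import comb

def prob_all_faces(n, faces=6):
    """Probability that every face of a fair die appears at least once in n throws,
    by inclusion-exclusion over the faces that fail to appear."""
    return sum((-1)**j * comb(faces, j) * Fraction(faces - j, faces)**n
               for j in range(faces + 1))

for n in (6, 10, 20):
    p = prob_all_faces(n)
    print(n, p, float(p))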

Once again we must go back a little in time. At the end of the seventeenth century the philosophical studies of cause and chance, and the mathematics of the Doctrine of Chances, were poles apart. They now began to move closer together. It was not long before the events of the dice board and the card table began to be seen as particular cases of fortuitous events of a more general kind, emanating in some rather mysterious way which conjured order out of chaos. In short, it began to be realized that chance, which conceptually was almost the negation of order, was subject to law, although to law of a rather different kind in that it admitted exceptions. The English savant, Dr. John Arbuthnot, for example, became interested in the equality of the sex ratio at birth and saw something of Divine Providence in the phenomenon by which the apparently random occurrence of the individual event resulted cumulatively in a stable sex ratio. Thirty years later, J. P. Süssmilch, an honored name in the history of statistics, reflected the same thought in the title of his magnum opus on the divine order: Die Göttliche Ordnung (1741). In one form or another the idea has remained current ever since. There are few people who have reflected on the curious way in which random events have a stable pattern “in the long run” who have not been intrigued by the way in which order emerges from disorder in series of repeated trials. Even events which recur relatively infrequently may have a pattern; for example, the nineteenth-century Belgian astronomer and statistician Adolphe Quételet, one of the fathers of modern statistics, was struck by L'effrayante exactitude avec laquelle les suicides se reproduisent (“The frightening regularity marking the recurrence of suicides”).
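
Arbuthnot's reasoning, reduced to arithmetic, turns on how improbable a long unbroken run of like outcomes would be if each year's outcome were an even chance. The sketch below uses 82 years, the figure usually quoted for the London christening records he examined, purely as an illustration.

from fractions import Fraction

# If the sex of each year's surplus of births were an even chance, the probability
# that males outnumber females in every one of n successive years would be (1/2)**n
# (ignoring ties). Arbuthnot's argument turns on how small this number is.
n_years = 82  # illustrative; the figure usually quoted for Arbuthnot's London data
p_run = Fraction(1, 2) ** n_years
print(p_run)         # 1/4835703278458516698824704
print(float(p_run))  # roughly 2e-25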

During the eighteenth and nineteenth centuries the realization grew continually stronger that aggregates of events may obey laws even when individuals do not. Uncertain as is the duration of any particular human life, the solvency of a life insurance company is guaranteed; uncertain as may be the sex of an unborn child, the approximate equality of numbers of the two sexes is one of the most certain things in the world.

This development had an important impact on the theory of chance itself. Previously chance was a nuisance, at least to those who wished to foresee and control the future. Man now began to use it for other purposes, or if not to use it, to bring it under control, to measure its effect, and to make due allowance for it. For example, errors of observation were found to follow a definite law, and it became possible to state limits of error in measurements in precise probabilistic terms. In the twentieth century we have seen similar ideas worked out to a high degree of precision: in the theory of sampling, where we are content to scrutinize only a subset of a population, relying on the laws of chance to give us a reasonably representative subset; or in the theory of experimental design, in which unwanted influences are distributed at random in such a way that chance destroys (or reduces to minimal risk) the possibility that they may distort the interpretation of the experiment. Man cannot remove chance effects, but he has learned to control them.
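
A minimal sketch of the two twentieth-century devices just described, random sampling and randomized allocation in an experimental design; the population size and the unit and treatment names are invented for the purpose.

import random

random.seed(42)

# Random sampling: scrutinize only a subset, trusting chance to make it representative.
population = list(range(10_000))           # stand-in for a real population
sample = random.sample(population, k=200)  # simple random sample without replacement

# Randomized experimental design: unwanted influences are spread at random
# over the groups instead of being allowed to bias one of them.
units = [f"plot-{i}" for i in range(20)]   # hypothetical experimental units
random.shuffle(units)
treatment, control = units[:10], units[10:]
print(sample[:5], treatment[:3], control[:3])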

In practice, there is little difference of opinion among the experts as to what should be done in any given set of practical circumstances affected by random influences. But, though they may agree on procedure and interpretation, there underlies the theory of chance and probability a profound difference of opinion as to the basis of the inferences which derive from probabilistic considerations.

We must now draw a distinction between chance and probability. To nearly all medieval logicians probability was an attitude of mind. It expressed the doubt which a person entertained towards some proposition. It was recognized (e.g., by Aquinas) that there were degrees of doubt, although nobody got so far as to suggest that probability could be measured. It was not necessarily related to the frequency with which an event occurred. Saint Thomas would have considered the word “probability” as equally applicable to the proposition that there was a lost continent of Atlantis as to the proposition that next summer will be a fine one.

The Doctrine of Chances, on the other hand, was related to the relative frequency of occurrence of the various modalities of a class of events. The two ideas have been confounded over the centuries, and even today there are strongly differing schools of thought on the subject. One school takes probability as a more-or-less subjective datum, and would try to embrace all doubtful propositions, whether relating to unique or to repetitive situations, within a probabilistic theory of doubt and belief. The other asserts that numerical probabilities can be related only to relative frequency. Both points of view have been very ably expounded, the main protagonists of the subjective viewpoint being Bruno de Finetti and L. J. Savage, and those of the frequency viewpoint, John Venn (1866) and R. von Mises (1928). The two are not, perhaps, irreconcilable, but they have never been successfully reconciled, at least to the satisfaction of the participants in the argument. The nearest approach, perhaps, is that of Sir Harold Jeffreys (1939).

To modern eyes, the matter becomes of critical importance when we realize that all science proceeds essentially from hypotheses of doubtful validity or generality, through experiment and confirmation, to more firmly based hypotheses. The problem, then, is whether we can use probability theory, of whatever basic character, in the scientist's approach to forming his picture of the universe. The first man to consider the problem in mathematical detail was Thomas Bayes, a Nonconformist minister whose paper was published posthumously (1764) and whose name is now firmly attached to a particular type of inference. Shorn of its mathematical trappings, Bayesian inference purports to assign numerical probabilities to alternative hypotheses which can explain observation. It can do so only by assigning prior probabilities to the hypotheses, prior, that is, in the sense that they are given before the observations are collected. Here rests the conflict between the Bayesians and the anti-Bayesians. The former like to express their degree of doubt about the alternative hypotheses at the outset in terms of numerical probabilities, and to modify those probabilities in the light of further experience; the latter prefer to reserve their initial doubts for a final synthetic judgment at the conclusion of the experiment. The course of thought during the nineteenth century was undoubtedly influenced by Laplace, who accepted Bayes's treatment, although recognizing the difficulty of resolving many practical situations into prior alternatives of equal probability.
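
In modern notation, the relation at the heart of this dispute is Bayes's theorem: the posterior probability of a hypothesis H_i, given the observed data D, is proportional to its prior probability multiplied by the probability of the data on that hypothesis,

    P(H_i \mid D) = \frac{P(D \mid H_i)\, P(H_i)}{\sum_j P(D \mid H_j)\, P(H_j)}.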

The basis of the controversy may be set out in fairly simple terms. A naïve statement of an argument in scientific inference would run like this:

On a given hypothesis a certain event is to be expected;
We experiment and find that the event is reasonably closely realized (or not realized);
We accept (or reject) the hypothesis, or at any rate regard it as confirmed (or not confirmed).

Such an enunciation requires some sophistication. The question is whether, if we interpret “to be expected” in terms of probability in the sense of the Doctrine of Chances (e.g., on the hypothesis that a penny is unbiased the chances are that in 100 tosses it will come down heads about half the time), we can, so to speak, invert the situation and make numerical statements about the probability of the hypothesis. Bayes saw the problem, but to attain practical results, he had to assume a postulate to the effect that if a number of different hypotheses were exhaustive and all consonant with the observed event, and nothing was known to the contrary, they were to be supposed to have equal prior probabilities. This so-called Principle of Indifference or of Nonsufficient Reason has been warmly contested by the anti-Bayesians.
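
The “inversion” at issue can be sketched numerically under Bayes's postulate of equal prior probabilities; the three candidate biases for the penny and the observed count of heads below are invented purely for illustration.

from math import comb

# Candidate hypotheses about a penny's chance of heads (illustrative values),
# given equal prior probabilities under the Principle of Indifference.
hypotheses = {"fair": 0.5, "light bias": 0.6, "heavy bias": 0.8}
prior = {h: 1 / len(hypotheses) for h in hypotheses}

# Observed event: 54 heads in 100 tosses (an invented observation).
n, k = 100, 54
likelihood = {h: comb(n, k) * p**k * (1 - p)**(n - k) for h, p in hypotheses.items()}

# Bayes's theorem: posterior proportional to prior times likelihood.
evidence = sum(prior[h] * likelihood[h] for h in hypotheses)
posterior = {h: prior[h] * likelihood[h] / evidence for h in hypotheses}
print(posterior)  # the "fair" hypothesis now carries the largest share of probability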

There seems to be no decisive criterion of choice
between the Bayesian and non-Bayesian approaches.
As with attitudes towards frequency or nonfrequency
theories of probability, a man must make up his own
mind about the criticisms that have been made of each.
Fortunately, in practice conclusions drawn from the
same data rarely differ—or if they do it appears that
the inference is entangled with personal experiences,
emotions, or prejudices which are not common to both
parties to the dispute.

Until the end of the nineteenth century, chance and probability, however regarded axiomatically, were still considered by most scientists and philosophers alike as expressions of ignorance, not as part of the basic structure of the world. The fall of a die might be the most unpredictable of events, but its unpredictability was due to the fact that we could not compute its trajectory with any accuracy; given enough information about initial conditions and sufficient mathematical skill we could calculate exactly how it would fall, and the element of chance would vanish. Notwithstanding the philosophic doubts raised by Hume and his successors about causality, the world was (and still is) interpreted by most people in a causal way. The laws of chance were not sui generis; they were the result of the convolution of a multiplicity of causes. As A. Cournot put it, following Aristotle, a chance event was the result of the intersection of many causally determined lines.

This edifice began to crack with the discovery of radioactivity. Here were phenomena which appeared to generate themselves in a basically chance manner, uninfluenced by pressure, temperature, or any external change which man could induce in their environment. It has even been suggested that a truly random sequence could be generated by noting the intervals between impacts in a Geiger counter. It began to look as if chance behavior was part of the very structure of the atomic world, and before long (ca. 1925), P. A. M. Dirac, Werner Heisenberg, and others were expressing subatomic phenomena as waves of probability.
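
The suggestion about the Geiger counter can be imitated in simulation: comparing successive inter-arrival intervals in pairs yields bits that are heads-or-tails even. The exponential intervals below merely stand in for what a real counter would supply.

import random

random.seed(7)

# Simulated intervals between counter clicks; real radioactive decay would
# supply genuinely unpredictable intervals, this only imitates them.
intervals = [random.expovariate(1.0) for _ in range(2001)]

# Compare successive intervals pairwise: for independent continuous intervals
# each comparison is an even chance, giving one bit per pair.
bits = [1 if a < b else 0 for a, b in zip(intervals[::2], intervals[1::2])]
print(bits[:20], sum(bits) / len(bits))  # the proportion of 1s hovers near 1/2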

We are still fairly close to the period in which these ideas were put forward, and in assessing them we have to take into account the general cultural and psychological environment of the times. Immediately after the First World War there was an upsurge of revolt against the repressive society of the later nineteenth century, and any idol which could be shown to have feet of clay was joyfully assaulted. Scientists, whether natural or social, are no more immune than poets to such movements. The warmth of the reception given to the theory of relativity (far more enthusiastic than the experimental evidence justified), to the quantum theory, and to Freudian psychology was in part due to this desire to throw off the shackles of the past; and the elevation of chance to a fundamental rule of behavior may have embodied a similar iconoclastic element. It is too soon to say; but now that the honeymoon period is over there are some who would revert to the older view and consider that perhaps it is our ignorance again which is being expressed in the probabilistic element of modern physics.

There remain, then, several important questions on which unanimity is far from being reached: whether a theory of probability can embrace attitudes of doubt of all kinds, whether chance phenomena are part of the basic structure of the world, what is the best method of setting up a theory of inference in terms of probability, whether all probabilities are measurable. Perhaps these questions may not be resolved until a great deal more knowledge is gained about how the human mind works. In the meantime the theory of probability continues to develop in a constructive manner and is an important adjunct to man's efforts to measure and control the world.

BIBLIOGRAPHY

J. Arbuthnot, “An Argument for Divine Providence, Taken From the Constant Regularity Observ'd in the Births of Both Sexes,” Philosophical Transactions of the Royal Society, 27 (1710), 186-90.
T. Bayes, “An Essay Towards Solving a Problem in the Doctrine of Chances,” Philosophical Transactions of the Royal Society, 53 (1763), 370-418.
Jakob (Jacques) Bernoulli, Ars conjectandi (Basel, 1713, posthumous; Brussels, 1968).
Rudolf Carnap, Logical Foundations of Probability (Chicago, 1950; 2nd ed. 1962).
A. A. Cournot, Essai sur les fondements... (1851), trans. M. H. Moore as Essay on the Foundations of our Knowledge (New York, 1956), Chs. IV, V, VI.
F. N. David, “Studies in the History of Probability and Statistics, I. Dicing and Gaming,” Biometrika, 42 (1955), 1; idem, Games, Gods and Gambling (London, 1962).
Bruno de Finetti, “La Prévision: ses lois logiques, ses sources subjectives,” Annales de l'Institut Henri Poincaré, 7 (1937), 1-68; trans. H. E. Kyburg, Jr., in Studies in Subjective Probability (New York, 1964).
Sir Harold Jeffreys, Theory of Probability (Oxford, 1939; 3rd ed. 1961).
M. G. Kendall, “On the Reconciliation of Theories of Probability,” Biometrika, 36 (1949), 101; idem, “Studies in the History of Probability and Statistics, II. The Beginnings of a Probability Calculus,” Biometrika, 43 (1956), 1; idem, “Studies in the History of Probability and Statistics, V. A Note on Playing Cards,” Biometrika, 44 (1957), 260.
J. M. Keynes, A Treatise on Probability (London, 1921).
Pierre Simon, Marquis de Laplace, Théorie analytique des probabilités (Paris, 1812); the introductory essay trans. F. W. Truscott and F. L. Emory as A Philosophical Essay on Probabilities (London and New York, 1902; New York, 1951).
Oystein Ore, Cardano, The Gambling Scholar (Princeton, 1953).
L. J. Savage, Foundations of Statistical Inference, 2nd ed. (New York, 1964).
J. P. Süssmilch, Die Göttliche Ordnung in den Veränderungen des menschlichen Geschlechts aus der Geburt, dem Tode und der Fortpflanzung desselben erwiesen (Berlin, 1741).
Isaac Todhunter, A History of the Mathematical Theory of Probability... (Cambridge and London, 1865).
John Venn, The Logic of Chance (London, 1866).
R. von Mises, Wahrscheinlichkeit, Statistik und Wahrheit (Vienna, 1928); trans. as Probability, Statistics and Truth (London, 1939; New York, 1961).

MAURICE KENDALL

[See also Causation; Chance Images; Epicureanism; Fortune; Free Will and Determinism; Game Theory; Indeterminacy; Number; Probability.]