Probability interpretations

Philosophical interpretation of the axioms of probability

From Wikipedia, the free encyclopedia

An interpretation of probability explains how the mathematical theory of probability relates to real-world phenomena. Schools of thought differ on whether probabilities are associated with evidential relationships, beliefs, or physical systems, whether they are objective or subjective, and other issues.

Probability and statistics are closely related, and some probability interpretations are associated with corresponding approaches to statistical inference, connecting the philosophical question of the meaning of probability statements to practical issues of statistical methodology.

Preliminary considerations

The theory of probability originated in correspondence between Blaise Pascal and Pierre de Fermat in the seventeenth century[1] and was formalized by Andrey Kolmogorov in the twentieth century. The nature and meaning of mathematical statements, including the axioms and theorems of probability theory, is an important topic within the philosophy of mathematics.[2][3]

While the concept of probability originated in the study of random processes such as drawing playing cards and rolling dice, probability claims are also made in situations that do not seem to involve any such process. For example, someone who, when asked which mountain is the tallest in our solar system, responds "it's probably Olympus Mons" is unlikely to be asserting anything about planetary geological processes; instead, such statements may be better understood as qualifying a belief with a degree of confidence. When it is written that "the most probable explanation" of the name of Ludlow, Massachusetts is that it was named after Roger Ludlow, what is meant is not that Roger Ludlow is the most probable outcome of a certain random process, but rather that this is the explanation with the strongest evidential relationship to the observed fact. An interpretation of probability should ideally explain how each of these phenomena, i.e., outcomes of random processes, degrees of confidence, and evidential relationships, is related to mathematical probability.

Probability is the most important concept in modern science, especially as nobody has the slightest notion what it means.

Bertrand Russell, 1929 Lecture, cited in The Development of Mathematics (1945), E. T. Bell

Probability interpretations may be divided into two broad categories: the evidential, epistemic, or subjective interpretations, and the physical, empirical, or objective interpretations. Note that these terms can be misleading and the proponents of a given interpretation may not universally endorse any of them.

Evidential interpretations

Evidential probabilities can be assigned to any statement whatsoever, even when no random process is involved, as a way to represent its subjective plausibility, or the degree to which the statement is supported by the available evidence. On most accounts, evidential probabilities are considered to be degrees of belief, defined in terms of dispositions to gamble at certain odds. The four main evidential interpretations are the classical (e.g. Laplace's)[4] interpretation, the subjective interpretation (de Finetti[5] and Savage[6]), the epistemic or inductive interpretation (Ramsey,[7] Cox[8]) and the logical interpretation (Keynes[9] and Carnap[10]). There are also evidential interpretations of probability covering groups, which are often labelled as 'intersubjective' (proposed by Gillies[11] and Rowbottom[12]).

Classical interpretation

The first attempt at mathematical rigour in the field of probability, championed by Pierre-Simon Laplace, is now known as the classical definition. Developed from studies of games of chance (such as rolling dice), it states that probability is shared equally between all the possible outcomes, provided these outcomes can be deemed equally likely.[13] (§3.1)

The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.

Pierre-Simon Laplace, A Philosophical Essay on Probabilities[4]

The classical definition of probability works well for situations with only a finite number of equally-likely outcomes.

This can be represented mathematically as follows: if a random experiment can result in N mutually exclusive and equally likely outcomes, and if N_A of these outcomes result in the occurrence of the event A, then the probability of A is defined by

P(A) = N_A / N.

There are two clear limitations to the classical definition.[14] Firstly, it is applicable only to situations in which there are only a finite number of possible outcomes. But some important random experiments, such as tossing a coin until it shows heads, give rise to an infinite set of outcomes. And secondly, it relies on the potentially circular "principle of insufficient reason": that all possible outcomes are equally likely if there is no reason to assume otherwise.
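The classical counting rule above can be sketched directly in code. This is an illustrative sketch only, assuming a finite set of equally likely outcomes; the function name `classical_probability` is an assumption for the example, not a standard library function.

```python
from fractions import Fraction

def classical_probability(outcomes, event):
    """Classical (Laplace) probability: the ratio of favorable cases to all
    cases, assuming every outcome in `outcomes` is equally likely."""
    outcomes = list(outcomes)
    favorable = [o for o in outcomes if event(o)]
    return Fraction(len(favorable), len(outcomes))

# Probability of rolling an even number with a fair six-sided die:
p_even = classical_probability(range(1, 7), lambda o: o % 2 == 0)
# p_even == Fraction(1, 2)
```

Note that the function presupposes exactly what the classical definition presupposes: a finite outcome set and the judgment that its elements are equally likely.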

Subjective interpretations

Subjectivists, also known as Bayesians or followers of epistemic probability, give the notion of probability a subjective status by regarding it as a measure of the 'degree of belief' of the individual assessing the uncertainty of a particular situation. Epistemic or subjective probability is sometimes called credence, as opposed to chance. Examples of epistemic probability include assigning a probability to the proposition that a proposed law of physics is true, or determining how probable it is that a suspect committed a crime, based on the evidence presented.

The use of Bayesian probability raises the philosophical debate as to whether it can contribute valid justifications of belief. Bayesians point to the work of Ramsey[7] (p 182) and de Finetti[5] (p 103) as proving that subjective beliefs must follow the laws of probability if they are to be coherent.[15] Evidence casts doubt on whether humans actually hold coherent beliefs.[16][17]

The use of Bayesian probability involves specifying a prior probability. This may be obtained from consideration of whether the required prior probability is greater or less than a reference probability associated with an urn model or a thought experiment. The issue is that for a given problem, multiple thought experiments could apply, and choosing one is a matter of judgement: different people may assign different prior probabilities. This is known as the reference class problem. The "sunrise problem" provides an example.
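The coherence constraint on degrees of belief is usually cashed out via Bayes' rule for updating a prior in the light of evidence. The following is a hypothetical sketch of such an update; the prior of 0.10 and the likelihoods are invented numbers for illustration, not values from any real case.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis H after observing evidence E,
    via Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Invented numbers: prior credence 0.10 that the suspect committed the crime;
# the evidence is assumed 8x more likely if guilty (0.8) than if innocent (0.1).
posterior = bayes_update(0.10, 0.8, 0.1)
# posterior == 0.08 / 0.17, roughly 0.47
```

The reference class problem enters at the first argument: different choices of thought experiment or reference class yield different priors, and hence different posteriors from the same evidence.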

Logical interpretations

The term "probability" is sometimes used in contexts where it has nothing to do with physical randomness. Consider, for example, the claim that the extinction of the dinosaurs was probably caused by a large meteorite hitting the earth. A hypothesis being "probably true" can be interpreted to mean that the (presently available) empirical evidence supports the hypothesis to a high degree. This degree of support has been called the logical, or epistemic, or inductive probability of the hypothesis given the evidence.

The differences between these interpretations are rather small, and may seem inconsequential. One of the main points of disagreement lies in the relation between probability and belief. Logical probabilities are conceived (for example in Keynes' Treatise on Probability[9]) to be objective, logical relations between propositions (or sentences), and hence not to depend in any way upon belief. They are degrees of (partial) entailment, or degrees of logical consequence, not degrees of belief. Ramsey, on the other hand, was skeptical about the existence of such objective logical relations and argued that (evidential) probability is "the logic of partial belief".[7] (p 157) In other words, Ramsey held that epistemic probabilities simply are degrees of rational belief, rather than being logical relations that merely constrain degrees of rational belief.

Another point of disagreement concerns the uniqueness of evidential probability, relative to a given state of knowledge. Carnap held, for example, that logical principles always determine a unique logical probability for any statement, relative to any body of evidence. Ramsey, by contrast, thought that while degrees of belief are subject to some rational constraints (such as, but not limited to, the axioms of probability) these constraints usually do not determine a unique value. In other words, according to Ramsey, rational people may differ somewhat in their degrees of belief, even if they all have the same information.

Physical interpretations

Physical probabilities, which are also called objective or frequency probabilities, are associated with random physical systems such as roulette wheels, rolling dice, and radioactive atoms. In such systems, a given type of event (such as a die yielding a six) tends to occur at a persistent rate, or "relative frequency", in a long run of trials. Physical probabilities either explain, or are invoked to explain, these stable frequencies. The two main kinds of theory of physical probability are frequentist accounts (such as those of Venn,[18] Reichenbach[19] and von Mises)[20] and propensity accounts (such as those of Popper, Miller, Giere and Fetzer).[12]

Frequency interpretations

[Image caption: For frequentists, the probability of the ball landing in any pocket of a roulette wheel can be determined only by repeated trials in which the observed result converges to the underlying probability in the long run.]

Frequentists posit that the probability of an event is its relative frequency over time,[13] (§3.4) i.e., its relative frequency of occurrence after repeating a process a large number of times under similar conditions. This is also known as aleatory probability. The events are assumed to be governed by some random physical phenomena, which are either phenomena that are predictable, in principle, with sufficient information (see determinism), or phenomena which are essentially unpredictable. Examples of the first kind include tossing dice or spinning a roulette wheel; an example of the second kind is radioactive decay. In the case of tossing a fair coin, frequentists say that the probability of getting heads is 1/2, not because there are two equally likely outcomes but because repeated series of large numbers of trials demonstrate that the empirical frequency converges to the limit 1/2 as the number of trials goes to infinity.

If we denote by n_A the number of occurrences of an event A in n trials, then if lim_{n→∞} n_A/n = p, we say that P(A) = p.

The frequentist view has its own problems. It is of course impossible to actually perform an infinite number of repetitions of a random experiment to determine the probability of an event. But if only a finite number of repetitions of the process are performed, different relative frequencies will appear in different series of trials. If these relative frequencies are to define the probability, the probability will be slightly different every time it is measured. But the real probability should be the same every time. If we acknowledge that we can only measure a probability with some error of measurement attached, we still get into problems, as the error of measurement can only be expressed as a probability, the very concept we are trying to define. This renders even the frequency definition circular; see, for example, "What is the Chance of an Earthquake?"[21]
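Both the limiting-frequency definition and the finite-sample difficulty just described can be illustrated by simulation. This is a sketch only; the seed and trial counts are arbitrary choices for the example.

```python
import random

random.seed(0)  # arbitrary seed, chosen only to make the example reproducible

def relative_frequency(trials):
    """Empirical relative frequency of heads in `trials` fair coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(trials))
    return heads / trials

# Different finite series of trials yield slightly different relative
# frequencies, but they cluster ever more tightly around 1/2 as the
# number of trials grows.
freqs = {n: relative_frequency(n) for n in (100, 10_000, 1_000_000)}
```

The simulation exhibits exactly the frequentist's predicament: every finite run gives a slightly different number, and the claim that these numbers converge to 1/2 is itself a probabilistic claim (the law of large numbers), which is where the charge of circularity enters.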

Propensity interpretations

Propensity theorists think of probability as a physical propensity, or disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind or to yield a long run relative frequency of such an outcome.[22] This kind of objective probability is sometimes called 'chance'.

Propensities, or chances, are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate given outcome types at persistent rates. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives (see "single case possible" in the table above).[23] In contrast, a propensitist is able to use the law of large numbers to explain the behaviour of long-run frequencies. This law, which is a consequence of the axioms of probability, says that if (for example) a coin is tossed repeatedly many times, in such a way that its probability of landing heads is the same on each toss, and the outcomes are probabilistically independent, then the relative frequency of heads will be close to the probability of heads on each single toss. On this view, stable long-run frequencies are a manifestation of invariant single-case probabilities. In addition to explaining the emergence of stable relative frequencies, the idea of propensity is motivated by the desire to make sense of single-case probability attributions in quantum mechanics, such as the probability of decay of a particular atom at a particular time.

The main challenge facing propensity theories is to say exactly what propensity means. (And then, of course, to show that propensity thus defined has the required properties.) At present, unfortunately, none of the well-recognised accounts of propensity comes close to meeting this challenge.

A propensity theory of probability was given by Charles Sanders Peirce.[24][25][26][27] A later propensity theory was proposed by philosopher Karl Popper, who had only slight acquaintance with the writings of C. S. Peirce, however.[24][25] Popper noted that the outcome of a physical experiment is produced by a certain set of "generating conditions". When we repeat an experiment, as the saying goes, we really perform another experiment with a (more or less) similar set of generating conditions. To say that a set of generating conditions has propensity p of producing the outcome E means that those exact conditions, if repeated indefinitely, would produce an outcome sequence in which E occurred with limiting relative frequency p. For Popper, then, a deterministic experiment would have propensity 0 or 1 for each outcome, since those generating conditions would have the same outcome on each trial. In other words, non-trivial propensities (those that differ from 0 and 1) only exist for genuinely nondeterministic experiments.

A number of other philosophers, including David Miller and Donald A. Gillies, have proposed propensity theories somewhat similar to Popper's.

Other propensity theorists (e.g. Ronald Giere[28]) do not explicitly define propensities at all, but rather see propensity as defined by the theoretical role it plays in science. They argue, for example, that physical magnitudes such as electrical charge cannot be explicitly defined either, in terms of more basic things, but only in terms of what they do (such as attracting and repelling other electrical charges). In a similar way, propensity is whatever fills the various roles that physical probability plays in science.

See also

References

Further reading
