Bayesian econometrics

From Wikipedia, the free encyclopedia

Bayesian econometrics is a branch of econometrics which applies Bayesian principles to economic modelling. Bayesianism is based on a degree-of-belief interpretation of probability, as opposed to a relative-frequency interpretation.

The Bayesian principle relies on Bayes' theorem, which states that the probability of B conditional on A is the joint probability of A and B divided by the probability of A. Bayesian econometricians assume that coefficients in the model have prior distributions.
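As a hypothetical numeric check of the theorem, assume a made-up joint distribution over two binary events A and B (the numbers below are purely illustrative):

```python
# Numeric check of Bayes' theorem: P(B | A) = P(A and B) / P(A).
# Joint distribution over two binary events A and B (illustrative numbers).
p_joint = {(True, True): 0.12, (True, False): 0.28,
           (False, True): 0.18, (False, False): 0.42}

p_A = sum(p for (a, _), p in p_joint.items() if a)   # P(A) = 0.12 + 0.28 = 0.40
p_A_and_B = p_joint[(True, True)]                    # P(A and B) = 0.12
p_B_given_A = p_A_and_B / p_A                        # 0.12 / 0.40 = 0.30

print(p_B_given_A)
```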

This approach was first propagated by Arnold Zellner.[1]

Subjective probabilities have to satisfy the standard axioms of probability theory if one wishes to avoid losing a bet regardless of the outcome.[2] Before the data is observed, the parameter θ is regarded as an unknown quantity and thus a random variable, which is assigned a prior distribution π(θ) with 0 ≤ θ ≤ 1. Bayesian analysis concentrates on the inference of the posterior distribution π(θ|y), i.e. the distribution of the random variable θ conditional on the observation of the discrete data y. The posterior density function π(θ|y) can be computed based on Bayes' theorem:

π(θ|y) = p(y|θ)π(θ) / p(y)

where p(y) = Σ_θ p(y|θ)π(θ), yielding a normalized probability function. For continuous data y, this corresponds to:

π(θ|y) = f(y|θ)π(θ) / f(y)

where f(y) = ∫ f(y|θ)π(θ) dθ, which is the centerpiece of Bayesian statistics and econometrics. It has the following components:

  • π(θ|y): the posterior density function of θ;
  • f(y|θ): the likelihood function, i.e. the density function for the observed data y when the parameter value is θ;
  • π(θ): the prior distribution of θ;
  • f(y): the probability density function of y.
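The discrete form of Bayes' theorem can be sketched on a grid of candidate parameter values. The binomial likelihood, the nine-point grid, and the uniform prior below are all illustrative choices, not part of the original text:

```python
# Minimal sketch: posterior over a discrete grid of parameter values,
# assuming a binomial likelihood and a uniform prior (illustrative setup).
from math import comb

# Candidate values of theta (a success probability) with a uniform prior.
thetas = [0.1 * i for i in range(1, 10)]
prior = {t: 1 / len(thetas) for t in thetas}

def likelihood(y, n, theta):
    """Binomial density of observing y successes in n trials."""
    return comb(n, y) * theta**y * (1 - theta)**(n - y)

y, n = 7, 10  # observed data: 7 successes in 10 trials

# Normalizing constant: p(y) = sum over theta of p(y | theta) * prior(theta)
p_y = sum(likelihood(y, n, t) * prior[t] for t in thetas)

# Bayes' theorem: posterior(theta) = likelihood * prior / p(y)
posterior = {t: likelihood(y, n, t) * prior[t] / p_y for t in thetas}

print(max(posterior, key=posterior.get))  # posterior mode is near 0.7
```

With a uniform prior the posterior mode coincides with the maximum likelihood estimate y/n = 0.7, which previews the large-sample behaviour discussed below.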

The posterior function is given by π(θ|y) ∝ f(y|θ)π(θ), i.e., the posterior function is proportional to the product of the likelihood function and the prior distribution, and can be understood as a method of updating information, with the difference between π(θ) and π(θ|y) being the information gain concerning θ after observing new data. The choice of the prior distribution is used to impose restrictions on θ, e.g. 0 ≤ θ ≤ 1, with the beta distribution as a common choice due to (i) being defined between 0 and 1, (ii) being able to produce a variety of shapes, and (iii) yielding a posterior distribution of the standard form if combined with the likelihood function f(y|θ). Based on the properties of the beta distribution, an ever-larger sample size implies that the mean of the posterior distribution approximates the maximum likelihood estimator.

The assumed form of the likelihood function is part of the prior information and has to be justified. Different distributional assumptions can be compared using posterior odds ratios if a priori grounds fail to provide a clear choice. Commonly assumed forms include the beta distribution, the gamma distribution, and the uniform distribution, among others.

If the model contains multiple parameters, the parameter can be redefined as a vector. Applying probability theory to that vector of parameters yields the marginal and conditional distributions of individual parameters or parameter groups. If data generation is sequential, Bayesian principles imply that the posterior distribution for the parameter based on new evidence will be proportional to the product of the likelihood for the new data, given previous data and the parameter, and the posterior distribution for the parameter, given the old data. This provides an intuitive way of allowing new information to influence beliefs about a parameter through Bayesian updating.
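The beta-binomial conjugacy and the sequential-updating property described above can be sketched in a few lines. The Beta(1, 1) prior and the batch sizes are arbitrary illustrative choices:

```python
# Sketch of conjugate Bayesian updating, assuming a Beta(a, b) prior on a
# probability theta and binomial data; all numbers are illustrative.

def update_beta(a, b, successes, failures):
    """Beta prior + binomial likelihood -> Beta posterior (conjugacy):
    the posterior is Beta(a + successes, b + failures)."""
    return a + successes, b + failures

# Start from a Beta(1, 1) prior, i.e. uniform on [0, 1].
a, b = 1.0, 1.0

# Sequential updating: process the data in two batches, treating each
# intermediate posterior as the prior for the next batch...
a, b = update_beta(a, b, successes=3, failures=1)   # first batch
a, b = update_beta(a, b, successes=4, failures=2)   # second batch

# ...which yields the same posterior, Beta(8, 4), as processing all
# 7 successes and 3 failures at once.
posterior_mean = a / (a + b)             # 8 / 12, approximately 0.667
mle = (3 + 4) / (3 + 1 + 4 + 2)          # 7 / 10 = 0.7, close to the mean
print(a, b, posterior_mean)
```

The order of the batches does not matter: the final Beta parameters depend only on the total counts, which is the sequential-updating property in miniature.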
If the sample size is large, (i) the prior distribution plays a relatively small role in determining the posterior distribution, (ii) the posterior distribution converges to a degenerate distribution at the true value of the parameter, and (iii) the posterior distribution is approximately normally distributed with mean equal to the maximum likelihood estimate θ̂.
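These large-sample properties can be illustrated with the conjugate beta posterior. The Beta(2, 5) prior, the 60% success rate, and the sample sizes below are arbitrary illustrative choices:

```python
# Sketch: as the sample size grows, the beta posterior mean approaches the
# maximum likelihood estimate y/n and the posterior concentrates
# (illustrative numbers; the Beta(2, 5) prior is an arbitrary choice).
a0, b0 = 2.0, 5.0          # prior Beta(a0, b0)
true_rate = 0.6

for n in (10, 100, 10_000):
    y = int(true_rate * n)                     # stylized data: 60% successes
    a, b = a0 + y, b0 + (n - y)                # conjugate posterior Beta(a, b)
    post_mean = a / (a + b)
    post_var = a * b / ((a + b) ** 2 * (a + b + 1))
    print(n, round(post_mean, 4), round(post_var, 6))

# The posterior mean tends to y/n = 0.6 and the posterior variance shrinks
# toward zero, so the prior's influence fades as n grows.
```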

History

Current research topics

References
