Separation (statistics)

From Wikipedia, the free encyclopedia

In statistics, separation is a phenomenon associated with models for dichotomous or categorical outcomes, including logistic and probit regression. Separation occurs if the predictor (or a linear combination of some subset of the predictors) is associated with only one outcome value when the predictor range is split at a certain value.

For example, suppose the predictor X is continuous and the outcome y = 1 for all observed x > 2. If the outcome values are (seemingly) perfectly determined by the predictor (e.g., y = 0 whenever x ≤ 2), then the condition "complete separation" is said to occur. If instead there is some overlap (e.g., y = 0 when x < 2, but y takes observed values of both 0 and 1 when x = 2), then "quasi-complete separation" occurs. A 2 × 2 table with an empty (zero) cell is an example of quasi-complete separation.
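The distinction can be sketched with a small check in Python (an informal illustration, not a standard library routine; the function name and toy data are hypothetical):

```python
def separation_at(threshold, xs, ys):
    """Classify how the split at `threshold` relates to the binary outcome y.

    Returns "complete", "quasi-complete", or "none".
    """
    low = {y for x, y in zip(xs, ys) if x < threshold}    # outcomes below
    at = {y for x, y in zip(xs, ys) if x == threshold}    # outcomes exactly at
    high = {y for x, y in zip(xs, ys) if x > threshold}   # outcomes above
    if low <= {0} and high <= {1}:
        # Below the threshold only y = 0 occurs, above it only y = 1.
        # If the threshold point itself is unambiguous, separation is complete;
        # if both outcomes occur there, it is quasi-complete.
        return "complete" if at <= {0} or at <= {1} else "quasi-complete"
    return "none"

# Complete separation: y = 0 for every x <= 2 and y = 1 for every x > 2.
xs1, ys1 = [0, 1, 2, 3, 4], [0, 0, 0, 1, 1]
print(separation_at(2, xs1, ys1))  # complete

# Quasi-complete separation: both outcome values are observed at x = 2.
xs2, ys2 = [0, 1, 2, 2, 3, 4], [0, 0, 0, 1, 1, 1]
print(separation_at(2, xs2, ys2))  # quasi-complete
```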

The problem

This observed form of the data is important because it sometimes causes problems with the estimation of regression coefficients. For example, maximum likelihood (ML) estimation relies on maximizing the likelihood function. In a logistic regression with completely separated data, the likelihood attains its supremum only at the margin of the parameter space, leading to "infinite" estimates and, along with that, to problems with providing sensible standard errors.[1][2] Statistical software will often output an arbitrarily large parameter estimate with a very large standard error.[3]

Possible remedies

References

Further reading
