Rice distribution

From Wikipedia, the free encyclopedia

In probability theory, the Rice distribution or Rician distribution (or, less commonly, Ricean distribution) is the probability distribution of the magnitude of a circularly symmetric bivariate normal random variable, possibly with non-zero mean (noncentral). It was named after Stephen O. Rice (1907–1986).

In the 2D plane, pick a fixed point at distance ν from the origin. Generate a distribution of 2D points centered around that point, where the x and y coordinates are chosen independently from a Gaussian distribution with standard deviation σ (blue region). If R is the distance from these points to the origin, then R has a Rice distribution.
Probability density function
[Plot: Rice probability density functions, σ = 1.0]
Cumulative distribution function
[Plot: Rice cumulative distribution functions, σ = 1.0]
Notation: \mathrm{Rice}(\nu, \sigma)
Parameters: \nu \ge 0, distance between the reference point and the center of the bivariate distribution; \sigma \ge 0, scale
Support: x \in [0, +\infty)
PDF: \frac{x}{\sigma^2} \exp\!\left(\frac{-(x^2 + \nu^2)}{2\sigma^2}\right) I_0\!\left(\frac{x\nu}{\sigma^2}\right)
CDF: 1 - Q_1\!\left(\frac{\nu}{\sigma}, \frac{x}{\sigma}\right), where Q_1 is the Marcum Q-function
Mean: \sigma \sqrt{\pi/2}\; L_{1/2}\!\left(-\nu^2/(2\sigma^2)\right)
Variance: 2\sigma^2 + \nu^2 - \frac{\pi\sigma^2}{2} L_{1/2}^2\!\left(\frac{-\nu^2}{2\sigma^2}\right)
Skewness: (complicated)
Excess kurtosis: (complicated)

Characterization

The probability density function is

f(x \mid \nu, \sigma) = \frac{x}{\sigma^2} \exp\!\left(\frac{-(x^2 + \nu^2)}{2\sigma^2}\right) I_0\!\left(\frac{x\nu}{\sigma^2}\right) H(x),

where I_0(z) is the modified Bessel function of the first kind with order zero, and H(x) is the Heaviside step function, which restricts the support to x \ge 0.[1]
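As a numerical sanity check of this density, the sketch below (stdlib-only Python; the power-series implementation of I_0, the parameter values, the grid size, and the sample count are all arbitrary choices made here) draws the magnitude of a shifted 2D Gaussian and compares an empirical probability with the integral of the density:

```python
import math
import random

def bessel_i0(z, terms=60):
    # Modified Bessel function of the first kind, order 0:
    # I0(z) = sum_{k>=0} (z^2/4)^k / (k!)^2
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (z * z / 4.0) / (k * k)
        total += term
    return total

def rice_pdf(x, nu, sigma):
    # f(x | nu, sigma) = (x / sigma^2) exp(-(x^2 + nu^2) / (2 sigma^2)) I0(x nu / sigma^2), x >= 0
    if x < 0:
        return 0.0
    s2 = sigma * sigma
    return (x / s2) * math.exp(-(x * x + nu * nu) / (2 * s2)) * bessel_i0(x * nu / s2)

# R = |(X, Y)| with X ~ N(nu, sigma^2), Y ~ N(0, sigma^2) should follow Rice(nu, sigma).
random.seed(0)
nu, sigma, n = 2.0, 1.0, 200_000
samples = [math.hypot(random.gauss(nu, sigma), random.gauss(0.0, sigma)) for _ in range(n)]

# Empirical P(R < 2) versus trapezoidal integration of the density on [0, 2]
p_emp = sum(s < 2.0 for s in samples) / n
grid = [i * 2.0 / 1000 for i in range(1001)]
vals = [rice_pdf(x, nu, sigma) for x in grid]
p_int = sum((vals[i] + vals[i + 1]) * (2.0 / 1000) / 2 for i in range(1000))
print(abs(p_emp - p_int))  # small: only sampling and quadrature noise remain
```

The two probabilities agree to within Monte Carlo error, which is the defining property stated in the figure caption above: the magnitude of an offset circular Gaussian is Rice distributed.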

In the context of Rician fading, the distribution is often also rewritten using the shape parameter K = \frac{\nu^2}{2\sigma^2}, defined as the ratio of the power contributions by the line-of-sight path to the remaining multipaths, and the scale parameter \Omega = \nu^2 + 2\sigma^2, defined as the total power received in all paths.[2] In these parameters the density becomes

f(x \mid K, \Omega) = \frac{2(K+1)x}{\Omega} \exp\!\left(-K - \frac{(K+1)x^2}{\Omega}\right) I_0\!\left(2\sqrt{\frac{K(K+1)}{\Omega}}\, x\right).

The characteristic function of the Rice distribution is given as:[3][4]

\chi_X(t; \nu, \sigma) = \exp\!\left(-\frac{\nu^2}{2\sigma^2}\right) \left[\Psi_2\!\left(1; 1, \tfrac{1}{2}; \frac{\nu^2}{2\sigma^2}, -\frac{t^2\sigma^2}{2}\right) + i t \sigma \sqrt{\frac{\pi}{2}}\, \Psi_2\!\left(\tfrac{3}{2}; 1, \tfrac{3}{2}; \frac{\nu^2}{2\sigma^2}, -\frac{t^2\sigma^2}{2}\right)\right],

where \Psi_2(\alpha; \gamma, \gamma'; x, y) is one of Horn's confluent hypergeometric functions with two variables, convergent for all finite values of x and y. It is given by:[5][6]

\Psi_2(\alpha; \gamma, \gamma'; x, y) = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \frac{(\alpha)_{m+n}}{(\gamma)_m (\gamma')_n} \frac{x^m y^n}{m!\, n!},

where

(a)_n = a(a+1)(a+2) \cdots (a+n-1)

is the rising factorial.
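The double series for Ψ₂ can be evaluated by direct truncation. The sketch below (stdlib-only Python; the truncation depth, parameter values, and sample count are arbitrary choices, and the Ψ₂-based characteristic-function formula in the comment is the representation reconstructed here, which should be checked against the cited sources) compares it with a Monte Carlo estimate of E[e^{itR}]:

```python
import cmath
import math
import random

def rising(a, n):
    # Rising factorial (Pochhammer symbol): (a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1
    out = 1.0
    for k in range(n):
        out *= a + k
    return out

def psi2(alpha, g1, g2, x, y, terms=40):
    # Horn's confluent hypergeometric Psi_2(alpha; g1, g2; x, y), truncated double series
    total = 0.0
    for m in range(terms):
        for n in range(terms):
            total += (rising(alpha, m + n)
                      / (rising(g1, m) * rising(g2, n))
                      * x ** m * y ** n
                      / (math.factorial(m) * math.factorial(n)))
    return total

def rice_cf(t, nu, sigma):
    # chi(t) = e^{-a} [ Psi2(1; 1, 1/2; a, y) + i t sigma sqrt(pi/2) Psi2(3/2; 1, 3/2; a, y) ]
    # with a = nu^2 / (2 sigma^2) and y = -t^2 sigma^2 / 2 (representation assumed here)
    a = nu * nu / (2 * sigma * sigma)
    y = -t * t * sigma * sigma / 2
    re = psi2(1.0, 1.0, 0.5, a, y)
    im = t * sigma * math.sqrt(math.pi / 2) * psi2(1.5, 1.0, 1.5, a, y)
    return math.exp(-a) * complex(re, im)

# Monte Carlo estimate of E[exp(i t R)] for R ~ Rice(1, 1)
random.seed(1)
nu, sigma, t, n = 1.0, 1.0, 0.7, 200_000
samples = [math.hypot(random.gauss(nu, sigma), random.gauss(0.0, sigma)) for _ in range(n)]
mc = sum(cmath.exp(1j * t * s) for s in samples) / n
print(abs(rice_cf(t, nu, sigma) - mc))  # on the order of the sampling error
```

Note that at t = 0 the second term vanishes and the first reduces to e^{-a} \, {}_1F_1(1; 1; a) = 1, as any characteristic function must.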

Properties

Moments

The first few raw moments are:

\mu_1 = \sigma \sqrt{\pi/2}\; L_{1/2}(-\nu^2/(2\sigma^2))
\mu_2 = 2\sigma^2 + \nu^2
\mu_3 = 3\sigma^3 \sqrt{\pi/2}\; L_{3/2}(-\nu^2/(2\sigma^2))
\mu_4 = 8\sigma^4 + 8\sigma^2\nu^2 + \nu^4
\mu_5 = 15\sigma^5 \sqrt{\pi/2}\; L_{5/2}(-\nu^2/(2\sigma^2))
\mu_6 = 48\sigma^6 + 72\sigma^4\nu^2 + 18\sigma^2\nu^4 + \nu^6

and, in general, the raw moments are given by

\mu_k = \sigma^k 2^{k/2}\, \Gamma(1 + k/2)\, L_{k/2}(-\nu^2/(2\sigma^2)).

Here L_q(x) denotes a Laguerre polynomial:

L_q(x) = L_q^{(0)}(x) = M(-q, 1, x) = {}_1F_1(-q; 1; x),

where M(a, b, z) = {}_1F_1(a; b; z) is the confluent hypergeometric function of the first kind. When k is even, the raw moments become simple polynomials in \sigma and \nu, as in the examples above.
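The even raw moments are easy to verify numerically. This sketch (stdlib-only Python; the parameter values and sample count are arbitrary choices) compares Monte Carlo second and fourth moments with the closed forms \mu_2 = \nu^2 + 2\sigma^2 and \mu_4 = \nu^4 + 8\sigma^2\nu^2 + 8\sigma^4:

```python
import math
import random

# Monte Carlo check of the even raw moments of Rice(nu, sigma)
random.seed(2)
nu, sigma, n = 1.5, 0.8, 400_000
samples = [math.hypot(random.gauss(nu, sigma), random.gauss(0.0, sigma)) for _ in range(n)]

m2 = sum(s ** 2 for s in samples) / n
m4 = sum(s ** 4 for s in samples) / n
mu2 = nu ** 2 + 2 * sigma ** 2                              # closed-form mu_2
mu4 = nu ** 4 + 8 * sigma ** 2 * nu ** 2 + 8 * sigma ** 4   # closed-form mu_4
print(abs(m2 - mu2), abs(m4 - mu4))  # both differences should be small
```

The second moment is immediate without any Bessel-function machinery, since R^2 = X^2 + Y^2 with E[X^2] = \nu^2 + \sigma^2 and E[Y^2] = \sigma^2.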

For the case q = 1/2:

L_{1/2}(x) = {}_1F_1\!\left(-\tfrac{1}{2}; 1; x\right) = e^{x/2} \left[(1 - x)\, I_0\!\left(-\frac{x}{2}\right) - x\, I_1\!\left(-\frac{x}{2}\right)\right].

The second central moment, the variance, is

\sigma_R^2 = 2\sigma^2 + \nu^2 - \frac{\pi\sigma^2}{2} L_{1/2}^2\!\left(\frac{-\nu^2}{2\sigma^2}\right).

Note that L_{1/2}^2(\cdot) indicates the square of the Laguerre polynomial L_{1/2}(\cdot), not the generalized Laguerre polynomial L_{1/2}^{(2)}(\cdot).

Related distributions

  • R \sim \mathrm{Rice}(|\nu|, \sigma) if R = \sqrt{X^2 + Y^2} where X \sim N(\nu \cos\theta, \sigma^2) and Y \sim N(\nu \sin\theta, \sigma^2) are statistically independent normal random variables and \theta is any real number.
  • Another case where R \sim \mathrm{Rice}(\nu, \sigma) comes from the following steps:
    1. Generate P having a Poisson distribution with parameter (also mean, for a Poisson) \lambda = \frac{\nu^2}{2\sigma^2}.
    2. Generate X having a chi-squared distribution with 2P + 2 degrees of freedom.
    3. Set R = \sigma \sqrt{X}.
  • If R \sim \mathrm{Rice}(\nu, \sigma) then R^2/\sigma^2 has a noncentral chi-squared distribution with two degrees of freedom and noncentrality parameter \nu^2/\sigma^2.
  • If R \sim \mathrm{Rice}(\nu, \sigma) then R/\sigma has a noncentral chi distribution with two degrees of freedom and noncentrality parameter \nu/\sigma.
  • If \nu = 0 then R \sim \mathrm{Rayleigh}(\sigma), i.e., for the special case of the Rice distribution given by \nu = 0, the distribution becomes the Rayleigh distribution, for which the variance is \sigma_R^2 = \frac{4 - \pi}{2}\sigma^2.
  • If \nu = 0 then R^2 has an exponential distribution.[7]
  • If R \sim \mathrm{Rice}(\nu, \sigma) then 1/R has an inverse Rician distribution.[8]
  • The folded normal distribution is the univariate restriction of the Rice distribution.
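The Poisson and chi-squared construction in the list above can be sketched directly (stdlib-only Python; the Poisson sampler is Knuth's multiplication method, which is adequate for the small λ used here, and the parameter values are arbitrary choices):

```python
import math
import random

random.seed(3)
nu, sigma, n = 2.0, 1.0, 200_000

def poisson(lam):
    # Knuth's multiplication method (fine for small lambda)
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def rice_via_poisson(nu, sigma):
    # 1. P ~ Poisson(lambda) with lambda = nu^2 / (2 sigma^2)
    p = poisson(nu * nu / (2 * sigma * sigma))
    # 2. X ~ chi-squared with 2P + 2 degrees of freedom (sum of squared standard normals)
    x = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(2 * p + 2))
    # 3. R = sigma * sqrt(X)
    return sigma * math.sqrt(x)

mix = [rice_via_poisson(nu, sigma) for _ in range(n)]
direct = [math.hypot(random.gauss(nu, sigma), random.gauss(0.0, sigma)) for _ in range(n)]
mean_mix = sum(mix) / n
mean_direct = sum(direct) / n
print(abs(mean_mix - mean_direct))  # the two sampling schemes should agree
```

Comparing the sample means (and, if desired, higher moments) of the two samples shows the mixture construction and the direct two-Gaussian construction produce the same distribution.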

Limiting cases

For large values of the argument, the Laguerre polynomial becomes[9]

L_\nu(x) \to \frac{|x|^\nu}{\Gamma(1 + \nu)} \quad \text{as } x \to -\infty.

It is seen that as \nu becomes large or \sigma becomes small, the mean becomes \nu and the variance becomes \sigma^2.

The transition to a Gaussian approximation proceeds as follows. From Bessel function theory we have

I_0(z) \to \frac{e^z}{\sqrt{2\pi z}} \left(1 + \mathcal{O}(z^{-1})\right) \quad \text{as } z \to \infty,

so, in the large \frac{x\nu}{\sigma^2} region, an asymptotic expansion of the Rician distribution is:

f(x \mid \nu, \sigma) \to \frac{1}{\sqrt{2\pi}\,\sigma} \sqrt{\frac{x}{\nu}} \exp\!\left(\frac{-(x - \nu)^2}{2\sigma^2}\right), \quad \frac{x\nu}{\sigma^2} \to \infty.

Moreover, when the density is concentrated around \nu, because of the Gaussian exponent we can also write \sqrt{x/\nu} \approx 1 and finally get the normal approximation

f(x \mid \nu, \sigma) \approx \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(\frac{-(x - \nu)^2}{2\sigma^2}\right), \quad \frac{\nu}{\sigma} \gg 1.

The approximation becomes usable for \frac{\nu}{\sigma} > 3.
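The quality of the normal approximation can be probed numerically. This sketch (stdlib-only Python; the series implementation of I_0 and the choice ν/σ = 4, inside the usable region, are assumptions made here) measures the largest pointwise gap between the Rice density and the N(ν, σ²) density near the peak:

```python
import math

def bessel_i0(z, terms=80):
    # I0(z) = sum_{k>=0} (z^2/4)^k / (k!)^2
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (z * z / 4.0) / (k * k)
        total += term
    return total

def rice_pdf(x, nu, sigma):
    # Rice density on x >= 0
    s2 = sigma * sigma
    return (x / s2) * math.exp(-(x * x + nu * nu) / (2 * s2)) * bessel_i0(x * nu / s2)

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

nu, sigma = 4.0, 1.0  # nu / sigma = 4 > 3, where the approximation is usable
xs = [nu + 0.25 * d for d in range(-8, 9)]  # points within two sigma of nu
gap = max(abs(rice_pdf(x, nu, sigma) - normal_pdf(x, nu, sigma)) for x in xs)
print(gap)  # small relative to the peak density, which is roughly 0.4
```

The residual gap comes mainly from the \sqrt{x/\nu} factor dropped in the last step, which skews the Rice density slightly to the right of the Gaussian.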

Parameter estimation (Koay inversion technique)

There are three different methods for estimating the parameters of the Rice distribution: (1) the method of moments,[10][11][12][13] (2) the method of maximum likelihood,[10][11][12][14] and (3) the method of least squares.[citation needed] In the first two methods the interest is in estimating the parameters of the distribution, \nu and \sigma, from a sample of data. This can be done using the method of moments, e.g., the sample mean and the sample standard deviation. The sample mean is an estimate of \mu_1 and the sample standard deviation is an estimate of \sigma_R.

The following is an efficient method, known as the "Koay inversion technique",[15] for solving the estimating equations, based on the sample mean and the sample standard deviation, simultaneously. This inversion technique is also known as the fixed-point formula of SNR. Earlier works[10][16] on the method of moments usually use a root-finding method to solve the problem, which is not efficient.

First, the ratio of the sample mean to the sample standard deviation is defined as r, i.e., r = \mu_1 / \sigma_R. The fixed-point formula of SNR is expressed as

g(\theta) = \sqrt{\xi(\theta) \left[1 + r^2\right] - 2},

where \theta is the ratio of the parameters, i.e., \theta = \frac{\nu}{\sigma}, and \xi(\theta) is given by:

\xi(\theta) = 2 + \theta^2 - \frac{\pi}{8} \exp(-\theta^2/2) \left[(2 + \theta^2)\, I_0\!\left(\frac{\theta^2}{4}\right) + \theta^2 I_1\!\left(\frac{\theta^2}{4}\right)\right]^2,

where I_0 and I_1 are modified Bessel functions of the first kind.

Note that \xi(\theta) is a scaling factor of \sigma and is related to \sigma_R by:

\sigma_R^2 = \xi(\theta)\, \sigma^2.
To find the fixed point, \theta^*, of g, an initial solution, \theta_0, is selected that is greater than the lower bound, which is \theta = 0 and occurs when r = \sqrt{\pi/(4 - \pi)}[15] (notice that this is the r = \mu_1 / \sigma_R of a Rayleigh distribution). This provides a starting point for the iteration \theta_{i+1} = g(\theta_i), which uses functional composition, and this continues until \left|g^i(\theta_0) - g^{i-1}(\theta_0)\right| is less than some small positive value. Here, g^i denotes the composition of the same function, g, i times. In practice, we associate the final \theta_n for some integer n as the fixed point, \theta^*, i.e., \theta^* = g(\theta^*).

Once the fixed point is found, the estimates \hat{\nu} and \hat{\sigma} are found through the scaling function, \xi(\theta), as follows:

\hat{\sigma} = \frac{\sigma_R}{\sqrt{\xi(\theta^*)}},

and

\hat{\nu} = \sqrt{\mu_1^2 + \left(\xi(\theta^*) - 2\right) \hat{\sigma}^2},

where \mu_1 and \sigma_R are the sample mean and the sample standard deviation.
To speed up the iteration even more, one can use Newton's method of root-finding.[15] This particular approach is highly efficient.
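The fixed-point iteration can be sketched end to end (stdlib-only Python; the series Bessel functions, tolerance, iteration cap, and starting point θ₀ = r are choices made here, not prescribed by the technique): it recovers ν and σ from the sample mean and standard deviation of simulated Rice data.

```python
import math
import random

def bessel_i(order, z, terms=40):
    # Modified Bessel function of the first kind, integer order, via its power series
    total = 0.0
    for k in range(terms):
        total += (z / 2.0) ** (2 * k + order) / (math.factorial(k) * math.factorial(k + order))
    return total

def xi(theta):
    # xi(theta) = Var[R] / sigma^2 for R ~ Rice(theta * sigma, sigma):
    # 2 + theta^2 - (pi/8) exp(-theta^2/2) [(2 + theta^2) I0(theta^2/4) + theta^2 I1(theta^2/4)]^2
    t2 = theta * theta
    b = (2 + t2) * bessel_i(0, t2 / 4) + t2 * bessel_i(1, t2 / 4)
    return 2 + t2 - (math.pi / 8) * math.exp(-t2 / 2) * b * b

def koay_estimate(mean, sd, tol=1e-12, max_iter=500):
    # Fixed-point iteration theta <- g(theta) = sqrt(xi(theta) (1 + r^2) - 2), r = mean / sd
    r = mean / sd
    if r <= math.sqrt(math.pi / (4 - math.pi)):  # at or below the Rayleigh ratio: theta = 0
        theta = 0.0
    else:
        theta = r  # a starting point above the lower bound (an arbitrary choice)
        for _ in range(max_iter):
            new = math.sqrt(max(xi(theta) * (1 + r * r) - 2, 0.0))
            if abs(new - theta) < tol:
                theta = new
                break
            theta = new
    sigma_hat = sd / math.sqrt(xi(theta))
    nu_hat = theta * sigma_hat
    return nu_hat, sigma_hat

# Demo: recover the parameters of simulated Rice(2, 1) data
random.seed(4)
nu_true, sigma_true, n = 2.0, 1.0, 200_000
data = [math.hypot(random.gauss(nu_true, sigma_true), random.gauss(0.0, sigma_true))
        for _ in range(n)]
mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
nu_hat, sigma_hat = koay_estimate(mean, sd)
print(nu_hat, sigma_hat)  # close to the true values (2.0, 1.0)
```

The final line \hat{\nu} = \theta^* \hat{\sigma} is algebraically equivalent to the \hat{\nu} expression above, since \theta = \nu / \sigma.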

Applications

See also

References

Further reading
