Numerical error

From Wikipedia, the free encyclopedia

Numerical error is the error that arises in numerical computations due to limitations in representing real numbers and performing arithmetic operations on computers. These errors appear throughout software engineering and computational mathematics.

Figure: time series of the tent map for parameter m = 2.0, illustrating numerical error: the plot of x against the iteration number n stops fluctuating, and no nonzero values are observed after about n = 50 (random initial point).
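The collapse described in the caption can be reproduced with a few lines of Python. This is a minimal sketch: in exact arithmetic the tent map orbit fluctuates forever, but in double precision each step of the m = 2.0 map is an exact operation on a dyadic rational with at most 53 significant bits, so every iteration discards one bit and the orbit reaches exactly 0 within roughly 55 steps.

```python
import random

def tent(x, m=2.0):
    # Tent map: x_{n+1} = m*x for x < 0.5, and m*(1 - x) otherwise.
    return m * x if x < 0.5 else m * (1.0 - x)

random.seed(0)          # any starting point of the form k / 2**53 behaves the same
x = random.random()
for n in range(60):
    x = tent(x)
# For m = 2.0 both branches (doubling, and 1 - x for x >= 0.5) are exact
# in binary floating point, so the finite binary expansion of x shortens
# by one bit per iteration until the orbit collapses to exactly 0.0.
```

The same experiment in exact rational arithmetic (e.g. with `fractions.Fraction`) never terminates at zero for an irrational-like seed, which is why the flat tail of the plot is a numerical artifact rather than genuine dynamics.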

Types

Numerical error in computations generally arises from several sources. The most common sources are round-off error and truncation error.[1]

Total numerical error is often the combined effect of two kinds of error in a calculation. The first, round-off error, is caused by the finite precision of computations involving floating-point numbers. The second, usually called truncation error, is the difference between the exact mathematical solution and the approximate solution obtained when the mathematical equations are simplified to make them more amenable to calculation.
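Both kinds of error can be seen in a short Python sketch (the specific values are illustrative): round-off appears when summing the binary-inexact value 0.1, and truncation appears when exp(1) is approximated by cutting off its Taylor series.

```python
import math

# Round-off error: 0.1 has no exact binary representation, so
# repeated addition drifts away from the exact result 1.0.
s = sum(0.1 for _ in range(10))
roundoff = abs(s - 1.0)        # small but nonzero

# Truncation error: approximate exp(x) by the first `terms` terms
# of its Taylor series; the discarded tail is the truncation error.
def exp_taylor(x, terms):
    return sum(x**k / math.factorial(k) for k in range(terms))

truncation = abs(math.e - exp_taylor(1.0, 5))
```

Note the trade-off: taking more Taylor terms shrinks the truncation error, but each extra floating-point operation can contribute additional round-off.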

Measure

Floating-point numerical error is often measured in ULP (unit in the last place), which represents the distance between two adjacent floating-point numbers.[2]
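Python's standard library exposes this measure directly via `math.ulp` (available since Python 3.9). The sketch below, with illustrative values, computes the error of `0.1 + 0.2` relative to the decimal value 0.3 in units of the last place:

```python
import math

# math.ulp(x) is the gap between x and the next representable float,
# i.e. the value of one "unit in the last place" at x.
one_ulp_at_1 = math.ulp(1.0)   # 2**-52 for IEEE 754 double precision

# Measure the error of a computed result in ULPs.
exact = 0.3                    # nearest double to the decimal 0.3
computed = 0.1 + 0.2           # rounds to a slightly different double
err_ulps = abs(computed - exact) / math.ulp(exact)
```

A result within 0.5 ULP of the exact value is correctly rounded; here `0.1 + 0.2` lands one representable float away from the double nearest 0.3.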
