Talk:Arbitrary-precision arithmetic
From Wikipedia, the free encyclopedia
This article is rated Start-class on Wikipedia's content assessment scale.
Oldest comments
Didn't you want to talk about big floating-point numbers? Is anyone interested?
I moved the following HTML comments from the article source over to this talk page. They may be more useful here:
- <!-- Please give known applications to specific problems, rather than making vague statements that bignum arithmetic is used in [[theoretical physics]] etc. -->
- <!-- TODO: mention division algorithms, and hence square root etcetera. Mention arithmetic-geometric mean algorithms for computing e^x, trig functions, pi, etcetera. -->
—Herbee 20:57, 23 April 2006 (UTC)
The newly-added Infinite Precision section.
Shouldn't this be arbitrary precision as well? I'm not familiar with the work being referenced, but I'm fairly certain that there is no such animal. Trying to get infinite precision off of 4/3 * pi just isn't happening on computers with finite memory any time soon. SnowFire 16:16, 22 May 2006 (UTC)
- I agree. It's the usual confusion between "arbitrarily large" and "infinite". —Steven G. Johnson 16:45, 22 May 2006 (UTC)
- The text basically describes symbolic algebra. Fredrik Johansson 16:52, 22 May 2006 (UTC)
- You're right, upon closer reading symbolic algebra seems to be what was intended by the text. It's still misleading to call it "infinite precision", however, since (a) the symbolic expressions have arbitrary (limited by memory) but finite size, which directly corresponds to a form of finite precision, and (b) you're not really doing "arithmetic" until you calculate the numeric result, and that is done with arbitrary but finite precision. Besides, if you Google for "infinite precision arithmetic", it seems that this term is mainly used as a synonym for arbitrary precision. —Steven G. Johnson 16:59, 22 May 2006 (UTC)
I agree with all of the above given suitable context, but not with the text in the article which reads "...infinite-precision arithmetic, which is something of a misnomer: the number of digits of precision always remains finite...". I would argue that any representation for which there is a bijective mapping onto the set being modelled is precise. It's not infinite precision that's the problem, it's trying to represent elements of an infinite set on a finite state machine. IMHO, it is not an error to describe bignum arithmetic as "infinite precision" if it is operating over a finite set, an obvious example being the integers modulo N, for not-extremely-large N. --Will
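A small illustration of the point above, using Python's built-in arbitrary-precision integers (the 51-digit modulus is an arbitrary illustrative value, not from the discussion): arithmetic over the integers modulo N is exact, with zero error, for any N that fits in memory.

```python
# Python ints are arbitrary-precision bignums, so arithmetic over the
# integers modulo N is exact (zero error) for any N that fits in memory.
N = 10**50 + 151          # an arbitrary 51-digit modulus (illustrative value)
a = 3**120 % N
b = 7**95 % N

# Modular addition and multiplication never lose digits:
s = (a + b) % N
p = (a * b) % N

# pow() with three arguments does modular exponentiation efficiently.
e = pow(a, 10**6, N)
print(s, p, e)
```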
- Sure, and a one-bit integer is "infinite precision" in your sense, as long as you are only trying to represent the set {0,1}. As far as I can tell, however, that's simply not the way the term "precision" and especially "infinite precision" is used in the setting of numeric computation. In this context, the "precision" of a number is the number of significant digits that are specified (or some equivalent measure of information content). (Yes, the general English meaning is more vague than that...most English words are, but that doesn't prevent them from having a more, ahem, precise meaning in a technical context.) I think the meaning that you are describing would be better termed "perfect precision". —Steven G. Johnson 04:03, 28 May 2006 (UTC)
- If a number is represented in a format which provides n base-d digits of precision (even this makes unnecessary assumptions about representation), then the error is proportional to d^-n. Thus n = -log(error), with n going to infinity as error goes to zero. I would have said that if the error is exactly zero for all operations and for all operands, then the precision is infinite. This is applicable to the modular arithmetic case (including the n=1 case you mentioned), but not for general symbolic algebra systems, where the size of the expression tree is artificially bounded by the machine's available state; I therefore don't agree with the section which keeps appearing. I'm not saying that "infinite precision" is a more appropriate term than "arbitrary precision" for finite number systems, merely that it's not inaccurate. But on reflection perhaps I was misinterpreting the sentence; which on re-reading is perhaps only dismissing it as a misnomer when used as a synonym for arbitrary precision. --Will (86.141.151.165 11:06, 28 May 2006 (UTC))
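The relation Will describes (error proportional to d^-n, so n = -log_d(error), with n growing without bound as the error goes to zero) can be sketched in a few lines of Python; the function name here is illustrative:

```python
import math

# With n base-d digits of precision, the representation error scales
# like d**-n, so n = -log_d(error): digits grow without bound as the
# error goes to zero.
def digits_of_precision(error: float, d: int = 10) -> float:
    """Return the (fractional) number of base-d digits implied by an error."""
    return -math.log(error, d)

# An absolute error of 1e-12 corresponds to about 12 decimal digits,
# or about 39.9 binary digits.
print(digits_of_precision(1e-12))        # ≈ 12.0
print(digits_of_precision(1e-12, d=2))   # ≈ 39.9
```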
- Be careful not to confuse accuracy/error with precision. 3.000000000000000 is very precise, but it is not a very accurate representation of, say, π. —Steven G. Johnson 18:54, 28 May 2006 (UTC)
You are wrong. 4/3*pi is a computable number, thus it can be represented in finite manner in the memory of the computer. There is an example in the references section. Sounds like you don't know what you're talking about. Grue 06:17, 28 May 2006 (UTC)
- It's the same thing that happens if I type 4/3*Pi in Mathematica. Symbolic algebra: expressions evaluate to expressions, with pointers to code to compute the atomic elements numerically with any desired precision. The only difference between Mathematica and the Lisp library is that the latter shows a numerical value instead of the underlying expression in the REPL. Fredrik Johansson 09:43, 28 May 2006 (UTC)
- The set of computable reals is infinitely large. The set of states for a machine running mathematica (or any similar package) is finite. Therefore there exist pairs of distinct computable numbers which have identical representation within the machine. Where this happens, there is nonzero error. You can't have infinite precision if there's error. --Will (86.141.151.165 11:16, 28 May 2006 (UTC))
- No, some computable numbers are just too complicated to fit in memory and they can't be represented. So, only a finite set of different computable numbers fits in any given computer. As for symbolic computation, I think there are some noticeable differences. In the case of π, there is a symbol for the number, but many computable numbers don't have any symbol, nor can they be represented as a finite composition of arithmetic operations on numbers that have a symbol (there are also numbers that have a symbol but are not computable, like Chaitin's constant). Grue 12:44, 28 May 2006 (UTC)
- Grue, this article is about arithmetic and that means computation of a numeric result. By definition, every computable number can be represented by a finite-length (though possibly very large) computer program that computes it to any arbitrary (but not infinite) precision in finite time. The representation of a number by a computer program is perfectly precise in the sense of determining a given real number uniquely, but does not provide infinite-precision arithmetic. —Steven G. Johnson 19:15, 28 May 2006 (UTC)
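The point above, that a computable number is represented by a finite program that yields any requested (but finite) precision, can be sketched with Python's standard decimal module. The use of Machin's formula and a Taylor series here is an illustrative choice, not something from the discussion:

```python
from decimal import Decimal, getcontext

def arctan_inv(x: int, digits: int) -> Decimal:
    """arctan(1/x) by Taylor series, accurate to roughly `digits` places."""
    getcontext().prec = digits + 10           # extra guard digits
    eps = Decimal(10) ** -(digits + 5)
    xx = Decimal(x) * Decimal(x)
    term = Decimal(1) / x                     # first term: (1/x)**1 / 1
    total = term
    n = 1
    while abs(term) > eps:
        term = -term / xx                     # next odd power of 1/x, alternating sign
        n += 2
        total += term / n
    return total

def pi_to(digits: int) -> Decimal:
    """A finite program computing pi to any requested (finite) precision,
    via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = digits + 10
    return 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)

print(pi_to(50))
```

The program itself is a few hundred bytes, yet it determines pi uniquely; each call still produces only finitely many digits.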
Table needs cleanup
The table of arbitrary precision libraries needs improvement. For one thing, "Number Type" for several of the libraries does not actually list what number types are provided. It would also be useful to include a short overview of what features each library provides in addition to just the number types. For example, GMP implementing highly optimized low level operations, MPFR providing correctly rounded transcendental functions, etc. Fredrik Johansson 13:46, 3 October 2008 (UTC)
- Those sound like useful suggestions, but I would refrain from saying that one library or another is "highly optimized" in the absence of benchmarks comparing several of the available libraries. When someone claims that their code is "highly optimized" it usually means "I made it much faster than my first implementation," which doesn't mean that there isn't a completely different implementation that is much faster. Put in terms of Wikipedia policy, a project's web site claiming that it is fast is not a reputable source unless they have published benchmarks backing them up. —Steven G. Johnson (talk) 16:35, 3 October 2008 (UTC)
Proposed addition to list of arbitrary-precision libraries
I have an arbitrary-precision library I'd like to add to the list of arbitrary-precision libraries, but it would be a conflict of interest to add it myself, so I'm proposing that someone else add it, per the instructions here:
The list I'm referring to is the table toward the end, where the first item in the list is "apfloat".
My understanding is that it's in the general Wikipedia interest for the list to be more complete than less, and that therefore any useful arbitrary-precision library should be included.
Also, the library I'd like to add to the list, xlPrecision, is unique in that it is designed to be easy and intuitive for Office VBA programmers (especially Excel VBA programmers) who may have no knowledge of C, C++, Java, or any of the other languages represented in the list, and who therefore might have difficulty using the other libraries, at least in the short term.
Here are the lines I'd like to see added to the list:
|xlPrecision |Decimal |VBA |Commercial
(Sorry, I'm not sure how to format those lines to appear correctly. The way they appear in "edit this page" is the way they would be added to the table.)
Thanks, and let me know if you have any questions for me.
Greg (talk) 02:20, 22 February 2009 (UTC)
W3bbo's edits
W3bbo just now changed several mentions of "floats" to "decimals". This is incorrect, as most arbitrary-precision libraries use binary, not decimal, floats. Binary and decimal arithmetic have very different characteristics and purposes. Using "decimal" generically for floating-point numbers is sloppy, especially so for an article on a topic like this one. Fredrik Johansson 11:26, 4 September 2009 (UTC)
- I agree. The term "decimals" implies that the calculations occur using decimal arithmetic. I suspect that many arbitrary-precision packages use a binary representation (mantissa plus exponent) and only support decimal for input/output purposes. I support changing the references to "decimals" back to "floats". If some packages are shown to actually use decimal arithmetic internally, the term "decimals" would be appropriate for those. -- Tcncv (talk) 23:36, 6 September 2009 (UTC)
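The "binary internally, decimal only at input/output" behaviour described above can be seen even with ordinary hardware floats in Python (an illustrative stdlib example, not one of the bignum packages under discussion):

```python
from fractions import Fraction

# The literal 0.1 is decimal only at input/output; internally it is a
# binary mantissa-and-exponent value, and the nearest binary float to
# one tenth is not exactly one tenth.
x = 0.1
print(x.hex())                          # 0x1.999999999999ap-4 (binary mantissa + exponent)
print(Fraction(x))                      # 3602879701896397/36028797018963968
print(Fraction(x) == Fraction(1, 10))   # False
```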
- As a follow-up, I took a quick look at a half-dozen packages and, based on a scan of their web pages, documentation, or source code, I found three that appear to use a decimal-power radix (apfloat, MAPM, and W3b.Sine) and three that use binary (ARPREC, mpmath, TTMath). While this is only a small sample, it does indicate a mix of representations, enough to justify undoing W3bbo's change. The original term "floats" is radix neutral, and if someone has time to research all of the packages, the more precise terms "decimal floats" or "binary floats" could be used. -- Tcncv (talk) 01:24, 7 September 2009 (UTC)
- I made an attempt at updating the number types. It probably still needs work. Some of the long descriptions have made me consider whether it might be better to break the number type description into several columns. One column could list the general representation (integer, float, rational, real), another could simply indicate built-in complex number support (yes or no), and a third could indicate the internal radix used (decimal or binary). That might simplify descriptions like "decimal float and decimal complex float", while retaining all of the information. -- Tcncv (talk) 02:23, 7 September 2009 (UTC)
- I suppose decimals suggests decimal arithmetic, but I think implies is too strong. Fortran, for example, uses SELECTED_REAL_KIND to specify desired precision in decimal digits. It does this without suggesting that the underlying arithmetic system is decimal. (More recently, one is allowed to specify the radix, but precision is still in decimal digits.) PL/I allows specification of precision in either binary or decimal digits, again without specifying the underlying radix, though on machines with float decimal hardware, it is a convenient way to ask for it. In the fixed point case, PL/I allows for a binary or decimal scaling factor, which again it does without specifying the underlying radix. This can result in much multiply and divide by powers of ten, when not in decimal. Gah4 (talk) 23:50, 29 August 2019 (UTC)
- That only selects from the available fixed-length precisions. It doesn't let you set it to whatever you need. Bubba73 You talkin' to me? 00:30, 30 August 2019 (UTC)
- Yes. But also, there is no restriction on the available precisions. In the fixed decimal case, IBM allows odd digits from 1 to 15. PL/I in the binary case takes the digits you specify, multiplies by 3.32, and rounds up to the next available size. Fortran allows for any integer radix greater than one. Specifying in decimal digits isn't too restrictive in most cases, even when the radix isn't 10. Gah4 (talk) 04:38, 30 August 2019 (UTC)
- I haven't use Fortran since Fortran 77, so I haven't used a version with this feature. But I read about it, and it sounds like you put in what precision you need, and it picks the first one that is built into the system that meets or exceeds it. If you ask for a precision higher than a native one, it returns an error (from what I read). Bubba73 You talkin' to me? 16:58, 30 August 2019 (UTC)
