Thank you Raymond.
Yes, I am aware of the paper and of the vagaries of printing floats FROM binary. I was, at this point, going in the reverse direction: printing the binary representation of a float.
Having said that, I love the sentence in the paper "we did not think it was such a big deal". It looks like it still is 🙄
Also, an ode to Common Lisp (preaching to the choir): the code I wrote is, IMHO, somewhat more portable than the C versions I have seen around, thanks to all the float introspection functions and to the (let's say it: underdocumented) DECODE-FLOAT and INTEGER-DECODE-FLOAT functions.
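For example, a minimal sketch of the kind of introspection I mean, using nothing beyond standard CL (the function name is mine):

(defun show-double-float-bits (x)
  "Print the sign, exponent and 53-bit significand of double-float X,
with the significand shown in binary."
  (multiple-value-bind (significand exponent sign)
      (integer-decode-float x)
    (format t "sign ~D, exponent ~D, significand #b~53,'0B~%"
            sign exponent significand)))

;; (show-double-float-bits (scale-float 1d0 -52))   ; the ulp of 1.0, i.e. 2^-52
;; => sign 1, exponent -104, significand #b10000000000000000000000000000000000000000000000000000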
All the best
Marco
On Mon, Nov 13, 2023 at 8:17 AM raymond.wiker@icloud.com wrote:
“How to Print Floating-Point Numbers Accurately”, perhaps?
https://lists.nongnu.org/archive/html/gcl-devel/2012-10/pdfkieTlklRzN.pdf
On 10 Nov 2023, at 12:37, bobcassels@netscape.net wrote:
I did the Symbolics Lisp Machine implementation of floating-point printing. Mine was completely accurate, using exact rational arithmetic with bignums, but not very efficient.
I have seen papers showing how to be accurate with fixed-precision arithmetic; it requires great care. I believe that some C or C++ implementations use that approach, so I would not be surprised if some implementation of floating-point printing is incorrect.
I have a vague recollection of some paper by Guy Steele on the subject. I recall using ideas from Bill Gosper's continued fraction work.
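To illustrate the exact-rational idea (only a sketch of the principle, not the Symbolics code, and it prints the full expansion rather than the shortest re-readable digits): every finite binary float is a terminating decimal, so RATIONAL plus bignum arithmetic yields every digit exactly.

(defun print-exact-decimal (x &optional (stream *standard-output*))
  "Print the exact decimal expansion of a non-negative double-float X
using only exact rational (bignum) arithmetic."
  (let* ((r (rational x))                        ; exact value of X as a ratio
         (den (denominator r))                   ; a power of two for any finite float
         (k (integer-length (1- den)))           ; den = 2^k, so k fractional bits
         (scaled (* (numerator r) (expt 5 k))))  ; now X = scaled / 10^k
    (format stream "~D.~V,'0D"
            (floor scaled (expt 10 k))           ; integer part
            k                                    ; zero-pad the fraction to k digits
            (mod scaled (expt 10 k)))))

;; (print-exact-decimal (scale-float 1d0 -52))
;; => 0.0000000000000002220446049250313080847263336181640625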
On Nov 10, 2023 5:39 AM, Marco Antoniotti marco.antoniotti@unimib.it wrote:
Thank you.
I have not checked the details, but are you implying that clang printf is... buggy? Knuth has such a caveat (about buggy printf implementations) on his page.
All the best
Marco
On Fri, Nov 10, 2023 at 11:35 AM raymond.wiker@icloud.com wrote:
I think that clang is simply printing more information than it is allowed (or supposed) to. For a double-precision IEEE754 float, the number of significant digits should be
(floor (* 54.0d0 (/ (log 2.0d0) (log 10.0d0))))
which evaluates to 16 (53-bit mantissa + 1 hidden bit). The Lisp output has exactly 16 significant digits, while the clang output has 20.
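One quick way to check that the shorter Lisp output loses nothing is to read it back and compare (a sketch; the first value of READ-FROM-STRING is the parsed float):

(= (scale-float 1d0 -52)                        ; the ulp in question, 2^-52
   (read-from-string "2.220446049250313d-16"))
;; => T, so the 16-digit form denotes exactly the same double.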
The actual correct digits seem to be
(ash (expt 10 52) -52)
which evaluates to
2220446049250313080847263336181640625.
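That matches the exact value: the ulp here is 2^-52, so scaling it by 10^52 gives exactly that integer. A quick check:

(rational (scale-float 1d0 -52))                ; => 1/4503599627370496, i.e. 1/2^52
(= (* (rational (scale-float 1d0 -52)) (expt 10 52))
   (ash (expt 10 52) -52))                      ; => T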
On 10 Nov 2023, at 09:55, Marco Antoniotti (as marco dot antoniotti at unimib dot it) lisp-hug@lispworks.com wrote:
Hi
Thanks Pascal.
For LW on Intel (Mac) the ULP seems the same. With SBCL you should actually be able to peek at the actual bits making up the double float. Can you do something similar with LM?
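For instance, a sketch using SBCL's SB-KERNEL internals (not a supported public interface, so this may change between releases):

#+sbcl
(defun double-float-raw-bits (x)
  "Return the raw 64-bit IEEE754 pattern of double-float X as an integer,
using the internal DOUBLE-FLOAT-HIGH-BITS / DOUBLE-FLOAT-LOW-BITS."
  (logior (ash (ldb (byte 32 0) (sb-kernel:double-float-high-bits x)) 32)
          (sb-kernel:double-float-low-bits x)))

;; (format t "~64,'0B" (double-float-raw-bits (scale-float 1d0 -52)))
;; prints 0011110010110000000000000000000000000000000000000000000000000000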
Just curious: has anybody tried this on an M*/Arm Mac? Or, with LW, on your smartphone? :)
Cheers
MA
On Fri, Nov 10, 2023 at 8:29 AM Pascal Bourguignon (as pjb at informatimago dot com) lisp-hug@lispworks.com wrote:
On 9 Nov 2023, at 21:21, Marco Antoniotti marco.antoniotti@unimib.it wrote:
<problem-loop.lisp>
From the start, it looks like the ulp is more precise in C:
sbcl:  2.220446049250313d-16
clang: 2.2204460492503130808e-16
(using %.20g instead of %.20f)
Or perhaps it’s only the display procedure that truncates in lisp?
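One way to get more digits out of Lisp without going through printf is to scale the exact rational (just a sketch; the 10^35 is specific to this value and puts its first 20 significant digits in front of the decimal point):

(round (* (rational (scale-float 1d0 -52)) (expt 10 35)))
;; first value => 22204460492503130808, the same 20 digits clang shows with %.20g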
-- __Pascal J. Bourguignon__