If I recall correctly, I took the IEEE 754 round-to-even rules into account to construct upper and lower bounds for the float value, noting which bounds were inclusive and which exclusive. Then I used continued-fraction ideas to generate digits left to right with no need for later correction.
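
In sketch form, the digit generation looked something like this. A minimal sketch, not the Symbolics code: it assumes a positive, normal double-float, uses the same half-ulp on both sides, and treats both bounds as exclusive, so the round-to-even endpoint cases and the power-of-two boundary case are glossed over.

(defun shortest-digits (x)
  "Return a digit list D and a scale K such that 0.d1d2...dn * 10^K
reads back as X.  Sketch only; see the caveats above."
  (multiple-value-bind (mant expo) (integer-decode-float x)
    (let* ((v (* mant (expt 2 expo)))   ; exact value of X
           (delta (expt 2 (1- expo)))   ; half the gap to the neighbors
           (high (+ v delta))           ; upper bound of the rounding interval
           (k 0))
      ;; Scale so the first digit is nonzero: 10^(K-1) < HIGH <= 10^K.
      (loop while (< (expt 10 k) high) do (incf k))
      (loop while (>= (expt 10 (1- k)) high) do (decf k))
      (let ((r (/ v (expt 10 k)))       ; scaled value
            (m (/ delta (expt 10 k)))   ; scaled half-ulp
            (digits '()))
        (loop
          (setf r (* r 10) m (* m 10))
          (multiple-value-bind (d frac) (floor r)
            (let ((low-p  (< frac m))          ; truncating here reads back as X
                  (high-p (> (+ frac m) 1)))   ; rounding up here reads back as X
              (cond ((and low-p (not high-p)) (push d digits) (return))
                    ((and high-p (not low-p)) (push (1+ d) digits) (return))
                    ((and low-p high-p)
                     (push (if (< frac 1/2) d (1+ d)) digits) (return))
                    (t (push d digits) (setf r frac))))))
        (values (nreverse digits) k)))))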

I never looked in detail at the papers which used fixed precision, to see if they would generate the same decimal representations that I did. A quick skim indicated that they were pretty careful. 🤞



On Nov 14, 2023, at 6:54 AM, Bob Cassels <bobcassels@netscape.net> wrote:

Yes, that’s the Steele (and JonL) paper I was thinking of.


On Nov 13, 2023, at 2:16 AM, raymond.wiker@icloud.com wrote:

“How to Print Floating-Point Numbers Accurately”, perhaps?

https://lists.nongnu.org/archive/html/gcl-devel/2012-10/pdfkieTlklRzN.pdf

On 10 Nov 2023, at 12:37, bobcassels@netscape.net wrote:

I did the Symbolics Lisp Machine implementation of floating-point printing. Mine was completely accurate, using exact rational arithmetic with bignums, but not very efficient.
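
For illustration, RATIONAL returns the exact rational value a float denotes (every finite double is a rational with a power-of-two denominator), which is all the exact approach needs:

(rational 0.1d0)
=> 3602879701896397/36028797018963968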

I have seen papers showing how to be accurate with fixed-precision arithmetic. It requires great care. I believe that some C or C++ implementations use this approach, so I would not be surprised if some implementation of floating-point printing is incorrect.

I have a vague recollection of some paper by Guy Steele on the subject. I recall using ideas from Bill Gosper's continued fraction work.


On Nov 10, 2023 5:39 AM, Marco Antoniotti <marco.antoniotti@unimib.it> wrote:
Thank you.

I have not checked the details, but are you implying that clang's printf is... buggy?  Knuth has such a caveat (about buggy printf implementations) on his page.

All the best

Marco


On Fri, Nov 10, 2023 at 11:35 AM <raymond.wiker@icloud.com> wrote:
I think that clang is simply printing more information than it is supposed to. For a double-precision IEEE 754 float, the number of significant digits should be

(floor (* 54.0d0 (/ (log 2.0d0) (log 10.0d0))))

which evaluates to 16 (the 52 stored fraction bits plus 1 hidden bit give a 53-bit significand; the 54 above allows one extra bit). The Lisp output has exactly 16 significant digits, while the clang output has 20.
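
For comparison, dropping the extra bit gives the information content of the 53-bit significand directly:

(* 53 (/ (log 2.0d0) (log 10.0d0)))

which is just under 16 (about 15.95).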

The actual correct digits seem to be given by

(ash (expt 10 52) -52)

which evaluates to 

2220446049250313080847263336181640625.
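
That shift is exact, because 10^52 = 2^52 * 5^52: shifting right by 52 bits leaves exactly 5^52, and 2^-52 = 5^52 * 10^-52, so the 37 digits above are the complete decimal expansion of the ULP. A quick check (SCALE-FLOAT constructs 2^-52 exactly):

(= (ash (expt 10 52) -52) (expt 5 52))
=> T

(= (* (expt 5 52) (expt 10 -52))
   (rational (scale-float 1.0d0 -52)))
=> T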

On 10 Nov 2023, at 09:55, Marco Antoniotti (as marco dot antoniotti at unimib dot it) <lisp-hug@lispworks.com> wrote:

Hi

Thanks Pascal.

For LW on Intel (Mac) the ULP seems the same.  With SBCL you should actually be able to peek at the actual bits making up the double float.  Can you do something similar with LW?
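
Portably (so in LW as well), INTEGER-DECODE-FLOAT exposes the significand, exponent, and sign, which is as good as the raw bits for this purpose:

(integer-decode-float 2.220446049250313d-16)
=> 4503599627370496, -104, 1    ; i.e. 2^52 * 2^-104 = 2^-52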

Just curious: has anybody tried this on a M*/Arm Mac?  Or, with LW, on your smartphone? :)

Cheers

MA


On Fri, Nov 10, 2023 at 8:29 AM Pascal Bourguignon (as pjb at informatimago dot com) <lisp-hug@lispworks.com> wrote:


On 9 Nov 2023, at 21:21, Marco Antoniotti <marco.antoniotti@unimib.it> wrote:

<problem-loop.lisp>


From the start, it looks like the ULP is printed with more digits in C:


sbcl:   2.220446049250313d-16
clang: 2.2204460492503130808e-16

(using %.20g instead of %.20f)

Or perhaps it’s only the display procedure that truncates in Lisp?
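
One way to check from Lisp (relying on print/read round-tripping of floats, which the standard requires): multiplying the exact rational by 10^52 exposes the full expansion,

(* (rational 2.220446049250313d-16) (expt 10 52))
=> 2220446049250313080847263336181640625

which reproduces clang's 20 digits and then some, so the Lisp value itself loses nothing.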

-- 
__Pascal J. Bourguignon__