Peter Graves pointed me at the language/implementation efficiency shootout at http://www.ffconsultancy.com/languages/ray_tracer/benchmark.html which implements a ray tracer in a multitude of languages, one of them being Common Lisp.
The page has a number of rows, which I conveniently name version 1 (row 1) through version 5 (last row). Peter points out that version 1 isn't optimized at all: every function invocation leads to a full function call (no inlining).
Running this code takes around 1440 seconds on my PC. I then modified the stack management routine to re-use available stack frames (and conses) instead of creating new ones on every call, which brought execution time down to around 1240 seconds. A good improvement, I'd say.
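To make the trade-off concrete, here's a minimal sketch of the frame-reuse idea in Java. The class and field names are illustrative, not ABCL's actual internals: instead of allocating a fresh frame object on every call, we pop one off a free list and push it back when the call returns, so frames are recycled rather than collected.

```java
import java.util.ArrayDeque;

public class FramePool {
    // Hypothetical stand-in for a Lisp stack frame.
    static class Frame {
        Object function;
        void reset() { function = null; }
    }

    private final ArrayDeque<Frame> free = new ArrayDeque<>();
    int allocations = 0; // counts real allocations, to show the reuse

    Frame acquire(Object function) {
        Frame f = free.pollFirst();          // reuse a frame if one is free
        if (f == null) {
            f = new Frame();                 // otherwise allocate a new one
            allocations++;
        }
        f.function = function;
        return f;
    }

    void release(Frame f) {
        f.reset();
        free.addFirst(f);                    // retained forever: never garbage
    }

    public static void main(String[] args) {
        FramePool pool = new FramePool();
        for (int i = 0; i < 1000; i++) {     // 1000 sequential "calls"
            Frame f = pool.acquire("fib");
            pool.release(f);
        }
        // Only one frame was ever allocated; the other 999 calls reused it.
        System.out.println(pool.allocations);
    }
}
```

The speed-up comes from skipping allocation and GC work on the hot call path; the cost, as noted below, is that every frame put on the free list stays reachable for the life of the pool.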
However, with the change described above, stack frames will never become garbage anymore. My question is: does the loss of GC-ability outweigh the performance gain?
Please note that it's possible to run without a lisp stack (but that also means without traces) by compiling code with a (SPEED 3) declaration.
Comments?
Bye,
Erik.