Hi,
I've added the existing microbenchmark code to the "test" directory of
the repository, and cleaned it up a little.
(This infrastructure is best suited to benchmarking operations in the
microsecond to millisecond range. Real-world "macrobenchmarks" should
probably be done differently.)
The instructions are at the top:
;;;; Evaluate
;;;; (qt::microbench)
;;;; to run these benchmarks on an otherwise idle computer. Results are
;;;; written to the REPL, and in a machine-readable format also dribbled
;;;; to files. File names are, by default, of the form <lisp
;;;; implementation type>.txt.
;;;;
;;;; Notes:
;;;; 1. These are microbenchmarks meant to aid understanding of the
;;;; implementation. They do not necessarily reflect overall or
;;;; real-world performance.
;;;; 2. Since each individual operation is too fast to time on its own,
;;;; we invoke it a large number of times and compute the average run
;;;; time afterwards.
;;;; 3. Before running benchmarks, we choose a repetition time depending
;;;; on how fast (or slow) a simple test case is, so that slow Lisps
;;;; don't waste endless time running benchmarks.
;;;; 4. Benchmarks are run three times, and only the best of those three
;;;; runs is reported, so that background activity on the computer
;;;; doesn't ruin the results.
;;;; 5. But you should _still_ run the overall benchmarks several times
;;;; and see how reproducible the numbers are.
;;;;
;;;; There's no tool yet to parse the output files and draw graphs, but
;;;; there should be. (READ-MICROBENCH-RESULTS already fetches the raw
;;;; sexps from each file though, just to check that they are READable.)
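For the curious, the strategy from notes 2-4 boils down to something
like the following sketch. This is not the actual qt::microbench code,
and the function names here are made up; it only illustrates the
calibrate-then-best-of-three idea using standard CL timing functions:

```lisp
(defun calibrate-repetitions (thunk &key (target-seconds 0.1))
  "Grow the repetition count tenfold until running THUNK that many
times takes at least TARGET-SECONDS, so slow Lisps don't get a huge N."
  (loop for n = 1 then (* n 10)
        do (let ((start (get-internal-real-time)))
             (dotimes (i n)
               (funcall thunk))
             (when (>= (/ (- (get-internal-real-time) start)
                          internal-time-units-per-second)
                       target-seconds)
               (return n)))))

(defun best-average-of-three (thunk)
  "Run THUNK in three timed batches of N repetitions each and return
the best (smallest) per-call average, in seconds."
  (let ((n (calibrate-repetitions thunk)))
    (loop repeat 3
          minimize (let ((start (get-internal-real-time)))
                     (dotimes (i n)
                       (funcall thunk))
                     (/ (- (get-internal-real-time) start)
                        internal-time-units-per-second
                        n)))))
```

Taking the minimum of the three batches is the part that discards runs
perturbed by background activity, per note 4.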
d.