Dear Erik,
Thanks a lot. If I'm reading it correctly, it definitely answers my question. The way I am reading it is as follows:
The numbers in the brackets represent actual times for the revision prior to revision 12918. The numbers outside the brackets are relative to the bracketed numbers and were measured on the latest trunk. Therefore, 1.0 outside the brackets means it took the same amount of time as the number indicated in the brackets, 2.0 would mean it took twice as long, 0.5 would mean it took half as much time, and so on. Is that correct?
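Under that reading, the absolute trunk time for any row is just the bracketed reference time multiplied by the factor outside the brackets. A minimal sketch (the function name is mine, purely for illustration), using the CTAK row from your results:

  ;; Recover an absolute trunk time (in seconds) from a bracketed
  ;; reference time and its relative factor.
  (defun absolute-time (reference-seconds relative-factor)
    (* reference-seconds relative-factor))

  (absolute-time 20.68 1.01) ; => 20.8868, i.e. CTAK at about 20.89 seconds on trunk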
If my interpretation is correct, then there clearly isn't a runtime issue. This puts my questions to rest. Thank you very much!
Blake McBride
On Sun, Nov 14, 2010 at 5:22 AM, Erik Huelsmann <ehuels@gmail.com> wrote:
Hi Blake, others,
Running our cl-bench tests on Windows finally succeeded (but they did need some tweaking).
The reference is the revision before the merge; the comparison (0.24) is a trunk revision from the last few days. It's not exactly the comparison you were asking about, but it should give a good indication.
Here are the results:
Armed Bear Common Lisp 0.23.0-dev
Java 1.6.0_20 Sun Microsystems Inc.
Java HotSpot(TM) Client VM
Low-level initialization completed in 0.289 seconds.
Startup completed in 3.552 seconds.

Benchmark                  Reference  Armed
SUM-PERMUTATIONS           [   2.31]   1.02
BOYER                      [   3.66]   0.95
BROWSE                     [   3.19]   0.19
DDERIV                     [   0.99]   1.01
DERIV                      [   0.71]   1.02
DESTRUCTIVE                [   1.03]   0.99
DIV2-TEST-1                [   0.51]   1.03
DIV2-TEST-2                [   2.02]   0.99
FFT                        [   0.42]   1.12
FRPOLY/FIXNUM              [   1.56]   1.03
FRPOLY/BIGNUM              [   0.73]   1.02
FRPOLY/FLOAT               [   1.53]   1.02
PUZZLE                     [   4.67]   1.02
TAK                        [   7.70]   0.98
CTAK                       [  20.68]   1.01
TRTAK                      [   7.70]   1.0
TAKL                       [   5.94]   0.97
STAK                       [  10.92]   0.98
FPRINT/UGLY                [   2.69]   1.18
FPRINT/PRETTY              [  33.24]   1.01
TRAVERSE                   [  20.80]   1.05
TRIANGLE                   [  12.32]   1.06
RICHARDS                   [  23.68]   1.04
FACTORIAL                  [   0.42]   1.0
FIB                        [   0.89]   1.0
FIB-RATIO                  [   0.23]   1.12
ACKERMANN                  [  31.31]   0.93
MANDELBROT/COMPLEX         [   0.57]   1.48
MANDELBROT/DFLOAT          [   0.07]   1.41
MRG32K3A                   [   1.90]   1.11
CRC40                      [  19.55]   1.05
BIGNUM/ELEM-100-1000       [   2.78]   1.14
BIGNUM/ELEM-1000-100       [   0.77]   1.13
BIGNUM/ELEM-10000-1        [   1.05]   1.21
BIGNUM/PARI-100-10         [   0.05]   0.98
BIGNUM/PARI-200-5          [   0.15]   0.99
PI-DECIMAL/SMALL           [  40.27]   1.07
PI-DECIMAL/BIG             [  87.37]   1.06
PI-ATAN                    [   1.58]   1.04
PI-RATIOS                  [   4.41]   1.04
HASH-STRINGS               [   1.72]   1.08
HASH-INTEGERS              [   1.38]   1.26
SLURP-LINES                [   0.00]   0.67
BOEHM-GC                   [   9.29]   1.16
DEFLATE-FILE               [  12.08]   1.02
1D-ARRAYS                  [   2.66]   1.17
2D-ARRAYS                  [  13.33]   1.07
3D-ARRAYS                  [  36.69]   1.06
BITVECTORS                 [   3.49]   1.01
BENCH-STRINGS              [  21.73]   0.97
fill-strings/adjustable    [  11.19]   1.07
STRING-CONCAT              [ 143.53]   0.95
SEARCH-SEQUENCE            [   2.28]   1.0
CLOS/defclass              [   1.0 ]   1.31
CLOS/defmethod             [   0.87]   1.37
CLOS/instantiate           [  42.83]   1.02
CLOS/simple-instantiate    [ 143.36]   1.06
CLOS/methodcalls           [   8.87]   1.01
CLOS/method+after          [   5.36]   1.05
CLOS/complex-methods       [   3.55]   1.19
EQL-SPECIALIZED-FIB        [   1.40]   0.90

Reference time in first column is in seconds; other columns are relative.
Reference implementation: Armed Bear Common Lisp 0.23.0-dev
Impl Armed: Armed Bear Common Lisp 0.24.0-dev

=== Test machine ===
Machine-type: X86
Machine-version: NIL
Bye,
Erik.
On Thu, Nov 11, 2010 at 11:22 AM, Erik Huelsmann <ehuels@gmail.com> wrote:
Hi Blake,
On Sat, Nov 6, 2010 at 7:11 PM, Blake McBride <blake@mcbride.name> wrote:
I'd really like to run a benchmark before and after the commit in question. I spent a short time trying to run the benchmarks a while back but was unsuccessful in getting them to run. Can you help me with this?
Sure. What's your platform? If it's Linux/unix, getting the tests to run is relatively easy: make sure you have "make" and follow the instructions in the README file.
On Windows, it's a bit harder. This is what I do to run the tests on my Windows machine:
<in the cl-bench root directory>
<open support.lisp>
<search #+win32, replace with #+(or win32 windows)>
<search #-win32, replace with #-(or win32 windows)>
<save, close>
cd files
copy *.lisp *.olisp
cd ..
abcl
:ld do-compilation-script
:exit
abcl
:ld do-execute-script
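To make the two support.lisp edits concrete: cl-bench guards its Windows-only code with #+win32/#-win32, so the conditionals have to be widened, presumably because ABCL puts :WINDOWS rather than :WIN32 on *FEATURES*. A minimal sketch of the kind of change involved (the defvar itself is a made-up example; only the feature expressions come from the steps above):

  ;; Before the edit: the form is read only when :WIN32 is on *FEATURES*.
  #+win32
  (defvar *copy-command* "copy")

  ;; After the edit: the form is also read when :WINDOWS is present,
  ;; as it is under ABCL on Windows.
  #+(or win32 windows)
  (defvar *copy-command* "copy")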
I hope the above works for you!
Bye,
Erik.
Thanks.
Blake
On Wed, Oct 6, 2010 at 4:25 PM, Blake McBride <blake@mcbride.name> wrote:
That's helpful. Thanks. So now we need to do benchmark / runtime tests.
Thanks.
Blake
On Wed, Oct 6, 2010 at 4:17 PM, Erik Huelsmann <ehuels@gmail.com> wrote:
Blake,
Before going to bed, I did a quick test - as discussed over GMail chat - to see how much the old and new code differ in compilation times on other software.
timing new code: Maxima compilation: 223.416s, loading: 25.8s
timing old code: Maxima compilation: 204.063s, loading: 29.174s
I have no idea of the variation in Maxima compilation times; it looks like the new code is about 10% slower at compiling, but about 13% faster at loading. However, these were single runs, so my conclusions may be way off, depending on the variation.
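For reference, the ratios behind those percentages, evaluated at a REPL:

  (/ 223.416 204.063) ; => ~1.095, so compiling is about 10% slower with the new code
  (/ 29.174 25.8)     ; => ~1.131, so loading is about 13% faster with the new code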
Bye,
Erik.
On Tue, Oct 5, 2010 at 5:52 PM, Blake McBride <blake@mcbride.name> wrote:
Greetings,
I hadn't built ABCL in a little while, so I checked out the latest version today and built it. It seemed significantly slower than before, so I decided to investigate. This is what I found.
In the past I could build ABCL in 2:43. It now takes me 4:40. That (IMO) represents a pretty significant change in build time. I did a binary search and discovered that all of the change occurred at revision 12918 ("Generic Class File Branch Merge").
In general, I couldn't care less about the build time unless it is indicative of a problem that could rear its head in my application. Where is that time being spent? Is there a change in runtime? Loading? Compiling?
I'd be really interested in this.
Thanks.
Blake McBride