On 4/12/10 2:59 AM, David Kirkman wrote:
OK, sounds like a nice afternoon project. Here's a patch that does what I think you're asking for in FaslReader and LispReader.
But I ran into some trouble testing the changes. Following the instructions at http://common-lisp.net/project/armedbear/contributing.shtml I end up with an OutOfMemoryError when running both test.ansi.interpreted and test.ansi.compiled.
On most of the platforms we have tested, ABCL is not expected to be able to complete the ANSI tests without increasing the default heap size of the JVM. We haven't made a larger setting the default in the build process because of the large variability across platforms: a single standard setting causes immediate errors on some platforms while withholding the "right" amount of memory on others. And unfortunately we can't set the memory programmatically at runtime by querying what the platform limits would be. The only approach that makes sense is for each user to configure the limits for their own platform.
For the ANSI tests, I have found through experience on OSX, WinXP, and OpenSolaris with Java 6 that one needs at least 512MB of heap. Through experimentation it seems that the tests should complete in around 300-500 seconds on contemporary x86 machines. If your tests take substantially longer, the JVM is probably starved for heap, and raising the memory limits should make them complete faster.
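As a point of reference, here is a minimal sketch of giving ABCL itself more heap when invoking the jar directly (the dist/abcl.jar path is an assumption; adjust it to wherever your build places abcl.jar):

  # rough sketch; the path to abcl.jar may differ in your build
  java -Xmx512m -jar dist/abcl.jar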
Is there an environment variable that needs to be set to get these to run? I can make it work by putting <jvmarg value="-Xmx500M"> lines in build.xml, which I've attached as a second patch.
For the Ant-based build, one sets the 'java.options' property in the 'abcl.properties' file to specify the Java options used in the wrapper scripts. For example, I use the following line for OSX as my standard configuration:
java.options=-Xms1g -Xmx4g
One could also pass this to the Ant process as something like -Djava.options="-Xms1g -Xmx4g" (I'm not sure of the exact syntax here).
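Something along these lines ought to work, using the test targets mentioned above (a sketch only; the exact quoting depends on your shell, and I haven't verified that the property is picked up the same way as from abcl.properties):

  ant -Djava.options="-Xms1g -Xmx4g" test.ansi.interpreted
  ant -Djava.options="-Xms1g -Xmx4g" test.ansi.compiled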
This should be documented better (my fault!)
When I ran the tests after I made the change, I got one additional error when running test.ansi.interpreted: PRINT.BACKQUOTE.RANDOM.14. I've also attached the error. I can't figure it out. I eventually decided that a 'random' test might give a different result if I run it again ... so I ran it again (and again and again) ... and the error did not happen again. So I now suspect that my patch does not introduce any new errors, but that PRINT.BACKQUOTE.RANDOM.14 fails randomly from time to time.
I have had the same experience of random failures of this test here. The test does indeed randomly generate forms to exercise backquote, but it doesn't record what input was used. Another task here would be to rewrite the test to emit the tested form, run it enough times to get it to fail, and analyze the results to determine which aspect is failing.
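For what it's worth, here is a rough Common Lisp sketch of the kind of logging I have in mind; GENERATE-RANDOM-BACKQUOTE-FORM and CHECK-BACKQUOTE-FORM are placeholder names standing in for whatever the real test harness uses, not functions from the actual ANSI test suite:

  ;; Sketch with hypothetical generator/checker names: record each randomly
  ;; generated form before evaluating it, so a failure can be reproduced
  ;; later from the log.
  (defun run-logged-backquote-tests (n &key (log #p"backquote-forms.log"))
    (with-open-file (out log :direction :output
                             :if-exists :append
                             :if-does-not-exist :create)
      (loop repeat n
            do (let ((form (generate-random-backquote-form)))
                 ;; Write the form first so it survives even if checking errors out.
                 (print form out)
                 (finish-output out)
                 (unless (check-backquote-form form)
                   (format t "~&Failure on form: ~S~%" form))))))

Running something like this enough times should capture the failing form along with enough context to start diagnosing it.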