On Fri, Dec 31, 2010 at 8:47 AM, Erik Huelsmann ehuels@gmail.com wrote:
Hi Alan,
On Fri, Dec 31, 2010 at 4:30 AM, Alan Ruttenberg alanruttenberg@gmail.com wrote:
a) I don't know internals well enough to decipher the proposal.
That's no problem; I'll gladly answer any questions you may have. I was deep into the details when writing the proposal, so I probably lost sight of some of the context when trying to write down the higher-level issue.
Is allocation on the stack a performance optimization?
Yes and no. It's a logical consequence of the structure of the JVM: it doesn't have registers like most CPUs do, so the only way to pass operands to JVM instructions is by pushing them onto the operand stack. Equally, all operations that produce values return them on the stack. This is true for all operations, including function calls.
However, the choice to actually leave the values on the stack, instead of saving them to local variables and reloading them later, could be considered an optimization. Saving to local variables - if taken to its extreme - would lead to the following byte code as the compiled version of "(if (eq val1 val2) :good :bad)", assuming val1 and val2 are local variables (let bindings):
  aload 1
  astore 3
  aload 2
  astore 4
  aload 3
  aload 4
  if_acmpeq label1
  getstatic ThisClass.SYMBOL_BAD
  astore 5
  goto label2
label1:
  getstatic ThisClass.SYMBOL_GOOD
  astore 5
label2:
  aload 5
Currently the same code compiles to:
  aload 1
  aload 2
  if_acmpeq label1
  getstatic ThisClass.SYMBOL_BAD
  goto label2
label1:
  getstatic ThisClass.SYMBOL_GOOD
label2:
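For comparison, the stack discipline the current compiler relies on is essentially what javac itself emits for the analogous Java conditional: the two references are left on the operand stack for if_acmpeq, and the chosen value is left on the stack as the expression's result, with no intermediate locals. A hypothetical Java sketch of that form (class and field names are made up for illustration):

```java
// Hypothetical Java analogue of (if (eq val1 val2) :good :bad).
// javac compiles the ternary below using the operand stack directly,
// much like ABCL's current output shown above.
public class StackSketch {
    static final String SYMBOL_GOOD = "GOOD";
    static final String SYMBOL_BAD  = "BAD";

    static String eqTest(Object val1, Object val2) {
        // == on references is identity comparison, like Lisp EQ
        return (val1 == val2) ? SYMBOL_GOOD : SYMBOL_BAD;
    }

    public static void main(String[] args) {
        Object a = new Object();
        System.out.println(eqTest(a, a));            // GOOD
        System.out.println(eqTest(a, new Object())); // BAD
    }
}
```

Disassembling this with javap -c shows the same aload/if_acmpeq/getstatic shape as the second listing above.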
Although I admit I haven't measured the performance cost, I'd expect at least a significant growth in the size of our JAR file.
If so, has any metering been done to see whether it actually impacts performance?
No (because I haven't perceived it as a performance optimization per se, but as a size optimization as well).
Not sure that small jar size is an important consideration. However, the JVM has a 64KB bytecode limit per method - does ABCL have a workaround for that? If yes, then doing the experiment (no optimization, then profile) might be worth it. I expect that the JIT would do peephole optimization and eliminate the performance issue (though http://nerds-central.blogspot.com/2009/09/tuning-jvm-for-unusual-uses-have-s... seems worth having a look at). Even if not, it might be worth handling this in a separate optimization phase within ABCL's compiler.
If you would like opinions from me and perhaps others perhaps you could say a few more words about what the safety issue is? When is the JVM stack cleared?
The JVM clears the operand stack when it enters an exception handler (in Java, typically a catch {} or finally {} block). By itself this is not necessarily a problem: LET/LET* and PROGV just rethrow the exception after unbinding any specials they may have bound.
The text that Ville pointed me to doesn't mention this. I'm looking at http://java.sun.com/docs/books/jvms/second_edition/html/Concepts.doc.html#22... to try to understand better. I still don't get it. Are you referring to "When an exception is thrown, control is transferred from the code that caused the exception to the nearest dynamically enclosing catch clause of a try statement that handles the exception."? That would imply unwinding the stack. Interested in the response to Ville's message.
However, TAGBODY/GO, BLOCK/RETURN and CATCH/THROW are constructs which, in the Java world, catch exceptions and continue processing in the current frame. This is a problem: if any values had been collected on the operand stack before the TAGBODY form, they have now "disappeared" on some of the code paths. Most of the negative effects have been eliminated by rewriting code into constructs which don't trigger this behaviour, so normally you shouldn't see this happening.
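The pattern Erik describes - an exception thrown for control transfer and caught in the same frame - can be sketched in Java. This is a hypothetical illustration, not ABCL's actual implementation (ABCL uses its own control-transfer exception classes); the point is that when the catch handler is entered, the JVM discards the frame's operand stack, so any values the compiler wants to survive the transfer must live in local variables:

```java
// Hypothetical sketch of compiling a TAGBODY/GO-like construct to the
// JVM: "go" throws a control-transfer exception that is caught in the
// SAME frame and turned into a jump back into a dispatch loop.
public class GoSketch {
    // Made-up control-transfer exception for this sketch.
    static class Go extends RuntimeException {
        final int tag;
        Go(int tag) { this.tag = tag; }
    }

    static int run() {
        int state = 0;       // survives the transfer because it is a
        int target = 1;      // local variable, not an operand-stack value
        while (true) {
            try {
                switch (target) {
                    case 1:
                        state += 10;
                        if (state < 30) throw new Go(1); // (go tag1)
                        return state;
                }
                return state;
            } catch (Go g) {
                // Handler entered: the operand stack is now empty, so
                // anything left on it before the try block is lost.
                target = g.tag;
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 30
    }
}
```

This is why the compiler must spill pending stack values to locals around such constructs, which is exactly the trade-off the two bytecode listings above illustrate.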
b) If you are going to be thinking about compiler architecture, I would very much like to see some thought go into debuggability. My current impression is that the trend is downward, with more and more cases of not even being able to see function call arguments in SLIME. For example, at this point in ABCL's development, I think having a compiler option that trades performance for the ability to view local variables in the debugger should be an increasing priority.
This compiler option should be hooked to the OPTIMIZE DEBUG declaration, if you ask me.
Sounds right.
Others' mileage may vary, but in my case, where the bulk of time is spent in Java libraries of various sorts, improving Lisp performance is a distinctly lower priority than improving developer productivity by making it easier to debug.
Thanks! This is very valuable feedback. It's often very hard to determine what steps to take next; it's easy to focus on performance, since it's easily measurable. However, performance isn't the only thing that influences the development cycle. It'd be great to discuss the kinds of things ABCL would need to add for debuggability using a number of real-world problems: it would make the problem much more tangible (at least to me).
Mark named a few. For me the current bottleneck is visibility: in an exception I can't see where I am or what's going on. I'd like to know where in my code I am and what the state of the variables is. Another one would be beefing up TRACE and adding an advise facility: one wants to trace methods, have code run before or after them, etc. I can supply more details if desired, but the description in http://ccl.clozure.com/ccl-documentation.html is what I am familiar with.
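The advise facility Alan refers to (as in CCL's ADVISE) wraps an existing function so extra code runs before, after, or around it without editing the original. A minimal Java sketch of the idea, with all names hypothetical and no claim about how ABCL would actually implement it:

```java
import java.util.function.Function;

// Minimal sketch of "around" advice: wrap a function so code runs
// before and after the original call, similar in spirit to CCL's
// ADVISE for Lisp functions.
public class AdviceSketch {
    static <A, R> Function<A, R> advise(Function<A, R> original,
                                        String name) {
        return arg -> {
            System.out.println("calling " + name + " with " + arg); // :before
            R result = original.apply(arg);
            System.out.println(name + " returned " + result);       // :after
            return result;
        };
    }

    public static void main(String[] args) {
        Function<Integer, Integer> square = x -> x * x;
        Function<Integer, Integer> traced = advise(square, "square");
        System.out.println(traced.apply(7)); // prints the trace lines, then 49
    }
}
```

In a Lisp, the same wrapping can be done by replacing a symbol's function binding, which is what makes a TRACE/ADVISE facility retrofit cleanly onto existing code.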
Glad to elaborate further. I'll respond to Mark's message.
-Alan