Kazimir Majorinc:
formulas. So far so good. However, some essential operations on formulas (substitution, unification, rules of inference ...) require analysis of the formula.
A similar analysis of the function could be done only by transforming it back into a formula with (caddr (function-lambda-expression Si)), and that is no simpler, even if it works, which is not guaranteed according to the HyperSpec.
This example is confusion about phase separation (http://axisofeval.blogspot.com/2010/07/whats-phase-separation-and-when-do-yo...), but more relevantly it presents a false dichotomy for the solution space and perfectly demonstrates the power of macros.
There is no need to choose between EVAL and lambdas - make the formula operators be macros and the formulas will know how to analyze (macro-expansion time) *and* evaluate (result of macro-expansion) themselves - the formula interpreter falls out "for free" from doing a COMPILE on a lambda that binds the formula free variables. As a bonus, now the code is factored such that all the analysis code is grouped with the definition of the relevant operators, instead of being scattered around inside an interpreter.
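A minimal sketch of this approach (the operator and helper names here are hypothetical, not from Kazimir's code): formula operators are ordinary macros, analysis walks the formula as list data, and the evaluator is just COMPILE applied to a lambda that binds the free variables.

```lisp
;; A formula operator is a macro: analyzable as data, evaluable by expansion.
(defmacro f-and (a b)
  `(and ,a ,b))

;; Analysis operates on the formula as plain list structure.
(defun free-variables (formula)
  (cond ((and (symbolp formula) (not (member formula '(t nil))))
         (list formula))
        ((consp formula)
         (remove-duplicates (mapcan #'free-variables (rest formula))))
        (t '())))

;; The interpreter falls out "for free": compile a lambda that binds
;; the formula's free variables around the formula itself.
(defun make-evaluator (formula)
  (let ((vars (free-variables formula)))
    (values (compile nil `(lambda ,vars ,formula)) vars)))
```

Here (make-evaluator '(f-and p q)) returns a compiled function of P and Q, while FREE-VARIABLES and similar walkers do the substitution-style analysis on the same s-expression.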
A similar example came up in an ll1 mailing list discussion on macros vs closures (http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg02060.html - recommended reading, as there is also discussion of good macros), but using SQL query languages as the domain. CLSQL, for example, does it with macros, which enable all kinds of compile[macro-expansion]-time analysis and optimization. Smalltalkers do the same thing with closures and #doesNotUnderstand: (which is sort of like re-binding APPLY):
http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg02096.html
I wrote about how this technique might be used in Common Lisp:
http://carcaddar.blogspot.com/2009/04/closure-oriented-metaprogramming-via.h...
I think the macro approach to queries is more succinct and powerful (you are no longer limited to function-calling syntax for your query DSL). The query DSL is something that a lot of people seem to like - LINQ in .NET attempts to do the same thing, but at a much higher price (a new syntax for expressing queries, a new protocol for query consumers (database interfaces), and a horribly complicated implementation).
The other really powerful example pointed out in the ll1 discussion was SETF. I think SETF is one of the best examples of macros around. It takes the idea of accessors, and turns them into a generic idea of "places" that can be modified in the same way they are read.
This extends the language with an entirely new domain concept that works transparently everywhere, with absolutely no changes needed to any of its other parts. There are ways to fake "places" with closures (ex: Oleg Kiselyov's "pointers as closures" trick, or the way dynamic variables work in SRFI 39), but they work by completely changing the accessor protocol, introducing a second protocol for updating places (funcall the closure with new value), and because the "directions" to the place are not reified they are not composable (ie - you can't do (setf (gethash foo (aref bar)) baz) with closures), and cannot be implemented efficiently.
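Two properties from the paragraph above, sketched concretely: nested places compose with no second protocol, and a new kind of place takes one form to define (HEAD is an illustrative name, not a standard accessor):

```lisp
;; Nested places compose: the same accessor syntax is read and written.
(let ((bar (make-array 1 :initial-element (make-hash-table))))
  (setf (gethash 'foo (aref bar 0)) 'baz)
  (gethash 'foo (aref bar 0)))   ; the stored value, BAZ

;; Extending the protocol: DEFSETF teaches SETF about a new place.
(defun head (x) (first x))
(defsetf head (x) (new-value)
  `(setf (first ,x) ,new-value))
```

After the DEFSETF, (setf (head list) 42) works everywhere, including nested inside other places, with no change to readers of HEAD.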
Now think about what it would take to do this in Java - you'd need a whole new DSL implemented on top of the Interpreter pattern, a Factory to help you produce the grammar of that DSL, all with a Strategy component so you can define new places. I bet it would be really pleasant to use as well.
Vladimir
Vladimir touches on two important things.
A lot of the value in Lisp macros is how simple they make certain things. People sometimes ask for an example of macros that "can't be done without them" - but that's a contradiction. By definition, anything that can be done by a macro can be done by its expansion. It can't, however, necessarily be done as easily. But this is a harder debate to have because "easy" gets so subjective so quickly.
Second, SETF is really apropos. One of the places I often find macros making things easier is in eliminating boilerplate code that involves assigning values to things. You can't do this with functions; or if you can, the cure is usually worse (more verbose) than the disease (the original repetition). It's usually very simple to write a macro that generates the block of code that does the assignments and whatever else goes with them. Yet I never seem to hear this brought up in discussions about macros - maybe because it's so obvious it doesn't seem to matter.
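A small sketch of the kind of assignment boilerplate such a macro removes (UPDATE-SLOTS is a hypothetical name):

```lisp
;; Instead of repeating (setf (slot-value obj 'a) 1), (setf (slot-value obj 'b) 2),
;; and so on, one macro generates the whole block of assignments.
(defmacro update-slots (object &rest slot-value-pairs)
  (let ((obj (gensym "OBJ")))          ; evaluate OBJECT only once
    `(let ((,obj ,object))
       ,@(loop for (slot value) on slot-value-pairs by #'cddr
               collect `(setf (slot-value ,obj ',slot) ,value))
       ,obj)))

;; Usage: (update-slots widget width 100 height 50 label "OK")
```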
Dan Gackle
On Mon, Sep 27, 2010 at 3:04 PM, Vladimir Sedach vsedach@gmail.com wrote:
On 28 Sep 2010, at 07:17, Daniel Gackle wrote:
A lot of the value in Lisp macros is how simple they make certain things. People sometimes ask for an example of macros that "can't be done without them" - but that's a contradiction. By definition, anything that can be done by a macro can be done by its expansion. It can't, however, necessarily be done as easily. But this is a harder debate to have because "easy" gets so subjective so quickly.
I think the benefits of macros can be stated a bit more objectively.
One first has to agree on the idea that abstraction is a good idea. Abstraction is a good idea for at least two reasons:
+ It makes it easier to see what is actually going on in the code, without being distracted by unnecessary implementation details.
+ It makes it easier to change implementation details without affecting other parts of the code.
Consider a simple functional abstraction, a sort routine. If you can say (sort this-list), then it's pretty clear from reading this invocation what the programmer wants to do here. On top of that, a programmer can change his/her mind about the details of the sorting routine (whether it's a stable sort or not, etc.), without affecting the invocation sites. You can even change the implementation more dynamically, by way of using generic functions. [1]
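For example, dispatching a hypothetical MY-SORT through a generic function lets the implementation vary by argument class without touching any call site:

```lisp
(defgeneric my-sort (sequence)
  (:documentation "Sort SEQUENCE ascending; method chosen at runtime."))

(defmethod my-sort ((sequence list))
  (sort (copy-list sequence) #'<))     ; non-destructive list sort

(defmethod my-sort ((sequence vector))
  (sort (copy-seq sequence) #'<))      ; non-destructive vector sort

;; (my-sort '(3 1 2)) and (my-sort #(3 1 2)) both work unchanged.
```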
Macros are good for providing syntactic abstractions, that is, abstractions that cannot be expressed easily using more conventional abstraction mechanisms, like functional or object-oriented ones. My favorite example of this, so far, is a while macro. Consider the following definition of a while _function_:
(defun while/f (predicate body)
  (when (funcall predicate)
    (funcall body)
    (while/f predicate body)))
Expressing a while construct like that is suggested in languages that provide functional abstractions, but no syntactic abstractions. It can be used as follows:
(while/f (lambda () (< x 10))
         (lambda () (print x) (incf x)))
This is a bit wordier than in other languages (such as ML, Haskell, Smalltalk, Ruby, etc.). But the issue here is not that 'lambda is a keyword that happens to be a few characters too long. The real issue here is that while/f is a leaky abstraction: It exposes an internal implementation detail, namely that it uses closures to do its job. As a user of while/f, you have to remember that while/f expects you to pass closures, and as a user, you also have a pretty good, rough idea of how while/f is implemented internally just by looking at the interface.
A macro can help to abstract away that internal implementation detail, for example as a wrapper around the functional while/f:
(defmacro while (predicate-expression &body body-expressions)
  `(while/f (lambda () ,predicate-expression)
            (lambda () ,@body-expressions)))
Now you can use it as follows:
(while (< x 10)
  (print x)
  (incf x))
This version is much improved with regard to the two criteria mentioned above: 1) The code is now clearer and easier to understand, because you are not distracted by the lambdas, which are totally unnecessary from a user perspective. On top of that, 2) it's now easier to change your mind about the internal implementation details. For example, you can decide to remove the use of closures completely (say, because you identified them as a performance bottleneck ;):
(defmacro while (predicate-expression &body body-expressions)
  (let ((begin (gensym))
        (end (gensym)))
    `(tagbody
       ,begin
       (unless ,predicate-expression (go ,end))
       ,@body-expressions
       (go ,begin)
       ,end)))
With this slightly optimized version, there is no need to change the invocation site at all. This is at least very hard to achieve with just functional abstractions, and depending on language probably impossible. [2]
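One can check that the call site is untouched by inspecting the expansion (the exact gensym names will vary by implementation):

```lisp
(macroexpand-1 '(while (< x 10) (print x) (incf x)))
;; expands along the lines of:
;; (TAGBODY #:G1 (UNLESS (< X 10) (GO #:G2))
;;   (PRINT X) (INCF X) (GO #:G1) #:G2)
```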
So, to the best of my knowledge, that's the main reason why macros are beneficial, and that's the shortest example I can think of to illustrate that point. Just like any abstraction mechanism, macros pay off a lot more in large programs. (This is an important point here: Functional abstractions and object-oriented abstractions, to name just two popular ones, also only pay off in large programs!)
Pascal
[1] That's the true benefit of object-oriented programming. It has nothing to do with inheritance, encapsulation, modeling the "real" world in terms of objects, etc. It simply boils down to being able to choose different implementations for method/function signatures based on runtime criteria in a way that is slightly smarter than plain if/cond/case statements.
[2] One could imagine some facility for intercepting a compiler to perform custom low-level optimizations, but that is probably a lot messier than a macro mechanism.
Daniel Gackle wrote:
Vladimir touches on two important things.
A lot of the value in Lisp macros is how simple they make certain things. People sometimes ask for an example of macros that "can't be done without them" - but that's a contradiction. By definition, anything that can be done by a macro can be done by its expansion. It can't, however, necessarily be done as easily. But this is a harder debate to have because "easy" gets so subjective so quickly.
I would say that the real point is that the macro is a new way to separate concerns, or a new way to do abstraction. Just as object-oriented programming (and aspect-oriented programming) are ways of doing abstraction that are useful in specific circumstances, so are macros. There are kinds of abstraction that are not clear, or not possible, otherwise.
Second, SETF is really apropos. One of the places I often find macros making things easier is in eliminating boilerplate code that involves assigning values to things.
I agree, but would go farther. It eliminates all kinds of boilerplate.
I read Java code written by Java experts, and so much of it is boilerplate, such as Java Beans with lots of get and set functions and nothing happening. When I point this out, they say, oh, no problem, Eclipse just writes those for you. Well, when they enhance Eclipse so that it reads them for you too, OK - but then you have macros for all practical purposes.
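In Common Lisp this is not hypothetical: DEFSTRUCT is exactly such a macro, expanding one form into the constructor, getters, and SETF-able accessors that a Java bean spells out by hand:

```lisp
(defstruct point x y)          ; generates MAKE-POINT, POINT-X, POINT-Y, ...

(let ((p (make-point :x 1 :y 2)))
  (setf (point-x p) 10)        ; accessors are SETF-able places for free
  (point-x p))                 ; reads back 10
```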
-- Dan