On Tue, Dec 4, 2012 at 3:19 AM, Liam Healy lnp@healy.washington.dc.us wrote:
On Mon, Dec 3, 2012 at 8:12 AM, Marco Antoniotti antoniotti.marco@disco.unimib.it wrote:
Dear all,
I was fooling around with reader macros to implement some - let's say - "tuple" syntax.
Now, I am just curious about the general thinking on this.
First of all, let's note that the problem here is the conflation of the "constructor" with the "printed representation" of an item. I.e., in Matlab you say
[1, 2, 40 + 2]
ans = 1 2 42
In CL, doing the simple thing, you get
cl-prompt> [1 2 (+ 40 2)]
[1 2 42]
but
cl-prompt> #(1 2 (+ 40 2))
#(1 2 (+ 40 2))
So, suppose you have your [ … ] reader macro: would you have it evaluate its elements ("functionally") or leave them unevaluated ("quoting-ly", for want of a better word)? I am curious. Note that the Matlab-style version would not break referential transparency if you did not have mutation.
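For concreteness, here is a minimal sketch of the two choices (mine, not anyone's actual code); you would register only one of the two handlers for #\[ :

  ;; "Quoting-ly": build the vector at read time, leaving the elements
  ;; unevaluated, which mimics the standard #( ... ) syntax.
  (defun read-quoted-tuple (stream char)
    (declare (ignore char))
    (coerce (read-delimited-list #\] stream t) 'vector))

  ;; "Functionally": expand into a VECTOR call, so the elements are
  ;; evaluated like ordinary arguments (the Matlab-style behaviour).
  (defun read-evaluated-tuple (stream char)
    (declare (ignore char))
    `(vector ,@(read-delimited-list #\] stream t)))

  (set-macro-character #\[ #'read-evaluated-tuple)  ; or #'read-quoted-tuple
  (set-macro-character #\] (get-macro-character #\) nil))

With the "functional" reading, [1 2 (+ 40 2)] reads as (vector 1 2 (+ 40 2)) and evaluates to #(1 2 42); with the "quoting" reading it is already a literal vector whose third element is the list (+ 40 2).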
I struggled with this very question in Antik (grid syntax with #m). What I decided was that it should mimic CL's #(, i.e., not evaluate. For evaluation, I have functions, particularly #'grid, so:
ANTIK-USER> (setf *default-grid-type* 'foreign-array)
FOREIGN-ARRAY
ANTIK-USER> (grid 1 2 (+ 40 2))
#m(1.000000000000000d0 2.000000000000000d0 42.000000000000000d0)
(Note that #'grid doesn't take as arguments any options like the grid type, just elements, but there are other functions that will.) My reasoning was mainly to be consistent with CL, but also a reader macro triggering an evaluation somehow seemed wrong; a function was the most appropriate way to do that. I can't recall right now the problem that caused me to switch (before, it did do evaluation), but it was a mess and this was the easiest way to clean it up.
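For comparison, plain CL already has the same division of labor that #m and #'grid mirror: the #( reader syntax leaves its elements unevaluated (as shown above), while the constructor function evaluates them:

  CL-USER> (vector 1 2 (+ 40 2))
  #(1 2 42)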
This evaluation problem, as Liam describes it, is common in statistical computation (and in numerical computation, as in Marco's example, but I'm more familiar with the statistical domain, i.e., computations in R), where you might like to hold off on an evaluation as long as possible, since you might not need it at all.
(Of course, if you are memoizing, then under a reasonable range of circumstances you might as well compute eagerly, as long as you aren't blocking the interactive activity for too long.)
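The hold-off-until-needed idea, in plain CL (a sketch of mine, not anything from R or Antik): wrap the computation in a memoized thunk, so it runs at most once, and only if somebody actually asks for it.

  ;; DELAY wraps FORM in a thunk; FORM runs at most once, on first FORCE.
  (defmacro delay (form)
    (let ((done (gensym "DONE")) (value (gensym "VALUE")))
      `(let ((,done nil) (,value nil))
         (lambda ()
           (unless ,done
             (setf ,value ,form
                   ,done t))
           ,value))))

  ;; FORCE runs (or retrieves) a delayed computation.
  (defun force (delayed)
    (funcall delayed))

  ;; (defparameter *cell* (delay (+ 40 2)))  ; nothing computed yet
  ;; (force *cell*)                          ; => 42, computed once, then cached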
I think I agree in general with Liam's approach of making evaluation explicit. But the problem, as Marco points out, is that it is often just convenient, and follows the principle of least surprise, if it is simply evaluated. I'd argue that this principle holds for only a few cultures of data analysts, and that maintaining it comes at a potentially high cost, in the sense that it doesn't force the analyst to think a bit about what they are doing and what the cost (computation, data centrality, etc.) is. The (negative) payout doesn't occur too often, but when it does, it bites hard...
best, -tony
blindglobe@gmail.com Muttenz, Switzerland. "Commit early, commit often, and commit in a repository from which we can easily roll-back your mistakes" (AJR, 4Jan05).
Drink Coffee: Do stupid things faster with more energy!