On Thu, Dec 29, 2011 at 5:46 PM, Gail Zacharias
<gz@clozure.com> wrote:
> Using declarations vs using THE is often a stylistic consideration, and while you may be able to get ECL-only users to accept your additional semantics, you might have trouble getting maintainers of portable libraries to observe this arbitrary distinction.
Precisely what I mean is that the current semantics is really inconvenient for library writers. I also believe that this change can be introduced at no cost to library maintainers, because it effectively does not change the semantics at the safety levels at which code is typically compiled (0 or the default ones). Let me try to explain it further below.
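For concreteness, here is a minimal sketch of the two spellings under discussion (the function names are only illustrative):

(defun add1-decl (x)
  ;; Type information given via a declaration.
  (declare (type fixnum x))
  (1+ x))

(defun add1-the (x)
  ;; The same information given via a THE form.
  (1+ (the fixnum x)))

Whether the compiler merely trusts this information or also verifies it is precisely the policy question.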
> Why not let SPEED into the mix? E.g. if SPEED > SAFETY then don't compile typechecks.
The issue is not SPEED, it is safety. Safety need not be sacrificed to gain speed. Moreover, the problem with this SAFETY vs SPEED thing is that it has no granularity at all. It is a simplistic view of the world which assumes that all code is the same.
Let me explain the situation with an ordinary library, say a regular expression parser. Somebody who writes the library has to understand that there are various types of routines or sections of code that she is going to write:
2- Code that handles user input (strings, lists which might be malformed, etc.)
1- Code that handles internal data (structures that will not change, sealed classes, lists of known lengths)
0- Small sections of code that handle internal data and need speed
I would expect that only 0 should be compiled with SAFETY = 0, and explicitly marked so. However, we also have 1 and 2, and typically these two are going to coexist and sometimes appear intermixed in the same function. Here one must either resort to high safety levels for everything, or end up wrapping different sections of code in (LOCALLY (DECLARE (OPTIMIZE (SAFETY 0))) ...) forms, as sketched below. This is not good in my opinion.
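To make the mixing concrete, here is a minimal sketch (the function and its details are invented for illustration): a single function that validates user input at high safety, then drops to SAFETY 0 only for the inner loop over internal data.

(defun count-char (ch input)
  ;; Case 2: user-supplied arguments, so keep full checking here.
  (declare (optimize (safety 3)))
  (check-type ch character)
  (check-type input string)
  ;; Cases 1/0: internal data of known shape; lower safety only here.
  (locally (declare (optimize (speed 3) (safety 0)))
    (loop for i of-type fixnum below (length input)
          count (char= (char input i) ch))))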
The problem is that we are implicitly advocating that SAFETY = 0 is good for everything once the code is mature enough and you need speed, but that level implies much more than believing type declarations: it typically implies that the arguments to functions are not checked at all. Take (CAR (THE CONS X)). There are multiple ways in which this CAR call can be inlined. To get the optimal code in the situation where I am telling the compiler "believe me, this is a CONS", I may be opening a can of worms by lifting all type checks in every other use of CAR.
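To illustrate what I mean (the function names are mine, and the exact behavior is of course implementation-dependent): once a whole function or file is compiled at SAFETY 0 to make the declared call fast, undeclared calls lose their checks as well.

(defun fast-first (x)
  ;; What we wanted: trust the THE form and open-code CAR without a check.
  (declare (optimize (speed 3) (safety 0)))
  (car (the cons x)))

(defun some-other-code (x)
  ;; Collateral damage: compiled under the same policy, this CAR is also
  ;; unchecked, even though nothing was declared about X here.
  (declare (optimize (speed 3) (safety 0)))
  (car x))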
Why do I believe this does not really change the semantics in a significant way? First of all, because apart from SBCL's declaration policy there is no explicitly written commitment in any of the free (natively compiling) Common Lisps out there about the meaning of optimization settings. In such a landscape, I would guess that library maintainers currently more or less follow the approach of lowering safety to 0 in speed-critical code and leaving it at some default value that works with their favorite implementation elsewhere. See for instance CL-PPCRE:
(defvar *standard-optimize-settings*
  '(optimize speed (safety 0) (space 0) (debug 1) (compilation-speed 0)
    #+:lispworks (hcl:fixnum-safety 0))
  ...)
From the user's point of view, the approach seems to be: if the safety level is zero, the compiler will produce fast code; with the default settings, I will get type checking. PCL (Practical Common Lisp) also suggests this, and it seems to be a common entry point for many new users. Moreover, users cannot rely on CMUCL's, SBCL's or ECL's type-checking behavior for function arguments, because it is not really standard, so manual type checking is required in most libraries.
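This is the kind of manual checking that library code ends up carrying today; a sketch, with an invented function name:

(defun match-all (regex target)
  ;; Explicit, portable argument checking at the entry point, since the
  ;; compiler's own argument checks cannot be relied upon across
  ;; implementations.
  (check-type regex string)
  (check-type target string)
  ;; ... the actual matching would go here ...
  (values regex target))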
OTOH, if one comes up with a set of sensible settings that users may choose from, and which can be applied throughout a library without disrupting the current behavior at SAFETY 0 or at the defaults, then the cost of adoption is zero.
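As a minimal sketch of what such settings could look like (the exact names and values below are only an assumption of mine, not a finished proposal): a library-wide policy that keeps checks, with SAFETY 0 reserved for explicitly marked hot spots.

;; Library-wide default: fast, but declarations are still checked.
(declaim (optimize (speed 3) (safety 1) (debug 1)))

(defun hot-inner-loop (v i)
  ;; Explicitly marked hot spot: here, and only here, declarations are
  ;; simply believed and no checks are generated.
  (declare (optimize (safety 0))
           (type simple-vector v)
           (type fixnum i))
  (svref v i))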
Cheers,
Juanjo
--
Instituto de Física Fundamental, CSIC
c/ Serrano, 113b, Madrid 28006 (Spain)
http://juanjose.garciaripoll.googlepages.com