On Fri, Dec 26, 2014 at 10:10 PM, Steve Haflich <shaflich@gmail.com> wrote:
On Fri, Dec 26, 2014 at 11:38 AM, Kenneth Tilton <ken@tiltontec.com> wrote:
> Why is there no way to remove an interned EQL specializer meta-object?

Because no way to do this was defined in the MOP.  It's unclear
whether you suggest there should be some programmatic way to unintern
an EQL specializer, or whether the system should do it automatically.
But neither makes a lot of sense.

If a call from user code uninterned an EQL specializer while there were
still methods specialized upon it, then the MOP would become inconsistent.

> They get defined in the context of a method definition,

They are also interned by an explicit user-code call to
INTERN-EQL-SPECIALIZER, which would be a reasonable thing to do when
using the MOP directly to install new methods, or even to test whether
any method or gf exists specialized on that EQL object.
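
For concreteness, such a test could look like this (a minimal sketch,
assuming the MOP symbols are reachable, e.g. through the CLOSER-MOP
portability library):

(defun eql-specialized-methods (object)
  ;; Interning is idempotent, so this yields the one metaobject for
  ;; (EQL OBJECT); the result is NIL when no method is specialized on it.
  ;; Note that the call itself interns a specializer if none existed yet,
  ;; which is exactly why the table can only grow.
  (closer-mop:specializer-direct-methods
   (closer-mop:intern-eql-specializer object)))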

> so one just needs to do
> some good old-fashioned engineering: method-specializer reference tracking
> leveraged at method removal time to know when to toss the hash table entry.

(defparameter .kenny. (intern-eql-specializer 'tilton))

How would the MOP do reference counting on this metaobject?  If the
implementation spontaneously uninterned it, then a subsequent call to
i-e-s would return a different metaobject, in violation of the MOP
specification.
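
Concretely, the interning contract is one of identity: given the
definition above, the MOP requires every later call to return that very
same metaobject.

(eq .kenny. (intern-eql-specializer 'tilton))  =>  T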

> I am more interested in why this is perceived as a problem, but if the OP is
> doing some dynamic metaprogramming I can imagine a use case.

Yes, indeed.  I expect this thread is a lot of worrying about nothing
important.  But as I suggested previously, an implementation with weak
hash tables could unintern EQL specializers safely if it wanted to
bother.


I am ready to concede that the issue here is minor and peripheral, but it is not nonexistent.

From Steve's mention of specializer-direct-methods and friends, it is clear to me now that intern-eql-specializer is part of a specializer-specific dependency-tracking facility. And, looking again at the source code of PCL, I can see that facility used at least in the optimization of make-instance and compute-applicable-methods (probably as Scott suspected). So this purpose is reasonable enough for me, and I consider my original (subject-line) question properly answered.
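
For the record, that facility can be poked at directly. Here is a small sketch of what I mean, assuming the MOP symbols are reachable (e.g. through the CLOSER-MOP library) and a PCL-derived implementation that interns the eql specializers created by defmethod; greet is just a throwaway generic function:

(defgeneric greet (who))
(defmethod greet ((who (eql 'tilton))) :hello)

(let ((spec (closer-mop:intern-eql-specializer 'tilton)))
  ;; The method just defined is recorded as a direct dependent of the
  ;; interned specializer, and its generic function is reachable the same
  ;; way -- presumably the reverse lookup those optimizations rely on.
  (values (closer-mop:specializer-direct-methods spec)
          (closer-mop:specializer-direct-generic-functions spec)))
;; => the GREET method and the GREET generic function, each in a one-element list.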

Implementing intern-eql-specializer by means of a weak hash table, as Steve suggested, is exactly what CCL does; CLISP uses something similar (weak sets, I think), but SBCL does not. Still, I don't think it would be fair for the MOP to require every implementation to use weak hash tables in this case.
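
For illustration, the pattern is roughly the following. This is only a sketch, not any implementation's actual code: :weakness is SBCL's spelling of the weak-table option (CCL and CLISP spell it differently), and %make-eql-specializer stands in for whatever internal constructor the implementation really uses.

(defvar *eql-specializers*
  (make-hash-table :test #'eql :weakness :value))  ; weak on values (SBCL syntax)

(defun %intern-eql-specializer (object)
  ;; While some method still references the specializer, the entry stays
  ;; alive; once nothing does, the GC may drop it, and a later call simply
  ;; creates a fresh metaobject -- invisible to any user who kept no
  ;; reference to the old one.
  (or (gethash object *eql-specializers*)
      (setf (gethash object *eql-specializers*)
            (%make-eql-specializer object))))  ; hypothetical constructor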

I am more of the opinion that the specification is incomplete in this area. The parallel with the situation of symbols in packages had also struck me, and I think it should be pushed somewhat further with the addition of unintern-eql-specializer (granted, this one is as dangerous as unintern is for symbols) and of something like map-eql-specializers (à la do-symbols). (BTW, in PCL you can see its optimization code use a map-specializers.) With these two it becomes possible to implement something like scrub-unused-eql-specializers if one wants to; without them it is simply impossible.
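
To make the intent concrete, here is what those two would buy us. To be clear, map-eql-specializers and unintern-eql-specializer exist nowhere today; they are exactly the operators proposed above, with the obvious semantics, while specializer-direct-methods is standard MOP (reachable e.g. through CLOSER-MOP).

(defun scrub-unused-eql-specializers ()
  ;; Drop every interned EQL specializer that no method references anymore.
  (map-eql-specializers
   (lambda (specializer)
     (when (null (closer-mop:specializer-direct-methods specializer))
       (unintern-eql-specializer specializer)))))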

I think this is just good principled engineering being applied here. And, no Kenny, principled engineering, as I understand it, is not a religion but rather a philosophy, a subdivision of pragmatism, I would say, even if you don't find it pragmatic enough.

And the principles at work here would be:

1) Respect clearly and completely defined interfaces.

2) Inaccessible internal state is a very bad thing; avoid it.

I admit that 2) comes from my hardware design days (way back), but I think it also applies to software, pretty much for the same reasons it imposed itself on the hardware side as central to the "design for testability" methodology.

Thank you all again for your replies. They have been of great help.

Cheers,

JCB