* Madhu [2008-05-04 02:30+0200] writes:
Working with a large array (say 510 elements) is the simplest example that springs to mind. Imagine you want to inspect the last 10 objects by going forwards and backwards in the inspector buffer. If you cannot customize the variable, you are forced to hit MORE for every element beyond 500.
Again, large datasets pose a problem with SLIME. Here a mechanism to deal with them was in place.
Are you saying that a customizable variable solves the problems with large datasets?
Alan Ruttenberg's code truncated the content after a certain length (if the limit was set, and only for some data structures, like arrays and hash tables [and only if the backend didn't provide its own specialized methods for those data structures]). Currently, the content is always truncated after a limit (the limit is hard-coded but works for all data structures). To me, this looks like the same mechanism.
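To make the comparison concrete, here is a minimal sketch. All names are made up for illustration (*INSPECTOR-LIMIT*, INSPECT-PARTS, TRUNCATE-PARTS); this is not SLIME's actual internals, just the shape of the two schemes:

;; Hypothetical sketch -- not SLIME's real API.
(defvar *inspector-limit* 500
  "Maximum number of parts shown before truncation.")

;; Old style: each data structure's inspect method truncated itself,
;; so types without a specialized method were never truncated.
(defmethod inspect-parts ((v vector))
  (loop for i below (min (length v) *inspector-limit*)
        collect (aref v i)))

;; Current style: inspect methods return all parts, and truncation
;; happens once, generically, for objects of every type.
(defun truncate-parts (parts)
  (if (> (length parts) *inspector-limit*)
      (subseq parts 0 *inspector-limit*)
      parts))

The difference is only where the limit lives: inside every type-specific method (old) versus in one generic place (current).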
|> As in the case of slime history behaviour, the concern here is that
|> existing useful functionality has again been removed in the [false name]
|> of simplifying things, and replaced with something significantly worse.
|
| Not every feature is worth having, but simplicity is not optional.
| Feel free to disagree.
No, I don't disagree; the principle is admirable. My claim is that its application is wrong. I'll take this case as an example. The feature was removed based on the assumption that "most people don't work with large datasets", so it was not worth having, and by removing the feature we were able to simplify things.
You assume that that was my assumption, but your assumption is wrong. My primary goal was to make the mechanism work for objects of all types and not only for some with special inspect methods. This should make inspect methods simpler (and it did).
I removed the customizable variable because a) I think the limit should be customizable on the Emacs side and not on the Lisp side, and b) what I wanted was a mechanism that fetches only the parts around point when moving backward/forward. b) was essentially not possible with Alan's code; it's possible now, but not (yet) implemented.
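For what b) describes, a rough sketch of windowed fetching, again under assumed names (ALL-PARTS is a hypothetical accessor; the real version would have to go through the SLIME RPC layer):

;; Hypothetical sketch of b): fetch only a window of parts around the
;; index the user is looking at, so moving backward/forward pulls in
;; new chunks on demand instead of everything up to a fixed limit.
;; ALL-PARTS is an assumed accessor, not a real SLIME function.
(defun fetch-parts-around (object index &key (window 50))
  "Return the parts of OBJECT between INDEX-WINDOW and INDEX+WINDOW,
plus the start, end, and total length, so the Emacs side knows what
it can still request."
  (let* ((parts (all-parts object))
         (len   (length parts))
         (start (max 0 (- index window)))
         (end   (min len (+ index window))))
    (values (subseq parts start end) start end len)))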
Now if the assumption turns out to be wrong, and some people actually need to work with large datasets and require that functionality, then to get the functionality back MORE COMPLEXITY has to be added to the simplified codebase.
The old code had bugs and Alan, who originally wrote the code, or those people who introduced the bugs, didn't fix them for a long time. Little is lost if we implement it from scratch.
This is usually an indicator that the simplification is misguided, because it does not judge the value of the feature or the tradeoff accurately.
Well, it's hard to make accurate judgements without making wrong judgements from time to time.
Helmut.