attached is a patch which allows slime to track, list and recompile top level forms which have been edited since the last compilation.
I've added a brief description to the manual, here's the executive summary:
When the customizable variable slime-record-changed-definitions is non-nil we install before/after edit hooks which keep a list of all the toplevel forms that have been modified, along with their locations.
The command slime-compile-changed-definitions, which is not bound to any key, recompiles all the changed forms.
The command slime-list-changed-definitions pops up a simplistic buffer with a list of all the forms which would be compiled by a call to slime-compile-changed-definitions and the order in which they'd be compiled.
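the mechanism is roughly this (slime-record-changed-definitions is the variable from the patch; the function name and body below are just an illustrative sketch, not the actual patch code):

```elisp
(defvar slime-changed-definitions '()
  "List of (BUFFER . POSITION) for toplevel forms edited since the last compile.")

(defun slime-note-changed-definition (beg end &optional len)
  "After-change hook: record the toplevel form containing the edit at BEG."
  (when slime-record-changed-definitions
    (save-excursion
      (goto-char beg)
      (beginning-of-defun)           ; back up to the start of the toplevel form
      (add-to-list 'slime-changed-definitions
                   (cons (current-buffer) (point))))))

(add-hook 'after-change-functions 'slime-note-changed-definition)
```

the real patch also records before-change state so deleted forms are tracked, but the after-change hook carries the basic idea.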
comments welcome. i'll apply this shortly unless it breaks someone's slime where slime-record-changed-definitions is NIL (iow i'm using this email to make sure the patch doesn't break anything, i don't expect it to be 100% bug free or perfect from a UI standpoint).
Helmut Eller heller@common-lisp.net writes:
- Marco Baringer [2007-04-08 21:47+0200] writes:
comments welcome.
This is something that the compiler should do, not the editor.
How can the compiler know what we've edited and what we haven't?
Or are you just saying that we should be using the compiler and not pxref to compute the compile order?
(And we, well I, don't want to turn SLIME into some Eclipse-like monster).
so we'll be fighting in a few months when i propose adding refactoring tools to slime? :)
* Marco Baringer [2007-04-09 13:08+0200] writes:
This is something that the compiler should do, not the editor.
How can the compiler know what we've edited and what we haven't?
So called "incremental" compilers know that. That's pretty common in the Java world. I think the GNU Ada compiler also has support for incremental compilation. The Fun'O Dylan and the SML/NJ compiler too. Those compilers usually keep the AST, compilation environment, or some fingerprints of the previous compilation in memory or save it to a DB.
Or are you just saying that we should be using the compiler and not pxref to compute the compile order?
I'm saying that the compiler (or related program transformer) is better suited to implement this feature than the editor. I'm pretty sure that Franz' ELI does this stuff on the Lisp side.
As far as I can see, this feature tries to work around slow compilers. Why not fix the compiler instead?
(And we, well I, don't want to turn SLIME into some Eclipse-like monster).
so we'll be fighting in a few months when i propose adding refactoring tools to slime? :)
Surely.
Helmut.
Helmut Eller heller@common-lisp.net writes:
- Marco Baringer [2007-04-09 13:08+0200] writes:
This is something that the compiler should do, not the editor.
How can the compiler know what we've edited and what we haven't?
So called "incremental" compilers know that. That's pretty common in the Java world. I think the GNU Ada compiler also has support for incremental compilation. The Fun'O Dylan and the SML/NJ compiler too. Those compilers usually keep the AST, compilation environment, or some fingerprints of the previous compilation in memory or save it to a DB.
i'm sorry, but i fail to see how that is possible without having the editor (be it emacs or something embedded in the compiler) tell the compiler what text has been written/edited and what hasn't.
or do you mean we should save all the files and then tell the compiler to re-read them all and infer what needs to be done? if that's what you're saying then i'd still like this feature if just for the emacs buffer listing the changes before the source code (which may or may not be valid) is read/compiled.
Or are you just saying that we should be using the compiler and not pxref to compute the compile order?
I'm saying that the compiler (or related program transformer) is better suited to implement this feature than the editor. I'm pretty sure that Franz' ELI does this stuff on the Lisp side.
the emacs side of eli keeps track of what files have been modified and passes the original and the new version to allegro who'd then do a smart diff. what i'm suggesting here is a slightly smarter version of that, why bother diff'ing if we can learn what's different as the changes get made?
As far as I can see, this feature tries to work around slow compilers. Why not fix the compiler instead?
this feature serves two purposes:
1) make emacs keep track of all the code i've edited since the last compile. even without actually being able to compile the code i find the list-changed-forms buffer very useful. often i'll be working on a single piece of functionality which is spread across multiple files (one file has the db class definition, another the corresponding xml-api implementation and a third has the core code itself) and it's a pain to remember all the pieces i've touched. maybe i've saved the buffers, so i can't just use buffer-modified-p, maybe i haven't saved the buffers, so 'svn stat' doesn't help, maybe i've edited the documentation and i don't care about those files. in this case it really is nice to ask emacs for a list, in a single buffer, of all the lisp forms i've been working on recently.
being able to order and compile these forms is really just icing on the cake. i could just as well walk down the buffer and hit c on each form to compile them manually in whatever order i want.
2) work around projects where build times are 20 minutes or more. you can only speed up the compiler so far, some projects are just big and there's nothing you can do about it.
now, having said all that, there's no way, given an arbitrary set of changed lisp forms and an arbitrary set of existing definitions in the image, to properly infer what order they need to be compiled in without compiling and executing them. this last step is going to error if you get the order wrong, and since you have to do this to get the ordering right you're screwed in the general case. in practice the 80% solution is helpful, but i'll drop it if it seems like bloat.
Marco, Helmut,
One of the nice things about Eclipse (at least in principle) is the whole plug-in architecture thing. I wonder if slime could benefit from such a beast and if you (or anyone else) has given it any thought?
FWIW, I like the idea of more source analysis, etc. in my editor AND am sensitive to concerns of bloat, etc... it's a conundrum.
cheers, -- Gary Warren King, metabang.com Cell: (413) 885 9127 Fax: (206) 338-4052 gwkkwg on Skype * garethsan on AIM
On Mon, Apr 09, 2007 at 01:23:41PM -0400, Gary King wrote:
One of the nice things about Eclipse (at least in principle) is the whole plug-in architecture thing. I wonder if slime could benefit from such a beast and if you (or anyone else) has given it any thought?
FWIW, I like the idea of more source analysis, etc. in my editor AND am sensitive to concerns of bloat, etc... it's a conundrum.
As I've watched fuzzy-completion grow in both features and size, I've wondered about something like SBCL's contrib system for Slime. For those who don't know about this, it contains modules that usually tie in somewhat tightly to SBCL's internals (so they benefit from being maintained in the same source tree), but don't require being built into each and every SBCL core. An example contrib is SB-INTROSPECT, which provides the [mostly] stable interface Swank uses to poke at SBCL internals. Something like this for Slime could be beneficial. I would nominate the fuzzy-completion feature as the first thing to use it, as it is quite a lot of code for a completion engine.
On the other hand, the current Slime/Swank combination has the advantage that you can just simply load Swank in a remote Lisp and expect it to work, as every feature is compiled in. The system I described above would either require configuration on both sides as to what contribs are required, or some protocol to load remote contrib libraries on demand.
Note that I haven't bothered to spend the time on this myself. :)
This doesn't imply the kind of (I presume) strict plug-in interface that Eclipse would have, but I'm not convinced you need or want that in a dynamic language.
-bcd
* Brian Downing [2007-04-09 19:40+0200] writes:
As I've watched fuzzy-completion grow in both features and size, I've wondered about something like SBCL's contrib system for Slime. For those who don't know about this, it contains modules that usually tie in somewhat tightly to SBCL's internals (so they benefit from being maintained in the same source tree), but don't require being built into each and every SBCL core. An example contrib is SB-INTROSPECT, which provides the [mostly] stable interface Swank uses to poke at SBCL internals. Something like this for Slime could be beneficial. I would nominate the fuzzy-completion feature as the first thing to use it, as it is quite a lot of code for a completion engine.
Maybe we should create a new module in the CVS repository, say slime-extras, for non-essential or not-mature features. That would be very useful to make such code publicly available. I'm not sure whether CVS is a good tool for such purposes. Anyway, the core of Slime should be/become reasonably small and robust.
I'd say the following qualifies as non-essential:
- fuzzy completion
- presentations
- class/xref browser
- complete-form
- stuff that tries to be too smart
It's not trivial to setup the infrastructure for this purpose. Some obvious problems are:
1. how and when to load extra code
2. how to write backend specific code
3. how to write documentation
4. what to do when the core changes in fundamental ways
Usually I'd say that it's not worth the bother and we should just drop the extras entirely, but it's quite tiring to tell every second guy that SLIME is more than feature complete and that I don't want to add new stuff.
So, what do other people think of such an approach?
Helmut.
On Mon, Apr 09, 2007 at 09:34:22PM +0200, Helmut Eller wrote:
Maybe we should create a new module in the CVS repository, say slime-extras, for non-essential or not-mature features. That would be very useful to make such code publicly available. I'm not sure whether CVS is a good tool for such purposes. Anyway, the core of Slime should be/become reasonably small and robust.
I'd rather see it stay within the same repository, for two main reasons. One, it reduces the user workload to get extra features, as they only need to check out once. Two (and perhaps more importantly), it makes extras easy to find as a relative path from slime.el and swank.lisp -- if it was a separate CVS module, it would both have to be checked out separately and either put in a particular place or configured per-user. This problem is doubly compounded when using Slime and Swank on different hosts (as double checkout and configuration would have to happen twice).
However, I'd strongly recommend each extra module having its own ChangeLog, and to keep module change noise out of the Slime ChangeLog.
My two cents (with my pathetic opinion credit of four commits or so :) says that that is probably a good compromise between theoretical cleanliness and user workload.
-bcd
On Mon, Apr 09, 2007 at 02:57:26PM -0500, Brian Downing wrote:
I'd rather see it stay within the same repository
~~~~~~~~~~
Sorry, I was using "repository" in the distributed-vcs sense here. I probably should have said, "I'd rather see extras stay within the Slime CVS module..."
-bcd
Maybe we should create a new module in the CVS repository, say slime-extras, for non-essential or not-mature features. That would be very useful to make such code publicly available. I'm not sure whether CVS is a good tool for such purposes. Anyway, the core of Slime should be/become reasonably small and robust.
I'd say the following qualifies as non-essential:
- fuzzy completion
- presentations
- class/xref browser
- complete-form
- stuff that tries to be too smart
I think splitting SLIME into multiple modules is generally a good idea. On the other hand I also think these SLIME "extras" are really essentials. I would also put paredit on that list and make that stuff installed by default for newbies.
I have seen newbies coming from the C/C++/Java world (and I'm not talking about bad hackers) having trouble with the default set of installed features of SLIME. For example not having paredit by default was one of those.
Actually I'm using Eclipse and the Java tool chain for half of my working time and SLIME, emacs and SBCL for the other half, so I believe I have experience with both, and I think there are pros and cons on both sides. Just to mention one: renaming a function in CL code still requires regexp search while in Eclipse this is painfully simple.
levy
"Levente Mészáros" levente.meszaros@gmail.com writes:
I have seen newbies coming from the C/C++/Java world (and I'm not talking about bad hackers) having trouble with the default set of installed features of SLIME. For example not having paredit by default was one of those.
While I'm not necessarily against distributing paredit alongside, I question that they have trouble because of missing paredit, as editing by structure is quite radically different from what they're probably used to.
Just to mention one: renaming a function in CL code still requires regexp search while in Eclipse this is painfully simple.
No, it doesn't necessarily require regexp searches. Look at `C-c <' and the submenu entry SLIME -> Cross Reference.
-T.
Just to mention one: renaming a function in CL code still requires regexp search while in Eclipse this is painfully simple.
No, it doesn't necessarily require regexp searches. Look at `C-c <' and the submenu entry SLIME -> Cross Reference.
I'm not sure if you meant that I should list all references to the given function and then manually edit all the listed files from that list?
It might be less time consuming and error prone than running a regexp search and replace, but hey, it's still very far from what Eclipse does with renaming.
levy
Maybe we should create a new module in the CVS repository, say slime-extras, for non-essential or not-mature features. That would be very useful to make such code publicly available. I'm not sure whether CVS is a good tool for such purposes. Anyway, the core of Slime should be/become reasonably small and robust.
I'd say the following qualifies as non-essential:
- fuzzy completion
- presentations
- class/xref browser
- complete-form
- stuff that tries to be too smart
This is a wonderful idea. If the slime core were pared down feature-wise to something more like slime from late-2004 (or less), there'd be a nice stable, reliable base system. As it is, I never update anymore, because every time I do, something new is broken thanks to another dwimmy extension. If these were factored out into modules, I could avoid the ones I dislike, load the unstable ones when I feel like playing with them, and help improve/solidify the modules I find especially important (xref, scratch-style interaction).
I think it would be a lower barrier to hacking if the extras lived in a subdirectory or subdirectories of the main slime module in CVS, but I don't feel especially strongly about it.
It's not trivial to setup the infrastructure for this purpose. Some obvious problems are:
- how and when to load extra code
This is a bit problematic. I'd do it by having slime-setup on the emacs side take a list of extras to load; change swank-loader.lisp to define a function that loads swank instead of actually loading it, giving the emacs side (or the human) a chance to pass the list of extras to match Emacs.
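that scheme might look something like this (slime-setup exists today, but the contrib handling and the slime-required-contribs variable shown here are illustrative names for the proposal, not actual SLIME code):

```elisp
(defvar slime-required-contribs '()
  "Extras loaded on the emacs side; sent to swank so it loads the matching modules.")

(defun slime-setup (&optional contribs)
  "Load each extra in CONTRIBS on the emacs side and remember the list.
On connect, the same list would be passed to swank-loader so the
lisp side loads the matching modules."
  (dolist (c contribs)
    (require c))
  (setq slime-required-contribs contribs))

;; e.g. in ~/.emacs:
(slime-setup '(slime-fuzzy slime-presentations))
```

the point of remembering the list is exactly the two-sided configuration problem Brian mentioned: one form in .emacs configures both halves.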
- how to write backend specific code
- how to write documentation
Why not just leave that up to each extra module, but with a preference of just duplicating the infrastructure used by Swank?
- what to do when the core changes in fundamental ways
All the extras break, until someone cares enough to fix them. That's how things work now, too, except the extras are all loaded into the core. But I doubt the core needs to fundamentally change very often, especially if it's factored off on its own.
Usually I'd say that it's not worth the bother and we should just drop the extras entirely, but it's quite tiring to tell every second guy that SLIME is more than feature complete and that I don't want to add new stuff.
So, what do other people think of such an approach?
You mean burning out presentations with a hot iron like a cancer? I'm in favor of any way to get a core Slime, and if you personally have the will to take one or the other approach, I'd be in favor. The hot iron approach is likely to lead to a fork, which I think would be less detrimental to long-term reliability than the status quo, but more so than having a core plus extras.
* Marco Baringer [2007-04-09 18:06+0200] writes:
i'm sorry, but i fail to see how that is possible without having the editor (be it emacs or something embedded in the compiler) tell the compiler what text has been written/edited and what hasn't.
or do you mean we should save all the files and then tell the compiler to re-read them all and infer what needs to be done? if that's what you're saying then i'd still like this feature if just for the emacs buffer listing the changes before the source code (which may or may not be valid) is read/compiled.
Yes, I'm saying the compiler, not SLIME, should find changes and figure out what needs to be compiled.
the emacs side of eli keeps track of what files have been modified and passes the original and the new version to allegro who'd then do a smart diff. what i'm suggesting here is a slightly smarter version of that, why bother diff'ing if we can learn what's different as the changes get made?
Because it's more precise and needs less complicated code in Emacs. Why re-implement half of the compiler in Emacs?
As far as I can see, this feature tries to work around slow compilers. Why not fix the compiler instead?
this feature serves two purposes:
make emacs keep track of all the code i've edited since the last compile. even without actually being able to compile the code i find the list-changed-forms buffer very useful. often i'll be working on a single piece of functionality which is spread across multiple files (one file has the db class definition, another the corresponding xml-api implementation and a third has the core code itself) and it's a pain to remember all the pieces i've touched. maybe i've saved the buffers, so i can't just use buffer-modified-p, maybe i haven't saved the buffers, so 'svn stat' doesn't help, maybe i've edited the documentation and i don't care about those files. in this case it really is nice to ask emacs for a list, in a single buffer, of all the lisp forms i've been working on recently.
being able to order and compile these forms is really just icing on the cake. i could just as well walk down the buffer and hit c on each form to compile them manually in whatever order i want.
That wouldn't work for me because I press C-x C-s and C-c C-c every few seconds anyway.
- work around projects where build times are 20 minutes or more. you can only speed up the compiler so far, some projects are just big and there's nothing you can do about it.
That's what an incremental compiler is for: small changes to big programs can be compiled instantaneously.
now, having said all that, there's no way, given an arbitrary set of changed lisp forms and an arbitrary set of existing definitions in the image, to properly infer what order they need to be compiled in without compiling and executing them. this last step is going to error if you get the order wrong, and since you have to do this to get the ordering right you're screwed in the general case.
That's true, but getting the order right on a per-file basis instead of on a per-form basis doesn't sound very difficult. Most of the time, the order of the files is defined by some kind of Makefile/asdf definition. Instead of tracking forms in the editor, you could give the compiler an (ordered) list of files to watch. If one file changes the compiler could figure out which form changed and recompile only the necessary parts. (This would also allow proper error messages for those parts which haven't changed but are affected by the changes.) Writing such a compiler isn't easy, but adding those features to the editor is IMO the wrong place.
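a rough sketch of the change-finding half of that idea, assuming the compiler keeps fingerprints of the toplevel forms it last compiled (everything here is illustrative; a real implementation would have to cope with packages, reader macros, and source locations):

```lisp
(defvar *fingerprints* (make-hash-table :test #'equal)
  "Maps (FILE OPERATOR NAME) to the printed form last compiled.")

(defun changed-forms (file)
  "Return the toplevel forms in FILE that differ from the recorded ones."
  (with-open-file (in file)
    (loop for form = (read in nil 'eof)
          until (eq form 'eof)
          when (consp form)
            append (let ((key (list file (first form) (second form)))
                         (text (prin1-to-string form)))
                     (prog1 (unless (equal text (gethash key *fingerprints*))
                              (list form))
                       (setf (gethash key *fingerprints*) text))))))

;; recompiling only the changed parts, in file order:
;; (dolist (form (changed-forms "db.lisp")) (eval form))
```

note the file order comes for free here, which is the point: per-file ordering from the system definition plus per-form change detection in the compiler.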
in practice the 80% solution is helpful, but i'll drop it if it seems like bloat.
I think this is bloat. We should instead encourage Lisp implementors to write smarter compilers with proper dependency tracking. Or write a tool to properly analyze CL source code, but I doubt that you want to do that in Emacs Lisp.
A "changed definition" feature like this was discussed here before and at that time we concluded that C-c C-k is fast enough for most purposes.
Helmut.
Helmut Eller wrote:
Because it's more precise and needs less complicated code in Emacs. Why re-implement half of the compiler in Emacs?
Speaking as an SBCL developer: it makes sense for SBCL to track XREF information, and it /might/ even make sense to support "recompile definitions that depend on FOO", but I don't see any sense in SBCL scanning files and trying to figure out which bits have changed. That is the key bit of information that is naturally in the editor. Some other SBCL developer may feel differently.
Cheers,
-- Nikodemus
* Nikodemus Siivola [2007-04-09 22:01+0200] writes:
Helmut Eller wrote:
Because it's more precise and needs less complicated code in Emacs. Why re-implement half of the compiler in Emacs?
Speaking as an SBCL developer: it makes sense for SBCL to track XREF information, and it /might/ even make sense to support "recompile definitions that depend on FOO", but I don't see any sense in SBCL scanning files and trying to figure out which bits have changed.
You don't need to generate code for those parts which haven't changed. Isn't that very interesting information? Wasn't there some hack from Andreas Fuchs which used macroexpand hooks to find the changed bits? It didn't use the editor.
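a macroexpand-hook based dependency recorder might look roughly like this (*macroexpand-hook* is standard CL; the bookkeeping around it is purely illustrative, not Andreas' actual code):

```lisp
(defvar *dependencies* (make-hash-table :test #'eq)
  "Maps a definition's name to the macros expanded while compiling it.")

(defvar *current-definition* nil
  "Bound by the compilation driver around each toplevel definition.")

(defun recording-macroexpand-hook (expander form env)
  ;; record which operator got expanded inside the current definition
  (when (and *current-definition* (consp form) (symbolp (first form)))
    (pushnew (first form)
             (gethash *current-definition* *dependencies*)))
  (funcall expander form env))

(setf *macroexpand-hook* #'recording-macroexpand-hook)
```

this records dependencies (which definitions would need recompiling after a macro changes), not which source text changed; the editor-vs-compiler question is about the latter.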
That is the key bit of information that is naturally in the editor.
I don't know about Vi but many editors can't be programmed as easily as Emacs. For those editors it's much easier if the hard work is done by some external tool.
Helmut.
Helmut Eller wrote:
- Nikodemus Siivola [2007-04-09 22:01+0200] writes:
You don't need to generate code for those parts which haven't changed. Isn't that very interesting information? Wasn't there some hack from Andreas Fuchs which used macroexpand hooks to find the changed bits? It didn't use the editor.
Andreas' hack was a source-level dependency groveler, not a change finder. (At least I am not aware of any change-finder by him or anyone else.) It does essentially what XREF does.
To find changes the compiler would need to save a private copy of the original source, which is not a trivial expense for large systems.
That is the key bit of information that is naturally in the editor.
I don't know about Vi but many editors can't be programmed as easily as Emacs. For those editors it's much easier if the hard work is done by some external tool.
True, but the editor is still much closer to the information "which definitions have changed" than the compiler, and if the files have not been saved the editor is the only place where that information can be.
At any rate, I think moving stuff to contribs/plugins is a swell idea.
Cheers,
-- Nikodemus
With the speed of modern processors, I have a hard time believing that full recompilation of a few modified files is a speed issue.
I have a Lisp application consisting of approximately 100K lines (3.7MB) source code which can be completely recompiled in:
35 secs with Allegro Common Lisp 7.0
60 secs with CMUCL 19d
(AMD Athlon(tm) 64 X2 Dual Core Processor 4600+)
Typically when developing code, individual functions are incrementally recompiled instantly, and the full recompilation is not needed.
Unless macro or struct definitions that are used by many files are being changed frequently, there is rarely a need for full recompilation.
Helmut Eller wrote:
That wouldn't work for me because I press C-x C-s and C-c C-c every few seconds anyway.
- work around projects where build times are 20 minutes or more. you can only speed up the compiler so far, some projects are just big and there's nothing you can do about it.
That's what an incremental compiler is for: small changes to big programs can be compiled instantaneously.
now, having said all that there's no way, given an arbitrary set of changed lisp forms and an arbitrary set of existing definitions in the image, to properly infer what order they need to be compiled in without compiling and executing them. this last step is going to error if you get the order wrong, since you have to this to get the ordering right you're screwed in the general case.
That's true, but getting the order right on a per-file basis instead of on a per-form basis doesn't sound very difficult. Most of the time, the order of the files is defined by some kind of Makefile/asdf definition. Instead of tracking forms in the editor, you could give the compiler an (ordered) list of files to watch. If one file changes the compiler could figure out which form changed and recompile only the necessary parts. (This would also allow proper error messages for those parts which haven't changed but are affected by the changes.) Writing such a compiler isn't easy, but adding those features to the editor is IMO the wrong place.
On Mon, 9 Apr 2007 21:18:12 -0700, Lynn Quam quam@ai.sri.com wrote:
With the speed of modern processors, I have a hard time believing that full recompilation of a few modified files is a speed issue.
I have a Lisp application consisting of approximately 100K lines (3.7MB) source code which can be completely recompiled in:
35 secs with Allegro Common Lisp 7.0
60 secs with CMUCL 19d
(AMD Athlon(tm) 64 X2 Dual Core Processor 4600+)
Good for you, but Marco and I are both working on a project where compiling the whole system takes more than ten minutes on recent 64-bit machines (using SBCL).