Luke Gorrie wrote:
> Dirk Gerrits <dirk@dirkgerrits.com> writes:
>
>> I'm not sure if this is appropriate for this mailing list, but I was wondering if and how people are using SLIME to test their programs. Looking at the SLIME manual, there is nothing that really jumps out at me as a program-testing tool.
>
> If you haven't already then you might like to look at: [snipped]
These are certainly interesting, but I wasn't really asking what testing framework I should use. (We have http://www.cliki.net/Test%20Framework for that. ;))
My problem is how to use any such testing framework /effectively/. The style of writing a bit of code in the REPL, trying it out, and copying it to my file, editing it a bit, trying it out in the REPL again... is comfortable for me.
When writing tests, you have to compare the results you get with the results you expect, but sometimes I'm not so sure about the latter. The REPL approach lets me actually get the result, and then argue about whether it is correct or not. I suppose I could then copy this result, make it into a test case in my file, and stop whining on this list, but somehow this is already too much for me. (Talk about commitment to testing, eh? ;))
Perhaps I should look into SLIME's internals and hack up a command to make the last REPL evaluation into a test case, possibly with some hooks to make it work for any given testing framework.
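In the meantime, here's a rough sketch of the idea in plain CL, no SLIME hacking required. It leans on the standard REPL variables + (the form most recently evaluated) and * (its value); the function name and the use of ASSERT/EQUAL are just my placeholders, and a real version would emit whatever form your testing framework of choice expects:

(defun last-eval-to-test (&optional (stream *standard-output*))
  "Print the previous REPL interaction as a pasteable test form.
At the REPL, + holds the form evaluated just before this call and
* holds its primary value, so this captures that form/result pair."
  (let ((form +)        ; the form you just evaluated
        (result *))     ; the value it returned
    (format stream "~&(assert (equal ~S '~S))~%" form result)))

CL-USER> (string-upcase "slime")
"SLIME"
CL-USER> (last-eval-to-test)
(ASSERT (EQUAL (STRING-UPCASE "slime") '"SLIME"))
NIL

Paste the printed ASSERT into your test file and you have a (crude) regression test; a proper SLIME command could do the same from the elisp side and insert the form straight into the source buffer.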
> I always go ad-hoc myself because, frankly, I don't take testing as seriously as I should :-).
Well, it's not really our fault; Common Lisp makes ad-hoc testing so damn easy. :) It's not as if C++ has a REPL. (Well, I guess there's http://home.mweb.co.za/sd/sdonovan/underc.html, but let's not digress into such madness.)
> In SLIME we have a homebrew elisp-driven testing framework on `M-x slime-run-tests' that presents its results in an outline-mode buffer.
Nice: something like the compilation notes buffer, but for test results. That would be nice to have for programs created /with/ SLIME too. Hmm, more hacking to do. :)
> I have a CL program for encoding and decoding TCP/IP packets that tests itself. I did that ad-hoc, but now that I think about it, that might be a good application for ClickCheck. http://www.hexapodia.net/pipermail/small-cl-src/2004-July/000030.html
Yes, ClickCheck seems like a very nice idea. I'm just a bit worried that by using a different random seed each time, you may be 'losing' test cases that may then 'resurface' at any future moment, when it may be harder to know what caused the error. But then, such a case might be one you wouldn't have written using a non-random approach... And with the same random seed every time, you're just getting a static set of test cases, one that may not include the critical cases, which you'd then have to add yourself... Let's say that I'm intrigued but skeptical. :)
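For what it's worth, the losing-test-cases worry can be softened by snapshotting the random state before each run, so a failing run can be replayed exactly. A minimal sketch of what I mean (RANDOM-PACKET, ENCODE-PACKET, and DECODE-PACKET are hypothetical stand-ins for your packet code, not ClickCheck's actual API):

(defun run-random-tests (test generator &key (trials 100)
                              (state (make-random-state t)))
  "Call TEST on TRIALS inputs produced by GENERATOR, drawing
randomness from STATE.  Returns a copy of the initial state, so a
failing run can be replayed by passing that copy back as :STATE."
  (let ((snapshot (make-random-state state))) ; copy before STATE is consumed
    (dotimes (i trials snapshot)
      (let ((input (funcall generator state)))
        (unless (funcall test input)
          (format t "~&Trial ~D failed on input:~%~S~%" i input))))))

;; The round-trip property for the packet code:
;; (run-random-tests (lambda (p)
;;                     (equalp p (decode-packet (encode-packet p))))
;;                   #'random-packet)

That way you get a fresh seed (and thus fresh cases) on every run, but when something does fail you can rerun the exact same sequence of inputs instead of hoping it resurfaces. The truly critical cases you'd still write by hand, of course.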
Regards,
Dirk Gerrits