Hello João,
I tried to run the slime test suite this morning to see what results it produced. I'm uncertain what the proper procedure is for running the test suite, but this is what I did. I made the test suite available by adding "slime-tests" to the (slime-setup '( < other contribs > slime-tests)) form in my ".emacs" file, which seemed to work.
I started emacs, did M-x slime, then did M-x slime-batch-test, which ran the test suite after asking me if I wanted to start a second inferior lisp (I tried it both with and without). The results came out with 17 unexpected failures, and I have attached a file, "slime-batch-test-results.txt", containing the contents of the *ert* buffer from emacs.
For reference, I'm running Linux Mint 14 with KDE-4.9.5 and sbcl-1.1.14 and emacs-23.4.1.
Am I using the proper procedure to run the slime test suite? In the past when I ran the test suite, it generally finished with only one expected failure and no unexpected failures. Should I expect more failures in the slime test suite with the github version of slime?
Thanks,
Paul Bowyer
Paul Bowyer pbowyer@olynet.com writes:
I tried to run the slime test suite this morning to see what results it produced. I'm uncertain what the proper procedure is for running the test suite, but this is what I did. I made the test suite available by including "slime-tests" in my ".emacs" file in the (slime-setup '( < other contribs > slime-tests)), which seemed to work.
The way to run the tests *in the current emacs session* is to add slime-tests.el's directory to the load path (which is now the default) and then use:
(require 'slime-tests)
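For reference, a minimal .emacs sketch of that setup might look like the following (the checkout path is an assumption; adjust it to wherever your slime clone lives):

    ;; Path to your slime checkout -- an assumption, adjust to your setup
    (add-to-list 'load-path "~/src/slime")
    (require 'slime)
    (require 'slime-tests)  ; defines the ERT tests for this session

Once slime-tests is loaded, the tests are plain ERT tests, so M-x ert will let you run and inspect them interactively.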
So you got lucky, because slime-setup does this, though doing that doesn't really make sense, since the tests aren't a contrib.
Anyway, I prefer to run the tests non-interactively from the command line, and then debug individual tests inside an interactive session.
To do so, go into your slime dir and type
$ make clean check
This will clean any garbage, recompile the emacs lisp files, and check just the slime "core" (i.e. no contribs). "make check-fancy" will check just the slime-fancy meta-contrib (i.e. its sub-contribs), and "make check-foo" will check the "foo" contrib run standalone.
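Concretely, the shell session might look like this (the checkout path is an assumption, and "foo" stands in for an actual contrib name):

    $ cd ~/src/slime       # your slime checkout (path is an assumption)
    $ make clean check     # core tests only, no contribs
    $ make check-fancy     # slime-fancy meta-contrib and its sub-contribs
    $ make check-foo       # a single contrib "foo", run standalone

Each target reports its results on stdout, so failures are easy to spot without digging through an *ert* buffer.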
"slime-batch-test-results.txt" which contains the contents of the *ert* buffer from emacs.
I'll have a look. Luís is probably right that most of the failures are autodoc failures in the latest slime; the autodoc tests are somewhat brittle.
João
On Fri, Feb 7, 2014 at 3:01 PM, Paul Bowyer pbowyer@olynet.com wrote:
For reference, I'm running Linux Mint 14 with KDE-4.9.5 and sbcl-1.1.14 and emacs-23.4.1.
FWIW, the autodoc failures are related to recent SBCL versions. Not sure about the other ones.
On 02/08/2014 08:42 AM, Luís Oliveira wrote:
FWIW, the autodoc failures are related to recent SBCL versions. Not sure about the other ones.
Hello Luís and João,
Thanks for the answers. I wasn't certain what I was running into with all of the errors, but now I see that running the test suite from within emacs is not the best plan. I tried the methods João suggested, and they worked well, giving me results with no unexpected failures.
Paul Bowyer