On Wed, 11 Jan 2012 09:01:19 +0400, Anton Vodonosov said:
On 01/10/2012 04:28 AM, Luís Oliveira wrote:
I may not have been consistent in my usage of marking expected failures, but they mark known bugs unlikely to be fixed in the short term, either in CFFI or in the Lisp implementation. ... In terms of notifications, I would rather be warned about new failures. In terms of a summary, I'd like to see the results broken down into OK, FAIL, and KNOWNFAIL.
10.01.2012, 23:12, "Jeff Cunningham" jeffrey@jkcunningham.com:
How about OK, FAIL, UNEXPECTEDOK, and EXPECTEDFAIL? You have to consider the case where one expects a failure but it passes, too.
I think this is rather theoretical. If no test framework provides a notion of UNEXPECTEDOK, that suggests it has not been needed in regression testing practice.
I am even reluctant about EXPECTEDFAIL, because the term is contradictory and its meaning is confusing rather than obvious.
If we take into account that test results are seen not only by developers but also by library users, we can imagine a user seeing EXPECTEDFAIL and asking himself: "Expected FAIL... Is that OK? Can I use the library?"
But I see that several regression testing frameworks provide a notion of expected failures and developers use it.
And now I understand the goal - to simplify detection of _new_ regressions.
Therefore I think I will introduce an expected-failure status in cl-test-grid in the near future.
FWIW, our internal test harness uses the term "known" rather than "expected" for this situation.
The per-release/per-platform list of known failures is kept separate from the tests themselves, which allows the test harness to report success as long as the set of actual failures matches the known set.
__Martin
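
To illustrate the scheme Martin describes, here is a minimal Common Lisp sketch of checking a run against a separately maintained known-failures list. All names below (*known-failures*, known-failure-p, classify, run-summary) and the sample entries are hypothetical; this is not cl-test-grid's API nor the internal harness Martin mentions.

(defparameter *known-failures*
  ;; Hypothetical per-release/per-platform list, kept apart from the
  ;; tests themselves.  Entries are property lists.
  '((:test foreign-enum.1 :lisp :clisp :platform :win32)
    (:test callbacks.7    :lisp :abcl  :platform :linux)))

(defun known-failure-p (test lisp platform)
  "True if TEST is expected to fail on this LISP/PLATFORM combination."
  (find-if (lambda (entry)
             (and (eq (getf entry :test) test)
                  (eq (getf entry :lisp) lisp)
                  (eq (getf entry :platform) platform)))
           *known-failures*))

(defun classify (test passed-p lisp platform)
  "Return :OK, :FAIL or :KNOWN-FAIL for a single test result."
  (cond (passed-p :ok)   ; a pass that is on the list could additionally be
                         ; reported as Jeff's UNEXPECTEDOK, to prune stale entries
        ((known-failure-p test lisp platform) :known-fail)
        (t :fail)))

(defun run-summary (results lisp platform)
  "RESULTS is a list of (test-name . passed-p) pairs.  Returns T when the
run succeeds, i.e. when every failure is on the known-failures list,
plus the list of individual statuses."
  (let ((statuses (mapcar (lambda (result)
                            (classify (car result) (cdr result) lisp platform))
                          results)))
    (values (notany (lambda (status) (eq status :fail)) statuses)
            statuses)))

With this split, updating the expectations for a new release or platform only touches *known-failures*, while the summary keeps reporting OK/FAIL/KNOWNFAIL and the harness can still treat a run with only known failures as a success.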