The subunit protocol is a report format for unit test results. The aim is to make it easier to use different unit testing implementations (perhaps in different languages) while sharing a single output format and common tools for generating statistics, controlling the unit tests, etc.
This page is currently far from even being a draft. It contains a few notes but needs to be extended and reworked into a proper spec. -- jelmer
stderr can be used for reporting comments.
stdout is used both for printing comments and for reporting status information.
Lines on stdout that can be successfully parsed using the protocol should be considered status information. All other lines should be considered comments.
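As a minimal sketch of this rule in Python (the keyword set follows the commands described below; the helper name is made up):

    import re

    # Status lines start with one of the protocol keywords followed by ': '.
    # Everything else on stdout is treated as a comment.
    _STATUS_LINE = re.compile(r'^(test|success|failure|error|skip|notsupported): ')

    def is_status_line(line):
        """Return True if LINE parses as protocol status information."""
        return _STATUS_LINE.match(line) is not None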
The program needs to announce that a test is being started. This is done by writing a single line to stdout in the following format:

test: NAME
NAME needs to be a unique identifier specific to a particular test.
NAME can be any valid UTF-8 string. A newline ends the test name.
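A minimal emitter for this announcement might look like the following Python sketch (the helper name is hypothetical):

    import sys

    def start_test(name):
        """Announce on stdout that the test NAME is being started."""
        # NAME may be any valid UTF-8 string; the newline ends it.
        sys.stdout.write('test: %s\n' % name)
        sys.stdout.flush()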
Q: Can multiple tests be running simultaneously? If so, which test would the comments apply to?
The result of a test is reported in a fashion similar to announcing a test.
If the test succeeded, the following line is written to stdout:

success: NAME
If one of the checks in the test failed, the following line is written:

failure: NAME
Optionally, a description of the error that occurred can be reported by adding an opening bracket ([) as the last character on the line. The following lines can then contain a description of the error until a line containing a single closing bracket (]) is encountered.
failure: NAME [
DESCR1
DESCR2
...
]
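A sketch of how a producer might emit a failure with an optional description block, assuming a hypothetical helper name:

    import sys

    def report_failure(name, details=None):
        """Report that test NAME failed, with an optional description."""
        if details is None:
            sys.stdout.write('failure: %s\n' % name)
        else:
            # A trailing '[' opens the description block; a line holding
            # a single ']' closes it.
            sys.stdout.write('failure: %s [\n' % name)
            for line in details:
                sys.stdout.write('%s\n' % line)
            sys.stdout.write(']\n')
        sys.stdout.flush()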
If something unexpected happened when running the test, an error is reported. For example, if the test segfaulted. The format for an error is similar to that of a failure, except that the keyword 'error' is used:
error: NAME [
DESCR1
DESCR2
...
]
If the test couldn't be run because of missing dependencies, it is reported that the test has been skipped. The format is similar to that of 'failure', except that the keyword 'skip' is used. An example reason for a skipped test would be a missing library that is required for the test to run.
If the test couldn't be run at all (and this can't be fixed by the user in the current environment), a line similar to 'failure' should be printed, but with the keyword 'notsupported'.
Tests marked 'notsupported' are not counted as failed, as opposed to the previous three test results (failure, error and skip).
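To illustrate this counting rule, a minimal Python sketch of a consumer that tallies result lines (all names are hypothetical; description blocks are skipped so their contents aren't miscounted):

    def tally_results(lines):
        """Tally result lines; return (counts, number counted as failed)."""
        counts = {'success': 0, 'failure': 0, 'error': 0,
                  'skip': 0, 'notsupported': 0}
        in_details = False
        for line in lines:
            line = line.rstrip('\n')
            if in_details:
                # Skip over a [...] description block.
                if line == ']':
                    in_details = False
                continue
            keyword, sep, rest = line.partition(': ')
            if sep and keyword in counts:
                counts[keyword] += 1
                if rest.endswith('['):
                    in_details = True
        # failure, error and skip count as failed; notsupported does not.
        failed = counts['failure'] + counts['error'] + counts['skip']
        return counts, failed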
FIXME: what about 'knownfailure'? I think knowing which tests are expected to fail is a task of the test environment, so I don't think there should be a command like that.
Should be optional.
- I have chosen to eliminate the long list of alternatives that the Python implementation accepts for commands (succes, suces, ok, etc.) so that it's easier for different implementations to comply with a standard.