From 87c2a9b5a50250a051ed83560fd2a9bd6bb915a9 Mon Sep 17 00:00:00 2001
From: Renato Alves
Date: Wed, 21 Jan 2015 12:13:19 +0000
Subject: [PATCH] Unittest - README clarifications

* Add one-line command to run all tests
* Clarify that only tests ending in .t with the execute bit set are run
---
 test/README | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/test/README b/test/README
index c268ead89..8008f45aa 100644
--- a/test/README
+++ b/test/README
@@ -8,10 +8,17 @@ test suite.
 Running Tests
 -------------
 
-All unit tests produce TAP output, and are run by the 'run_all' test harness.
+TL;DR: cd test && make && ./run_all && ./problems
+
+All unit tests produce TAP (Test Anything Protocol) output, and are run by the
+'run_all' test harness.
+
 The 'run_all' script produces an 'all.log' file which is the accumulated output
-of all tests. The script 'problems' will list all the tests that fail, with a
-count of the failing tests.
+of all tests. Before executing 'run_all' you need to compile the C++ unit
+tests, by running 'make' in the 'test' directory.
+
+The script 'problems' will list all the tests that fail, with a count of the
+failing tests.
 
 Any TAP harness may be used.
 
@@ -33,10 +40,12 @@ There are three varieties of tests:
   to perform various high level tests.
 
 All tests are named with the pattern '*.t', and any other forms are not run by
-the test harness. This allows us to rename tests (foo.t --> foo.x) to ensure
-that they are not run. Sometimes tests are submitted for bugs that are not
-scheduled to be fixed in the upcoming release, and we don't want the failing
-tests to prevent us from seeing 100% pass rate for the bugs we *have* fixed.
+the test harness. Additionally, a test must have its executable bit set
+(chmod +x) to be run. Perl and Python tests can still be run manually with
+'perl test.t' or 'python test.t'. This also lets us keep tests submitted for
+bugs that are not scheduled to be fixed in the upcoming release, without the
+failing tests preventing us from seeing a 100% pass rate for the bugs we
+*have* fixed.
 
 Goals