* mtest: fix TAP with --verbose
TAP needs to process the test stdout even if --verbose is passed.
Capture it to a separate temporary file, and print it at the end
of the test if --verbose was passed.
In the future, we could parse it on the fly and print the result of
each TAP test point in verbose mode.
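A minimal sketch of the intended flow (function and variable names are illustrative, not the actual mtest code):

    import subprocess
    import tempfile

    def run_tap_test(cmd, verbose=False):
        # Capture stdout in a temp file so the TAP output can still be
        # processed, even when --verbose is passed.
        with tempfile.TemporaryFile(mode='w+', encoding='utf-8') as out:
            subprocess.run(cmd, stdout=out, stderr=subprocess.STDOUT)
            out.seek(0)
            stdo = out.read()
        if verbose:
            # Replay the captured output at the end of the test.
            print(stdo, end='')
        return stdo   # handed to the TAP parser afterwards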
* Prefer "stderr is stdout" to "=="
The previous commit used "==" in accordance with the preexisting code,
but reviewers preferred using "is" instead. Fix both occurrences.
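For context, the check compares two file objects; a trivial illustration of the preferred spelling:

    import sys

    stdo = sys.stdout
    stde = sys.stdout          # stderr redirected into the same stream

    # Identity check preferred by the reviewers: both names refer to the
    # same object, so 'is' states the intent more precisely than '=='.
    assert stde is stdo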
This provides initial support for parsing TAP output. It detects failures
and skipped tests without relying on the exit code, as well as early termination
of the test due to an error or a crash.
For now, subtests are not recorded in the TestRun object. However, because the
TAP output goes on stdout, it is printed by --print-errorlogs when a test does
not behave as expected. Handling subtests as TestRuns, and serializing them
to JSON, can be added later.
The parser was written specifically for Meson, and comes with its own
test suite.
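As a rough illustration of the idea, here is a minimal sketch of TAP test-point detection (this is not the Meson parser itself, which also handles plans, directives and bail-out lines):

    import re

    # Matches TAP test points such as "ok 1 - name" or "not ok 2 # SKIP why".
    TEST_POINT = re.compile(r'^(not )?ok\b[^#]*(?:#\s*(\w+))?')

    def classify(line):
        m = TEST_POINT.match(line)
        if m is None:
            return None
        if m.group(2) and m.group(2).upper() == 'SKIP':
            return 'SKIP'
        return 'FAIL' if m.group(1) else 'PASS'

    assert classify('ok 1 - basic') == 'PASS'
    assert classify('not ok 2 - broken') == 'FAIL'
    assert classify('ok 3 # SKIP not supported here') == 'SKIP'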
Fixes #2923.
Hard errors also come from the GNU Automake test protocol. They happen when,
e.g., the set-up of a test case scenario fails, or when some
other unexpected or highly undesirable condition is encountered.
TAP will use them for parse errors too. Add them to the exitcode protocol
first.
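For the exitcode protocol this maps onto the conventional Automake exit statuses; a sketch of the mapping (the result names are illustrative):

    def interpret_exitcode(returncode):
        # GNU Automake conventions: 77 means the test was skipped and
        # 99 means a hard error; any other non-zero status is a plain failure.
        if returncode == 0:
            return 'OK'
        if returncode == 77:
            return 'SKIP'
        if returncode == 99:
            return 'ERROR'
        return 'FAIL'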
--print-errorlogs is using the test's return code to look for failed
tests, instead of just looking at the TestResult. Simplify the code and
make it work for TAP too.
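Roughly, the selection then only needs to inspect the recorded result; a sketch (the result names are illustrative, not the actual TestResult members):

    def should_print_log(result):
        # Decide from the stored result alone, so it also works for TAP,
        # where a failed test point does not have to change the exit code.
        return result in ('FAIL', 'TIMEOUT', 'ERROR')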
If the global gdb option of mesontest is disabled (i.e. '--gdb' was not passed)
and the gdb option of test_setup is enabled, an exception is thrown,
because the signal.signal function can only be called from the main thread;
attempting to call it from any other thread raises a ValueError.
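The underlying restriction is easy to demonstrate on its own (a standalone illustration, not part of the mtest code):

    import signal
    import threading

    def install_handler():
        try:
            # Only the main thread of the main interpreter may set signal handlers.
            signal.signal(signal.SIGINT, signal.SIG_IGN)
        except ValueError as e:
            print('cannot install a handler from a worker thread:', e)

    t = threading.Thread(target=install_handler)
    t.start()
    t.join()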
When Python sees an invalid character in a filename for the current locale,
instead of clobbering it, it saves it as an invalid codepoint called a
surrogate. We need to explicitly instruct the encoder to write those out
as-is. In the JSON file, we replace them instead to produce valid JSON.
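A small standalone illustration of the two strategies (the file names here are made up):

    import json

    # A byte that is invalid in UTF-8 becomes a lone surrogate when decoded
    # with the 'surrogateescape' error handler.
    name = b'caf\xe9'.decode('utf-8', 'surrogateescape')   # 'caf\udce9'

    # Plain-text log: write the surrogate back out as the original byte.
    with open('testlog.txt', 'w', encoding='utf-8', errors='surrogateescape') as f:
        f.write(name + '\n')

    # JSON log: lone surrogates are not valid JSON, so replace them instead.
    safe = name.encode('utf-8', 'replace').decode('utf-8')
    with open('testlog.json', 'w', encoding='utf-8') as f:
        json.dump({'name': safe}, f)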
$ flake8
./mesonbuild/mtest.py:524:9: E122 continuation line missing indentation or outdented
Per PEP 8, this line requires more indentation to distinguish it from the
following line.
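For reference, E122 is reported when a hanging continuation line is not indented at all, for example:

    # Flagged (E122): the continuation line is not indented, so it is hard
    # to tell apart from the statement that follows it.
    result = max(
    1, 2)

    # Accepted: the continuation is indented past the opening line.
    result = max(
        1, 2)
    print(result)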
This makes it clear in the results that tests marked "should_fail"
exist. We also avoid the all-caps output and make the classifications
unambiguous compared to pytest or autotools' XFAIL/XPASS.
Before:
OK: 329
FAIL: 1
SKIP: 0
TIMEOUT: 0
After:
Ok: 323
Expected Fail: 1
Fail: 6
Unexpected Pass: 0
Skipped: 0
Timeout: 0
As instructed in the Python docs, you should not use PIPE here: with massive
testsuite output it can lead to deadlocks, which was the case for efl.
For now the output of the tests is redirected into a temp file; the
content from there can then be used to fill the TestRun structure.
This fixes test running problems in efl.
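A sketch of the difference (simplified; the real code also deals with stderr, timeouts and result parsing):

    import subprocess
    import tempfile

    # Risky with large output: the pipe buffer fills up while we wait(),
    # the child blocks on write, and both processes deadlock.
    #     p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    #     p.wait()

    def run_without_pipe(cmd):
        # Let the child write to a temp file instead, then read it back
        # to fill the TestRun structure.
        with tempfile.TemporaryFile() as out:
            subprocess.run(cmd, stdout=out, stderr=subprocess.STDOUT)
            out.seek(0)
            return out.read().decode(errors='replace')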
This has the advantage that "meson --help" shows a list of all commands,
making them discoverable. This also reduces the manual parsing of
arguments to the strict minimum needed for backward compatibility.
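The mechanism is standard argparse sub-command support; a minimal illustration (not the actual meson argument definitions):

    import argparse

    # Registering each command as a subparser makes "--help" list them all.
    parser = argparse.ArgumentParser(prog='meson')
    subparsers = parser.add_subparsers(dest='command')
    test_parser = subparsers.add_parser('test', help='run tests')
    test_parser.add_argument('-C', dest='wd', default='.', help='build directory')

    args = parser.parse_args(['test', '-C', 'builddir'])
    print(args.command, args.wd)    # -> test builddir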
We used to immediately try to use whatever exe_wrapper was defined in
the cross file, but some people generate the cross file once and use
it for several projects, most of which do not even need an exe wrapper
to build.
Now we're a bit more resilient. We quietly fall back to using
non-exe-wrapper paths for compiler checks and skip the sanity check.
However, if some code needs the exe wrapper, e.g., if you run a built
executable using custom_target() or run_target(), we will error out
during setup.
Tests will, of course, continue to error out when you run them if the
exe wrapper was not found. We don't want people's tests to silently
"pass" (aka skip) because of a bad CI setup.
Closes https://github.com/mesonbuild/meson/issues/3562
This commit also adds a test for the behaviour of exe_wrapper in these
cases, and refactors the unit tests a bit for it.
We already have code to fetch and find binaries specified in a cross
file, so use the same code for exe_wrapper. This allows us to handle
the same corner-cases that were fixed for other cross binaries.
When a test fails due to a signal (e.g., SIGSEGV) it can be somewhat
mysterious why the test failed. Also, even when a test fails due to a
non-zero exit status it would help if the exit status was reported. This
augments the result string to include the non-zero exit status or
signal number and name.
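A sketch of how such a suffix can be derived from the return code (the exact wording in mtest may differ):

    import signal

    def describe_returncode(returncode):
        # subprocess reports death by signal as a negative return code;
        # include the signal number and name, otherwise the exit status.
        if returncode < 0:
            signum = -returncode
            try:
                signame = signal.Signals(signum).name
            except ValueError:
                signame = 'SIG?'
            return 'killed by signal %d (%s)' % (signum, signame)
        return 'exit status %d' % returncode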
Resolves #3642
This allows people to run tests with wine when the exe runner is
`wine`, `wine32`, `wine64`, etc.
Note that you also have to set WINEPATH to point to your custom
prefix(es) if your tests use external dependencies.
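A rough sketch of the kind of detection involved (names and details are illustrative, not the actual Meson code):

    import os

    def is_wine_wrapper(exe_wrapper):
        # Matches wrappers named wine, wine32, wine64, wine-development, ...
        if not exe_wrapper:
            return False
        return os.path.basename(exe_wrapper[0]).startswith('wine')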
Closes https://github.com/mesonbuild/meson/issues/3620
Replace the logic where a test setup with no project specifier defaults to
the main project with one that takes the test setup from the same
(sub)project that the to-be-executed test has been read from.
Use a $project_name:$test_setup namespace scheme for test setups. This
allows one to choose which (sub)project a test setup is taken from
should there be several sharing the same name. Defaults to the main
project. E.g. "meson test --setup subproj:valgrind".
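Splitting the setup name is straightforward; a sketch (how the bare form is resolved is described above):

    def split_setup_name(name):
        # '--setup subproj:valgrind' -> ('subproj', 'valgrind')
        # '--setup valgrind'         -> (None, 'valgrind'), resolved per the
        # default rules described above.
        if ':' in name:
            project, setup = name.split(':', 1)
            return project, setup
        return None, name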
Setting MALLOC_PERTURB_ to a non-zero value is fine for regular test
cases. It helps catch bugs, but also comes with some runtime
overhead.
This overhead is noticeable for benchmarks when compared to running them
directly instead of through Meson.
Therefore, MALLOC_PERTURB_ is not touched for benchmarks.
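A sketch of the resulting behaviour (close in spirit to the actual mtest code, but simplified):

    import os
    import random

    def base_test_env(is_benchmark):
        env = os.environ.copy()
        if not is_benchmark:
            # A random non-zero value makes glibc poison freed memory,
            # which helps catch use-after-free bugs in regular tests.
            env['MALLOC_PERTURB_'] = str(random.randint(1, 255))
        return env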
Closes #3034
According to the Python documentation [1], dirname and basename
are defined as follows:
os.path.dirname(path) == os.path.split(path)[0]
os.path.basename(path) == os.path.split(path)[1]
For the sake of readability, split() is replaced by the
appropriate function wherever only one part of the returned
tuple is used.
[1]: https://docs.python.org/3/library/os.path.html#os.path.split
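In practice the replacement is purely mechanical; a small example (the file name is made up):

    import os

    path = 'subdir/test-output.log'

    # Before: split() is called only to use one element of the tuple.
    logdir = os.path.split(path)[0]
    logname = os.path.split(path)[1]

    # After: the intent is stated directly.
    assert os.path.dirname(path) == logdir
    assert os.path.basename(path) == logname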
When `ninja -C builddir/ test` is run, ninja will change into the build
dir before starting, but `meson test -C builddir/` does not. This is
important because meson does not use (for good reasons) absolute paths,
which means that if a test case needs to be passed a file name that is
part of the build as an argument, that path will be relative to the build
dir. Without changing into the build dir the path will not exist (or worse,
point at the wrong thing), and the test will not behave as intended.
To fix this, mtest changes into the build dir before starting the tests,
and changes back after all tests have finished.
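A sketch of the intended behaviour (simplified, not the exact mtest code):

    import os

    def run_tests_in(builddir, run_all_tests):
        # Relative paths in test arguments resolve against the build dir,
        # so switch into it for the run and restore the old cwd afterwards.
        oldcwd = os.getcwd()
        os.chdir(builddir)
        try:
            return run_all_tests()
        finally:
            os.chdir(oldcwd)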
Fixes #2710