Just reuse the collected_failures collection now that it contains
TestRun objects. Move the code to generate the short form of the log
to TestRun.
Note that the first line of the error log is not included in
get_log()'s return value, so the magic "first four lines are passed
unscathed" is changed to three lines only. The resulting output is
like this:
--- command ---
<command line>
--- Listing only the last 100 lines from a long log. ---
--- stdout ---
...
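The trimming itself is simple; here is a minimal sketch of the idea (the function and parameter names are illustrative, not the actual TestRun API):

```python
def shorten_log(full_log: str, prefix_lines: int = 3, tail_lines: int = 100) -> str:
    """Pass the first few header lines through unscathed and keep only the
    last tail_lines lines of the rest, with a marker in between."""
    lines = full_log.splitlines()
    if len(lines) <= prefix_lines + tail_lines:
        return full_log
    marker = '--- Listing only the last {} lines from a long log. ---'.format(tail_lines)
    return '\n'.join(lines[:prefix_lines] + [marker] + lines[-tail_lines:])
```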
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If there's a UnicodeEncodeError while printing the error logs,
TestHarness tries an encode/decode pair to get rid of anything that
is not a 7-bit ASCII character; however, this results in "?" characters
that are not very clear. To make it easier to understand what is
going on, use backslashreplace instead.
While at it, fix the decode to use a matching encoding. This will only
make a difference in the rare case of sys.stdout.encoding not being an
ASCII superset, and that case should not matter in practice.
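The fallback amounts to something like this (a sketch: print_safe and the exact call site are made up, only the backslashreplace/matching-encoding pattern is the point):

```python
import sys

def print_safe(text: str) -> None:
    try:
        print(text)
    except UnicodeEncodeError:
        encoding = sys.stdout.encoding or 'utf-8'
        # backslashreplace keeps the offending characters visible as \xNN /
        # \uNNNN escapes instead of turning them into '?', and the decode
        # uses the same encoding that was used to encode.
        print(text.encode(encoding, errors='backslashreplace').decode(encoding))
```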
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of colorizing the whole status line, only colorize the word
representing the outcome of the test (SKIP, OK, FAIL, etc.). Because this
is less intrusive, the patch also makes the following changes:
- colorize OK and EXPECTEDFAIL as green and yellow respectively
- colorize the summary of failures as well.
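Schematically, only the outcome word gets wrapped in a color escape (raw ANSI codes and a made-up helper purely for illustration; mtest actually goes through mlog's color decorators, which also handle non-tty output):

```python
GREEN, YELLOW, RED, RESET = '\033[32m', '\033[33m', '\033[31m', '\033[0m'
RESULT_COLORS = {'OK': GREEN, 'EXPECTEDFAIL': YELLOW, 'FAIL': RED}

def format_status_line(name: str, result: str, duration: float) -> str:
    # Colorize only the result word; the rest of the line stays plain.
    colored = RESULT_COLORS.get(result, '') + result + RESET
    return '{:<60} {} {:.2f}s'.format(name, colored, duration)
```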
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of storing the string, store the whole TestRun. In the
next patches we'll use this to colorize the summary of failures,
and to allow a few more simplifications.
There is some code duplication between the console and logfile
code, but it won't matter once console and logfile output are
in two completely separate classes.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Avoid passing them around as parameters; this will be useful when logging
is moved out of TestHarness, because individual loggers will call back
into TestHarness to do common formatting chores.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Place in TestRun everything that is needed in order to
format the result. This avoids passing around the number
and visible test name as arguments.
Test numbers are assigned the first time they are used.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This will provide a way to pass more information from the TestHarness
local variables to the SingleTestRunner and to use it outside the
run_test function. For example, the name could be used to report
progress while the tests are running.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It is a common workflow to fix something and then rerun a particular test to
see whether the fix worked. When tests become numerous, it is time consuming
for "meson test" to relink all of them (and in fact rebuild the whole project)
even though the user has already specified the tests they want to run, and
hence which dependencies are actually needed.
Teach meson to be smart and only build what is needed for the tests (or suites)
that were specified.
Fixes: #7473
Related: #7830
Avoid calling self.collected_failures.append twice, and avoid
inflated indentation by adding a "plain" decorator to mlog.
Fixes: ba71fde18 ("mtest: collect failures regardless of colorized console", 2020-10-12)
Rewrite the SingleTestRunner to use asyncio to manage subprocesses,
while still using subprocess.Popen to run them. Concurrency is
managed with an asyncio Semaphore; for simplicity (since this is
a temporary state) we create a new thread for each test that is run
instead of having a pool.
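The pattern looks roughly like this (heavily simplified relative to the real SingleTestRunner; all names here are illustrative):

```python
import asyncio
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_blocking(cmd):
    # The test is still launched with subprocess.Popen; only the scheduling
    # around it moves to asyncio.
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate()
    return p.returncode, stdout, stderr

async def run_one(cmd, semaphore):
    async with semaphore:  # at most --num-processes tests at a time
        loop = asyncio.get_running_loop()
        # Temporary simplification: a fresh single-worker executor (i.e. a
        # new thread) per test rather than a shared pool.
        with ThreadPoolExecutor(max_workers=1) as one_thread:
            return await loop.run_in_executor(one_thread, run_blocking, cmd)

async def run_all(commands, num_processes=4):
    semaphore = asyncio.Semaphore(num_processes)
    return await asyncio.gather(*(run_one(c, semaphore) for c in commands))
```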
This already provides the main advantage of asyncio, which is better
control over cancellation; with the current code, KeyboardInterrupt
was never handled by the thread executor, so the code that tried to handle
it in SingleTestRunner only worked for non-parallel tests. And
because executor futures cannot be cancelled, there was no way for
the user to kill a test that got stuck. Instead, without executors
^C exits "meson test" immediately. The next patch will improve things
even further, allowing a single test to be interrupted with ^C.
Distinguish a failure due to a user interrupt from the presumed ERROR
result due to the SIGTERM. The test should fail after CTRL+C even if
the test traps SIGTERM and exits with a return code of 0.
ProcessLookupError can also happen from p.kill(). There is also
nothing we can do in that case, so move the "try" for that
exception to the entire kill_process function.
The ValueError case seems like dead code, so get rid of it.
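The resulting shape is roughly the following (a simplified sketch, not the exact mtest code; error reporting and process-group handling are omitted):

```python
import subprocess

def kill_process(p: subprocess.Popen) -> None:
    try:
        # ProcessLookupError can come both from killing the process group and
        # from p.kill() itself if the process has already exited; in either
        # case there is nothing left to do, so one try covers the whole body.
        p.kill()
    except ProcessLookupError:
        pass
```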
A large part of _run_cmd is devoted to setting up and killing the
test subprocess. Move that to a separate function to make the
test runner logic easier to understand.
Use asyncio futures for the run loop, while still handling I/O in
a thread pool using run_in_executor.
The handling of the test result is no longer duplicated between
run_tests and drain_futures. Instead, the test result is always processed
and printed by run_test after single_test.run() completes, and (in verbose
mode) it cannot interleave with the test output. Therefore the special
case for self.options.num_processes == 1 can be removed.
run_special and doit are the same except that run_special forgot to
set self.is_run. There is no need for the duplication.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* Fix invoking gtest when workdir is set
* Fix invoking gtest when workdir is not set
* Code style fix
Co-authored-by: Sergey Kartashev <kartashev.sv@mipt.ru>
You could always specify a list of tests to run by passing the names as
arguments to `meson test`. If there were multiple tests with that name (in the
same project or different subprojects), all of them would be run. Now you can:
1. Run all tests with the specified name from a specific subproject: `meson test subprojname:testname`
2. Run all tests defined in a specific subproject: `meson test subprojectname:`
Also forbid ':' in test names. We already forbid this elsewhere, so it
should not be a big deal.
This allows the NINJA environment variable to support all the Windows special
cases, especially allowing an absolute path without extension.
Based on a patch by Yonggang Luo.
Fixes: #7659
Suggested-by: Nirbheek Chauhan <nirbheek@centricular.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This removes the check for "mingw" in platform.system(). The only case I know
of where "mingw" is returned is when using an MSYS Python under an MSYS2 MinGW
environment. This combination is not really supported by meson and will result
in weird errors, so remove the check.
The second change is checking sys.platform for cygwin instead of platform.system().
The former is documented to return "cygwin", while the latter is not and just
returns uname().
While under Cygwin uname() always starts with "cygwin", that is not hardcoded
in MSYS2, where it starts with the environment name instead. Using sys.platform
is safer here.
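In code the distinction is simply (a minimal illustration; the helper name is made up):

```python
import sys

def is_cygwin() -> bool:
    # sys.platform is documented to be 'cygwin' on a Cygwin-built Python,
    # whereas platform.system() just echoes uname(), whose prefix under
    # MSYS2 depends on the active environment.
    return sys.platform == 'cygwin'
```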
Fixes #7552
According to the specification:
https://testanything.org/tap-specification.html#skipping-tests
The harness should report the text after # SKIP\S*\s+ as a reason for
skipping.
(It's not exactly like the TODO directive; the phrasing/presentation of
the spec could be improved.)
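A sketch of the directive parsing (the regex and function name are illustrative, not a verbatim copy of mtest's TAP parser):

```python
import re

# '# SKIP' (or TODO), optionally followed by extra word characters such as
# 'SKIPPED', then whitespace and the explanation to surface to the user.
DIRECTIVE_RE = re.compile(r'#\s*(SKIP|TODO)\S*(?:\s+(.*))?$', re.IGNORECASE)

def parse_directive(line: str):
    m = DIRECTIVE_RE.search(line)
    if not m:
        return None, None
    return m.group(1).upper(), (m.group(2) or '').strip() or None

# parse_directive('ok 1 - foo # SKIP not supported here')
# -> ('SKIP', 'not supported here')
```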
* mtest: TestResult.SKIP is not a failure
If some but not all tests in a run were skipped, then the overall result
is given by whether there were any failures among the non-skipped tests.
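In other words, roughly (a sketch of the rule, with plain strings standing in for TestResult values and a made-up function name):

```python
def overall_result(results):
    # An entirely skipped run is reported as SKIP; otherwise skipped
    # sub-tests are ignored and the outcome depends on the remaining ones.
    if results and all(r == 'SKIP' for r in results):
        return 'SKIP'
    return 'FAIL' if any(r == 'FAIL' for r in results) else 'OK'
```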
Resolves: https://github.com/mesonbuild/meson/issues/7515
Signed-off-by: Simon McVittie <smcv@debian.org>
* Add test-cases for partially skipped TAP tests
issue7515.txt is the output of one of the real TAP tests in gjs, which
failed as a result of #7515. The version inline in meson.build is
a minimal reproducer.
Signed-off-by: Simon McVittie <smcv@debian.org>
D lang compilers have an option -release (or similar) which turns off
asserts, contracts, and other runtime type checking. This patch wires
that up to the b_ndebug flag.
Fixes #7082
GTest can output JUnit results with a command line switch. We can parse
this to get more detailed results than the return code, and put those in
our own JUnit output. We basically just throw away the top-level
'testsuites' object, then fix up the names of the tests, and shove that
into our JUnit output.
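Conceptually the flow is something like this (a sketch: --gtest_output=xml: is the real gtest switch, but the helper and the parsing here are much simplified compared to mtest's actual JUnit writer):

```python
import subprocess
import xml.etree.ElementTree as ET

def collect_gtest_suites(exe, xml_path='gtest-results.xml'):
    subprocess.run([exe, '--gtest_output=xml:' + xml_path])
    # gtest writes a JUnit-style file with a <testsuites> root element; drop
    # that wrapper and keep the <testsuite> elements so they can be renamed
    # and merged into meson's own JUnit output.
    root = ET.parse(xml_path).getroot()
    return list(root.iter('testsuite'))
```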
JUnit is pretty ubiquitous; lots of services and results viewers
understand it. In particular, GitLab and Jenkins know how to consume
JUnit XML. This means projects using CI services can have their test
results consumed automatically.
Fixes: #6972
Remove some weirdness from test output such as extra commas, missing
spaces and way too precise time durations. Also improve the overall
alignment of the output.