Avoid calling self.collected_failures.append twice, and avoid
inflated indentation by adding a "plain" decorator to mlog.
Fixes: ba71fde18 ("mtest: collect failures regardless of colorized console", 2020-10-12)
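A minimal sketch of the idea (the class and helpers below are illustrative,
not the actual mlog code): a "plain" decorator wraps text with an empty ANSI
code, so the caller follows a single code path and appends each failure
exactly once, whether or not the console is colorized.

```python
# Illustrative sketch only, not mlog's real implementation.
class AnsiDecorator:
    def __init__(self, text: str, code: str):
        self.text = text
        self.code = code

    def get_text(self, with_codes: bool) -> str:
        if with_codes and self.code:
            return self.code + self.text + "\033[0m"
        return self.text

def red(text: str) -> AnsiDecorator:
    return AnsiDecorator(text, "\033[1;31m")

def plain(text: str) -> AnsiDecorator:
    # No escape code: same interface, no colorization.
    return AnsiDecorator(text, "")

use_color = False                        # illustrative; mlog knows the real answer
decorate = red if use_color else plain   # one code path, no nested branches
print(decorate("FAIL").get_text(use_color))
```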
It turns out my first attempt to fix this in 00d5ef3191 ("Fix
clang-tidy return value reporting (#7949)") is not sufficient: The
local variable returncode is never updated and stays at 0. This fixes
the returncode calculation.
Fixes: cce172432b ("Use run-clang-tidy when available.")
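An illustrative sketch of the fix, not the exact clangtidy.py code: the worst
return code seen across the manual clang-tidy invocations has to be
accumulated rather than left in a local that never changes.

```python
# Sketch only: file discovery and arguments are simplified.
import subprocess
from pathlib import Path

def manual_clangtidy(srcdir: Path, builddir: Path) -> int:
    returncode = 0
    for f in srcdir.glob('**/*.cpp'):
        ret = subprocess.run(['clang-tidy', '-p', str(builddir), str(f)]).returncode
        # Previously the per-file result was dropped and returncode stayed at 0.
        returncode = max(returncode, ret)
    return returncode
```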
See https://gitlab.gnome.org/GNOME/glib/-/issues/600
`volatile` was previously mistakenly used in GLib to indicate that a
variable was accessed atomically or otherwise multi-threaded. It’s not
meant for that, and up-to-date compilers (like gcc-11) will rightly warn
about it.
Drop the `volatile` qualifiers.
Based on a patch by Jeff Law.
See also http://isvolatileusefulwiththreads.in/c/.
Signed-off-by: Philip Withnall <pwithnall@endlessos.org>
These do not go into the command line, and therefore do not matter
for the purpose of avoiding unnecessary rebuilds after meson is
rerun. However, they complicate the task of finding differences
between build lines across meson reruns.
So take the easy way out and sort everything after | and ||.
With this change, there is absolutely no change in QEMU's 40000-line
build.ninja file after meson is rerun.
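A rough sketch of the emission change (function and argument names are made
up): the implicit (|) and order-only (||) dependencies get sorted before being
written out, which ninja does not care about but which makes reruns
byte-for-byte comparable.

```python
# Illustrative helper, not the backend's real code.
def build_line(outputs, rule, inputs, implicit_deps=(), order_deps=()):
    line = 'build {}: {} {}'.format(' '.join(outputs), rule, ' '.join(inputs))
    if implicit_deps:
        line += ' | ' + ' '.join(sorted(implicit_deps))   # sort after |
    if order_deps:
        line += ' || ' + ' '.join(sorted(order_deps))     # sort after ||
    return line

print(build_line(['foo.o'], 'cc', ['foo.c'],
                 implicit_deps={'version.h', 'config.h'},
                 order_deps={'generated_headers'}))
```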
The order of keys in dictionaries cannot be relied upon, because the hash
values are randomized by Python. Whenever we iterate on dictionaries and
meson.build generates a list during the iteration, the different iteration
orders may cause random changes in the command line and cause ninja to
rebuild a lot of files unnecessarily.
The order of elements in sets cannot be relied upon, because the hash
values are randomized by Python. Whenever sets are converted to lists
we need to keep their order stable, or random changes in the command line
cause ninja to rebuild a lot of files unnecessarily. To stabilize them,
use either sort or OrderedSet. Sorting is not always applicable, but it
can be faster because it's done in C and it can produce slightly nicer
output.
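A small sketch contrasting the two strategies; the OrderedSet below is a
stand-in for Meson's own helper, built on dict.fromkeys(), which preserves
insertion order.

```python
# Minimal OrderedSet stand-in for illustration.
class OrderedSet:
    def __init__(self, items=()):
        self._data = dict.fromkeys(items)
    def add(self, item):
        self._data[item] = None
    def __iter__(self):
        return iter(self._data)

args = {'-Ifoo', '-Ibar', '-Ibaz'}      # plain set: iteration order is unstable
stable_sorted = sorted(args)            # option 1: sort (fast, nicer output)
stable_ordered = list(OrderedSet(['-Ifoo', '-Ibar', '-Ibaz']))  # option 2: keep insertion order
print(stable_sorted, stable_ordered)
```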
Rewrite the SingleTestRunner to use asyncio to manage subprocesses,
while still using subprocess.Popen to run them. Concurrency is
managed with an asyncio Semaphore; for simplicity (since this is
a temporary state) we create a new thread for each test that is run
instead of having a pool.
This already provides the main advantage of asyncio, which is better
control over cancellation; with the current code, KeyboardInterrupt
was never handled by the thread executor so the code that tried to handle
it in SingleTestRunner only worked for non-parallel tests. And
because executor futures cannot be cancelled, there was no way for
the user to kill a test that got stuck. Instead, without executors
^C exits "meson test" immediately. The next patch will improve things
even further, allowing a single test to be interrupted with ^C.
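A condensed sketch of the scheme, not the mtest code itself: an
asyncio.Semaphore bounds concurrency, and each test's blocking subprocess work
runs on its own short-lived thread while cancellation stays with the event
loop.

```python
import asyncio
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

async def run_one(semaphore: asyncio.Semaphore, cmd: list) -> int:
    loop = asyncio.get_running_loop()
    async with semaphore:
        # A fresh single-thread executor per test approximates "one new thread
        # per test, no shared pool"; ^C is handled in the event loop.
        with ThreadPoolExecutor(max_workers=1) as pool:
            return await loop.run_in_executor(
                pool, lambda: subprocess.run(cmd).returncode)

async def run_all(cmds, num_processes: int):
    semaphore = asyncio.Semaphore(num_processes)
    return await asyncio.gather(*(run_one(semaphore, c) for c in cmds))

cmds = [[sys.executable, '-c', 'pass'] for _ in range(4)]
print(asyncio.run(run_all(cmds, num_processes=2)))
```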
Distinguish a failure due to user interrupt from a presumed ERROR
result caused by the SIGTERM. The test should fail after CTRL+C even if
it traps SIGTERM and exits with a return code of 0.
ProcessLookupError can also happen from p.kill(). There is also
nothing we can do in that case, so move the "try" for that
exception to the entire kill_process function.
The ValueError case seems like dead code, so get rid of it.
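An illustrative sketch of the restructuring (the grace period and the helper's
shape are made up): the whole kill sequence sits under one try, since both
p.terminate() and p.kill() can raise ProcessLookupError once the process is
already gone, and in either case there is nothing left to do.

```python
import subprocess

def kill_process(p: subprocess.Popen) -> None:
    try:
        p.terminate()
        try:
            p.wait(timeout=1)            # grace period; value is illustrative
        except subprocess.TimeoutExpired:
            p.kill()                     # may also raise ProcessLookupError
            p.wait()
    except ProcessLookupError:
        # The process died on its own; nothing left to do.
        pass
```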
A large part of _run_cmd is devoted to setting up and killing the
test subprocess. Move that to a separate function to make the
test runner logic easier to understand.
Use asyncio futures for the run loop, while still handling I/O in
a thread pool using run_in_executor.
The handling of the test result is no longer duplicated between
run_tests and drain_futures. Instead, the test result is always processed
and printed by run_test after single_test.run() completes, and (in verbose
mode) it cannot interleave with the test output. Therefore the special
case for self.options.num_processes == 1 can be removed.
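A minimal sketch of that split, with made-up helper names: the blocking read
of the test's output stays on a worker thread, but the run loop awaits it as
an ordinary asyncio future.

```python
import asyncio
import subprocess
import sys

async def read_output(p: subprocess.Popen) -> str:
    loop = asyncio.get_running_loop()
    # p.communicate() blocks, so hand it to the default thread pool.
    stdout, _ = await loop.run_in_executor(None, p.communicate)
    return stdout

async def run_test(cmd) -> str:
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    return await read_output(p)

print(asyncio.run(run_test([sys.executable, '-c', 'print("ok")'])))
```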
run_special and doit are the same except that run_special forgot to
set self.is_run. There is no need for the duplication.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
So that editors that can fold code (vim, vscode, etc.) can correctly fold
functions, instead of getting confused by code that doesn't follow the
current indentation. Also, it makes the code easier to read.
There are two bugs here: the first is that we open-coded the output args
instead of using the compiler method; the second is that rust args are
not passed down to the backend invocation.
rustc is very different from other compilers, in that it doesn't
generate object files; it just creates a final target out of the
intermediate sources. As such, it needs to know about the linker args in
the compiler invocation.
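For illustration only (the paths and flags here are invented): because rustc
links the final target itself, linker inputs and extra link flags have to
ride along on the compile command, e.g. via -C link-arg.

```python
# Hypothetical command construction; not Meson's backend code.
rust_args = ['--edition', '2018']                   # per-target compiler args
link_args = ['-L', 'subprojects/foo', '-l', 'foo']  # made-up linker inputs
passthrough = []
for flag in ['-Wl,--as-needed']:                    # made-up raw linker flag
    passthrough += ['-C', 'link-arg=' + flag]
cmd = ['rustc', 'src/main.rs', '-o', 'main'] + rust_args + passthrough + link_args
print(' '.join(cmd))
```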
The only way to add it via warning_level is to opt in to -Wextra too.
But this is often not really desirable, since -Wextra is not stable --
it changes its meaning every compiler release, thus mysteriously adding
new warnings. Furthermore, it's not really the same kind of warning -- a
pedantic warning is always correct in saying your code is wrong, but it
defines wrongness as "not per the portable standard". Unlike -Wextra, it
doesn't try to judge your code to guess whether you're doing something
that is "often not what you meant", an approach that is prone to false
positives.
Really, we need different *kinds* of warning levels, possibly as an
array -- not just a monotonically increasing number. But since there's
currently nothing flexible enough to specify -Wpedantic without -Wextra,
we will just remove the warning for the former, and let people add it to
their project arguments in peace.
The current approach of demoting to the nearest C standard *might*
work, but might not. For projects like GLib that detect which standard
is used and fall back, this is fine. For projects like libdrm that only
work with gnu standards, it won't work. We're not using a warning because
this shouldn't be fatal if --fatal-meson-warnings is used. Also demote a
similar message in IntelCl from warning to log.
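A hypothetical sketch of the kind of demotion being described; the fallback
table and helper below are made up, not Meson's actual logic.

```python
# Made-up fallback table for illustration only.
FALLBACKS = {'gnu18': ['gnu17', 'gnu11', 'c18', 'c17', 'c11'],
             'c18': ['c17', 'c11']}

def demote_std(requested: str, supported: set) -> str:
    if requested in supported:
        return requested
    for candidate in FALLBACKS.get(requested, []):
        if candidate in supported:
            # Logged, not warned: must not be fatal under --fatal-meson-warnings.
            print('log: demoting {} to {}'.format(requested, candidate))
            return candidate
    return requested

print(demote_std('gnu18', {'c11', 'c17', 'gnu11', 'gnu17'}))
```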
And then update the choices in each leaf class. This way we don't end up
with another case where we implicitly allow an invalid standard to be
set on a compiler that doesn't have a 'std' setting currently.
As far as I can tell, rust just handles this for us (it's always worked
with no special arguments from us). However, since we're going to add
support for base options for rust, we need to add the method.
When TemporaryDirectory() cleans up on __exit__ it sometimes throws
OSError noting that the dir isn't empty. This happens after the
first yield in this generator and leads to the exception being handled,
which leads to a second yield.
contextlib.contextmanager() then fails, since the function it wraps is
only allowed to yield once.
Fix this by not yielding again in the error case.
Fixes #7947
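A condensed sketch of the fixed pattern, not the actual Meson helper: handle
the OSError from cleanup without yielding a second time.

```python
import contextlib
import tempfile

@contextlib.contextmanager
def temp_dir():
    tmp = tempfile.TemporaryDirectory()
    try:
        yield tmp.name          # the one and only yield
    finally:
        try:
            tmp.cleanup()
        except OSError:
            # e.g. "directory not empty": swallow it, but do NOT yield again.
            pass
```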
* Fix clang-tidy return value reporting
In case clang-tidy is invoked manually, i.e. if run-clang-tidy(.py) is
not found, Meson would not report the return value. This is caused by
ignoring the return value of manual_clangformat() in clangformat()
within mesonbuild/scripts/clangtidy.py.
Even though only more recent versions of clang-tidy actually report a
non-zero exit code if errors are found, there is no reason Meson
shouldn't simply report any error codes it received from clang-tidy.
Fixes #7948.
* Rename methods in clangtidy.py from clangformat to clangtidy
For some unknown reason, the method names in clangtidy.py are clangformat()
and manual_clangformat(). This is confusing, as clang-format is not
invoked by them; clang-tidy is. Hence rename them from
{manual_}clangformat() → {manual_}clangtidy()
Also, remove the possibility of passing in a compiler instance to
min_driver_version. This is because the NVCC compiler instance is,
as of CUDA Toolkit 11.0, no longer guaranteed to be versioned
identically to the toolkit itself.
Use the std option, so that `rust_std=..` now works. I've chosen to use
"std" even though rust calls these "editions", as meson refers to
language versions as "standards", which makes meson feel more uniform
and less surprising.
Fixes: #5100
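An illustrative sketch (only the option value follows the commit; the helper
itself is made up) of how a std value could map onto rustc's --edition flag.

```python
# Hypothetical helper for illustration; 'none' means "pass nothing".
def rust_std_args(std: str) -> list:
    if std == 'none':
        return []
    return ['--edition=' + std]

print(rust_std_args('2018'))   # ['--edition=2018']
```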