GENERATED files can be used as dependencies for other targets, so it's
misguided (at best) to filter them with a blunt whitelist.
However, there is one extension that does need to be skipped: on Windows +
MSVC, CMake will by default generate a Visual Studio project, and there
dependencies with no inputs are instead tied to a dummy .rule input file
which is created by the generation step. The fileapi will still report
those files, so Meson would bail out when it realises there's no such file
in the build tree.
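A minimal sketch of the resulting filtering, assuming a hypothetical helper
over the fileapi source list (the real change lives in Meson's CMake
interpreter):
```
import typing as T
from pathlib import Path

def drop_vs_rule_inputs(sources: T.List[str]) -> T.List[str]:
    # Hypothetical helper: the Visual Studio generator ties dependencies with
    # no inputs to a dummy '.rule' file that is only created by the generation
    # step, so it must not be treated as a real generated source.
    return [s for s in sources if Path(s).suffix != '.rule']
```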
Fixes #11607
This replaces all of the Apache blurbs at the start of each file with an
`# SPDX-License-Identifier: Apache-2.0` string. It also fixes existing
uses to be consistent in capitalization, and to be placed above any
copyright notices.
This removes nearly 3000 lines of boilerplate from the project (only
python files), which no developer cares to look at.
SPDX is in common use, particularly in the Linux kernel, and is the
recommended format for Meson's own `project(license: )` field.
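For illustration, a converted file header looks roughly like this (the
copyright line shown is only an example):
```
# SPDX-License-Identifier: Apache-2.0
# Copyright 2012-2023 The Meson development team
```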
This properly sets the project version in projects Meson generates from
CMake projects. This allows dependency fallbacks to properly check the
version constraints in dependency calls when falling back to a CMake
subproject. Previously this would fail, because the project version was
undefined.
Performed using https://github.com/ilevkivskyi/com2ann
This has no actual effect on the codebase, as type checkers (still)
support both, and a negligible effect on runtime performance, since
`from __future__ import annotations` ameliorates that. Technically, the
bytecode would be bigger for non-function-local annotations, of which we
have many either way.
So if it doesn't really matter, why do a large-scale refactor? Simple:
because people keep wanting to, but it's getting nickel-and-dimed. If
we're going to do this we might as well do it consistently in one shot,
using tooling that guarantees repeatability and correctness.
Repeat with:
```
com2ann mesonbuild/
```
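For illustration, this is the kind of rewrite com2ann performs (the names
here are made up):
```
import typing as T

class Compiler:
    """Stand-in for Meson's real Compiler class, just for illustration."""

# Before: a PEP 484 type comment, which is what the codebase used to have.
compilers = {}  # type: T.Dict[str, Compiler]

# After com2ann: a real variable annotation.
compilers: T.Dict[str, Compiler] = {}
```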
It's shorter and more descriptive. Although we always enforce the same
rules either way, a unified decorator is one less line of code for each
location, and when you pass too few arguments it also tells you how many
you *did* pass.
Which adds the `use-set-for-membership` check. In Python it's generally
faster to use a set with the `in` keyword, because it's a hash check
instead of a linear walk. This is especially true with strings, where the
walk is effectively O(n^2): one loop over the container, and an inner loop
over each string's characters (string comparison works by checking that
`a[n] == b[n]`, in a loop).
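A small illustration of the two forms (the keyword names are arbitrary):
```
def is_install_kwarg_tuple(name: str) -> bool:
    # Linear walk: each candidate string may be compared character by character.
    return name in ('install', 'install_dir', 'install_mode')

def is_install_kwarg_set(name: str) -> bool:
    # Hash lookup: hash `name` once, then (usually) a single comparison.
    return name in {'install', 'install_dir', 'install_mode'}
```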
Also, I'm tired of complaining about this in reviews, let the tools do
it for me :)
This finds uses of deny-listed functions, which defaults to map and
filter. These functions should be replaced by comprehensions in
idiomatic Python, as sketched below, because:
1. Comprehensions are more heavily optimized and are often faster
2. They avoid the need for lambdas in some cases, which makes them
faster
3. You can do the equivalent in one statement rather than two, which
is faster
4. They're easier to read
5. If you need a concrete instance (i.e. a list) then you don't have
to convert the iterator to a list afterwards
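A sketch of the kind of rewrite this check asks for (the variables are
made up):
```
paths = ['main.c', 'util.c', 'util.h']

# Deny-listed style: map/filter with a lambda, plus a list() conversion.
sources = list(filter(lambda p: p.endswith('.c'), paths))
upper = list(map(str.upper, sources))

# Idiomatic comprehensions: one statement each, no lambdas, already lists.
sources = [p for p in paths if p.endswith('.c')]
upper = [p.upper() for p in sources]
```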
Add a MissingCompiler class returned by compiler detection methods
instead of None - accessing such an object raises a DependencyException
Fixes #10586
Co-authored-by: duckflyer <duckflyer@gmail.com>
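A rough sketch of the idea, not the actual implementation:
```
class DependencyException(Exception):
    """Stand-in for Meson's real DependencyException."""

class MissingCompiler:
    """Returned where compiler detection used to return None."""

    def __getattr__(self, name: str):
        # Any attempt to actually use the "compiler" fails loudly with a
        # meaningful error instead of an AttributeError on None.
        raise DependencyException('no suitable compiler was detected')

def detect_compiler_for(lang: str) -> MissingCompiler:
    # Hypothetical detection function: imagine detection failed for `lang`.
    return MissingCompiler()
```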
flake8 6 upgrades to pyflakes 3, and in turn this means that support for
parsing `# type: ` style annotations has been removed.
https://github.com/PyCQA/pyflakes/pull/684
This caused one file to fail linting, because it had a typing import
which was only used by a type comment.
```
mesonbuild/cmake/interpreter.py:55:5: F401 '.common.CMakeConfiguration' imported but unused
```
Updating it to actual annotations allows pyflakes to detect its usage
again, and flake8 passes. Do the whole file while we are here.
This catches some optimization problems, mostly in the use of `all()`
and `any()`. Writing `any([x == 5 for x in f])` instead of `any(x == 5
for x in f)` reduces performance because the entire concrete list must
first be created and then iterated over, while in the second form `f` is
iterated and checked element by element, stopping at the first match.
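A tiny illustration of why the generator form wins (purely illustrative):
```
def targets():
    # Pretend this is an expensive generator of build targets.
    yield from range(1_000_000)

# Builds a million-element list before any() even starts looking.
found = any([t == 5 for t in targets()])

# Iterates lazily and stops as soon as t == 5 is seen.
found = any(t == 5 for t in targets())
```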
In various situations we want to figure out what type of compiler we
have, because we want to know stuff like "is it the pgi one", or "does
it use msvc style". The compiler object has this property already, via
an API specifically designed to communicate this info, but instead we
performed isinstance checks on a compiler class.
This is confusing and indirect, and has the side effect of requiring
more imports everywhere. We should do away with it.
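Schematically, the preferred style looks something like this (assuming
accessors like `get_id()` and `get_argument_syntax()` on the compiler
object; the helper functions are made up):
```
def is_pgi(compiler) -> bool:
    # Ask the compiler object for its identity...
    return compiler.get_id() == 'pgi'

def uses_msvc_style_args(compiler) -> bool:
    # ...or for its argument syntax, instead of importing a concrete
    # compiler class just to run an isinstance() check against it.
    return compiler.get_argument_syntax() == 'msvc'
```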