An int only accepts operations on other ints, just like other primitive
types only accept operations on values of the same type.
But due to using isinstance in baseobjects' `operator_call`, an int
primitive allowed operations on a bool, even though reversing the
operands and having a bool perform operations on an int would fail with
a type error.
Really, we should fail with a type error in both directions. But for
stability reasons, make this a loud warning and break --fatal-meson-warnings
builds.
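The loophole comes from a Python quirk: bool is a subclass of int, so
an isinstance check accepts bools. A minimal sketch of the difference
(illustrative only, not the actual `operator_call` code):

```python
# Illustrative only: bool is a subclass of int in Python, so an
# isinstance() check accepts bool operands.
def accepts_via_isinstance(other: object) -> bool:
    return isinstance(other, int)   # True for bools too -- too lax

def accepts_strictly(other: object) -> bool:
    return type(other) is int       # rejects bool

print(accepts_via_isinstance(True))  # True  -- the loophole
print(accepts_strictly(True))        # False -- consistent in both directions
```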
This is useful for totally terrible stuff that we really dislike, but
for some reason we are afraid to just use `mlog.deprecation()` and
unconditionally tell people so.
Apparently this is because it is totally absolutely vital that, when
telling people something is so broken they should never ever ever use it
no matter what, ever... we can't actually tell them that unless they
bump the minimum version of Meson, because that's our standard way of
introducing a **version number** to tell them when we first started
warning about this.
Sigh. We really want to warn people if they are doing totally broken
stuff no matter what version of Meson they support, because it's not
like fixing the thing that never worked is going to suddenly break old
versions of meson.
So. Here's some new functionality that always warns you, but also tells
you when we started warning.
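A rough sketch of the idea (not the actual implementation): warn
unconditionally, but carry the version in which the warning first
appeared:

```python
import warnings

# Sketch only: warn no matter what the project's minimum Meson version
# is, while still recording when we first started warning.
def warn_broken(feature: str, warned_since: str) -> None:
    warnings.warn(f'{feature} is broken and should never be used '
                  f'(warning since {warned_since})')

warn_broken('int operations on bool operands', '1.2.0')  # example values
```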
When scraping version output for identifying strings, we checked for
"IFORT" in parentheses after the executable name, which is probably a
mistake by Intel. Current versions of ifx have "IFX" in parentheses
there.
Detect both.
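Something along these lines (the regex is an illustration, not the
actual detection code):

```python
import re

# Accept either vendor tag in the parenthesised field of the version
# banner, e.g. "ifx (IFORT) 2021.4.0 ..." or "ifx (IFX) 2023.1.0 ...".
_INTEL_FORTRAN_ID = re.compile(r'\((?:IFORT|IFX)\)')

def is_intel_fortran_banner(banner: str) -> bool:
    return _INTEL_FORTRAN_ID.search(banner) is not None
```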
Fixes #11873
When a project targets a dev version of Meson (e.g. 1.1.99) for
experimenting, this makes it possible to use:
project(..., meson_version: '>=1.2.0')
This avoids warnings from FeatureNew checks for features introduced in
1.2.0.
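One way to picture the matching rule, assuming dev snapshots use a .99
micro version (a sketch, not necessarily how the real comparison is
implemented):

```python
# Sketch under the assumption that dev snapshots use a .99 micro
# version; not necessarily how the real comparison is implemented.
def effective_version(version: str) -> str:
    major, minor, micro = (int(p) for p in version.split('.')[:3])
    if micro == 99:
        # treat 1.1.99 as the upcoming 1.2.0 for meson_version checks
        minor, micro = minor + 1, 0
    return f'{major}.{minor}.{micro}'

assert effective_version('1.1.99') == '1.2.0'
```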
Specifically, when those files can be created by a build rule with one
version of meson.build, and created as e.g. a shared_library alias
symlink in another version of meson.build, the cleandead command will
delete important files just because they don't happen to be created by
ninja itself.
Work around this by making a dummy rule that exists solely to insert the
files into the build graph to trick ninja into not deleting them.
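The trick, in sketch form (a hypothetical helper writing raw ninja
syntax): list the files as outputs of a no-op phony edge so ninja
considers them part of the current build graph:

```python
import typing as T

# Hypothetical helper (not the actual backend code): emit a no-op
# phony edge whose *outputs* are the files in question, so that
# `ninja -t cleandead` leaves them alone.
def protect_from_cleandead(ninja_file: T.TextIO, files: T.List[str]) -> None:
    ninja_file.write('build {}: phony\n'.format(' '.join(files)))
```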
Closes #11861
If py2 is not found *and* the compiler is MSVC, we didn't account for
py2 being missing. We also didn't account for the expected count when
it *is* found, because the python module has a better finder than a
mere "is the binary on PATH" check.
In commit 89146e84c9 we added some
complicated code to verify the llvm framework's "combination" matrix
lookup. It expects to find llvm with both cmake and config-tool, with
the same version. But the sanity check is wonky -- it merely checks
that both have the same found status, so if neither is found we proceed
to try to convert the string "unknown" into semver integers, which is
guaranteed to fail.
This can happen, for example, if the system llvm exists in the general
case, but actual modules cannot be found because the system llvm does
not distribute static modules, as is the case on Gentoo.
Abort more obviously by just insisting that both be found. If they
aren't both found, anyone investigating knows to start by looking at
why they weren't found.
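In sketch form (the class and names are illustrative, not the actual
framework code), the stricter check looks like:

```python
# Illustrative sketch, not the actual framework code:
class Dep:
    def __init__(self, found: bool, version: str) -> None:
        self._found, self.version = found, version

    def found(self) -> bool:
        return self._found

def sanity_check(cmake_dep: Dep, config_tool_dep: Dep) -> None:
    # the wonky check only asserted
    #     cmake_dep.found() == config_tool_dep.found()
    # which also passes when *neither* is found, after which parsing
    # the placeholder version 'unknown' is guaranteed to fail
    if not (cmake_dep.found() and config_tool_dep.found()):
        raise RuntimeError('llvm must be found via both cmake and config-tool')
    assert cmake_dep.version == config_tool_dep.version
```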
- Include the BaseNode position in hash methods; an integer is the most
  straightforward way of differentiating nodes (see the sketch after
  this list).
- Exclude non-hashable fields from the hash method.
- Avoid default values in BaseNode so that subclasses can have fields
  without default values without repeating init=False.
- Nodes that do not add fields do not need `@dataclass`.
- Make all node types hashable because they can be used for feature_key
  in FeatureCheckBase.use().
- Remove unused type annotations.
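A minimal sketch of the resulting pattern (illustrative only, not the
actual node classes):

```python
from dataclasses import dataclass

# Illustrative only: the base class carries no default values, so
# subclasses can add mandatory fields without repeating init=False.
@dataclass
class BaseNode:
    lineno: int
    colno: int

    def __hash__(self) -> int:
        # the position integers are the simplest way to tell nodes apart
        return hash((self.lineno, self.colno))

# hashable nodes can be used as set/dict keys, e.g. as a feature_key
seen = {BaseNode(1, 1), BaseNode(2, 5)}
```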
This is a pretty common pattern in Python (the standard library uses it
a ton): a class is created, with a single private instance in the
module, and then its methods are exposed as public API. This removes
the need for the global statement, and is generally a little easier to
reason about thanks to encapsulation.
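The `random` module is one stdlib example: its module-level functions
are bound methods of a hidden `Random` instance. A minimal sketch of
the pattern (names are hypothetical):

```python
import typing as T

class _Registry:
    # private class: only the bound methods exposed below are public API
    def __init__(self) -> None:
        self._entries: T.Dict[str, str] = {}

    def register(self, name: str, value: str) -> None:
        self._entries[name] = value

    def lookup(self, name: str) -> T.Optional[str]:
        return self._entries.get(name)

# a single private instance; exposing its bound methods removes the
# need for `global` when mutating shared state
_registry = _Registry()
register = _registry.register
lookup = _registry.lookup
```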
We actually do not and should not care about wrap-mode at all for this.
We want to cache dependency lookups whenever humanly possible, but only
use them in cases where we would anyway be using them -- which in
particular means if we said to force a subproject fallback for this dep,
we want to bypass the cache.
Currently, we handle this by always looking up the cache for all
dependencies, but clearing the cache at startup if a reconfigure means
we are changing our resolution strategy. This is bad -- we might have
many dependencies that are worth caching, and only one dependency that
should stop being cached and use a subproject instead.
The simple solution is to handle the forcefallback case at the moment
of cache lookup, by not doing a cache lookup at all. Now we don't have to
nuke the entire cache. In fact, if a future reconfigure changes the
forcefallback state back to not being forced, we can reuse the original
cached dependency, which is still there.
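A hypothetical sketch of the lookup-time handling (names and types are
illustrative):

```python
import typing as T

# Hypothetical sketch, not the actual interpreter code:
def lookup_dependency(name: str,
                      cache: T.Dict[str, str],
                      force_fallback: T.Set[str]) -> str:
    if name in force_fallback:
        # forced subproject fallback: bypass the cache, but do not nuke
        # it -- the cached entry stays reusable if a later reconfigure
        # stops forcing the fallback
        return f'{name} (subproject fallback)'
    if name not in cache:
        cache[name] = f'{name} (system)'   # cache whenever possible
    return cache[name]
```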
Closes #11828
ninja's configured command for regenerating a build directory on any
action that *requires* reconfiguring specifies the source and build
directories as they were known during initial project generation. This
means that if the build directory is no longer the *same* build
directory, we will regenerate... the original location, rather than the
location we want.
After that, ninja notices that build.ninja is still out of date, so it
goes and reconfigures again. And again. And again.
The intentions here were probably good, but endless reconfigure loops are a
kind of evil beyond all evils. There are no valid options here
whatsoever other than:
- doing what the user actually meant
- spawning a clear error message describing why meson refuses to work,
then exiting with a fatal error
But it turns out that it's actually pretty easy to do what the user
actually meant, and reconfigure the current build directory instead of
the original one. This permanently breaks the link between the two.
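In sketch form (a hypothetical helper, not the actual regeneration
code):

```python
import os

# Hypothetical sketch: prefer the directory we are actually running in
# over the one recorded at the initial configure, so a relocated build
# directory regenerates itself instead of the original location.
def build_dir_for_regen(recorded_build_dir: str) -> str:
    current = os.getcwd()
    if os.path.realpath(recorded_build_dir) != os.path.realpath(current):
        return current  # break the stale link to the old location
    return recorded_build_dir
```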
Fixes #6131
In this case, PEP 668 was created to allow a thing that Debian wanted,
which is for `pip install foobar` to not break the system python. This
despite the fact that the system python is fine, unless you use
`sudo pip`, which is discouraged for separate reasons, and it is in fact quite
natural to install additional packages to the user site-packages.
It isn't even the job of the operating system to decide whether the user
site-packages is broken, regardless of whether the operating system gets
that answer correct -- it is the job of the operating system to decide
whether the operating system is broken. That can be solved by e.g.
enforcing a shebang policy for distribution-packaged software, as
distros like Fedora do, mandating not only that python shebangs do not
contain `/usr/bin/env`, but that they *do* contain `-s`.
Anyway, this entire kerfuffle is mostly just a bit of pointless
interactive churn, but it bites pretty hard for our use case: a
container image which is fortunately tested before deployment. So
instead of failing to deploy because of theoretical conflicts with the
base system (we specifically need base system integration...) we fail
to deploy because, 5 minutes into pulling apt updates at the very
beginning, pip refuses point-blank to work. I especially do not know
why it is the job of the operating system to throw errors intended for
interactive users at people baking "appliance" containers who cannot
"break" the system python anyway.
Fix this by doing what Debian and Ubuntu should both have done from the
beginning, and opting containers out of this questionable feature
entirely.
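One hedged sketch of such an opt-out, relying on pip's documented
environment-variable form of its --break-system-packages flag (the
package name is just an example):

```python
import os
import subprocess

# Exporting this in the image build opts every subsequent pip
# invocation out of the PEP 668 EXTERNALLY-MANAGED check.
os.environ['PIP_BREAK_SYSTEM_PACKAGES'] = '1'
subprocess.run(['pip', 'install', 'meson'], check=True)  # example package
```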
Note that CI images may still not actually complete their build/test
cycle and be updated, because of e.g. LLVM 16 issues tracked by #11642 or
glib ASAN issues tracked by #11754.
This issue was encountered while working on a contribution to nixpkgs.
Nix allows the store to be installed on a separate, case-sensitive APFS
volume. When the store is on a case-sensitive volume, these tests fail
because they try to use `foundation` instead of `Foundation`.