In commit 89146e84c9 we added some
complicated code to verify the llvm framework's "combination" matrix
lookup. It expects to find llvm with both cmake and config-tool, with
the same version. But the sanity check is wonky -- it only checks that both
have the same found status, so if neither is found we proceed to try to
convert the version string "unknown" to a mapping of semver integers, which
is guaranteed to fail.
This can happen for example if the system llvm exists in the general
case, but actual modules cannot be found because the system llvm does
not distribute static modules. For example, this is the case on Gentoo.
Abort more obviously by just insisting that both be found. If they
aren't both found, then anyone investigating knows to start by looking at
why they weren't found.
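A minimal sketch of the stricter check; the variable names here are
illustrative, not the actual Meson framework code:

    # insist that both lookups succeeded before comparing versions, rather
    # than merely requiring that their found status agree
    if not (cmake_llvm.found() and configtool_llvm.found()):
        raise RuntimeError('LLVM combined lookup: both cmake and config-tool must find LLVM')
    if cmake_llvm.version != configtool_llvm.version:
        raise RuntimeError('LLVM cmake and config-tool lookups disagree on the version')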
- Include the BaseNode position in hash methods; the position integers are
the most straightforward way of differentiating nodes.
- Exclude non-hashable fields from the hash method.
- Avoid default values in BaseNode so that subclasses can have fields
without default values without repeating init=False.
- Nodes that do not add fields do not need `@dataclass`.
- Make all node types hashable because they can be used for feature_key
in FeatureCheckBase.use() (see the sketch after this list).
- Remove unused type annotations
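A minimal sketch of the resulting pattern, assuming a dataclass-based node
hierarchy; the class and field names below are illustrative, not the actual
Meson AST types:

    from dataclasses import dataclass, field

    @dataclass(unsafe_hash=True)
    class BaseNode:
        lineno: int       # position integers take part in the generated hash
        colno: int
        filename: str
        # mutable bookkeeping is excluded from the generated __hash__/__eq__
        comments: list = field(hash=False, compare=False)

    @dataclass(unsafe_hash=True)
    class StringNode(BaseNode):
        value: str        # no default needed, because BaseNode declares none

    class ContinueNode(BaseNode):
        pass              # adds no fields, so no @dataclass decorator needed

Every node type is then hashable and can serve directly as a dictionary key
such as feature_key.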
This is a pretty common pattern in python (the standard library uses it
a ton): A class is created, with a single private instance in the
module, and then its methods are exposed as public API. This removes
the need for the global statement, and is generally a little easier to
reason about thanks to encapsulation.
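For illustration, here is the shape of that pattern; the names are made up
for this example (the standard library's random module does the same thing
with a hidden Random instance):

    class _Registry:
        def __init__(self) -> None:
            self._entries: dict = {}

        def register(self, key: str, value: object) -> None:
            self._entries[key] = value

        def lookup(self, key: str) -> object:
            return self._entries[key]

    _registry = _Registry()

    # public module-level API: just the bound methods of the private
    # instance, so mutating state never needs a `global` statement
    register = _registry.register
    lookup = _registry.lookup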
We actually do not and should not care about wrap-mode at all for this.
We want to cache dependency lookups whenever humanly possible, but only
use them in cases where we would anyway be using them -- which in
particular means if we said to force a subproject fallback for this dep,
we want to bypass the cache.
Currently, we handle this by always looking up the cache for all
dependencies, but clearing the cache at startup if a reconfigure means
we are changing our resolution strategy. This is bad -- we might have
many dependencies that are worth caching, and only one dependency that
should stop being cached and use a subproject instead.
The simple solution is to check for the forcefallback case at cache-lookup
time and, in that case, skip the cache lookup entirely. Now we don't have to
nuke the entire cache. In fact, if a future reconfigure changes the
forcefallback state back to not being forced, we can reuse the original
cached dependency, which is still there.
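A minimal sketch of that lookup order; resolve_fallback and resolve_system
are hypothetical helpers, not the real Meson functions:

    def lookup_dependency(name, cache, force_fallback: bool):
        if force_fallback:
            # bypass the cache for this dependency only; any previously
            # cached system dependency stays in place for later reconfigures
            return resolve_fallback(name)   # hypothetical helper
        if name in cache:
            return cache[name]
        dep = resolve_system(name)          # hypothetical helper
        cache[name] = dep
        return dep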
Closes#11828
ninja's configured command for regenerating a build directory on any
action that *requires* reconfiguring specifies the source and build
directories as they were known during initial project generation. This
means that if the build directory is no longer the *same* build
directory, we will regenerate... the original location, rather than the
location we want.
After that, ninja notices that build.ninja is still out of date, so it
goes and reconfigures again. And again. And again.
This is probably a case of broken intentions, but endless reconfigure loops are a
kind of evil beyond all evils. There are no valid options here
whatsoever other than:
- doing what the user actually meant
- emitting a clear error message describing why meson refuses to work,
then exiting with a fatal error
But it turns out that it's actually pretty easy to do what the user
actually meant, and reconfigure the current build directory instead of
the original one. This permanently breaks the link between the two.
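Roughly, the idea looks like the following sketch -- not the literal backend
change, and the command layout here is an assumption:

    import os

    def regen_command(recorded_source_dir: str) -> list:
        # regenerate into the directory ninja is actually running from,
        # not the build path recorded at initial configuration
        current_build_dir = os.getcwd()
        return ['meson', '--internal', 'regenerate',
                recorded_source_dir, current_build_dir]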
Fixes#6131
In this case, PEP 668 was created to allow a thing that Debian wanted,
which is for `pip install foobar` to not break the system python. This
despite the fact that the system python is fine, unless you use sudo pip
which is discouraged for separate reasons, and it is in fact quite
natural to install additional packages to the user site-packages.
It isn't even the job of the operating system to decide whether the user
site-packages is broken, whether the operating system gets the answer
correct or not -- it is the job of the operating system to decide
whether the operating system is broken, and that can be solved by e.g.
enforcing a shebang policy for distribution-packaged software, which
distros like Fedora do, and mandating not only that python shebangs do
not contain `/usr/bin/env`, but that they *do* contain -s.
Anyway, this entire kerfuffle is mostly just a bit of pointless
interactive churn, but it bites pretty hard for our use case: a
container image which is fortunately tested before deployment. So
instead of failing to deploy because of theoretical conflicts with the
base system (we specifically need base system integration...), we fail to
deploy because 5 minutes into pulling apt updates at the very beginning,
pip refuses point-blank to work. I especially do not know why it is the
job of the operating system to throw errors intended for interactive
users at people baking "appliance" containers who cannot "break" the
system python anyway.
Fix this by doing what Debian and Ubuntu should both have done from the
beginning, and opting containers out of this questionable feature
entirely.
Note that CI images may still not actually complete their build/test
cycle and be updated, because of e.g. the LLVM 16 issues tracked by #11642
or the glib ASAN issues tracked by #11754.
This issue was encountered while working on a contribution to nixpkgs.
Nix allows the store to be installed on a separate, case-sensitive APFS
volume. When the store is on a case-sensitive volume, these tests fail
because they try to use `foundation` instead of `Foundation`.
This can be the longest part of the entire test process, often with no
benefit because we already know the environment is sane. So, let's have
an option to save some time.
Allow the use of wildcards (e.g. *) to match test names in `meson test`.
Raise an error if a given test name does not match any test.
Optimize the search by looping through the list of tests only once.
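A small sketch of that matching logic, assuming fnmatch-style wildcards and
a test object with a `name` attribute (both assumptions, not the actual
mtest code):

    import fnmatch

    def select_tests(tests, patterns):
        matched, used = [], set()
        for test in tests:                      # single pass over the test list
            for pattern in patterns:
                if fnmatch.fnmatch(test.name, pattern):
                    matched.append(test)
                    used.add(pattern)
                    break
        unused = [p for p in patterns if p not in used]
        if unused:
            raise RuntimeError('no test matches name(s): ' + ', '.join(unused))
        return matched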
- Do not hardcode a terminal width of 100 chars; that breaks rendering on
smaller terminals. It already uses the current console width by default.
- Disable progress bar when downloading from msubprojects because it
fetches multiple wraps in parallel.
- Scale units when downloading, e.g. MB/s.
- Do not display rate when it's not a download.
- Do not display time elapsed to simplify the rendering.
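A hedged sketch of what that amounts to with tqdm; Meson's actual
progress-bar wrapper differs in its details:

    from tqdm import tqdm

    def make_progress_bar(total: int, is_download: bool, parallel: bool) -> tqdm:
        return tqdm(
            total=total,
            disable=parallel,                  # no bar when wraps are fetched in parallel
            unit='B' if is_download else 'it',
            unit_scale=is_download,            # kB/MB scaling only for downloads
            # no ncols= here, so tqdm detects the real console width itself
            bar_format=('{l_bar}{bar}| {n_fmt}/{total_fmt} {rate_fmt}'
                        if is_download else
                        '{l_bar}{bar}| {n_fmt}/{total_fmt}'),
        )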
We have silently dropped all integer values passed to install_mode since
the original implementation of handling it in KwargInfo, in commit
596c8d4af5.
This happened because install_mode is supposed to convert False
(exactly) to None, and otherwise pass all arguments in place. But a
generator is homogeneous and attempting to do this correctly produced a
mypy error that FileMode arguments were allowed to be ints -- well of
course they are -- so that resulted in the convertor... treating ints
like False instead, to make mypy happy.
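A hedged sketch of the intended convertor behaviour (this is not the literal
KwargInfo definition): only an exact False maps to None, and integers such
as 0o755 pass through untouched.

    def install_mode_convertor(values):
        # keep strings, ints and True as-is; only the literal False means
        # "no mode given" and becomes None
        return [None if v is False else v for v in values]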
Fixes#11538
This convertor was initially implemented to do all the things the TODO
says it doesn't yet do. It is the freestanding interpreter function that
doesn't do them.
This option can never have a value of None. There are only two sources
of values at all:
- the class instance initializer when defining BUILTIN_CORE_OPTIONS
- user-provided command-line or machine file values etc. which can only
be meson types
We know we don't construct the Option instance with None, and users
cannot pass a None anywhere since that's not a meson type. The only
reason this was ever checked for was as an artifact during the initial
implementation of the option in commit 8651d55c6a.
At the time, a review comment was made that `-Dinstall_umask=none` was a
bad UX and "preserve" should be used instead. Before that, this option
type accepted `None` (in the BUILTIN_CORE_OPTIONS initializer) and
`'none'` (provided by users) which is odd and should have consistently
been the latter. Then inside set_value, it checked for the magic
initializer value and converted it to the real value.
After review comments and a force-push, the patch ended up using `None`
in the initializer, and `'preserve'` everywhere else, and still handling
both in set_value and converting both to a proper string.
In the very next commit in the patch series, the initializer was
migrated to use an actual umask of 022, and now `None` was entirely
impossible to get anywhere at all. But the wart of checking for it was
never removed. Remove it at long last.
I would like to use the same pattern rule as github actions uses:
'[0-9]+.[0-9]+'
But azure pipelines doesn't document what the syntax here is, and it
scares me that perhaps the reason we didn't already do this is because
it doesn't work at all.
Currently you can only use one of qt4, qt5, qt6 in a single project
when using a machine file because the config-tool lookup for qt only
looks at `qmake` in the machine files, instead of looking up the
binary names directly.
Allow specifying `qmake`, `qmake4`, `qmake5` and `qmake6`.
This is necessary for gstreamer, which can build separate qt5 and qt6
plugins that are distributed as static libraries, so the user can pick
which one to use.
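A hedged sketch of the resulting machine-file lookup order (the helper name
is illustrative, not the real Meson code): prefer the versioned entry, then
fall back to the generic one.

    def qmake_machine_file_entries(qt_major: int) -> list:
        # e.g. for Qt 6 this tries the 'qmake6' entry before plain 'qmake'
        return [f'qmake{qt_major}', 'qmake']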
The concept of merge_file intrinsically requires some GNU-specific
functionality, so let's emit a useful error message during configuration
when we don't have it.
The relevant GNU gettext versions date back to around 2015, so *probably*
everyone has them, but we may as well verify that while we are here.
There are a number of implementations for msgfmt, supporting various
options. The simplest, and most common, use case is to compile .po files
into .mo files, and this should be able to work on gettext
implementations other than the GNU one.
The problem is that we were passing some pretty portable arguments in an
unportable manner. The `-o` option and its associated option-argument
came after the input file operand, which violates the POSIX Utility
Syntax Guidelines and happens to not be supported by Solaris gettext.
GNU gettext doesn't care; GNU invented GNU argument permutation.
Switch the order around so that our use respects the POSIX style.
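The two invocations, written as argv lists (the file names are
placeholders); only the second follows the POSIX guideline of options
before operands:

    unportable = ['msgfmt', 'de.po', '-o', 'de.mo']   # rejected by Solaris msgfmt
    portable   = ['msgfmt', '-o', 'de.mo', 'de.po']   # works with GNU and non-GNU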
- add `extra_paths` to intro-tests.json to know paths needed to run a
test on Windows;
- add `depends` to alias targets in intro-targets.json to know which
targets an alias points to;
- add `depends` to intro-dependencies.json to know which libraries are
linked with an internal dependency;
- rename `deps` to `dependencies` in `intro-dependencies.json` for more
uniformity.
* DubDependency._check_dub returns the version
* check for compatible Dub version (sketched below)
Dub versions starting at 1.32 have a new cache structure in which
Meson doesn't know where to find compatible artifacts
* skipping D tests involving Dub
* refactor _check_dub
makes mypy happier
* make linters happy
* localize some logic
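A hedged sketch of the compatibility gate mentioned above; the 1.32 cutoff
comes from this message, while the helper name and version parsing are
illustrative:

    def dub_is_compatible(dub_version: str) -> bool:
        # versions from 1.32 onwards use a cache layout Meson can't search yet
        major, minor = (int(x) for x in dub_version.split('.')[:2])
        return (major, minor) < (1, 32)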