We need to remember its value when reconfiguring, but the Build object
is not reused, only coredata is.
This also makes CLI more consistent by allowing `-Dvsenv=true` syntax.
Fixes: #11309
The current check results in *any* value of `export_dynamic` generating
vala import targets, even `false`. This is pretty clearly wrong, as it
really should treat an unset `export_dynamic` as false.
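In other words, the intent is the difference between these two checks (a
minimal sketch, not the actual Vala module code; the helper name is made
up):

```python
# Wrong: the mere presence of the keyword triggers the import targets,
# even `export_dynamic: false`.
if 'export_dynamic' in kwargs:
    add_vala_import_targets()  # hypothetical helper

# Intended: an unset export_dynamic behaves the same as false.
if kwargs.get('export_dynamic', False):
    add_vala_import_targets()
```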
Generated objects can already be passed in the "objects" keyword argument
as long as you go through an extract_objects() indirection. Allow the
same thing directly as well, since that is more intuitive than having to
add them to "sources".
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Hook this up to installed dependency manifests. This is often needed
above and beyond just an SPDX string -- e.g. many licenses have custom
copyright lines.
At least, if you tried to use it when passing an install_dir. Because
T.Sequence is horrible and we should never use it, and the annotations
are a lie that produces bugs.
So, fix the annotations on CustomTarget to never allow this to happen
again, and also fix the function too. Move some definitions elsewhere
inline to satisfy the linter.
Fixes #11157
There are two problems here: a typing problem, and an algorithm problem.
We expect it to always be passed to CustomTarget() as a list, but we ran
list() on it, which mangled it horribly if you violated the types and
passed a string instead. This caused weird*er* errors and didn't
even do anything. We want to do all validation in the interpreter,
anyway, and make the build level dumb.
Meanwhile we type it as accepting a T.Sequence, which technically
permits... a string, actually. This isn't intentional; the use of
T.Sequence comes from a misguided idea that APIs are supposed to be
"technically correct" by allowing "anything that fulfills an interface",
which is a flawed concept because we aren't using interfaces here, and
also because "technically string fulfills the same interface as a list,
if we're talking sequences".
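A standalone illustration of both halves of the problem (parameter names
are made up, this is not Meson code): mypy accepts a plain string
wherever `T.Sequence[str]` is annotated, and `list()` then shreds it
into characters.

```python
import typing as T

def set_install_dirs(dirs: T.Sequence[str]) -> T.List[str]:
    return list(dirs)

set_install_dirs(['share/foo'])  # ['share/foo']         -- the intended call
set_install_dirs('share/foo')    # ['s', 'h', 'a', ...]  -- typechecks, silently mangled
```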
Basically:
- mypy is broken by design, because it typechecks "python", not "what we
wish python to be"
- we do not actually need to graciously permit passing tuples instead of
lists
As far as historic implementations of this logic go, we have formerly:
- originally, typeslistified anything
- switched to accepting a list from the interpreter, redundantly ran list()
on the list we got, and mishandled API violations that passed a string
(commit 11f9638035)
- switched to accepting anything, stringlistifying it if it was not
`None`, mishandling `[None]`, and invoking list(x) on a brand new list
from stringlistify (commit 157d438835)
- stopped stringlistifying, just accepted T.List[str | None] and re-cast to
list, which violated typing because we use/handle plain None too
(commit a8521fef70)
- broke typing by declaring we accept a plain string, which still
results in mishandling by converting 'foo' -> ['f', 'o', 'o']
(commit ac576530c4)
All of this. ALL of it. Is because we tried to be fancy and say we
accept T.Tuple; the only version of this logic that has ever worked
correctly is the original untyped do-all-validation-in-the-build-phase
typeslistified version.
Let's just call it what it is. We want a list | None, and that is what we actually handle.
When auto-generating e.g. a `clang-format` target, we first check to see
if the user has already defined one, and if so we don't bother creating
our own. We check for two things:
- if a ninja target already exists, skip
- if a run_target was defined, skip
The second check is *obviously* a duplicate of the first check. But the
first check never actually worked, because all_outputs was only
generated *after* generating all utility rules and actually writing out
the build.ninja file. The check itself compares against nothing, and
always evaluates to false no matter what.
Fix this by reordering the target creation logic so we track outputs
immediately, but only error about them later. Now, we no longer need to
special-case run_target at all, so we can drop that whole logic from
build.py and interpreter.py, and simplify the tracked state.
Fixes defining an `alias_target()` for a utility, which tried to
auto-generate another rule and errored out. Also fixes doing the same
thing with a `custom_target()` although I cannot imagine why anyone
would want to produce an output file named `clang-format` (unless clang
itself decided to migrate to Meson, which would be cool but feels
unlikely).
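Schematically, the reordered logic looks something like this (names and
structure are simplified, not the actual ninja backend code):

```python
all_outputs = {}

def track_outputs(target):
    for out in target.outputs:
        # Record immediately, so auto-generated utilities can see
        # user-defined targets; duplicate-output errors are raised later.
        all_outputs.setdefault(out, []).append(target)

def generate_clangformat_target():
    if 'clang-format' in all_outputs:
        # The user already provides a target with this name; skip ours.
        return
    ...
```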
Which adds the `use-set-for-membership` check. It's generally faster in
Python to use a set with the `in` keyword, because it's a hash check
instead of a linear walk. This is especially true with strings, where
it's actually O(n^2): one loop over the container, and an inner loop
over the strings (as string comparison works by checking that
`a[n] == b[n]`, in a loop).
Also, I'm tired of complaining about this in reviews, let the tools do
it for me :)
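For illustration, the pattern the check pushes towards (CPython can even
fold a constant set literal in an `in` test into a frozenset constant):

```python
lang = 'c'

# Linear walk: each element is compared in turn, and every string
# comparison is itself a character-by-character loop.
if lang in ['c', 'cpp', 'objc', 'objcpp']:
    pass

# Hash lookup: hash `lang` once, then usually a single comparison.
if lang in {'c', 'cpp', 'objc', 'objcpp'}:
    pass
```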
We have divergent implementations of loading a pickled *.dat file. The
Build class loader has a better error message. But the generic loader
handles TypeError and ModuleNotFoundError. Merge the implementations
and use the combined loader for Build as well.
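A minimal sketch of what the shared loader can look like; the exception
type and the message wording here are placeholders, not the exact Meson
implementation:

```python
import pickle
import typing as T

def load_dat(filename: str, object_name: str, object_type: type) -> T.Any:
    try:
        with open(filename, 'rb') as f:
            obj = pickle.load(f)
    except (pickle.UnpicklingError, EOFError) as e:
        raise RuntimeError(f'{object_name} file {filename!r} is corrupted.') from e
    except (TypeError, ModuleNotFoundError) as e:
        # Typically the file was written by an incompatible Meson version.
        raise RuntimeError(f'{object_name} file {filename!r} references functions or '
                           'classes that no longer exist; reconfigure the build directory.') from e
    if not isinstance(obj, object_type):
        raise RuntimeError(f'{object_name} file {filename!r} has an unexpected type.')
    return obj
```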
Fixes #11051
This reverts commit c94c492089.
This broke propagated deps as well as PCH in MSVC, and has not been
fixed in time for the final release of 0.64.0, so it needs to be
reverted and then brought back later.
Fixes #10745
Fixes #10975
This introduces a new type of BuildTarget: CompileTarget. From the ninja
backend's POV it is the same thing as any other build target, except that
it skips the final link step. It could be used in the future for
transpilers too.
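A rough sketch of the shape of such a class; the names and the stub base
class are illustrative, not the real implementation:

```python
class BuildTarget:  # stand-in for Meson's real BuildTarget, for illustration
    pass

class CompileTarget(BuildTarget):
    """A target whose sources are compiled, but never linked.

    The backend emits the usual per-source compile rules and simply
    omits the final link step, which is also what a transpiler target
    would need: one output per input, no linking.
    """

    typename = 'compile'

    def is_linkable_target(self) -> bool:
        # Nothing can link against this target; it only produces
        # object (or transpiled) files.
        return False
```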
Those classes are used by wrapper scripts and we should not have to
import the rest of mesonlib, build.py, and all their dependencies for
that.
This renames the mesonlib/ directory to utils/ and adds a mesonlib.py
module that imports everything from utils/, so that `import mesonlib`
does not have to change everywhere. It also allows importing utils.core
without importing the rest of mesonlib.
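The new mesonlib.py then becomes a thin re-export shim, roughly along
these lines (the exact list of submodules besides utils.core is an
assumption):

```python
# mesonlib.py -- keeps `import mesonlib` working unchanged
from .utils.core import *       # the lightweight pieces wrapper scripts need
from .utils.universal import *  # everything else that used to live in mesonlib
```

Wrapper scripts, meanwhile, can import only utils.core and avoid
dragging in the rest.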
What happens is this:
- liba is a convenience static library
- libb is an installed static library
- libb links in liba with --link-whole
- libc links to libb
- we generate a link line with libb *and* liba, even though libb is a
strict superset of liba
This is a bug that has existed since we stopped using link-whole to
combine convenience libraries and instead propagated their dependencies
up. For most linkers this is harmless, if inefficient. However, for
Apple's ld64, with the addition of calling `ranlib -c`, this ends up
causing multiple copies of symbols to clash (I think that other linkers
recognize that these symbols are the same and combine them), and linking
to fail.
The fix is to stop adding libraries to a target's `link_whole_targets`
when we take its objects instead. This is an all around win since it
fixes this bug, shortens linker command lines, and avoids opening
archives that no new symbols will be found in anyway.
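Conceptually, the fix makes the two paths mutually exclusive; a
simplified sketch with assumed names:

```python
def absorb_internal_static_lib(target, static_lib):
    if target.should_extract_objects(static_lib):  # assumed predicate
        # Take the object files directly; there is no need to also open
        # the archive again via link-whole, no new symbols are in there.
        target.objects += static_lib.extract_all_objects()
    else:
        target.link_whole_targets.append(static_lib)
```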
There are two distinct cases here that need to be considered. The first
issue is https://github.com/mesonbuild/meson/issues/10723 and
https://github.com/mesonbuild/meson/issues/10724, which means that Meson
can't actually generate link-whole arguments with rust targets. The
second is that rlibs are never valid candidates for link-whole anyway.
The promotion happens to work because of another bug in the promotion
path (which is fixed in the next commit).
Generally plumb through the values of get_option() passed to
install_dir, and use this to establish the install plan name. Fixes
several odd cases, such as:
- {datadir} being prepended to "share" or "include"
- dissociating custom install directories and writing them out as
{prefix}/share/foo or {prefix}/lib/python3.10/site-packages
This is the second half of #9478.
Fixes #10601
Due to a bug in which a method, rather than an attribute, was checked,
we would never detect that a target had no sources, even though we meant
to. As a result we would never actually error out when a target has no
sources. When fixing the check, I've changed the error to a warning
so that existing projects will continue to build without errors, only
warnings.
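The pitfall in miniature (not the actual Meson code): a bound method
object is always truthy, so a check that forgets the parentheses can
never fire.

```python
class Target:
    def get_sources(self):
        return []

t = Target()

if not t.get_sources:    # bound method: always truthy, branch never taken
    print('never reached')

if not t.get_sources():  # calling it inspects the real (empty) list
    print('warning: target has no sources')
```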
In order to reliably link to static libraries or individual object
files, we need to take their languages into account as well. For static
libraries this is easy: we just add the static library's list of
compilers to the build target. For extracted objects, we need to only
add the ones for the objects we use.
But we did this really inefficiently -- in fact, downright terribly. We
iterated over all source files from the extracted objects, then tried to
look up a new compiler for them. Even though the extracted objects
already had a list of compilers! This broke once compilers were made
per-subproject, because while the extracted objects have a reference to
all the compilers it needs (just like static archives do, actually) we
might not actually be able to look up that compiler from scratch inside
the current subproject.
Fix this by asking the extracted objects to categorize all of their own
sources and return the compilers we want.
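A sketch of the idea; the method name, `can_compile`, and the data
layout are assumptions:

```python
class ExtractedObjects:
    def __init__(self, srclist, compilers):
        self.srclist = srclist      # the sources the objects came from
        self.compilers = compilers  # {lang: compiler} copied from the origin target

    def classify_all_sources(self, sources):
        """Group the requested sources by the compiler that built them,
        using the compilers we already carry instead of re-resolving
        them in the current subproject."""
        by_compiler = {}
        for src in sources:
            for comp in self.compilers.values():
                if comp.can_compile(src):
                    by_compiler.setdefault(comp, []).append(src)
                    break
        return by_compiler
```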
Fixes #10579
Regression in commit 7c757dff71.
SubprojectHolder is no longer an ObjectHolder and says so via a TODO:
this means that we have to fiddle with held_object. Yay.
The `add_deps` function did not behave correctly when a specified
dependency was not an instance of `dependencies.Dependency`.
Reorder the logic flow to perform this validation first.
Fixes#10468