This fixes a regression caused by 3162b901ca, which changed the order in
which libraries are put on the link command. In addition, that commit was
wrong because libraries from dependencies were processed before
process_compiler() was called, which is exactly what that commit wanted
to avoid.
We need the union of the compilers used by the target and those used by
all of its dependencies, not all the compilers in all projects. We do,
however, need to look compilers up in the table of all possible
compilers, as a dependency that is a subproject could use a language not
present in the current project.
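A minimal sketch of that rule, with all names assumed rather than taken
from the actual code:

```python
# Union the languages used by the target and by every dependency, but
# resolve each one against the table of *all* configured compilers,
# since a subproject dependency may use a language the current project
# never enabled.
def pick_compilers(target, all_compilers):
    langs = set(target.compilers)       # languages the target itself uses
    for dep in target.get_dependencies():
        langs.update(dep.compilers)     # plus those of each dependency
    return {lang: all_compilers[lang] for lang in langs}
```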
This saves a 1500-line import at startup, one that can be skipped
entirely if no compiled languages are used. In exchange, the
implementation moves to a new file that is imported on demand instead.
Followup to commit ab20eb5bbc.
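The mechanism is the usual deferred-import pattern; a minimal sketch
with hypothetical module and function names:

```python
# The heavy module is imported inside the function, so a project with no
# compiled languages never pays the ~1500-line import cost at startup.
def detect_compiler_for(env, lang, for_machine):
    from . import detect   # deferred until a compiler is actually needed
    return detect.compiler_from_language(env, lang, for_machine)
```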
To make good decisions we'll need to know whether we are a Rust library,
which is only known after processing source files and compilers.
Note that this is not the final list of compilers: some can still be
added in process_compilers_late(), but those are compilers for which we
don't have source files anyway.
Case 1:
- Prog links to static lib A
- A uses link_whole on static lib B
- B links to static lib C
- Prog's dependencies should be A and C, but not B, whose objects are
  already included in A.

Case 2:
- Same as case 1, but with A being installed.
- To be useful, A must also include all objects from C, which is not
  installed.
- Prog only needs to link against A.
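A rough sketch of the walk in case 1 (attribute names assumed): a
link_whole'd library contributes its own link dependencies but not
itself, since its objects are already absorbed into the consumer.

```python
def recursive_link_deps(target):
    deps = set()
    for lib in target.link_targets:            # regular link_with:
        deps.add(lib)
        deps.update(recursive_link_deps(lib))
    for lib in target.link_whole_targets:      # link_whole: objects absorbed
        deps.update(recursive_link_deps(lib))  # keeps C, drops B
    return deps
```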
This allows changing the crate name with which a library ends up being
available inside Rust code, similar to cargo's dependency renaming
feature or `extern crate foo as bar` in Rust code.
Library names are mapped directly to filenames by Meson, while the crate
name has spaces and dashes replaced by underscores. This works fine up
to a point, except that rustc expects rlib filenames to follow a scheme
that matches the crate name.
When such a library is then used as a dependency of a dependency,
compilation fails with a confusing error message.
See https://github.com/rust-lang/rust/issues/110460
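To illustrate the mismatch (rustc's real naming scheme has more to it;
this only shows the underscore substitution described above):

```python
# Meson derives the filename from the target name, while the crate name
# has dashes and spaces replaced by underscores, so the two disagree:
target_name = 'my-lib'
crate_name = target_name.replace('-', '_').replace(' ', '_')  # 'my_lib'
meson_filename = f'lib{target_name}.rlib'   # libmy-lib.rlib
rustc_expected = f'lib{crate_name}.rlib'    # libmy_lib.rlib, per crate name
assert meson_filename != rustc_expected     # hence the confusing failure
```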
We need to remember its value when reconfiguring, but the Build object
is not reused, only coredata is.
This also makes the CLI more consistent by allowing the `-Dvsenv=true` syntax.
Fixes: #11309
The current check results in *any* value of `export_dynamic` generating
Vala import targets, even `false`. This is pretty clearly wrong; the
check really wants to treat an unset `export_dynamic` as false.
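A minimal sketch of the corrected condition, with made-up helper names:

```python
def should_generate_vala_targets(kwargs: dict) -> bool:
    # Only a truthy value counts; previously the mere presence of the
    # key (even `export_dynamic: false`) enabled the import targets.
    return bool(kwargs.get('export_dynamic', False))

assert should_generate_vala_targets({}) is False
assert should_generate_vala_targets({'export_dynamic': False}) is False
assert should_generate_vala_targets({'export_dynamic': True}) is True
```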
Generated objects can already be passed in the "objects" keyword
argument, as long as you go through an extract_objects() indirection.
Allow the same directly as well, since that is more intuitive than
having to add the objects to "sources".
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Hook this up to installed dependency manifests. This is often needed
above and beyond just an SPDX string -- e.g. many licenses have custom
copyright lines.
At least, that was the case if you tried to use it when passing an
install_dir: T.Sequence is horrible and we should never use it, and the
annotations are a lie that produces bugs.
So, fix the annotations on CustomTarget to never allow this to happen
again, and fix the function too. Move some definitions inline elsewhere
to satisfy the linter.
Fixes #11157
There are two problems here: a typing problem, and an algorithm problem.
We expect it to always be passed to CustomTarget() as a list, but we ran
list() on it, which mangled the value horribly if you violated the types
and passed a string instead. This caused weird*er* errors and didn't
accomplish anything. We want to do all validation in the interpreter
anyway, and keep the build level dumb.
Meanwhile we type it as accepting a T.Sequence, which technically
permits... a string, actually. This isn't intentional; T.Sequence was
chosen out of the misguided idea that APIs are supposed to be
"technically correct" by allowing "anything that fulfills an interface".
That concept is flawed here, because we aren't using interfaces, and
because, technically, a string fulfills the same interface as a list, if
we're talking sequences.
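For illustration, with a made-up function carrying the same annotation,
this is exactly the hole T.Sequence leaves open:

```python
import typing as T

def set_install_dir(install_dir: T.Sequence[str]) -> T.List[str]:
    return list(install_dir)

# mypy accepts this call, because str itself is a Sequence[str]...
set_install_dir('foo')   # ...and list('foo') returns ['f', 'o', 'o']
```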
Basically:
- mypy is broken by design, because it typechecks "python", not "what we
wish python to be"
- we do not actually need to graciously permit passing tuples instead of
lists
As far as historic implementations of this logic go, we have formerly:
- originally, typeslistified anything
- switched to accepting a list from the interpreter, redundantly ran
  list() on the list we got, and mishandled API violations that passed a
  string (commit 11f9638035)
- switched to accepting anything, stringlistified it if it was not
  `None`, mishandled `[None]`, and invoked list(x) on the brand-new list
  returned by stringlistify (commit 157d438835)
- stopped stringlistifying, just accepted T.List[str | None] and re-cast
  it to list, which violates the typing because we use/handle plain None
  too (commit a8521fef70)
- broke typing by declaring that we accept a plain string, which still
  results in mishandling by converting 'foo' -> ['f', 'o', 'o']
  (commit ac576530c4)
All of this. ALL of it. Is because we tried to be fancy and say we
accept T.Tuple; the only version of this logic that has ever worked
correctly is the original untyped, do-all-validation-in-the-build-phase,
typeslistified version.
Let's just call it what it is: we want list | None, and that is what we
handle.
When auto-generating e.g. a `clang-format` target, we first check
whether the user has already defined one, and if so we don't bother
creating our own. We check two things:
- if a ninja target by that name already exists, skip
- if a run_target by that name was defined, skip
The second check is *obviously* a duplicate of the first. But the first
check never actually worked, because all_outputs was only populated
*after* generating all utility rules and actually writing out the
build.ninja file. The check compared against nothing, and always
evaluated to false no matter what.
Fix this by reordering the target creation logic so we track outputs
immediately, but only error about them later. Now, we no longer need to
special-case run_target at all, so we can drop that whole logic from
build.py and interpreter.py, and simplify the tracked state.
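A rough sketch of the reordering (heavily simplified; the real backend
tracks more than a set of names):

```python
class Backend:
    def __init__(self):
        self.all_outputs = set()

    def add_target(self, name):
        # Outputs are recorded the moment a target is created, so later
        # checks can see them; duplicate-output errors are raised in a
        # separate pass, after everything has been collected.
        self.all_outputs.add(name)

    def generate_utility(self, name):
        if name in self.all_outputs:  # previously compared against an
            return                    # empty collection and never fired
        self.add_target(name)
```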
Fixes defining an `alias_target()` for a utility, which previously tried
to auto-generate another rule and errored out. Also fixes doing the same
thing with a `custom_target()`, although I cannot imagine why anyone
would want to produce an output file named `clang-format` (unless clang
itself decided to migrate to Meson, which would be cool but feels
unlikely).
Which adds the `use-set-for-membership` check. It's generally faster in
Python to use a set with the `in` keyword, because it's a hash check
instead of a linear walk. This is especially true with strings, where a
list lookup is effectively quadratic: one loop over the container and,
for each element, an inner loop over the characters (string comparison
works by checking `a[n] == b[n]` in a loop).
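For example:

```python
lang = 'cpp'

# Flagged by the lint: a list literal forces a linear scan with a full
# string comparison at every step.
if lang in ['c', 'cpp', 'objc']:
    pass

# Preferred: a set literal is a single hash lookup on average.
if lang in {'c', 'cpp', 'objc'}:
    pass
```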
Also, I'm tired of complaining about this in reviews, let the tools do
it for me :)
We have divergent implementations of loading a pickled *.dat file. The
Build class loader has a better error message, but the generic loader
handles TypeError and ModuleNotFoundError. Merge the implementations,
and use the merged one for Build as well.
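A rough sketch of what the merged helper can look like; the exact
exception set and messages are assumptions, not the real code:

```python
import pickle

def pickle_load(filename: str, object_name: str):
    try:
        with open(filename, 'rb') as f:
            return pickle.load(f)
    except (pickle.UnpicklingError, EOFError) as e:
        raise RuntimeError(f'{object_name} file {filename!r} is corrupted.') from e
    except (TypeError, ModuleNotFoundError, AttributeError) as e:
        # Typically means meson was upgraded and the on-disk classes no
        # longer match the current source tree.
        raise RuntimeError(f'{object_name} file {filename!r} references '
                           'functions or classes that no longer exist.') from e
```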
Fixes #11051
This reverts commit c94c492089.
This broke propagated deps as well as PCH in MSVC, and was not fixed in
time for the final release of 0.64.0, so it needs to be reverted and
then brought back later.
Fixes #10745
Fixes #10975
This introduces a new type of BuildTarget: CompileTarget. From the ninja
backend's POV it is the same as any other build target, except that it
skips the final link step. It could be used in the future for
transpilers too.
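Schematically (simplified; not the real class body):

```python
class BuildTarget:
    ...

class CompileTarget(BuildTarget):
    """Compiles its sources like any other build target, but the final
    link step is skipped; only object files are produced."""
```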