What happens is this:
- liba is a convenience static library
- libb is an installed static library
- libb links in liba with --link-whole
- libc links to libb
- we generate a link line with libb *and* liba, even though libb is a
strict superset of liba
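A minimal meson.build sketch of that layout, with purely illustrative names and target types:
```
liba = static_library('a', 'a.c')                    # convenience library, not installed
libb = static_library('b', 'b.c',
  link_whole : liba,
  install : true)
libc = shared_library('c', 'c.c', link_with : libb)
# before this fix, libc's link line contained both libb and liba
```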
This is a bug that has existed since we stopped using link-whole to
combine convenience libraries and instead propagate their dependencies
up. For most linkers this is harmless, if inefficient. However, for
Apple's ld64, with the addition of calling `ranlib -c`, this ends up
causing multiple copies of the same symbols to clash (I think that other
linkers recognize that these symbols are identical and merge them), and
linking fails.
The fix is to stop adding libraries to a target's `link_whole_targets`
when we take their objects instead. This is an all-around win since it
fixes this bug, shortens linker command lines, and avoids opening
archives in which no new symbols will be found anyway.
There are two distinct cases here that need to be considered. The first
is https://github.com/mesonbuild/meson/issues/10723 and
https://github.com/mesonbuild/meson/issues/10724, which mean that Meson
can't actually generate link-whole arguments for Rust targets. The
second is that rlibs are never valid candidates for link-whole anyway.
The promotion only happens to work because of another bug in the
promotion path (which is fixed in the next commit).
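A hypothetical illustration of the second case (names made up): a Rust static_library typically yields an rlib, which link_whole can never meaningfully consume:
```
librs = static_library('foo', 'lib.rs')              # typically an rlib
executable('app', 'main.c', link_whole : librs)      # never a valid use of link_whole
```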
Generally plumb through the values of get_option() passed to
install_dir, and use this to establish the install plan name. Fixes
several odd cases, such as:
- {datadir} being prepended to "share" or "include"
- dissociating custom install directories and writing them out as
{prefix}/share/foo or {prefix}/lib/python3.10/site-packages
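An illustrative example (names hypothetical): an install_dir built directly from get_option() can now be recorded in the install plan under that option's name (as something like {datadir}/foo) instead of a dissociated {prefix}/share/foo:
```
install_data('foo.conf', install_dir : get_option('datadir') / 'foo')
```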
This is the second half of #9478.
Fixes #10601
Due to a bug in which a method, rather than an attribute, was checked,
we would never detect that a target had no sources, even though we meant
to. As a result we would never actually error out when a target has no
sources. When fixing the check, I've changed the error to a warning
so that existing projects will continue to build without errors, only
warnings.
In order to reliably link to static libraries or individual object
files, we need to take their languages into account as well. For static
libraries this is easy: we just add the static library's list of
compilers to the build target. For extracted objects, we need to only
add the ones for the objects we use.
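For example (file names hypothetical), when only the C object is extracted from a mixed C/C++ library, only the C compiler should be added to the consuming target:
```
lib = static_library('mixed', ['pure_c.c', 'fancy.cpp'])
executable('app', 'main.c', objects : lib.extract_objects('pure_c.c'))
```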
But we did this really inefficiently -- in fact, downright terribly. We
iterated over all source files from the extracted objects, then tried to
look up a new compiler for them. Even though the extracted objects
already had a list of compilers! This broke once compilers were made
per-subproject, because while the extracted objects have a reference to
all the compilers they need (just like static archives do, actually), we
might not actually be able to look up those compilers from scratch
inside the current subproject.
Fix this by asking the extracted objects to categorize all of their own
sources and return the compilers we want.
Fixes #10579
Regression in commit 7c757dff71.
SubprojectHolder is no longer an ObjectHolder and says so via a TODO:
this means that we have to fiddle with held_object. Yay.
The `add_deps` function did not behave correctly when a specified
dependency was not an instance of `dependencies.Dependency`.
Reorder the logic flow to perform this validation first.
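For instance (hypothetical), a non-dependency object passed in the dependencies list is now caught by that validation up front:
```
zlib_dep = dependency('zlib')
executable('app', 'main.c', dependencies : [zlib_dep, 'png'])   # the bare string is not a Dependency
```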
Fixes #10468
Instead of asking the ExtractedObjects (via a hook back into the
backend), use the existing function in the backend itself. This fixes
using the
extract_objects(...) of a generated source file in a custom_target.
It should also fix recursive extract_all_objects with the Xcode backend.
Fixes: #10394
A single target could be picked for a unity build, and in that case
extract_objects() should not be allowed.
Likewise for the opposite case, where extract_objects() should be allowed
if the unity build is disabled for that single target. A test that covers
that case is added later.
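A sketch of both cases, assuming per-target unity settings via override_options (names illustrative):
```
unity_lib = static_library('u', ['a.c', 'b.c'], override_options : ['unity=on'])
# objs = unity_lib.extract_objects('a.c')   # not allowed: unity leaves no per-file object

plain_lib = static_library('p', ['c.c', 'd.c'], override_options : ['unity=off'])
objs = plain_lib.extract_objects('c.c')     # fine, even if the project-wide unity option is on
```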
CustomTarget.is_linkable_target already has '.dylib'.
See also commit 93b1d31af9, which added '.dylib'
to CustomTarget.is_linkable_target but didn't add it to
CustomTargetIndex.is_linkable_target.
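A hypothetical sketch (the generating script is made up) of linking against an indexed custom_target output, which is exactly what CustomTargetIndex.is_linkable_target has to allow:
```
ct = custom_target('mkdylib',
  output : ['libfoo.dylib', 'foo.h'],
  command : [find_program('build_dylib.py'), '@OUTDIR@'])
executable('app', 'main.c', link_with : ct[0])
```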
There are a couple of issues that combine to make the current handling a
bit confusing.
- we call it "install_dir_name" but it is only ever the class default
- CustomTarget always has it set to None, and then we check whether it is
None and, if so, create a different variable with a safe fallback. The
check is useless -- it cannot fail, but if it did we'd get an undefined
variable error when we tried to use `dir_name`
Remove the special handling for CustomTarget. Instead, just always
accept None as a possible value of outdir_name when constructing install
data, and, if it is None, fall back to {prefix}/outdir regardless of
what type it used to be.
This reverts commit 5f02d0d9e1.
That commit isn't correct: we have the very strange behavior of "force
pie/pic on, or let the toolchain do whatever it wants, but you can't
turn it off."
Calling `isinstance(self, X)` is an anti-pattern; we should just be using
inheritance for this, letting `StaticLibrary` override the method and
having the base class always return `False`.
Since they are actually dependencies of the output, not of the Generator
itself.
This fixes dependency issues in the ninja backend, allowing Meson to
rebuild more accurately. It also helps in some cases in the vs backend,
but there are problems there that I'm not sure how to solve. The vs
backend is, itself, so fragile looking that I don't want to get too
involved with it.
Currently, the code puts a placeholder in for the first output, then
replaces all of the outputs when it generates the final value. Instead,
let's only replace the placeholder value.
override_options makes no sense for custom_target as we don't use it for
anything. Also, this was added in commit c3c30d4b06
despite not being allowed in permittedKwargs.
For inexplicable reasons, we had a known_kwargs for custom_target that
looped over kwargs and issued a warning, not an error, for unknown
kwargs. It was impossible to ever hit that check to begin with, though,
ever since commit e08d735105 which added
permittedKwargs and obsoleted those manual checks with real errors.
So at one point override_options was specially permitted to be used
without emitting a warning, and then for about half a decade it was an
error, and then based on some dead code it was allowed again for a bit.
But through all this it doesn't do anything and isn't documented.
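For reference, a hypothetical invocation of the kwarg in question, which was accepted but never consulted:
```
custom_target('gen',
  output : 'gen.txt',
  command : [find_program('generate.py'), '@OUTPUT@'],
  override_options : ['b_lto=true'])
```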
In commit fb2cdd0fe2 the internal property
was renamed, but one use of it, in raising a MesonException, was not
updated to match.
This meant that instead of erroring out with:
```
ERROR: GNU symbol visibility arg XXXX not one of: default, internal, hidden, protected, inlineshidden
```
we instead errored out with:
```
AttributeError: 'SharedLibrary' object has no attribute 'symbol_visibility'
```
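For example (hypothetical value), an invalid setting like this one is what should produce the intended error above:
```
shared_library('foo', 'foo.c', gnu_symbol_visibility : 'bogus')
```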
Fixes #9659
The new get_env() method that returns an EnvironmentVariables object
will be needed in next commit that will pass it to CustomTarget.
This has the side effect of using the proper OS-specific path separator
instead of hardcoding `:`. It is the obvious right thing to do here, but
has caused issues in the past. Hopefully those issues have been fixed in
the meantime. If not, better to deal with the fallout than to keep doing
the wrong thing forever.
Forcing serialization on when writing out the build rule makes very
little sense. It was always "forced" on because we mandated a couple of
environment variables for legacy reasons.
Add an attribute to RunTarget to say that a given target doesn't *need*
those environment variables, and let ninja optimize them away and run
the command directly if set.
That method had nothing specific to the backend; it's purely a Target
method. This allows caching the OptionOverrideProxy object on the
Target instance instead of creating a new one for each option lookup.
There is no need to go through all sources again; we already did that to
populate self.compilers. When cs or java compilers are in the list,
there must be only one compiler.
The code was also not considering generated sources anyway.
Previously, subprojects inherited languages already added by the main
project, or by any previous subproject. This changes to having a list of
compilers per interpreter, which means that if a subproject does not
add the 'c' language it won't be able to compile .c files any more, even
if the main project added the 'c' language.
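For example, a subproject that compiles C sources must now declare the language itself (illustrative):
```
# subprojects/foo/meson.build
project('foo', 'c')   # without 'c' here, .c files in this subproject no longer compile,
                      # even if the main project's project() call already added 'c'
```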
This delays processing the list of compilers until the interpreter adds
the BuildTarget into its list of targets. That way the interpreter can
add missing languages instead of duplicating that logic into BuildTarget
for the Cython case.